WorldWideScience

Sample records for repeated measures approach

  1. The Multilevel Approach to Repeated Measures for Complete and Incomplete Data

    NARCIS (Netherlands)

    Maas, CJM; Snijders, TAB

    2003-01-01

    Repeated measurements often are analyzed by multivariate analysis of variance (MANOVA). An alternative approach is provided by multilevel analysis, also called the hierarchical linear model (HLM), which makes use of random coefficient models. This paper is a tutorial which indicates that the HLM can
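
    As an illustration of the multilevel (HLM) alternative sketched in this abstract, the following is a minimal, hedged example using simulated data and statsmodels' MixedLM; the variable names and effect sizes are invented for demonstration and are not taken from the paper.

```python
# A minimal sketch (not the authors' code) of fitting a hierarchical linear
# model to repeated measures with statsmodels; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_times = 40, 4
subj = np.repeat(np.arange(n_subjects), n_times)
time = np.tile(np.arange(n_times), n_subjects)
u = rng.normal(0, 1.0, n_subjects)              # random subject intercepts
y = 2.0 + 0.5 * time + u[subj] + rng.normal(0, 1.0, subj.size)
df = pd.DataFrame({"subject": subj, "time": time, "y": y})

# Random-intercept model: observations nested within subjects. Incomplete
# data (missing occasions) are handled naturally, since each subject simply
# contributes whatever rows it has.
model = smf.mixedlm("y ~ time", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```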

  2. Alcohol intake and colorectal cancer: a comparison of approaches for including repeated measures of alcohol consumption

    DEFF Research Database (Denmark)

    Thygesen, Lau Caspar; Wu, Kana; Grønbaek, Morten

    2008-01-01

    BACKGROUND: In numerous studies, alcohol intake has been found to be positively associated with colorectal cancer risk. However, the majority of studies included only one exposure measurement, which may bias the results if long-term intake is relevant. METHODS: We compared different approaches for including repeated measures of alcohol intake among 47,432 US men enrolled in the Health Professionals Follow-up Study. Questionnaires including questions on alcohol intake had been completed in 1986, 1990, 1994, and 1998. The outcome was incident colorectal cancer during follow-up from 1986 to 2002. RESULTS: During follow-up, 868 members of the cohort experienced colorectal cancer. Baseline, updated, and cumulative average alcohol intakes were positively associated with colorectal cancer, with only minor differences among the approaches. These results support moderately increased risk for intake >30 g...

  3. Power analysis for multivariate and repeated measures designs: a flexible approach using the SPSS MANOVA procedure.

    Science.gov (United States)

    D'Amico, E J; Neilands, T B; Zambarano, R

    2001-11-01

    Although power analysis is an important component in the planning and implementation of research designs, it is often ignored. Computer programs for performing power analysis are available, but most have limitations, particularly for complex multivariate designs. An SPSS procedure is presented that can be used for calculating power for univariate, multivariate, and repeated measures models with and without time-varying and time-constant covariates. Three examples provide a framework for calculating power via this method: an ANCOVA, a MANOVA, and a repeated measures ANOVA with two or more groups. The benefits and limitations of this procedure are discussed.
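
    The paper's method is implemented in SPSS MANOVA syntax; as a rough illustration of the same idea in another language, the hedged sketch below estimates power for a one-way repeated-measures ANOVA by Monte Carlo simulation in Python. All design values (means, correlation, sample size) are illustrative assumptions, not the paper's examples.

```python
# A minimal Monte Carlo power sketch, not the SPSS MANOVA procedure from
# the paper: simulate data under an assumed effect, test repeatedly, and
# report the rejection rate. Design values below are illustrative.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
n_subjects, means, sd, rho, alpha, n_sims = 30, [0.0, 0.3, 0.5], 1.0, 0.5, 0.05, 500
k = len(means)
cov = sd**2 * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))  # compound symmetry

rejections = 0
for _ in range(n_sims):
    y = rng.multivariate_normal(means, cov, size=n_subjects)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subjects), k),
        "time": np.tile(np.arange(k), n_subjects),
        "y": y.ravel(),
    })
    res = AnovaRM(df, depvar="y", subject="subject", within=["time"]).fit()
    if res.anova_table["Pr > F"].iloc[0] < alpha:
        rejections += 1
print(f"estimated power ≈ {rejections / n_sims:.2f}")
```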

  4. Analysis of repeated measures data

    CERN Document Server

    Islam, M Ataharul

    2017-01-01

    This book presents a broad range of statistical techniques to address emerging needs in the field of repeated measures. It also provides a comprehensive overview of extensions of generalized linear models for the bivariate exponential family of distributions, which represent a new development in analysing repeated measures data. The demand for statistical models for correlated outcomes has grown rapidly recently, mainly due to the presence of two types of underlying associations: associations between outcomes, and associations between explanatory variables and outcomes. The book systematically addresses key problems arising in the modelling of repeated measures data, bearing in mind those factors that play a major role in estimating the underlying relationships between covariates and outcome variables for correlated outcome data. In addition, it presents new approaches to addressing current challenges in the field of repeated measures and models based on conditional and joint probabilities. Markov models of first...

  5. Assessing variability and comparing short-term biomarkers of styrene exposure using a repeated measurements approach.

    Science.gov (United States)

    Fustinoni, S; Manini, P; Campo, L; De Palma, G; Andreoli, R; Mutti, A; Bertazzi, P A; Rappaport, S M

    2010-01-15

    The aim of this work is to compare several short-term biomarkers of styrene exposure, namely urinary styrene (StyU), mercapturic acids (M1+M2), mandelic acid (MA), phenylglyoxylic acid (PGA), phenylglycine (PHG), and 4-vinylphenol conjugates (VP), for use as biomarkers of exposure in epidemiologic studies. A repeated measurements protocol (typically 4 measurements per worker over 6 weeks) was applied to measure airborne styrene (StyA) and urinary biomarkers in 10 varnish and 8 fiberglass reinforced plastic workers. Estimated geometric mean personal exposures to StyA were 2.96 mg/m³ in varnish workers and 15.7 mg/m³ in plastic workers. The corresponding levels of StyU, M1+M2, MA, PGA, MA+PGA, PHG and VP were 5.13 µg/L, 0.111, 38.2, 22.7, 62.6, 0.978, and 3.97 mg/g creatinine in varnish workers and 8.38 µg/L, 0.303, 146, 83.4, 232, 2.85 and 3.97 mg/g creatinine in plastic workers. Within-worker (σ_wY²) and between-worker (σ_bY²) variance components were estimated from the log-transformed data, as were the corresponding fold ranges containing 95% of the respective lognormal distributions of daily levels (wR0.95) and subject-specific mean levels (bR0.95). Estimates of wR0.95 (range: 4-26) were generally smaller than those of bR0.95 (range: 5-790) for both environmental and biological markers; this indicates that exposures varied much more between workers than within workers in these groups. Since attenuation bias in an estimated exposure-response relationship increases with the variance ratio λ = σ_wY²/σ_bY², we estimated values of λ for all exposure measures in our study. Values of λ were typically much less than one (median = 0.220) and ranged from 0.089 for M1+M2 in plastic workers to 1.38 for PHG in varnish workers. Since values of λ were 0.147 and 0.271 for StyA in varnish workers and plastic workers, respectively, compared to 0.178 and 0.210 for MA in the same groups, our results suggest that either
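
    A hedged sketch of the variance-components logic described above (within- and between-worker components on log-transformed measurements, 95% fold ranges, and the attenuation ratio λ), using simulated rather than the study's data and the conventional fold-range definition exp(3.92·σ):

```python
# One-way random-effects ANOVA (method of moments) on log-transformed
# repeated measurements; data and parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_workers, n_repeats = 10, 4
true_sigma_b, true_sigma_w = 0.8, 0.4
worker_means = rng.normal(np.log(5.0), true_sigma_b, n_workers)
logy = worker_means[:, None] + rng.normal(0, true_sigma_w, (n_workers, n_repeats))

grand = logy.mean()
group_means = logy.mean(axis=1)
msb = n_repeats * np.sum((group_means - grand) ** 2) / (n_workers - 1)
msw = np.sum((logy - group_means[:, None]) ** 2) / (n_workers * (n_repeats - 1))
sigma2_w = msw
sigma2_b = max((msb - msw) / n_repeats, 0.0)

# Fold ranges containing 95% of a lognormal distribution (exp(3.92*sigma))
# and the attenuation-relevant variance ratio lambda, as in the abstract.
wR95 = np.exp(3.92 * np.sqrt(sigma2_w))
bR95 = np.exp(3.92 * np.sqrt(sigma2_b))
lam = sigma2_w / sigma2_b
print(f"wR0.95={wR95:.1f}  bR0.95={bR95:.1f}  lambda={lam:.2f}")
```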

  6. Global sensitivity analysis for repeated measures studies with informative drop-out: A semi-parametric approach.

    Science.gov (United States)

    Scharfstein, Daniel; McDermott, Aidan; Díaz, Iván; Carone, Marco; Lunardon, Nicola; Turkoz, Ibrahim

    2017-05-23

    In practice, both testable and untestable assumptions are generally required to draw inference about the mean outcome measured at the final scheduled visit in a repeated measures study with drop-out. Scharfstein et al. (2014) proposed a sensitivity analysis methodology to determine the robustness of conclusions within a class of untestable assumptions. In their approach, the untestable and testable assumptions were guaranteed to be compatible; their testable assumptions were based on a fully parametric model for the distribution of the observable data. While convenient, these parametric assumptions have proven especially restrictive in empirical research. Here, we relax their distributional assumptions and provide a more flexible, semi-parametric approach. We illustrate our proposal in the context of a randomized trial for evaluating a treatment of schizoaffective disorder. © 2017, The International Biometric Society.

  7. Selecting a linear mixed model for longitudinal data: repeated measures analysis of variance, covariance pattern model, and growth curve approaches.

    Science.gov (United States)

    Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M

    2012-03-01

    With increasing popularity, growth curve modeling is more and more often considered as the 1st choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
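
    The model comparison the authors describe can be mimicked, in simplified form, by fitting mixed models with different random-effects structures and comparing information criteria. The hedged sketch below (simulated data, statsmodels) contrasts a random-intercept model with a simple linear growth curve (random intercept and slope); it illustrates the model-selection idea and is not the authors' code.

```python
# Compare two residual/random-effects structures with AIC/BIC (ML fits).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, t = 60, 5
subj = np.repeat(np.arange(n), t)
time = np.tile(np.arange(t, dtype=float), n)
b0 = rng.normal(0, 1.0, n)          # random intercepts
b1 = rng.normal(0, 0.3, n)          # random slopes -> growth-curve structure
y = 1.0 + 0.4 * time + b0[subj] + b1[subj] * time + rng.normal(0, 0.8, n * t)
df = pd.DataFrame({"subject": subj, "time": time, "y": y})

# Model 1: random intercept only (compound-symmetry-like covariance).
m1 = smf.mixedlm("y ~ time", df, groups=df["subject"]).fit(reml=False)
# Model 2: random intercept and slope (a simple linear growth curve model).
m2 = smf.mixedlm("y ~ time", df, groups=df["subject"],
                 re_formula="~time").fit(reml=False)

for name, res in [("intercept only", m1), ("intercept + slope", m2)]:
    print(f"{name}: AIC={res.aic:.1f}  BIC={res.bic:.1f}")
```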

  8. Detection of quasi-periodic processes in repeated measurements: New approach for the fitting and clusterization of different data

    Science.gov (United States)

    Nigmatullin, R.; Rakhmatullin, R.

    2014-12-01

    Many experimentalists are accustomed to thinking that independent measurements are uncorrelated and depend only weakly on one another. We reconsider this conventional point of view and show that similar measurements form a strongly correlated sequence of random functions with memory; in other words, successive measurements "remember" at least their nearest neighbors. This observation, justified on real data, allows a wide set of data to be fitted with Prony's function. The Prony decomposition follows from the quasi-periodic (QP) properties of the measured functions and includes the Fourier transform as a special case. This type of decomposition yields a specific amplitude-frequency response (AFR) of the measured (random) functions, and each fitted function requires fewer parameters than its number of initial data points. The calculated AFR can be regarded as a generalized Prony spectrum (GPS), which is especially useful when no simple model describing the measured data is available but a quantitative description is still required. These possibilities open a new way to cluster the initial data, and the additional information contained in the data enables their detailed analysis. Electron paramagnetic resonance (EPR) measurements of an empty resonator (pure noise data) and of a resonator containing a sample (CeO2 in our case) confirmed the existence of QP processes in practice. We believe that the detection of QP processes is a common feature of many repeated measurements, and this property of successive measurements should interest many experimentalists. The aims are to formulate general conditions that help identify and then detect the presence of a QP process in repeated experimental measurements, and to find a functional equation and its solution that
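
    The generalized Prony spectrum described above is more elaborate than classical Prony analysis, but a minimal sketch of the classical decomposition it builds on (fitting a sum of damped complex exponentials) may help fix ideas; the signal and model order below are invented for illustration.

```python
# Classical Prony method on a synthetic signal: (1) linear prediction,
# (2) poles from the prediction polynomial, (3) amplitudes by least squares.
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 4                         # samples, model order (2 damped cosines)
t = np.arange(n)
x = (np.exp(-0.01 * t) * np.cos(0.20 * t)
     + 0.5 * np.exp(-0.005 * t) * np.cos(0.45 * t)
     + 0.01 * rng.normal(size=n))

# Step 1: linear prediction coefficients via least squares.
A = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
c, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)

# Step 2: signal poles are the roots of the prediction polynomial.
z = np.roots(np.concatenate(([1.0], c)))

# Step 3: amplitudes by least squares on the Vandermonde system.
V = np.vander(z, N=n, increasing=True).T
a, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)

freqs = np.angle(z) / (2 * np.pi)     # cycles per sample
damps = np.log(np.abs(z))             # per-sample log-damping
print(np.column_stack([freqs, damps, np.abs(a)]))
```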

  9. Measurement-based quantum repeaters

    CERN Document Server

    Zwerger, M; Briegel, H J

    2012-01-01

    We introduce measurement-based quantum repeaters, where small-scale measurement-based quantum processors are used to perform entanglement purification and entanglement swapping in a long-range quantum communication protocol. In the scheme, pre-prepared entangled states stored at intermediate repeater stations are coupled with incoming photons by simple Bell-measurements, without the need of performing additional quantum gates or measurements. We show how to construct the required resource states, and how to minimize their size. We analyze the performance of the scheme under noise and imperfections, with focus on small-scale implementations involving entangled states of few qubits. We find measurement-based purification protocols with significantly improved noise thresholds. Furthermore we show that already resource states of small size suffice to significantly increase the maximal communication distance. We also discuss possible advantages of our scheme for different set-ups.

  10. How integrated are behavioral and endocrine stress response traits? A repeated measures approach to testing the stress-coping style model.

    Science.gov (United States)

    Boulton, Kay; Couto, Elsa; Grimmer, Andrew J; Earley, Ryan L; Canario, Adelino V M; Wilson, Alastair J; Walling, Craig A

    2015-02-01

    It is widely expected that physiological and behavioral stress responses will be integrated within divergent stress-coping styles (SCS) and that these may represent opposite ends of a continuously varying reactive-proactive axis. If such a model is valid, then stress response traits should be repeatable and physiological and behavioral responses should also change in an integrated manner along a major axis of among-individual variation. While there is some evidence of association between endocrine and behavioral stress response traits, few studies incorporate repeated observations of both. To test this model, we use a multivariate, repeated measures approach in a captive-bred population of Xiphophorus birchmanni. We quantify among-individual variation in behavioral stress response to an open field trial (OFT) with simulated predator attack (SPA) and measure waterborne steroid hormone levels (cortisol, 11-ketotestosterone) before and after exposure. Under the mild stress stimulus (OFT), (multivariate) behavioral variation among individuals was consistent with a strong axis of personality (shy-bold) or coping style (reactive-proactive) variation. However, behavioral responses to a moderate stressor (SPA) were less repeatable, and robust statistical support for repeatable endocrine state over the full sampling period was limited to 11-ketotestosterone. Although post hoc analysis suggested cortisol expression was repeatable over short time periods, qualitative relationships between behavior and glucocorticoid levels were counter to our a priori expectations. Thus, while our results clearly show among-individual differences in behavioral and endocrine traits associated with stress response, the correlation structure between these is not consistent with a simple proactive-reactive axis of integrated stress-coping style. Additionally, the low repeatability of cortisol suggests caution is warranted if single observations (or indeed repeat measures over short sampling

  11. Teaching renewable energy using online PBL in investigating its effect on behaviour towards energy conservation among Malaysian students: ANOVA repeated measures approach

    Science.gov (United States)

    Nordin, Norfarah; Samsudin, Mohd Ali; Hadi Harun, Abdul

    2017-01-01

    This research aimed to investigate whether an online problem-based learning (PBL) approach to teaching the renewable energy topic improves students' behaviour towards energy conservation. A renewable energy online problem-based learning (REePBaL) instruction package was developed based on the theory of constructivism and an adaptation of the online learning model. The study employed a single-group quasi-experimental design to ascertain the change in students' behaviour towards energy conservation after undergoing the intervention. The study involved 48 secondary school students in a Malaysian public school. The ANOVA repeated measures technique was employed to compare scores of students' behaviour towards energy conservation before and after the intervention. Based on the findings, students' behaviour towards energy conservation improved after the intervention.

  12. Multivariate linear models and repeated measurements revisited

    DEFF Research Database (Denmark)

    Dalgaard, Peter

    2009-01-01

    Methods for generalized analysis of variance based on multivariate normal theory have been known for many years. In a repeated measurements context, it is most often of interest to consider transformed responses, typically within-subject contrasts or averages. Efficiency considerations lead...

  13. On balanced minimal repeated measurements designs

    Directory of Open Access Journals (Sweden)

    Shakeel Ahmad Mir

    2014-10-01

    Repeated measurements designs are concerned with scientific experiments in which each experimental unit is assigned more than once to a treatment, either different or identical. This class of designs has the property that unbiased estimators for elementary contrasts among direct and residual effects are obtainable. Afsarinejad (1983) provided a method of constructing balanced minimal repeated measurements designs for p < t, when t is odd or a prime power; one or more treatments may occur more than once in some sequences, and the designs so constructed no longer remain uniform in periods. In this paper an attempt has been made to provide a new method to overcome this drawback. Specifically, two cases have been considered: RM[t, n = t(t-1)/(p-1), p], λ2 = 1, for balanced minimal repeated measurements designs, and RM[t, n = 2t(t-1)/(p-1), p], λ2 = 2, for balanced repeated measurements designs. In addition, a method has been provided for constructing extra-balanced minimal designs for the special case RM[t, n = t²/(p-1), p], λ2 = 1.

  14. Nonparametric additive regression for repeatedly measured data

    KAUST Repository

    Carroll, R. J.

    2009-05-20

    We develop an easily computed smooth backfitting algorithm for additive model fitting in repeated measures problems. Our methodology easily copes with various settings, such as when some covariates are the same over repeated response measurements. We allow for a working covariance matrix for the regression errors, showing that our method is most efficient when the correct covariance matrix is used. The component functions achieve the known asymptotic variance lower bound for the scalar argument case. Smooth backfitting also leads directly to design-independent biases in the local linear case. Simulations show our estimator has smaller variance than the usual kernel estimator. This is also illustrated by an example from nutritional epidemiology. © 2009 Biometrika Trust.

  15. Repeated measurement sampling in genetic association analysis with genotyping errors.

    Science.gov (United States)

    Lai, Renzhen; Zhang, Hong; Yang, Yaning

    2007-02-01

    Genotype misclassification occurs frequently in human genetic association studies. When cases and controls are subject to the same misclassification model, Pearson's chi-square test has the correct type I error but may lose power. Most current methods adjusting for genotyping errors assume that the misclassification model is known a priori or can be assessed by a gold standard instrument. In practical applications, however, the misclassification probabilities may not be completely known, or the gold standard method may be too costly to be available. The repeated measurement design provides an alternative approach for identifying misclassification probabilities. With this design, a proportion of the subjects are measured repeatedly (five or more repeats) for the genotypes when the error model is completely unknown. We investigate the applications of the repeated measurement method in genetic association analysis. A cost-effectiveness study shows that if the phenotyping-to-genotyping cost ratio or the misclassification rates are relatively large, repeat sampling can gain power over the regular case-control design. We also show that the power gain is not sensitive to the genetic model, genetic relative risk and the population high-risk allele frequency, all of which are typically important ingredients in association studies. An important implication of this result is that whatever the genetic factors are, the repeated measurement method can be applied if the genotyping errors must be accounted for or the phenotyping cost is high.

  16. Joint modeling of repeated multivariate cognitive measures and competing risks of dementia and death: a latent process and latent class approach.

    Science.gov (United States)

    Proust-Lima, Cécile; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2016-02-10

    Joint models initially dedicated to a single longitudinal marker and a single time-to-event need to be extended to account for the rich longitudinal data of cohort studies. Multiple causes of clinical progression are indeed usually observed, and multiple longitudinal markers are collected when the true latent trait of interest is hard to capture (e.g., quality of life, functional dependency, and cognitive level). These multivariate and longitudinal data also usually have nonstandard distributions (discrete, asymmetric, bounded, etc.). We propose a joint model based on a latent process and latent classes to analyze simultaneously such multiple longitudinal markers of different natures, and multiple causes of progression. A latent process model describes the latent trait of interest and links it to the observed longitudinal outcomes using flexible measurement models adapted to different types of data, and a latent class structure links the longitudinal and cause-specific survival models. The joint model is estimated in the maximum likelihood framework. A score test is developed to evaluate the assumption of conditional independence of the longitudinal markers and each cause of progression given the latent classes. In addition, individual dynamic cumulative incidences of each cause of progression based on the repeated marker data are derived. The methodology is validated in a simulation study and applied on real data about cognitive aging obtained from a large population-based study. The aim is to predict the risk of dementia by accounting for the competing death according to the profiles of semantic memory measured by two asymmetric psychometric tests.

  17. Bayesian model selection of informative hypotheses for repeated measurements

    NARCIS (Netherlands)

    Mulder, Joris; Klugkist, I.G.; Schoot, Rens van de; Meeus, W.H.J.; Selfhout, Maarten; Hoijtink, Herbert

    2010-01-01

    When analyzing repeated measurements data, researchers often have expectations about the relations between the measurement means. The expectations can often be formalized using equality and inequality constraints between (i) the measurement means over time, (ii) the measurement means between

  18. Bayesian model selection of informative hypotheses for repeated measurements

    NARCIS (Netherlands)

    Mulder, J.; Klugkist, I.G.; Van de Schoot, R.; Meeus, W.H.J.; van Zalk, M.H.W.; Hoijtink, H.J.A.

    2009-01-01

    When analyzing repeated measurements data, researchers often have expectations about the relations between the measurement means. The expectations can often be formalized using equality and inequality constraints between (i) the measurement means over time, (ii) the measurement means between groups,

  19. Analysis of repeated outcome measures from longitudinal studies

    Institute of Scientific and Technical Information of China (English)

    Yuanjia WANG; Naihua DUAN

    2011-01-01

    In many clinical studies repeated measurements of an outcome are collected over time. For example, in an 8-week study of treatment for obsessive compulsive disorder, the severity of the disorder may be measured weekly using the Yale-Brown-Obsessive-Compulsive-Disorder-Scale (YBOCS). For each study participant who completes the study, there will be nine repeated measures of YBOCS (a baseline assessment plus eight assessments during the course of treatment). Such a study, in which participants are followed and measured repeatedly over time, is called a longitudinal study and the resulting data are called longitudinal data.

  20. Airborne Repeat Pass Interferometry for Deformation Measurements

    NARCIS (Netherlands)

    Groot, J.; Otten, M.; Halsema, E. van

    2000-01-01

    In ground engineering the need for deformation measurements is urgent. SAR interferometry can be used to measure small (sub-wavelength) deformations. An experiment to investigate this for dike deformations was set up, using the C-band SAR system PHARUS (PHased ARray Universal SAR). This paper descri

  1. Discriminant analysis for repeated measures data: a review

    Directory of Open Access Journals (Sweden)

    Lisa Lix

    2010-09-01

    Discriminant analysis (DA) encompasses procedures for classifying observations into groups (i.e., predictive discriminative analysis) and describing the relative importance of variables for distinguishing amongst groups (i.e., descriptive discriminative analysis). In recent years, a number of developments have occurred in DA procedures for the analysis of data from repeated measures designs. Specifically, DA procedures have been developed for repeated measures data characterized by missing observations and/or unbalanced measurement occasions, as well as high-dimensional data in which measurements are collected repeatedly on two or more variables. This paper reviews the literature on DA procedures for univariate and multivariate repeated measures data, focusing on covariance pattern and linear mixed-effects models. A numeric example illustrates their implementation using SAS software.

  2. Stability of parameters in repeated TVA measures

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik

    Several recent studies have explored the limitations of human visual short-term memory or VSTM (e.g. Luck & Vogel, 1997; Wheeler & Treisman, 2002; Alvarez & Cavanagh, 2004). Usually researchers agree that VSTM is limited to a capacity of about 3 to 4 objects at any given moment (Cowan, 2001). Capacity of short-term memory is measured in a range of studies, often using the change detection paradigm (CD). However, the whole report paradigm (WR) may be a more reliable paradigm (Cusack, Lehmann, Veldsman, & Mitchell, 2009). Moreover, each individual WR trial yields more information compared to a CD...

  3. A Simple and Transparent Alternative to Repeated Measures ANOVA

    Directory of Open Access Journals (Sweden)

    James W. Grice

    2015-09-01

    Observation Oriented Modeling is a novel approach toward conceptualizing and analyzing data. Compared with traditional parametric statistics, Observation Oriented Modeling is more intuitive, relatively free of assumptions, and encourages researchers to stay close to their data. Rather than estimating abstract population parameters, the overarching goal of the analysis is to identify and explain distinct patterns within the observations. Selected data from a recent study by Craig et al. were analyzed using Observation Oriented Modeling; this analysis was contrasted with a traditional repeated measures ANOVA assessment. Various pitfalls in traditional parametric analyses were avoided when using Observation Oriented Modeling, including the presence of outliers and missing data. The differences between Observation Oriented Modeling and various parametric and nonparametric statistical methods were finally discussed.

  4. Correct use of repeated measures analysis of variance.

    Science.gov (United States)

    Park, Eunsik; Cho, Meehye; Ki, Chang-Seok

    2009-02-01

    In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
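
    The paper works in SPSS; as a hedged illustration of the same workflow in Python (a one-way repeated-measures ANOVA followed by Bonferroni-adjusted pairwise comparisons), with simulated data and hypothetical condition names:

```python
# Illustrative repeated-measures ANOVA plus Bonferroni-corrected follow-ups;
# not the paper's SPSS analysis, and the data are simulated.
import numpy as np
import pandas as pd
from itertools import combinations
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(5)
n_subjects, conditions = 20, ["baseline", "week4", "week8"]
scores = rng.normal([10.0, 9.0, 8.0], 2.0, size=(n_subjects, len(conditions)))
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), len(conditions)),
    "condition": np.tile(conditions, n_subjects),
    "score": scores.ravel(),
})

res = AnovaRM(df, depvar="score", subject="subject", within=["condition"]).fit()
print(res.anova_table)

# Follow-up: all pairwise paired t-tests with a Bonferroni correction.
pairs = list(combinations(range(len(conditions)), 2))
for i, j in pairs:
    t_stat, p = ttest_rel(scores[:, i], scores[:, j])
    print(f"{conditions[i]} vs {conditions[j]}: "
          f"p_adj = {min(p * len(pairs), 1.0):.4f}")
```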

  5. Bayesian Concordance Correlation Coefficient with Application to Repeatedly Measured Data

    Directory of Open Access Journals (Sweden)

    Atanu BHATTACHARJEE

    2015-10-01

    Full Text Available Objective: In medical research, Lin's classical concordance correlation coefficient (CCC is frequently applied to evaluate the similarity of the measurements produced by different raters or methods on the same subjects. It is particularly useful for continuous data. The objective of this paper is to propose the Bayesian counterpart to compute CCC for continuous data. Material and Methods: A total of 33 patients of astrocytoma brain treated in the Department of Radiation Oncology at Malabar Cancer Centre is enrolled in this work. It is a continuous data of tumor volume and tumor size repeatedly measured during baseline pretreatment workup and post surgery follow-ups for all patients. The tumor volume and tumor size are measured separately by MRI and CT scan. The agreement of measurement between MRI and CT scan is calculated through CCC. The statistical inference is performed through Markov Chain Monte Carlo (MCMC technique. Results: Bayesian CCC is found suitable to get prominent evidence for test statistics to explore the relation between concordance measurements. The posterior mean estimates and 95% credible interval of CCC on tumor size and tumor volume are observed with 0.96(0.87,0.99 and 0.98(0.95,0.99 respectively. Conclusion: The Bayesian inference is adopted for development of the computational algorithm. The approach illustrated in this work provides the researchers an opportunity to find out the most appropriate model for specific data and apply CCC to fulfill the desired hypothesis.

  6. Assessing agreement with repeated measures for random observers.

    Science.gov (United States)

    Chen, Chia-Cheng; Barnhart, Huiman X

    2011-12-30

    Agreement studies are often concerned with assessing whether different observers for measuring responses on the same subject or sample can produce similar results. The concordance correlation coefficient (CCC) is a popular index for assessing the closeness among observers for quantitative measurements. Usually, the CCC is used for data without and with replications based on subject and observer effects only. However, we cannot use this methodology if repeated measurements rather than replications are collected. Although there exist some CCC-type indices for assessing agreement with repeated measurements, there is no CCC for random observers and random time points. In this paper, we propose a new CCC for repeated measures where both observers and time points are treated as random effects. A simulation study demonstrates our proposed methodology, and we use vertebral body data and image data for illustrations.
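
    For reference, Lin's classical CCC for a single pair of observers (no replications) can be computed directly from sample moments; the extension to repeated measurements with random observers and time points proposed in this paper is more involved. The sketch below uses invented data.

```python
# A minimal sketch of Lin's classical concordance correlation coefficient.
import numpy as np

def lins_ccc(x, y):
    """CCC between two observers' measurements on the same subjects."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()   # population (1/n) variances, as in Lin (1989)
    sxy = np.mean((x - mx) * (y - my))
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

rng = np.random.default_rng(6)
truth = rng.normal(50, 10, 40)
obs1 = truth + rng.normal(0, 2, 40)
obs2 = 1.02 * truth + rng.normal(1.0, 2, 40)   # slight bias in the second observer
print(f"CCC = {lins_ccc(obs1, obs2):.3f}")
```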

  7. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
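
    The compound-symmetry figures quoted above can be reproduced with a short calculation. Assuming the standard ANCOVA variance factor for one baseline and r follow-up measures, f(rho) = (1 + (r-1)*rho)/r - rho^2 (in the style of Frison and Pocock), the conservative choice maximizes f over rho; this is a sketch of that logic, not the paper's code.

```python
# Conservative (worst-case over rho) design factor under compound symmetry.
import numpy as np

def conservative_factor(r, grid=np.linspace(0.0, 1.0, 10001)):
    f = (1.0 + (r - 1) * grid) / r - grid**2
    i = int(np.argmax(f))
    return grid[i], f[i]

for r in (2, 3, 4):
    rho_star, factor = conservative_factor(r)
    print(f"r={r}: worst-case rho={rho_star:.2f}, "
          f"sample size reduction ≈ {100 * (1 - factor):.0f}%")
# Prints reductions of about 44%, 56%, and 61%, matching the abstract.
```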

  8. Measurement and repeatability of interrupter resistance in unsedated newborn infants.

    Science.gov (United States)

    Adams, A M; Olden, C; Wertheim, D; Ives, A; Bridge, P D; Lenton, J; Seddon, P

    2009-12-01

    Interrupter resistance (Rint) is a useful measure of airway caliber in young children, but has not been well characterized in infants, in whom there are concerns about the accurate measurement of driving pressure. This study aimed to assess the feasibility and repeatability of measuring Rint in unsedated newborn infants, and to explore alternative algorithms for calculating driving pressure. Rint measurement was attempted in 28 healthy term newborn infants during natural sleep using the MicroRint device. Paired Rint measurements were achieved in 24 infants, but after screening of waveforms only 15 infants had at least 5 technically acceptable waveforms on both measurements. Rint values obtained were comparable with reported values for airflow resistance in newborns using other methods. However, the repeatability coefficient (CR) was much higher than reported values in preschool children using standard back-extrapolation algorithms, with CR 2.47 kPa·L⁻¹·s (unscreened) and 2.93 kPa·L⁻¹·s (screened). Other algorithms gave only marginally better repeatability, with all CR values over 50% of the mean Rint value. Using current commercially available equipment, Rint is too poorly repeatable to be a reliable measurement of airflow resistance in newborn infants. Lower deadspace equipment is needed, but anatomical and physiological factors in the infant are also important.

  9. Modeling repeated measurement data for occupational exposure assessment and epidemiology

    NARCIS (Netherlands)

    Peretz, Chava

    2004-01-01

    Repeated measurements designs occur frequently in the assessment of exposure to toxic chemicals. This thesis deals with the possibilities of using mixed effects models for occupational exposure assessment and in the analysis of exposure-response relationships. The model enables simultaneous estima

  10. Applying the General Linear Model to Repeated Measures Problems.

    Science.gov (United States)

    Pohlmann, John T.; McShane, Michael G.

    The purpose of this paper is to demonstrate the use of the general linear model (GLM) in problems with repeated measures on a dependent variable. Such problems include pretest-posttest designs, multitrial designs, and groups by trials designs. For each of these designs, a GLM analysis is demonstrated wherein full models are formed and restrictions…

  11. Positional Repeatability Measurements Of Stepper Motors At Cryogenic Temperatures

    Science.gov (United States)

    Pompea, Stephen M.; Hall, Michael S.; Bartko, Frank; Houck, James R.

    1983-08-01

    Stepper motors operating at liquid helium temperature have multiple applications in cryogenically-cooled telescopes such as the Shuttle Infrared Telescope Facility (SIRTF). These SIRTF applications include driving cryogen flow valves, operating the Multiple Instrument Chamber (MIC) beam splitter mechanism, and operating filters and grating wheel mechanisms in the scientific instruments. The positional repeatability of the beam splitter drive mechanism is especially critical since it feeds the optical beam to the scien-tific instruments. Despite these important applications, no significant data on the positional repeatability of stepper motors at cryogenic temperatures has been available. Therefore, we conducted a series of measurements to determine the positional repeatability of a modified, off-the-shelf Berger/Lahr stepper motor (model RDM 253/25, step angle 3.6°) which had demonstrated excellent performance in previous endurance testing at LHe temperature. These test results indicated that the positional repeatability of the motor was excellent at all temperatures, with somewhat better performance at cryogenic temperatures. Another important result was that the motor could be repeatedly turned off and on while still accurately retaining its rotor position.

  12. Repeatability of OCT lens thickness measures with age and accommodation.

    Science.gov (United States)

    Doyle, Lesley; Little, Julie-Anne; Saunders, Kathryn J

    2013-12-01

    To investigate crystalline lens thickness (LT) across a range of ages and accommodative demands and to evaluate the repeatability of LT measurements using the Visante Anterior Segment Optical Coherence Tomographer (AS-OCT) (Zeiss Meditec, Germany) under non-cycloplegic conditions. Participants were 98 visually normal adults aged 18-75 years, stratified into age groups of 18-29, 30-39, 40-49, 50-59, and 60-75 years of age. Images of the crystalline lens were taken using the Visante AS-OCT during stimulation of accommodation at demands of 0, 1, 2, 3, 4, 5, and 8 D with accommodative response measured in a subgroup of participants. Images were analyzed and LT measured assuming a refractive index of 1.42. Repeat measures were taken from 86 participants for each accommodative demand at a second visit. The mean unaccommodated LT for all participants was 4.07 ± 0.40 mm. An average increase in LT of 20 μm per year was calculated (linear regression, R² = 0.61, F(1,89) = 143.92, p < 0.001). The study reports the repeatability of LT measures using the Visante AS-OCT in the non-cyclopleged eye. It has also demonstrated the ability of the Visante AS-OCT to detect small changes in lens thickness with accommodation.

  13. Measuring Repeatability of the Focus-variable Lenses

    Directory of Open Access Journals (Sweden)

    Jan Řezníček

    2014-12-01

    In the field of photogrammetry, the optical system, usually represented by the glass lens, is used for metric purposes. Therefore, the aberration characteristics of such a lens, which induce deviations from projective imaging, have to be well known. However, the most important property of a metric lens is the stability of its glass and mechanical elements, ensuring long-term reliability of the measured parameters. In the case of a focus-variable lens, the repeatability of the lens setup is important as well. Lenses with a fixed focal length are usually considered as “fixed” though, in fact, most of them contain one or more movable glass elements, providing the focusing function. In cases where the lens is not equipped with fixing screws, the repeatability of the calibration parameters should be known. This paper derives simple mathematical formulas that can be used for measuring the repeatability of focus-variable lenses, and gives a demonstrative example of such a measurement. The given procedure has the advantage that only the demanded parameters are estimated; hence, no unwanted correlations with additional parameters exist. The test arrangement enables us to measure each demanded magnification of the optical system, which is important in close-range photogrammetry.

  14. Concordance correlation coefficients estimated by generalized estimating equations and variance components for longitudinal repeated measurements.

    Science.gov (United States)

    Tsai, Miao-Yu

    2017-04-15

    The concordance correlation coefficient (CCC) is a commonly accepted measure of agreement between two observers for continuous responses. This paper proposes a generalized estimating equations (GEE) approach allowing dependency between repeated measurements over time to assess intra-agreement for each observer and inter- and total agreement among multiple observers simultaneously. Furthermore, the indices of intra-, inter-, and total agreement through variance components (VC) from an extended three-way linear mixed model (LMM) are also developed with consideration of the correlation structure of longitudinal repeated measurements. Simulation studies are conducted to compare the performance of the GEE and VC approaches for repeated measurements from longitudinal data. An application to an optometric conformity study is used for illustration. In conclusion, the GEE approach, which allows flexibility in model assumptions and correlation structures of repeated measurements, gives satisfactory results with small mean square errors and nominal 95% coverage rates for large data sets; when the assumption of the relationship between variances and covariances for the extended three-way LMM holds, the VC approach performs outstandingly well for all sample sizes. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Capturing learning effects on eye movements in repeated measures experiments

    DEFF Research Database (Denmark)

    Bagger, Martin; Orquin, Jacob Lund; Fiedler, Susann

    We propose and illustrate that repeated exposure to stimulus sets increases the size of the saccade amplitudes. Saccadic amplitudes are closely related to the perceptual span and are therefore used as a measure of information intake in an experiment. Studies on expertise have shown that experts ... experiment in which 68 participants made choices between four alternatives with three different between-subject conditions varying in presentation format (a verbal matrix, a pictorial matrix, and a realistic product representation). The results consistently demonstrate an increase of the saccade amplitude over the course of the experiment, independent of condition. We conclude by discussing our results in the light of a possible increase of the perceptual span and its implications for the research procedure in eye-tracking experiments with a repeated measurement design...

  16. Repeated quantitative perfusion and contrast permeability measurement in the MRI examination of a CNS tumor

    Energy Technology Data Exchange (ETDEWEB)

    Vonken, E.P.A.; Osch, M.J.P. van; Willems, P.W.A.; Zwan, A. van der; Bakker, C.J.G.; Viergever, M.A.; Mali, W.P.T.M. [University Hospital Utrecht (Netherlands)

    2000-09-01

    This study reports on the results of quantitative MRI perfusion and contrast permeability measurement on two occasions in one patient. The measurements were separated by 81 days. The tumor grew considerably in this period, but no change was found with respect to perfusion and contrast permeability. Non-involved white matter values were reproduced to demonstrate repeatability. The presented approach to dynamic susceptibility contrast MRI allows fast and repeatable quantitative assessment of perfusion and is easily integrated into a conventional brain tumor protocol. (orig.)

  17. Hierarchical linear model: thinking outside the traditional repeated-measures analysis-of-variance box.

    Science.gov (United States)

    Lininger, Monica; Spybrook, Jessaca; Cheatham, Christopher C

    2015-04-01

    Longitudinal designs are common in the field of athletic training. For example, in the Journal of Athletic Training from 2005 through 2010, authors of 52 of the 218 original research articles used longitudinal designs. In 50 of the 52 studies, a repeated-measures analysis of variance was used to analyze the data. A possible alternative to this approach is the hierarchical linear model, which has been readily accepted in other medical fields. In this short report, we demonstrate the use of the hierarchical linear model for analyzing data from a longitudinal study in athletic training. We discuss the relevant hypotheses, model assumptions, analysis procedures, and output from the HLM 7.0 software. We also examine the advantages and disadvantages of using the hierarchical linear model with repeated measures and repeated-measures analysis of variance for longitudinal data.

  18. Hierarchical Linear Model: Thinking Outside the Traditional Repeated-Measures Analysis-of-Variance Box

    Science.gov (United States)

    Lininger, Monica; Spybrook, Jessaca; Cheatham, Christopher C.

    2015-01-01

    Longitudinal designs are common in the field of athletic training. For example, in the Journal of Athletic Training from 2005 through 2010, authors of 52 of the 218 original research articles used longitudinal designs. In 50 of the 52 studies, a repeated-measures analysis of variance was used to analyze the data. A possible alternative to this approach is the hierarchical linear model, which has been readily accepted in other medical fields. In this short report, we demonstrate the use of the hierarchical linear model for analyzing data from a longitudinal study in athletic training. We discuss the relevant hypotheses, model assumptions, analysis procedures, and output from the HLM 7.0 software. We also examine the advantages and disadvantages of using the hierarchical linear model with repeated measures and repeated-measures analysis of variance for longitudinal data. PMID:25875072

  19. A Novel Signal Processing Measure to Identify Exact and Inexact Tandem Repeat Patterns in DNA Sequences

    Directory of Open Access Journals (Sweden)

    Ravi Gupta

    2007-03-01

    The identification and analysis of repetitive patterns are active areas of biological and computational research. Tandem repeats in telomeres play a role in cancer, and hypervariable trinucleotide tandem repeats are linked to over a dozen major neurodegenerative genetic disorders. In this paper, we present an algorithm to identify exact and inexact repeat patterns in DNA sequences based on an orthogonal exactly periodic subspace decomposition technique. Using the new measure, our algorithm resolves problems such as whether the repeat pattern is of period P or its multiple (i.e., 2P, 3P, etc.), and several other problems that were present in previous signal-processing-based algorithms. We present an efficient algorithm of O(N·Lw·log Lw), where N is the length of the DNA sequence and Lw is the window length, for identifying repeats. The algorithm operates in two stages. In the first stage, each nucleotide is analyzed separately for periodicity, and in the second stage, the periodic information of each nucleotide is combined to identify the tandem repeats. Datasets having exact and inexact repeats were taken up for the experiments. The experimental results show the effectiveness of the approach.
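
    The orthogonal exactly-periodic subspace decomposition used in the paper is more sophisticated, but the common signal-processing starting point it shares with earlier methods (per-nucleotide indicator sequences and a windowed power spectrum) can be sketched briefly; the sequence below is synthetic.

```python
# Highlight a tandem repeat by summing DFT power spectra of the four
# nucleotide indicator sequences over a window; illustrative sketch only.
import numpy as np

def indicator_spectra(seq, window_len=120):
    """Sum of DFT power spectra of the four nucleotide indicator sequences."""
    seq = seq.upper()
    power = np.zeros(window_len)
    for base in "ACGT":
        u = np.array([1.0 if ch == base else 0.0 for ch in seq[:window_len]])
        u -= u.mean()                      # remove the DC component
        power += np.abs(np.fft.fft(u)) ** 2
    return power

rng = np.random.default_rng(7)
background = "".join(rng.choice(list("ACGT"), 60))
repeat_region = "CAG" * 20                 # an exact trinucleotide tandem repeat
spectrum = indicator_spectra(background + repeat_region, window_len=120)
k = int(np.argmax(spectrum[1:61])) + 1     # dominant frequency index (one-sided)
print(f"dominant period ≈ {120 / k:.1f} bases")
```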

  20. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp

  1. Unitarity, Feedback, Interactions -- Dynamics Emergent from Repeated Measurements

    CERN Document Server

    Altamirano, Natacha; Mann, Robert B; Zych, Magdalena

    2016-01-01

    Motivated by the recent efforts to describe the gravitational interaction as a classical channel arising from continuous quantum measurements, we study what types of dynamics can emerge from a model of repeated short interactions of a system with a set of ancillae. We show that contingent on the model parameters the resulting dynamics ranges from exact unitarity to arbitrary fast decoherence (quantum Zeno effect). For a series of measurements the effective dynamics includes feedback-control, which for a composite system yields effective interactions between the subsystems. We quantify the amount of decoherence accompanying such induced interactions, generalizing the lower bound of the gravitational example. However, by allowing multipartite measurements, the interactions can be induced with arbitrary low decoherence. Our results have implications for gravity-inspired decoherence models and the simple framework used in the present study can find applications in devising novel quantum control protocols, or quan...

  2. Detecting tropical forest biomass dynamics from repeated airborne lidar measurements

    Directory of Open Access Journals (Sweden)

    V. Meyer

    2013-08-01

    Reducing the uncertainty of the terrestrial carbon cycle depends strongly on the accurate estimation of changes of global forest carbon stock. However, this is a challenging problem for either ground surveys or remote sensing techniques in tropical forests. Here, we examine the feasibility of estimating changes of tropical forest biomass from two airborne lidar measurements of forest height acquired about 10 yr apart over Barro Colorado Island (BCI), Panama. We used the forest inventory data from the 50 ha Center for Tropical Forest Science (CTFS) plot, collected every 5 yr during the study period, to calibrate the estimation. We compared two approaches for detecting changes in forest aboveground biomass (AGB): (1) relating changes in lidar height metrics from the two sensors directly to changes in ground-estimated biomass; and (2) estimating biomass from each lidar sensor and then computing changes in biomass from the difference of the two biomass estimates, using two models, namely one model based on five relative height metrics and the other based only on mean canopy height (MCH). We performed the analysis at different spatial scales from 0.04 ha to 10 ha. Method (1) had large uncertainty in directly detecting biomass changes at scales smaller than 10 ha, but provided detailed information about changes of forest structure. The magnitude of error associated with both the mean biomass stock and the mean biomass change declined with increasing spatial scale. Method (2) was accurate at the 1 ha scale for estimating AGB stocks (R² = 0.7 and RMSE_mean = 27.6 Mg ha⁻¹). However, to predict biomass changes, errors became comparable to ground estimates only at a spatial scale of about 10 ha or more. Biomass changes were in the same direction at the spatial scale of 1 ha in 60 to 64% of the subplots, corresponding to p values of 0.1 and 0.033, respectively. Large errors in estimating biomass changes from lidar data resulted from the uncertainty in detecting changes at 1 ha from ground

  3. Accuracy and repeatability of a new method for measuring facet loads in the lumbar spine.

    Science.gov (United States)

    Wilson, Derek C; Niosi, Christina A; Zhu, Qingan A; Oxland, Thomas R; Wilson, David R

    2006-01-01

    We assessed the repeatability and accuracy of a relatively new, resistance-based sensor (Tekscan 6900) for measuring lumbar spine facet loads, pressures, and contact areas in cadaver specimens. Repeatability of measurements in the natural facet joint was determined for five trials of four specimens loaded in pure moment (+/- 7.5 N m) flexibility tests in axial rotation and flexion-extension. Accuracy of load measurements in four joints was assessed by applying known compressive loads of 25, 50, and 100 N to the natural facet joint in a materials testing machine and comparing the known applied load to the measured load. Measurements of load were obtained using two different calibration approaches: linear and two-point calibrations. Repeatability for force, pressure, and area (average of standard deviation as a percentage of the mean for all trials over all specimens) was 4-6% for axial rotation and 7-10% for extension. Peak resultant force in axial rotation was 30% smaller when calculated using the linear calibration method. The Tekscan sensor overestimated the applied force by 18 +/- 9% (mean+/-standard deviation), 35 +/- 7% and 50 +/- 9% for compressive loads of 100, 50, and 25 N, respectively. The two-point method overestimated the loads by 35 +/- 16%, 45 +/- 7%, and 56 +/- 10% for the same three loads. Our results show that the Tekscan sensor is repeatable. However, the sensor measurement range is not optimal for the small loads transmitted by the facets and measurement accuracy is highly dependent on calibration protocol.

  4. Repeated High-Precision Gravity and GPS Measurement Techniques

    Science.gov (United States)

    Gettings, P.; Harris, R. N.; Allis, R.; Chapman, D. S.

    2003-12-01

    Repeated high-precision gravity and GPS measurements are becoming a common tool for tracking changes in subsurface reservoirs. Despite this, there is little literature that discusses measurement techniques and the expected errors. Our research has focused on improving measurement techniques to be applied to ground water and geothermal steam reservoirs, including quantifying the minimum error levels with modern equipment. We applied these methods in two studies: ground water monitoring of the southern Salt Lake valley, Utah, USA, and steam monitoring of The Geysers geothermal field, California, USA. Gravity measurements using modern relative high-precision meters, such as Scintrex CG-3Ms or L&R E series, can now be routinely made to an accuracy of 5 μGal. Such accuracy requires the use of time series analysis at each station, and non-linear instrument drift functions. Modern computerized meters are capable of internally storing a time series of measurements for each station; older meters can often be fitted to log such data to a field computer. This time series, typically of 10-15 minute duration in our work, can then be analyzed in several ways to produce stable estimates of the gravity reading. In particular, our research has emphasized using a weighted arithmetic average (for long occupations) or a Thiele extrapolation scheme (for shorter station occupations). Instrument drift is removed through a superposition of a linear long-term drift function and an empirical staircase function formed from differences between repeated station occupations. To achieve high-accuracy GPS measurements while maximizing the number of field stations in a survey, rapid-static measurements are necessary. We have tested the effect of occupation time and processing schemes on the absolute accuracy of the resulting GPS position. Using a post-processing differential method with a fixed (but not necessarily continuous) base station within 15 km, positioning error of <4 cm vertical is

  5. Quasar Variability Measurements With SDSS Repeated Imaging and POSS Data

    CERN Document Server

    Ivezic, Z; Juric, M; Anderson, S; Hall, P B; Richards, G T; Rockosi, C M; Vanden Berk, Daniel E; Turner, E L; Knapp, G R; Gunn, J E; Schlegel, D J; Strauss, M A; Schneider, D P

    2004-01-01

    We analyze the properties of quasar variability using repeated SDSS imaging data in five UV-to-far red photometric bands, accurate to 0.02 mag, for 13,000 spectroscopically confirmed quasars. The observed time lags span the range from 3 hours to over 3 years, and constrain the quasar variability for rest-frame time lags of up to two years, and at rest-frame wavelengths from 1000 Ang. to 6000 Ang. We demonstrate that 66,000 SDSS measurements of magnitude differences can be described within the measurement noise by a simple function of only three free parameters. The addition of POSS data constrains the long-term behavior of quasar variability and provides evidence for a turn-over in the structure function. This turn-over indicates that the characteristic time scale for optical variability of quasars is of the order 1 year.

  6. Dispersion Measure Variation of Repeating Fast Radio Burst Sources

    Science.gov (United States)

    Yang, Yuan-Pei; Zhang, Bing

    2017-09-01

    The repeating fast radio burst (FRB) 121102 was recently localized in a dwarf galaxy at a cosmological distance. The dispersion measure (DM) derived for each burst from FRB 121102 so far has not shown significant evolution, even though an apparent increase was recently seen with newly detected VLA bursts. It is expected that more repeating FRB sources may be detected in the future. In this work, we investigate a list of possible astrophysical processes that might cause DM variation of a particular FRB source. The processes include (1) cosmological scale effects such as Hubble expansion and large-scale structure fluctuations; (2) FRB local effects such as gas density fluctuation, expansion of a supernova remnant (SNR), a pulsar wind nebula, and an H ii region; and (3) the propagation effect due to plasma lensing. We find that the DM variations contributed by the large-scale structure are extremely small, and any observable DM variation is likely caused by the plasma local to the FRB source. In addition to mechanisms that decrease DM over time, we suggest that an FRB source in an expanding SNR around a nearly neutral ambient medium during the deceleration (Sedov–Taylor and snowplow) phases or in a growing H ii region can increase DM. Some effects (e.g., an FRB source moving in an H ii region or plasma lensing) can produce either positive or negative DM variations. Future observations of DM variations of FRB 121102 and other repeating FRB sources can provide important clues regarding the physical origin of these sources.

  7. Coseismic and postseismic velocity changes measured by repeating earthquakes

    Science.gov (United States)

    Schaff, David P.; Beroza, Gregory C.

    2004-10-01

    Repeating earthquakes that rupture approximately the same fault patch and have nearly identical waveforms are a useful tool for measuring temporal changes in wave propagation in the Earth's crust. Since source and path effects are common to all earthquakes in a repeating earthquake sequence (multiplet), differences in their waveforms can be attributed to changes in the characteristics of the medium. We have identified over 20 multiplets containing between 5 and 40 repeating events in the aftershock zones of the 1989 Loma Prieta and 1984 Morgan Hill, California, earthquakes. Postmain shock events reveal delays of phases in the early S wave coda of as much as 0.2 s relative to premain shock events. The delay amounts to a path-averaged coseismic velocity decrease of about 1.5% for P waves and 3.5% for S waves. Since most of the multiplets are aftershocks and follow Omori's law, we have excellent temporal sampling in the immediate postmain shock period. We find that the amplitude of the velocity decrease decays logarithmically in time following the main shock. In some cases it returns to the premain shock values, while in others it does not. Similar results are obtained for the Morgan Hill main shock. Because the fractional change in S wave velocity is greater than the fractional change in P wave velocity, it suggests that the opening or connection of fluid-filled fractures is the underlying cause. The magnitude of the velocity change implies that low effective pressures are present in the source region of the velocity change. Our results suggest that the changes are predominantly near the stations and shallow, but we cannot exclude the possibility that changes occur at greater depth as well. If the variations are shallow, we may be detecting the lingering effects of nonlinearity during main shock strong ground motion. If the variations are deep, it suggests that pore pressures at seismogenic depths are high, which would likely play a key role in the earthquake process.

  8. Matrix-based concordance correlation coefficient for repeated measures.

    Science.gov (United States)

    Hiriote, Sasiprapa; Chinchilli, Vernon M

    2011-09-01

    In many clinical studies, Lin's concordance correlation coefficient (CCC) is a common tool to assess the agreement of a continuous response measured by two raters or methods. However, the need for measures of agreement may arise for more complex situations, such as when the responses are measured on more than one occasion by each rater or method. In this work, we propose a new CCC in the presence of repeated measurements, called the matrix-based concordance correlation coefficient (MCCC), based on a matrix norm that possesses the properties needed to characterize the level of agreement between two p × 1 vectors of random variables. It can be shown that the MCCC reduces to Lin's CCC when p = 1. For inference, we propose an estimator for the MCCC based on U-statistics. Furthermore, we derive the asymptotic distribution of the estimator of the MCCC, which is proven to be normal. The simulation studies confirm that overall in terms of accuracy, precision, and coverage probability, the estimator of the MCCC works very well in general cases, especially when n is greater than 40. Finally, we use real data from an Asthma Clinical Research Network (ACRN) study and the Penn State Young Women's Health Study for demonstration.

  9. [Analysis of binary classification repeated measurement data with GEE and GLMMs using SPSS software].

    Science.gov (United States)

    An, Shengli; Zhang, Yanhong; Chen, Zheng

    2012-12-01

    To analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were tested on a sample of binary repeated measurement data using SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.
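
    For readers working outside SPSS, the same GEE idea can be sketched in Python with statsmodels; the simulated data, variable names, and the exchangeable working correlation below are illustrative assumptions, not the study's data or settings.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated data: 50 subjects, 4 visits each, binary outcome
rng = np.random.default_rng(1)
n, t = 50, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), t),
    "visit": np.tile(np.arange(t), n),
    "treat": np.repeat(rng.integers(0, 2, n), t),   # treatment constant per subject
})
logit = -0.5 + 0.8 * df["treat"] + 0.2 * df["visit"]
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GEE with a logit link and an exchangeable working correlation within subjects
gee = smf.gee("y ~ treat + visit", groups="subject", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())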

  10. Rank-Based Analysis of Unbalanced Repeated Measures Data

    Directory of Open Access Journals (Sweden)

    M. Mushfiqur Rashid

    2012-07-01

    Full Text Available In this article, we have developed a rank-based (intra-subject) analysis of clinical trials with unbalanced repeated measures data. We assume that the errors within each patient are exchangeable and continuous random variables. This rank-based inference is valid when the unbalanced data are missing either completely at random or by design. A drop in dispersion test is developed for general linear hypotheses. A numerical example is given to illustrate the procedure.

  11. Repeated swim stress alters brain benzodiazepine receptors measured in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Weizman, R.; Weizman, A.; Kook, K.A.; Vocci, F.; Deutsch, S.I.; Paul, S.M.

    1989-06-01

    The effects of repeated swim stress on brain benzodiazepine receptors were examined in the mouse using both an in vivo and in vitro binding method. Specific in vivo binding of [³H]Ro15-1788 to benzodiazepine receptors was decreased in the hippocampus, cerebral cortex, hypothalamus, midbrain and striatum after repeated swim stress (7 consecutive days of daily swim stress) when compared to nonstressed mice. In vivo benzodiazepine receptor binding was unaltered after repeated swim stress in the cerebellum and pons medulla. The stress-induced reduction in in vivo benzodiazepine receptor binding did not appear to be due to altered cerebral blood flow or to an alteration in benzodiazepine metabolism or biodistribution because there was no difference in [¹⁴C]iodoantipyrine distribution or whole brain concentrations of clonazepam after repeated swim stress. Saturation binding experiments revealed a change in both apparent maximal binding capacity and affinity after repeated swim stress. Moreover, a reduction in clonazepam's anticonvulsant potency was also observed after repeated swim stress (an increase in the ED50 dose for protection against pentylenetetrazol-induced seizures), although there was no difference in pentylenetetrazol-induced seizure threshold between the two groups. In contrast to the results obtained in vivo, no change in benzodiazepine receptor binding kinetics was observed using the in vitro binding method. These data suggest that environmental stress can alter the binding parameters of the benzodiazepine receptor and that the in vivo and in vitro binding methods can yield substantially different results.

  12. Can Repeated Painful Blunt Impact Deter Approach Toward a Goal?

    Science.gov (United States)

    2012-11-29

    Keywords: paintball, motivation, intrinsic motivation, extrinsic motivation, projectile, human behavioral experimentation, human behavior, Target Behavioral... Can repeated painful blunt impact deter individuals' movements and approach/avoidance choices? A prior study had been conducted that incorporated explicit social and monetary rewards and more layers of... deterrence. The present study used greater force, on people who were not rewarded for approaching, in a simple approach task, with more blunt impact hits on subjects.

  13. [Analysis of variance of repeated data measured by water maze with SPSS].

    Science.gov (United States)

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who adopt a repeated measures design. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there are correlations among the repeatedly measured data; if sphericity is violated, appropriately corrected results should be used. The SPSS statistical package is available to fulfil this process.
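
    A comparable one-way repeated measures ANOVA can also be run outside SPSS, for example with statsmodels' AnovaRM; the water-maze-style data below are simulated and the variable names are hypothetical, and note that AnovaRM itself does not perform Mauchly's sphericity test, which would need a separate tool.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated water-maze escape latencies: 12 animals measured on 5 training days
rng = np.random.default_rng(2)
animals, days = 12, 5
df = pd.DataFrame({
    "animal": np.repeat(np.arange(animals), days),
    "day": np.tile(np.arange(1, days + 1), animals),
})
df["latency"] = 60 - 8 * df["day"] + rng.normal(0, 6, len(df))

# One-way within-subject (repeated measures) ANOVA on the factor "day"
res = AnovaRM(df, depvar="latency", subject="animal", within=["day"]).fit()
print(res)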

  14. Measuring environmental change in forest ecosystems by repeated soil sampling: a North American perspective

    Science.gov (United States)

    Lawrence, Gregory B.; Fernandez, Ivan J.; Richter, Daniel D.; Ross, Donald S.; Hazlett, Paul W.; Bailey, Scott W.; Ouimet, Rock; Warby, Richard A.F.; Johnson, Arthur H.; Lin, Henry; Kaste, James M.; Lapenis, Andrew G.; Sullivan, Timothy J.

    2013-01-01

    Environmental change is monitored in North America through repeated measurements of weather, stream and river flow, air and water quality, and most recently, soil properties. Some skepticism remains, however, about whether repeated soil sampling can effectively distinguish between temporal and spatial variability, and efforts to document soil change in forest ecosystems through repeated measurements are largely nascent and uncoordinated. In eastern North America, repeated soil sampling has begun to provide valuable information on environmental problems such as air pollution. This review synthesizes the current state of the science to further the development and use of soil resampling as an integral method for recording and understanding environmental change in forested settings. The origins of soil resampling reach back to the 19th century in England and Russia. The concepts and methodologies involved in forest soil resampling are reviewed and evaluated through a discussion of how temporal and spatial variability can be addressed with a variety of sampling approaches. Key resampling studies demonstrate the type of results that can be obtained through differing approaches. Ongoing, large-scale issues such as recovery from acidification, long-term N deposition, C sequestration, effects of climate change, impacts from invasive species, and the increasing intensification of soil management all warrant the use of soil resampling as an essential tool for environmental monitoring and assessment. Furthermore, with better awareness of the value of soil resampling, studies can be designed with a long-term perspective so that information can be efficiently obtained well into the future to address problems that have not yet surfaced.

  15. Estimation of the concordance correlation coefficient for repeated measures using SAS and R.

    Science.gov (United States)

    Carrasco, Josep L; Phillips, Brenda R; Puig-Martinez, Josep; King, Tonya S; Chinchilli, Vernon M

    2013-03-01

    The concordance correlation coefficient is one of the most common approaches used to assess agreement among different observers or instruments when the outcome of interest is a continuous variable. A SAS macro and R package are provided here to estimate the concordance correlation coefficient (CCC) where the design of the data involves repeated measurements by subject and observer. The CCC is estimated using U-statistics (UST) and variance components (VC) approaches. Confidence intervals and standard errors are reported along with the point estimate of the CCC. In the case of the VC approach, the linear mixed model output and variance components estimates are also provided. The performance of each function is shown by means of some examples with real data sets.
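
    The SAS macro and R package referenced above implement U-statistics and variance-components estimators for repeated measures; as a much simpler point of reference, the sketch below computes only the basic (single-measurement) Lin's CCC from its moment definition, with simulated rater data as a stand-in.

import numpy as np

def lins_ccc(x, y):
    """Basic Lin's concordance correlation coefficient for paired measurements."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population (biased) variances
    sxy = np.mean((x - mx) * (y - my))   # covariance
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

# Hypothetical agreement check between two raters measuring the same 30 subjects
rng = np.random.default_rng(3)
truth = rng.normal(50, 10, 30)
rater1 = truth + rng.normal(0, 2, 30)
rater2 = truth + 1.5 + rng.normal(0, 2, 30)   # rater 2 reads slightly high
print(round(lins_ccc(rater1, rater2), 3))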

  16. Using repeated measures of sleep disturbances to predict future diagnosis-specific work disability

    DEFF Research Database (Denmark)

    Salo, Paula; Vahtera, Jussi; Hall, Martica

    2012-01-01

    It is unknown whether or not measuring sleep disturbances repeatedly, rather than at only one point in time, improves prediction of work disability.

  17. Intelligence Is in the Eye of the Beholder: Investigating Repeated IQ Measurements in Forensic Psychiatry

    Science.gov (United States)

    Habets, Petra; Jeandarme, Inge; Uzieblo, Kasia; Oei, Karel; Bogaerts, Stefan

    2015-01-01

    Background: A stable assessment of cognition is of paramount importance for forensic psychiatric patients (FPP). The purpose of this study was to compare repeated measures of IQ scores in FPPs with and without intellectual disability. Methods: Repeated measurements of IQ scores in FPPs (n = 176) were collected. Differences between tests were…

  19. Assessment of bone loss with repeated bone mineral measurements: Application to measurements on the individual patient

    Energy Technology Data Exchange (ETDEWEB)

    Wahner, H.W.

    1987-02-01

    Longitudinal measurements on lumbar spine and mid-radius were made by bone absorptiometry techniques in 139 normal women. Bone mineral was measured every 6 months over a median interval of 2.1 years. The results revealed that bone loss at different skeletal sites is non-uniform, with equal bone loss patterns in all patients and relatively small variations in bone loss rate between normal women. Achieving these results places strong demands on precision and on properly spaced measurement intervals for long-term rate-of-loss measurements. To exclude progressive degenerative disease, a radiographic evaluation of the spine at the beginning and at the end of the study is mandatory, as compression fractures or trauma produce bone mineral changes independent of the age-related bone loss. These repeated bone mineral measurements are useful for monitoring and follow-up studies during different therapeutic regimens.

  20. Multiple-objective response-adaptive repeated measurement designs in clinical trials for binary responses.

    Science.gov (United States)

    Liang, Yuanyuan; Li, Yin; Wang, Jing; Carriere, Keumhee C

    2014-02-20

    A multiple-objective allocation strategy was recently proposed for constructing response-adaptive repeated measurement designs for continuous responses. We extend the allocation strategy to constructing response-adaptive repeated measurement designs for binary responses. The approach with binary responses is quite different from the continuous case, as the information matrix is a function of responses, and it involves nonlinear modeling. To deal with these problems, we first build the design on the basis of success probabilities. Then we illustrate how various models can accommodate carryover effects on the basis of logits of response profiles as well as any correlation structure. Through computer simulations, we find that the allocation strategy developed for continuous responses also works well for binary responses. As expected, design efficiency in terms of mean squared error drops sharply, as more emphasis is placed on increasing treatment benefit than estimation precision. However, we find that it can successfully allocate more patients to better treatment sequences without sacrificing much estimation precision.

  1. Shortening trinucleotide repeats using highly specific endonucleases: a possible approach to gene therapy?

    Science.gov (United States)

    Richard, Guy-Franck

    2015-04-01

    Trinucleotide repeat expansions are involved in more than two dozen neurological and developmental disorders. Conventional therapeutic approaches aimed at regulating the expression level of affected genes, which rely on drugs, oligonucleotides, and/or transgenes, have met with only limited success so far. An alternative approach is to shorten repeats to non-pathological lengths using highly specific nucleases. Here, I review early experiments using meganucleases, zinc-finger nucleases (ZFN), and transcription activator-like effector nucleases (TALENs) to contract trinucleotide repeats, and discuss the possibility of using CRISPR-Cas nucleases to the same end. Although this is a nascent field, I explore the possibility of designing nucleases and effectively delivering them in the context of gene therapy.

  2. Repeatability of gait pattern variables measured by use of extremity-mounted inertial measurement units in nonlame horses during trotting.

    Science.gov (United States)

    Cruz, Antonio M; Maninchedda, Ugo E; Burger, Dominik; Wanda, Sabine; Vidondo, Beatriz

    2017-09-01

    OBJECTIVE To determine repeatability of gait variables measured by use of extremity-mounted inertial measurement units (IMUs) in nonlame horses during trotting under controlled conditions of treadmill exercise. ANIMALS 10 horses. PROCEDURES Six IMUs were strapped to the metacarpal, metatarsal, and distal tibial regions of each horse. Data were collected in a standardized manner (3 measurements/d on 3 d/wk over a 3-week period) while each horse was trotted on a treadmill. Every measurement consisted of a minimum of 20 strides from which a minimum of 10 strides was selected for analysis. Spatial and temporal variables were derived from the IMUs. Repeatability coefficients based on the within-subject SD were computed for each gait analysis variable at each week. RESULTS Most of the temporal and spatial variables had high repeatability. Some variables, specifically the symmetry variables (which were calculated from other variables), had somewhat higher repeatability coefficients (ie, lower repeatability) only in the last week. CONCLUSIONS AND CLINICAL RELEVANCE With the exceptions of some symmetry variables, which may reflect individual variations during movement, the extremity-mounted IMUs provided data with high repeatability for nonlame horses trotting under controlled conditions of treadmill exercise. Repeatability was achieved for each instrumented limb segment with regard to the spatial relationship between 2 adjacent segments (joint angles) and the temporal relationship among all segments (limb phasing). Extremity-mounted IMUs could have the potential to become a method for gait analysis in horses.

  3. Causal inference in longitudinal comparative effectiveness studies with repeated measures of a continuous intermediate variable.

    Science.gov (United States)

    Wang, Chen-Pin; Jo, Booil; Brown, C Hendricks

    2014-09-10

    We propose a principal stratification approach to assess causal effects in nonrandomized longitudinal comparative effectiveness studies with a binary endpoint outcome and repeated measures of a continuous intermediate variable. Our method is an extension of the principal stratification approach originally proposed for the longitudinal randomized study "Prevention of Suicide in Primary Care Elderly: Collaborative Trial" to assess the treatment effect on the continuous Hamilton depression score adjusting for the heterogeneity of repeatedly measured binary compliance status. Our motivation for this work comes from a comparison of the effect of two glucose-lowering medications on a clinical cohort of patients with type 2 diabetes. Here, we consider a causal inference problem assessing how well the two medications work relative to one another on two binary endpoint outcomes: cardiovascular disease-related hospitalization and all-cause mortality. Clinically, these glucose-lowering medications can have differential effects on the intermediate outcome, glucose level over time. Ultimately, we want to compare medication effects on the endpoint outcomes among individuals in the same glucose trajectory stratum while accounting for the heterogeneity in baseline covariates (i.e., to obtain 'principal effects' on the endpoint outcomes). The proposed method involves a three-step model estimation procedure. Step 1 identifies principal strata associated with the intermediate variable using hybrid growth mixture modeling analyses. Step 2 obtains the stratum membership using the pseudoclass technique and derives propensity scores for treatment assignment. Step 3 obtains the stratum-specific treatment effect on the endpoint outcome weighted by inverse propensity probabilities derived from Step 2.

  4. Analysis of repeated measurements from medical research when observations are missing

    OpenAIRE

    Walker, K.

    2007-01-01

    Subject dropout is a common problem in repeated measurements health studies. Where dropout is related to the response, the results obtained can be substantially biased. The research in this thesis is motivated by a repeated measurements asthma clinical trial with substantial patient dropout. In practice the extent to which missing observations affect parameter estimates and their efficiency is not clear. Through extensive simulation studies under various scenarios and missing data mechanism...

  5. Mixed double-embryo transfer: A promising approach for patients with repeated implantation failure.

    Science.gov (United States)

    Stamenov, Georgi Stamenov; Parvanov, Dimitar Angelov; Chaushev, Todor Angelov

    2017-06-01

    The purpose of this study was to evaluate the efficacy of frozen mixed double-embryo transfer (MDET; the simultaneous transfer of day 3 and day 5 embryos) in comparison with frozen blastocyst double-embryo transfer (BDET; transfer of two day 5 blastocysts) in patients with repeated implantation failure (RIF). A total of 104 women with RIF who underwent frozen MDET (n=48) or BDET (n=56) with excellent-quality embryos were included in this retrospective analysis. All frozen embryo transfers were performed in natural cycles. The main outcome measures were the implantation rate, clinical pregnancy rate, multiple pregnancy rate, and miscarriage rate. These measures were compared between the patients who underwent MDET or BDET using the chi-square test or the Fisher exact test, as appropriate. The implantation and clinical pregnancy rates were significantly higher in patients who underwent MDET than in those who underwent BDET (60.4% vs. 39.3%, p=0.03 and 52.1% vs. 30.4%, p=0.05, respectively). A significantly lower miscarriage rate was observed in the MDET group (6.9% vs. 10.7%, p=0.05). In addition, the multiple pregnancy rate was slightly, but not significantly, higher in the MDET group (27.1% vs. 25.0%). MDET was found to be significantly superior to double blastocyst transfer. It could be regarded as an appropriate approach to improve in vitro fertilization success rates in RIF patients.

  6. Repeatability of Ocular Measurements with a Dual-Scheimpflug Analyzer in Healthy Eyes

    Directory of Open Access Journals (Sweden)

    Carmen Lopez de la Fuente

    2014-01-01

    Full Text Available Purpose. To assess the repeatability of the Galilei dual Scheimpflug analyzer (GDSA) in anterior segment examination. Methods. Fifty-two eyes from 52 healthy volunteers were prospectively and consecutively recruited. Anatomic, axial, refractive, and instantaneous parameters were measured with GDSA to provide a complete characterization of the anterior segment. Repeatability was assessed by calculating the intraclass correlation coefficient (ICC) and coefficient of variation (COV). Results. Correlation among repeated measurements showed almost perfect reliability (ICC > 0.81) for all parameters except thinnest central corneal thickness (CCT) (0.78), corneal thickness average out (0.79), and posterior axial curvature average out (0.60). Repeatability was excellent (COV < 10%) for all parameters except anterior chamber volume, superior iridocorneal angle, and eccentricities. In these last three parameters, repeatability limits were excessively high compared to the mean. Conclusions. GDSA in healthy young persons had an almost perfect correlation in measuring anatomic, axial, instantaneous, and refractive parameters with greater variability for peripheral terms. Repeatability of anatomical parameters like pachymetry, anterior chamber, or iridocorneal angle and eccentricity was limited. In healthy young persons, the other evaluated parameters had very good repeatability and their limits of agreement showed excellent clinical results for this device.

  7. Repeat Purchase Intention of Starbucks Consumers in Indonesia: A Green Brand Approach

    Directory of Open Access Journals (Sweden)

    Naili Farida

    2015-12-01

    Full Text Available This study develops and tests the repeat purchase intention model (with a green brand approach). The model considers four determinants: perceived image, satisfaction, trust, and attitude. The model is tested using data from a survey of 203 Starbucks customers in Indonesia. The analysis was carried out by employing Structural Equation Modeling. The data were processed with AMOS 21. The results confirm that the company's green brand image is positively and significantly related to consumer satisfaction, trust, and attitude. On the other hand, consumer satisfaction and trust are shown to have insignificant influence on repeat purchase intention.

  8. Performance and physiological responses to repeated-sprint exercise: a novel multiple-set approach.

    Science.gov (United States)

    Serpiello, Fabio R; McKenna, Michael J; Stepto, Nigel K; Bishop, David J; Aughey, Robert J

    2011-04-01

    We investigated the acute and chronic responses to multiple sets of repeated-sprint exercise (RSE), focusing on changes in acceleration, intermittent running capacity and physiological responses. Ten healthy young adults (7 males, 3 females) performed an incremental test, a Yo-Yo intermittent recovery test level 1 (Yo-Yo IR1), and one session of RSE. RSE comprised three sets of 5 × 4-s maximal sprints on a non-motorised treadmill, with 20 s of passive recovery between repetitions and 4.5 min of passive recovery between sets. After ten repeated-sprint training sessions, participants repeated all tests. During RSE, performance was determined by measuring acceleration, mean and peak power/velocity. Recovery heart rate (HR), HR variability, and finger-tip capillary lactate concentration ([Lac(-)]) were measured. Performance progressively decreased across the three sets of RSE, with the indices of repeated-sprint ability being impaired to a different extent before and after training. Training induced a significant improvement in performance during RSE. There were strong correlations between Yo-Yo IR1 performance and indices of RSE performance, especially acceleration post-training (r = 0.88, p = 0.004). Repeated-sprint training, comprising only 10 min of exercise overall, effectively improved performance during multiple-set RSE. This exercise model better reflects team-sport activities than single-set RSE. The rapid training-induced improvement in acceleration, quantified here for the first time, has wide applications for professional and recreational sport activities.

  9. The concordance correlation coefficient for repeated measures estimated by variance components.

    Science.gov (United States)

    Carrasco, Josep L; King, Tonya S; Chinchilli, Vernon M

    2009-01-01

    The concordance correlation coefficient (CCC) is an index that is commonly used to assess the degree of agreement between observers on measuring a continuous characteristic. Here, a CCC for longitudinal repeated measurements is developed through the appropriate specification of the intraclass correlation coefficient from a variance components linear mixed model. A case example and the results of a simulation study are provided.

  10. Measurement repeatability of tibial tuberosity-trochlear groove offset distance in red fox (Vulpes vulpes) cadavers

    NARCIS (Netherlands)

    Miles, J.E.; Jensen, B.R.; Kirpensteijn, J.; Svalastoga, E.L.; Eriksen, T.

    2013-01-01

    Abstract OBJECTIVE: To describe CT image reconstruction criteria for measurement of the tibial tuberosity-trochlear groove (TT-TG) offset distance, evaluate intra- and inter-reconstruction repeatability, and identify key sources of error in the measurement technique, as determined in vulpine hind li

  11. Intraexaminer repeatability and agreement in stereoacuity measurements made in young adults

    Directory of Open Access Journals (Sweden)

    Beatriz Antona

    2015-04-01

    Full Text Available AIM: To determine the repeatability and agreement of stereoacuity measurements made using some of the most widely used clinical tests: Frisby, TNO, Randot and Titmus. METHODS: Stereoacuity was measured in two different sessions separated by a time interval of at least 24h but no longer than 1wk in 74 subjects of mean age 20.6y using the four methods. The study participants were divided into two groups: subjects with normal binocular vision and subjects with abnormal binocular vision. RESULTS: Best repeatability was shown by the Frisby and Titmus [coefficient of repeatability (COR): ±13 and ±12 s arc, respectively] in the subjects with normal binocular vision, though a clear ceiling effect was noted. In the subjects with abnormal binocular vision, best repeatability was shown by the Frisby (COR: ±69 s arc) and Randot (COR: ±72 s arc). In both groups, the TNO test showed poorest agreement with the other tests. CONCLUSION: The repeatability of stereoacuity measures was low in subjects with poor binocular vision yet fairly good in subjects with normal binocular vision, with the exception of the TNO test. The reduced agreement detected between the tests indicates they cannot be used interchangeably.

  12. Repeatability of the modified Thorington card used to measure far heterophoria.

    Science.gov (United States)

    Cebrian, Jose Luis; Antona, Beatriz; Barrio, Ana; Gonzalez, Enrique; Gutierrez, Angel; Sanchez, Isabel

    2014-07-01

    To determine the interexaminer and intraexaminer repeatability of the modified Thorington test (TH) for distance vision in young adults and to compare these results with those observed for the heterophoria tests most commonly used in clinical practice. Agreement among tests was also assessed. Distance heterophoria was quantified on two separate occasions by two examiners in 110 subjects aged 18 to 32 years (mean, 19.74 years; SD, 2.5 years) using four different tests: cover test (CT), Von Graefe, Maddox rod, and modified TH. The repeatability of the tests and agreement between them was estimated by the Bland and Altman method, whereby the mean difference and the 95% limits of agreement were determined as the coefficient of repeatability (COR) and coefficient of agreement. The Thorington test showed best interexaminer repeatability (COR = ±1.43Δ), followed closely by CT (COR = ±1.65Δ), whereas best intraexaminer repeatability was observed for CT (COR = ±1.28Δ) followed by TH (COR = ±1.51Δ). Among the different combinations of tests, TH and CT showed best agreement, indicated by the lowest coefficient of agreement (±2.23Δ) and a low mean difference (-0.63Δ) between measurements. Good interexaminer and intraexaminer repeatability was observed for both TH and CT, and agreement between the two tests was also good. Given the simple administration of the TH, we recommend its clinical use to quantify distance horizontal heterophoria.

  13. Australian House Prices: A Comparison of Hedonic and Repeat-sales Measures

    OpenAIRE

    James Hansen

    2006-01-01

    House prices are intrinsically difficult to measure due to changes in the composition of properties sold through time and changes in the quality of housing. I provide an overview of the theoretical nature of these issues and consider how regression-based measures of house prices – hedonic and repeat-sales measures – can control for compositional and quality change. I then explore whether these regression-based alternatives can provide accurate estimates of pure house price changes in the Aust...

  14. Repeatability of cardiac-MRI-measured right ventricular size and function in congenital heart disease

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Rowan; Salem, Yishay [Mount Sinai School of Medicine, Division of Pediatric Cardiology, New York, NY (United States); Shah, Amee; Lai, Wyman W. [Morgan Stanley Children' s Hospital of New York Presbyterian, New York, NY (United States); Nielsen, James C. [Mount Sinai School of Medicine, Division of Pediatric Cardiology, New York, NY (United States); Mount Sinai Children' s Heart Center, Box 1201, New York, NY (United States)

    2011-08-15

    The measurement error for right ventricular (RV) size and function assessed by cardiac MRI (CMRI) in congenital heart disease has not been fully characterized. As CMRI parameters are being increasingly utilized to make clinical decisions, defining error in the clinical setting is critical. This investigation examines the repeatability of CMRI for RV size and function. Forty consecutive people with congenital heart disease involving the RV were retrospectively identified. Contouring of RV volumes was performed by two expert CMRI clinicians. The coefficient of variability and repeatability coefficients were calculated. Repeatability coefficients were multiplied by the mean value for each group studied to define a threshold beyond which measurement error was unlikely to be responsible. The variability for indexed RV end-diastolic volume = 3.2% and 3.3% for intra- and interobserver comparisons, respectively. The repeatability coefficients were 13.2% and 14.9% for intra- and interobserver comparisons, which yielded threshold values of 15.1 ml/m² and 20.2 ml/m², respectively. For RV ejection fraction (EF), the repeatability coefficients for intra- and interobserver comparisons were 5.0% and 6.0%, which resulted in threshold values of 2.6 EF% and 3.0 EF%. The threshold values generated can be used during serial assessment of RV size and function. (orig.)

  15. Accuracy and repeatability of an optical motion analysis system for measuring small deformations of biological tissues.

    Science.gov (United States)

    Liu, Helen; Holt, Cathy; Evans, Sam

    2007-01-01

    Optical motion analysis techniques have been widely used in biomechanics for measuring large-scale motions such as gait, but have not yet been significantly explored for measuring smaller movements such as tooth displacements under load. In principle, very accurate measurements could be possible and this could provide a valuable tool in many engineering applications. The aim of this study was to evaluate the accuracy and repeatability of the Qualisys ProReflex-MCU120 system when measuring small displacements, as a step towards measuring tooth displacements to characterise the properties of the periodontal ligament. Accuracy and repeatability of the system was evaluated using a wedge comparator with a resolution of 0.25 microm to provide measured marker displacements in three orthogonal directions. The marker was moved in ten steps in each direction, for each of seven step sizes (0.5, 1, 2, 3, 5, 10, and 20 microm), repeated five times. Spherical and diamond markers were tested. The system accuracy (i.e. percentage of maximum absolute error in range/measurement range), in the 20-200 microm range, was +/-1.17%, +/-1.67% and +/-1.31% for the diamond marker in the x, y and z directions, while the system accuracy for the spherical marker was +/-1.81%, +/-2.37% and +/-1.39%. The system repeatability (i.e. maximum standard deviation in the measurement range), assessed on different days and under different light intensities and temperatures, five times, with step-up and then step-down measurements for the same step size, was +/-1.7, +/-2.3 and +/-1.9 microm for the diamond marker, and +/-2.6, +/-3.9 and +/-1.9 microm for the spherical marker in the x, y and z directions, respectively. These results demonstrate that the system has sufficient accuracy for measuring tooth displacements and could potentially be useful in many other applications.

  16. Repeatability of central corneal thickness measurement with the Pentacam HR system

    Directory of Open Access Journals (Sweden)

    Ruiz Simonato Alonso

    2012-02-01

    Full Text Available PURPOSE: To assess the repeatability over time of central corneal thickness measurement at the geometrical center (Central Corneal Thickness - CCT) given by the Pentacam High Resolution (HR) Comprehensive Eye Scanner (Oculus, Wetzlar, Germany). METHODS: Prospective, single center, observational study. Two separate CCT measurements were taken by the Pentacam corneal tomography exam (CTm) 3 to 12 months apart, and compared. RESULTS: One hundred and sixteen eyes (n=116) of 62 healthy patients were included in this study. Average CCT in the first and last visits was 541.6±37 µm and 543.6±36.9 µm respectively. Mean difference between both measurements was 9.2±6.4 µm, and there was no statistically significant difference in CCT measurement between visits, with good correlation between them (P = 0.057, r² = 0.9209). CONCLUSION: Pentacam (HR) CTm gives repeatable CCT measurements over time.

  17. A novel approach to investigate tissue-specific trinucleotide repeat instability

    Directory of Open Access Journals (Sweden)

    Boily Marie-Josee

    2010-03-01

    Full Text Available Abstract Background In Huntington's disease (HD), an expanded CAG repeat produces characteristic striatal neurodegeneration. Interestingly, the HD CAG repeat, whose length determines age at onset, undergoes tissue-specific somatic instability, predominant in the striatum, suggesting that tissue-specific CAG length changes could modify the disease process. Therefore, understanding the mechanisms underlying the tissue specificity of somatic instability may provide novel routes to therapies. However progress in this area has been hampered by the lack of sensitive high-throughput instability quantification methods and global approaches to identify the underlying factors. Results Here we describe a novel approach to gain insight into the factors responsible for the tissue specificity of somatic instability. Using accurate genetic knock-in mouse models of HD, we developed a reliable, high-throughput method to quantify tissue HD CAG repeat instability and integrated this with genome-wide bioinformatic approaches. Using tissue instability quantified in 16 tissues as a phenotype and tissue microarray gene expression as a predictor, we built a mathematical model and identified a gene expression signature that accurately predicted tissue instability. Using the predictive ability of this signature we found that somatic instability was not a consequence of pathogenesis. In support of this, genetic crosses with models of accelerated neuropathology failed to induce somatic instability. In addition, we searched for genes and pathways that correlated with tissue instability. We found that expression levels of DNA repair genes did not explain the tissue specificity of somatic instability. Instead, our data implicate other pathways, particularly cell cycle, metabolism and neurotransmitter pathways, acting in combination to generate tissue-specific patterns of instability. Conclusion Our study clearly demonstrates that multiple tissue factors reflect the level of

  18. Graphic Methods for Interpreting Longitudinal Dyadic Patterns From Repeated-Measures Actor-Partner Interdependence Models

    DEFF Research Database (Denmark)

    Perry, Nicholas; Baucom, Katherine; Bourne, Stacia

    2017-01-01

    Researchers commonly use repeated-measures actor–partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal...

  19. Accuracy and repeatability of anthropometric facial measurements using cone beam computed tomography

    NARCIS (Netherlands)

    Fourie, Zacharias; Damstra, Janalt; Gerrits, Peter O.; Ren, Yijin

    2011-01-01

    Objective: The purpose of this study was to determine the accuracy and repeatability of linear anthropometric measurements on the soft tissue surface model generated from cone beam computed tomography scans. Materials and Methods: The study sample consisted of seven cadaver heads. The accuracy and r

  20. Cross-trimester repeated measures testing for Down's syndrome screening: an assessment.

    LENUS (Irish Health Repository)

    Wright, D

    2010-07-01

    To provide estimates and confidence intervals for the performance (detection and false-positive rates) of screening for Down's syndrome using repeated measures of biochemical markers from first and second trimester maternal serum samples taken from the same woman.

  1. REPEATABILITY OF THE SUGAR ABSORPTION TEST, USING LACTULOSE AND MANNITOL, FOR MEASURING INTESTINAL PERMEABILITY FOR SUGARS

    NARCIS (Netherlands)

    VANELBURG, RM; UIL, JJ; KOKKE, FTM; MULDER, AM; VANDEBROEK, WGM; MULDER, CJJ; HEYMANS, HSA

    1995-01-01

    Differential sugar-absorption tests for measuring intestinal permeability for sugars have been studied in a variety of gastrointestinal diseases. Their use in general practice has been hampered by a lack of data on reference values and repeatability of the test and the laboratory assay. In this stud

  2. Approaches to Avoid Immune Responses Induced by Repeated Subcutaneous Injections of Allogeneic Umbilical Cord Tissue-Derived Cells

    Science.gov (United States)

    Lutton, Bram V.; Cho, Patricia S.; Hirsh, Erica L.; Ferguson, Kelly K.; Teague, Alexander G. S.; Hanekamp, John S.; Chi, Nina; Goldman, Stephanie N.; Messina, Darin J.; Houser, Stuart; Yeap, Beow Y.; Popma, Sicco H.; Sachs, David H.; Huang, Christene A.

    2013-01-01

    Background Cellular treatments for repairing diseased tissues represent a promising clinical strategy. Umbilical cord tissue-derived cells (UTC) are a unique source of cells with a low immunogenic profile and potential for tissue repair. By using UTC from miniature swine, we previously demonstrated that despite their low immunogenic phenotype, UTC could induce an immune response under certain inflammatory conditions and after multiple subcutaneous (SC) injections. Given that repeat dosing of cells may be necessary to achieve a lasting therapeutic benefit, in this study, we examined approaches to avoid an immune response to multiple SC injections of UTC. Methods By using in vitro and in vivo measures of sensitization to SC cellular injections, we assessed the effects of varying the location of administration site, prolongation of timing between injections, and use of immunosuppressive treatments on repeated cellular injections in Massachusetts General Hospital major histocompatibility complex-defined miniature swine. Results Although under normal conditions, a single SC injection of major histocompatibility complex-mismatched UTC did not induce a detectable immune response, multiple SC injections of UTC demonstrated rapid humoral and cell-mediated immune responses. Avoidance of an immune response to repeat SC injection was achieved by concurrent immunosuppression with each dose of UTC. Conclusions UTC and other similar cell types believed to be nonimmunogenic have the potential to induce immune responses under certain conditions. These studies provide important considerations and guidelines for preclinical studies investigating allogeneic cellular therapies. PMID:21451445

  3. Incomplete quality of life data in lung transplant research: comparing cross sectional, repeated measures ANOVA, and multi-level analysis

    Directory of Open Access Journals (Sweden)

    van der Bij Wim

    2005-09-01

    Full Text Available Abstract Background In longitudinal studies on Health Related Quality of Life (HRQL) it frequently occurs that patients have one or more missing forms, which may cause bias and reduce the sample size. Aims of the present study were to address the problem of missing data in the field of lung transplantation (LgTX) and HRQL, to compare results obtained with different methods of analysis, and to show the value of each type of statistical method used to summarize data. Methods Results from cross-sectional analysis, repeated measures on complete cases (ANOVA), and a multi-level analysis were compared. The scores on the dimension 'energy' of the Nottingham Health Profile (NHP) after transplantation were used to illustrate the differences between methods. Results Compared to repeated measures ANOVA, the cross-sectional and multi-level analyses included more patients, and allowed for a longer period of follow-up. In contrast to the cross-sectional analyses, in the complete case analysis and the multi-level analysis the correlation between different time points was taken into account. Patterns over time of the three methods were comparable. In general, results from repeated measures ANOVA showed the most favorable energy scores, and results from the multi-level analysis the least favorable. Due to the separate subgroups per time point in the cross-sectional analysis, and the relatively small number of patients in the repeated measures ANOVA, inclusion of predictors was only possible in the multi-level analysis. Conclusion Results obtained with the various methods of analysis differed, indicating some reduction of bias took place. Multi-level analysis is a useful approach to study changes over time in a data set with missing data: it reduces bias, makes efficient use of the available data, and allows predictors to be included in studies concerning the effects of LgTX on HRQL.
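
    A minimal Python analogue of the multi-level approach favoured in this comparison is a random-intercept mixed model fitted to long-format data, which simply omits rows for missing forms rather than dropping whole patients; the column names and simulated HRQL values below are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format HRQL data: one row per patient per assessment;
# rows are simply absent when a form is missing (no imputation needed)
rng = np.random.default_rng(4)
rows = []
for pid in range(80):
    base = rng.normal(60, 10)
    for month in (1, 4, 7, 13, 25):
        if rng.random() < 0.8:            # roughly 20% of forms missing
            rows.append({"patient": pid, "month": month,
                         "energy": base + 0.3 * month + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random-intercept (multi-level) model: energy over time, patients as level-2 units
m = smf.mixedlm("energy ~ month", df, groups=df["patient"]).fit()
print(m.summary())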

  4. Optimal selection of individuals for repeated covariate measurements in follow-up studies.

    Science.gov (United States)

    Reinikainen, Jaakko; Karvanen, Juha; Tolonen, Hanna

    2016-12-01

    Repeated covariate measurements bring important information on the time-varying risk factors in long epidemiological follow-up studies. However, due to budget limitations, it may be possible to carry out the repeated measurements only for a subset of the cohort. We study cost-efficient alternatives to simple random sampling for selecting the individuals to be remeasured. The proposed selection criteria are based on forms of D-optimality. The selection methods are compared in simulation studies and illustrated with data from the East-West study carried out in Finland from 1959 to 1999. The results indicate that cost savings can be achieved if the selection is focused on the individuals with high expected risk of the event and, on the other hand, on those with extreme covariate values in the previous measurements. © The Author(s) 2014.

  5. Intraexaminer repeatability and agreement in stereoacuity measurements made in young adults

    Institute of Scientific and Technical Information of China (English)

    Beatriz Antona; Ana Barrio; Isabel Sanchez; Enrique Gonzalez; Guadalupe Gonzalez

    2015-01-01

    AIM: To determine the repeatability and agreement of stereoacuity measurements made using some of the most widely used clinical tests: Frisby, TNO, Randot and Titmus. METHODS: Stereoacuity was measured in two different sessions separated by a time interval of at least 24 h but no longer than 1wk in 74 subjects of mean age 20.6y using the four methods. The study participants were divided into two groups: subjects with normal binocular vision and subjects with abnormal binocular vision. RESULTS: Best repeatability was shown by the Frisby and Titmus [coefficient of repeatability (COR): ±13 and ±12 s arc respectively] in the subjects with normal binocular vision though a clear ceiling effect was noted. In the subjects with abnormal binocular vision, best repeatability was shown by the Frisby (COR: ±69 s arc) and Randot (COR: ±72 s arc). In both groups, the TNO test showed poorest agreement with the other tests. CONCLUSION: The repeatability of stereoacuity measures was low in subjects with poor binocular vision yet fairly good in subjects with normal binocular vision with the exception of the TNO test. The reduced agreement detected between the tests indicates they cannot be used interchangeably.

  6. Repeatability and reproducibility of optic nerve head perfusion measurements using optical coherence tomography angiography

    Science.gov (United States)

    Chen, Chieh-Li; Bojikian, Karine D.; Xin, Chen; Wen, Joanne C.; Gupta, Divakar; Zhang, Qinqin; Mudumbai, Raghu C.; Johnstone, Murray A.; Chen, Philip P.; Wang, Ruikang K.

    2016-06-01

    Optical coherence tomography angiography (OCTA) has increasingly become a clinically useful technique in ophthalmic imaging. We evaluate the repeatability and reproducibility of blood perfusion in the optic nerve head (ONH) measured using optical microangiography (OMAG)-based OCTA. Ten eyes from 10 healthy volunteers are recruited and scanned three times with a 68-kHz Cirrus HD-OCT 5000-based OMAG prototype system (Carl Zeiss Meditec Inc., Dublin, California) centered at the ONH involving two separate visits within six weeks. Vascular images are generated with OMAG processing by detecting the differences in OCT signals between consecutive B-scans acquired at the same retina location. ONH perfusion is quantified as flux, vessel area density, and normalized flux within the ONH for the prelaminar, lamina cribrosa, and the full ONH. Coefficient of variation (CV) and intraclass correlation coefficient (ICC) are used to evaluate intravisit and intervisit repeatability, and interobserver reproducibility. ONH perfusion measurements show high repeatability [CV≤3.7% (intravisit) and ≤5.2% (intervisit)] and interobserver reproducibility (ICC≤0.966) in all three layers by three metrics. OCTA provides a noninvasive method to visualize and quantify ONH perfusion in human eyes with excellent repeatability and reproducibility, which may add additional insight into ONH perfusion in clinical practice.

  7. Repeatability, variability and reference values of pulsed wave Doppler echocardiographic measurements in healthy Saanen goats

    Directory of Open Access Journals (Sweden)

    Leroux Aurélia A

    2012-10-01

    Full Text Available Abstract Background Pulsed wave (PW Doppler echocardiography has become a routine non invasive cardiac diagnostic tool in most species. However, evaluation of intracardiac blood flow requires reference values, which are poorly documented in goats. The aim of this study was to test the repeatability, the variability, and to establish the reference values of PW measurements in healthy adult Saanen goats. Using a standardised PW Doppler echocardiographic protocol, 10 healthy adult unsedated female Saanen goats were investigated three times at one day intervals by the same observer. Mitral, tricuspid, aortic and pulmonary flows were measured from a right parasternal view, and mitral and aortic flows were also measured from a left parasternal view. The difference between left and right side measurements and the intra-observer inter-day repeatability were tested and then the reference values of PW Doppler echocardiographic parameters in healthy adult female Saanen goats were established. Results As documented in other species, all caprine PW Doppler parameters demonstrated a poor inter-day repeatability and a moderate variability. Tricuspid and pulmonary flows were best evaluated on the right side whereas mitral and aortic flows were best obtained on the left side, and reference values are reported for healthy adult Saanen goats. Conclusions PW Doppler echocardiography allows the measurement of intracardiac blood flow indices in goats. The reference values establishment will help interpreting these indices of cardiac function in clinical cardiac cases and developing animal models for human cardiology research.

  8. Using the Monte Carlo Simulation Methods in Gauge Repeatability and Reproducibility of Measurement System Analysis

    Directory of Open Access Journals (Sweden)

    Tsu-Ming Yeh

    2013-10-01

    Full Text Available Measurements are required to maintain the consistent quality of all finished and semi-finished products in a production line. Many firms in the automobile and general precision industries apply the TS 16949:2009 Technical Specifications and the Measurement System Analysis (MSA) manual to establish measurement systems. Gauge repeatability and reproducibility (GR&R) studies are undertaken to verify the measuring ability and quality of the measurement frame, as well as to continuously improve and maintain the verification process. Nevertheless, the implementation of GR&R requires considerable time and manpower, and is likely to affect production adversely. In addition, the evaluated value of GR&R always differs owing to the sum of man-made and machine-made variations. Using a Monte Carlo simulation together with prediction of the repeatability and reproducibility of the measurement system, this study determines the probability density function and distribution of %GR&R and the related number of distinct categories (ndc), illustrated with two case studies from an automobile parts manufacturer. The method used in this study can effectively evaluate the possible range of the GR&R of the measurement capability, in order to establish a prediction model for evaluating the measurement capacity of a measurement system.
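
    A rough sketch of the Monte Carlo idea is given below in Python: each iteration simulates a crossed GR&R study from assumed variance components, estimates the components with a deliberately simple moment method (not the full MSA ANOVA estimator), and accumulates the resulting %GR&R and ndc values into an empirical distribution; all numbers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)

def simulate_grr(parts=10, operators=3, trials=3,
                 sd_part=1.0, sd_oper=0.15, sd_repeat=0.25):
    """Simulate one crossed GR&R study and return (%GR&R, ndc).

    Variance components are estimated crudely from the variances of part means,
    operator means and within-cell replicates, not the full ANOVA estimator."""
    p_eff = rng.normal(0, sd_part, parts)[:, None, None]
    o_eff = rng.normal(0, sd_oper, operators)[None, :, None]
    noise = rng.normal(0, sd_repeat, (parts, operators, trials))
    y = 10 + p_eff + o_eff + noise

    var_repeat = y.var(axis=2, ddof=1).mean()  # within-cell (repeatability)
    var_reprod = max(y.mean(axis=(0, 2)).var(ddof=1) - var_repeat / (parts * trials), 0.0)
    var_part = max(y.mean(axis=(1, 2)).var(ddof=1) - var_repeat / (operators * trials), 0.0)

    grr = np.sqrt(var_repeat + var_reprod)
    total = np.sqrt(grr ** 2 + var_part)
    pct_grr = 100 * grr / total
    ndc = 1.41 * np.sqrt(var_part) / grr
    return pct_grr, ndc

sims = np.array([simulate_grr() for _ in range(2000)])
print("median %GR&R:", round(float(np.median(sims[:, 0])), 1))
print("median ndc  :", round(float(np.median(sims[:, 1])), 1))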

  9. A Bayesian model for repeated measures zero-inflated count data with application to outpatient psychiatric service use

    Science.gov (United States)

    Neelon, Brian H.; O’Malley, A. James; Normand, Sharon-Lise T.

    2009-01-01

    In applications involving count data, it is common to encounter an excess number of zeros. In the study of outpatient service utilization, for example, the number of utilization days will take on integer values, with many subjects having no utilization (zero values). Mixed-distribution models, such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB), are often used to fit such data. A more general class of mixture models, called hurdle models, can be used to model zero-deflation as well as zero-inflation. Several authors have proposed frequentist approaches to fitting zero-inflated models for repeated measures. We describe a practical Bayesian approach which incorporates prior information, has optimal small-sample properties, and allows for tractable inference. The approach can be easily implemented using standard Bayesian software. A study of psychiatric outpatient service use illustrates the methods. PMID:21339863
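
    To make the mixture structure concrete, the sketch below fits a plain (cross-sectional) zero-inflated Poisson by maximum likelihood to simulated utilization counts; it is not the repeated-measures Bayesian model proposed in the paper, and the simulated zero-inflation probability and Poisson mean are assumptions.

import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: with probability pi the count is a structural zero,
    otherwise it is drawn from Poisson(lam)."""
    k = np.asarray(k)
    p = (1 - pi) * poisson.pmf(k, lam)
    return np.where(k == 0, pi + p, p)

def zip_neg_loglik(params, y):
    lam = np.exp(params[0])                 # keep rate positive
    pi = 1 / (1 + np.exp(-params[1]))       # keep mixing weight in (0, 1)
    return -np.sum(np.log(zip_pmf(y, lam, pi)))

# Simulated utilization days for one wave of outpatients (40% structural zeros)
rng = np.random.default_rng(6)
y = np.where(rng.random(500) < 0.4, 0, rng.poisson(3.0, 500))

fit = minimize(zip_neg_loglik, x0=[0.0, 0.0], args=(y,))
lam_hat = np.exp(fit.x[0])
pi_hat = 1 / (1 + np.exp(-fit.x[1]))
print(round(lam_hat, 2), round(pi_hat, 2))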

  10. Antarctic Ice Sheet Slope and Aspect Based on Icesat's Repeat Orbit Measurement

    Science.gov (United States)

    Yuan, L.; Li, F.; Zhang, S.; Xie, S.; Xiao, F.; Zhu, T.; Zhang, Y.

    2017-09-01

    Accurate information of ice sheet surface slope is essential for estimating elevation change by satellite altimetry measurement. A study is carried out to recover surface slope of Antarctic ice sheet from Ice, Cloud and land Elevation Satellite (ICESat) elevation measurements based on repeat orbits. ICESat provides repeat ground tracks within 200 meters in cross-track direction and 170 meters in along-track direction for most areas of Antarctic ice sheet. Both cross-track and along-track surface slopes could be obtained by adjacent repeat ground tracks. Combining those measurements yields a surface slope model with resolution of approximately 200 meters. An algorithm considering elevation change is developed to estimate the surface slope of Antarctic ice sheet. Three Antarctic Digital Elevation Models (DEMs) were used to calculate surface slopes. The surface slopes from DEMs are compared with estimates by using in situ GPS data in Dome A, the summit of Antarctic ice sheet. Our results reveal an average surface slope difference of 0.02 degree in Dome A. High resolution remote sensing images are also used in comparing the results derived from other DEMs and this paper. The comparison implies that our results have a slightly better coherence with GPS observation than results from DEMs, but our results provide more details and perform higher accuracy in coastal areas because of the higher resolution for ICESat measurements. Ice divides are estimated based on the aspect, and are weakly consistent with ice divides from other method in coastal regions.

  11. Validity and repeatability of three in-shoe pressure measurement systems.

    Science.gov (United States)

    Price, Carina; Parker, Daniel; Nester, Christopher

    2016-05-01

    In-shoe pressure measurement devices are used in research and in the clinic to quantify plantar foot pressures. Various devices are available, differing in size, sensor number and type, and therefore in accuracy and repeatability. Three devices (Medilogic, Tekscan and Pedar) were examined in a 2 day × 3 trial design, quantifying insole response to regional and whole-insole loading. The whole-insole protocol applied an even pressure (50-600 kPa) to the insole surface for 0-30 s in the Novel TruBlue™ device. The regional protocol utilised cylinders with contact surfaces of 3.14 and 15.9 cm² to apply pressures of 50 and 200 kPa. The validity (% difference and root mean square error: RMSE) and repeatability (intra-class correlation coefficient: ICC) of the applied pressures (whole insole) and contact area (regional) were the outcome variables. Validity of the Pedar system was highest (RMSE 2.6 kPa; difference 3.9%), with the Medilogic (RMSE 27.0 kPa; difference 13.4%) and Tekscan (RMSE 27.0 kPa; difference 5.9%) systems displaying reduced validity. The average and peak pressures demonstrated high between-day repeatability for all three systems and each insole size (ICC ≥ 0.859). The regional contact area % difference ranged from -97% to +249%, but the ICC demonstrated medium to high between-day repeatability (ICC ≥ 0.797). Due to the varying responses of the systems, the choice of an appropriate pressure measurement device must be based on the loading characteristics and the outcome variables sought. Medilogic and Tekscan were most effective between 200 and 300 kPa; Pedar performed well across all pressures.
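
    The validity and repeatability statistics named above (RMSE, percentage difference, and between-day ICC) can be computed directly. A minimal sketch on assumed, illustrative data follows; the ICC shown is the two-way absolute-agreement ICC(2,1), one common choice rather than necessarily the one used in the study.

```python
import numpy as np

def validity_metrics(measured, applied):
    """RMSE and mean percentage difference of insole readings vs. applied pressure."""
    err = measured - applied
    return np.sqrt(np.mean(err**2)), 100 * np.mean(err / applied)

def icc_2_1(y):
    """Two-way random-effects, absolute-agreement ICC(2,1); y has shape (rows, sessions)."""
    n, k = y.shape
    grand, row_m, col_m = y.mean(), y.mean(axis=1), y.mean(axis=0)
    msr = k * np.sum((row_m - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_m - grand) ** 2) / (k - 1)
    mse = np.sum((y - row_m[:, None] - col_m[None, :] + grand) ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative data: applied whole-insole pressures (kPa) and hypothetical readings
# from a slightly biased, noisy insole, measured on two days.
rng = np.random.default_rng(2)
applied = np.tile(np.arange(50, 650, 50), 3).astype(float)
day1 = applied * 1.04 + rng.normal(0, 15, applied.size)
day2 = applied * 1.04 + rng.normal(0, 15, applied.size)

print("RMSE %.1f kPa, mean difference %.1f%%" % validity_metrics(day1, applied))
print("between-day ICC = %.3f" % icc_2_1(np.column_stack([day1, day2])))
```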

  12. Measuring Test Measurement Error: A General Approach

    Science.gov (United States)

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

    Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  13. Predicting seed yield in perennial ryegrass using repeated canopy reflectance measurements and PLSR

    DEFF Research Database (Denmark)

    Gislum, René; Deleuran, Lise Christina; Boelt, Birte

    2009-01-01

    Repeated canopy reflectance measurements together with partial least-squares regression (PLSR) were used to predict seed yield in perennial ryegrass (Lolium perenne L.). The measurements were performed during the spring and summer growing seasons of 2001 to 2003 in three field experiments...... reflectance measurements was from approximately 600 cumulative growing degree-days (CGDD) to approximately 900 CGDD. This is the period just before and at heading of the seed crop. Furthermore, regression coefficients showed that information about N and water is important. The results support the development...
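
    A hedged sketch of the modelling step described above, using scikit-learn's PLSRegression on hypothetical reflectance data; the matrix dimensions, the simulated yields, and the choice of four components are assumptions made only for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)

# Hypothetical data: rows are seed-crop plots, columns are canopy reflectance
# values recorded at repeated dates through the season (stacked as one wide matrix).
n_plots, n_bands = 60, 24
X = rng.normal(size=(n_plots, n_bands))
true_coef = rng.normal(size=n_bands) * (rng.random(n_bands) < 0.3)
seed_yield = X @ true_coef + rng.normal(0, 0.5, n_plots)   # arbitrary units

pls = PLSRegression(n_components=4)            # in practice chosen by cross-validation
pred = cross_val_predict(pls, X, seed_yield, cv=10).ravel()
ss_res = np.sum((seed_yield - pred) ** 2)
ss_tot = np.sum((seed_yield - seed_yield.mean()) ** 2)
print(f"cross-validated R^2 = {1 - ss_res / ss_tot:.2f}")

# Regression coefficients indicate which reflectance measurements carry information.
pls.fit(X, seed_yield)
print("largest |coefficients| at columns:", np.argsort(np.abs(pls.coef_).ravel())[-5:])
```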

  14. REPEATED MEASURES ANALYSIS OF CHANGES IN PHOTOSYNTHETIC EFFICIENCY IN SOUR CHERRY DURING WATER DEFICIT

    Directory of Open Access Journals (Sweden)

    Marija Viljevac

    2012-06-01

    The objective of this study was to investigate changes in photosynthetic efficiency by applying repeated measures ANOVA, using the photosynthetic performance index (PIABS) of the JIP-test as a vitality parameter, in seven genotypes of sour cherry (Prunus cerasus L.) during 10 days of continuous water deficit. Both univariate and multivariate repeated measures ANOVA revealed a highly significant time effect (Days) and its subsequent interactions with genotype and water deficit. However, the multivariate Pillai's trace test detected the interaction Time × Genotype × Water deficit as not significant. According to Tukey's Studentized Range (HSD) test, differences between the control and the genotypes exposed to water stress became significant on the fourth day of the experiment, indicating that the plants, on average, began to lose their photosynthetic efficiency four days after being exposed to water shortage. This corroborates previous findings in other species that PIABS is a very sensitive tool for detecting drought stress.
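
    The within-subject part of such an analysis can be reproduced with statsmodels' AnovaRM, as in the hedged sketch below on simulated PIABS trajectories; the study's full design additionally crosses genotype and water deficit as between-subject factors, which would call for a mixed-design ANOVA or a mixed model. All numbers in the simulation are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(4)

# Hypothetical long-format data: the photosynthetic performance index PI_abs
# recorded on each plant over 10 consecutive days of water deficit.
n_plants, n_days = 28, 10
rows = []
for p in range(n_plants):
    base = rng.normal(3.0, 0.4)
    for d in range(1, n_days + 1):
        decline = 0.12 * max(d - 3, 0)          # efficiency starts dropping on day 4
        rows.append(dict(plant=p, day=d, pi_abs=base - decline + rng.normal(0, 0.2)))
df = pd.DataFrame(rows)

# One-way repeated measures ANOVA for the within-subject factor "day".
res = AnovaRM(df, depvar="pi_abs", subject="plant", within=["day"]).fit()
print(res)
```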

  15. Status of a UAV SAR Designed for Repeat Pass Interferometry for Deformation Measurements

    Science.gov (United States)

    Hensley, Scott; Wheeler, Kevin; Hoffman, Jim; Miller, Tim; Lou, Yunling; Muellerschoen, Ron; Zebker, Howard; Madsen, Soren; Rosen, Paul

    2004-01-01

    Under the NASA ESTO sponsored Instrument Incubator Program we have designed a lightweight, reconfigurable polarimetric L-band SAR designed for repeat-pass deformation measurements of rapidly deforming surfaces of geophysical interest such as volcanoes or earthquakes. This radar will be installed on an unmanned airborne vehicle (UAV) or a lightweight, high-altitude, long-endurance platform such as the Proteus. After a study of suitable available platforms we selected the Proteus for initial development and testing of the system. We want to control the repeat-track capability of the aircraft to within a 10 m tube to support the repeat deformation capability. We conducted tests with the Proteus using real-time GPS with sub-meter accuracy to see if pilots could fly the aircraft within the desired tube. Our results show that pilots are unable to fly the aircraft with the desired accuracy and therefore an augmented autopilot will be required to meet these objectives. Based on the Proteus flying altitude of 13.7 km (45,000 ft), we are designing a fully polarimetric L-band radar with 80 MHz bandwidth and a 16 km range swath. This radar will have an active electronic beam steering antenna to achieve the Doppler centroid stability that is necessary for repeat-pass interferometry (RPI). This paper will present our design criteria, current design, and expected science applications.

  16. Analysis of oligonucleotide array experiments with repeated measures using mixed models

    Directory of Open Access Journals (Sweden)

    Getchell Thomas V

    2004-12-01

    Background: Two or more factor mixed factorial experiments are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (Patients with Alzheimer's disease) or absence (Control) of the disease, and brain regions including olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures, and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. Results: In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure of controlling the false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with a significant generalized F test, we then categorize them based on whether the interaction terms were significant or not at the α-level (αnew = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with a significant interaction effect, we adopt the protected Fisher's least significant difference (LSD) test procedure at the level of αnew to control the family-wise error rate (FWER) for each gene examined. Conclusions: A linear mixed model is appropriate for analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests was designed to control the gene-based FWER.

  17. Analysis of oligonucleotide array experiments with repeated measures using mixed models.

    Science.gov (United States)

    Li, Hao; Wood, Constance L; Getchell, Thomas V; Getchell, Marilyn L; Stromberg, Arnold J

    2004-12-30

    Two or more factor mixed factorial experiments are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (Patients with Alzheimer's disease) or absence (Control) of the disease, and brain regions including olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure of controlling false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with significant generalized F test, we then categorize them based on whether the interaction terms were significant or not at the alpha-level (alphanew = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with significant interaction effect, we adopt the protected Fisher's least significant difference test (LSD) procedure at the level of alphanew to control the family-wise error rate (FWER) for each gene examined. A linear mixed model is appropriate for analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests applied was designed to control for gene based FWER.
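
    A rough sketch of this per-gene workflow (a mixed model per gene, an omnibus test of the disease-related terms, then Benjamini-Hochberg control of the FDR across genes) is given below on simulated data. The likelihood ratio test stands in for the paper's generalized F test, and all sample sizes and effect sizes are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(5)

# Hypothetical design: 6 Alzheimer's and 6 control subjects, each contributing an
# olfactory bulb (OB) and a cerebellum (CER) sample, i.e. correlated repeated measures.
subjects = pd.DataFrame({"subject": range(12), "disease": ["AD"] * 6 + ["Control"] * 6})
design = subjects.loc[subjects.index.repeat(2)].reset_index(drop=True)
design["region"] = ["OB", "CER"] * 12

n_genes, pvals = 100, []
for g in range(n_genes):
    d = design.copy()
    subj_eff = rng.normal(0, 0.5, 12)[d["subject"]]
    effect = 1.0 if g < 10 else 0.0              # first 10 genes truly affected in AD OB
    d["expr"] = (8 + subj_eff
                 + effect * ((d.disease == "AD") & (d.region == "OB"))
                 + rng.normal(0, 0.4, len(d)))

    # Likelihood ratio test of all disease-related fixed effects (disease main
    # effect plus disease-by-region interaction) using ML fits of nested mixed models.
    full = smf.mixedlm("expr ~ disease * region", d, groups=d["subject"]).fit(reml=False)
    null = smf.mixedlm("expr ~ region", d, groups=d["subject"]).fit(reml=False)
    pvals.append(chi2.sf(2 * (full.llf - null.llf), df=2))

# Benjamini-Hochberg control of the false discovery rate at 5% across genes.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} genes declared differentially expressed")
```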

  18. A Network-Based Algorithm for Clustering Multivariate Repeated Measures Data

    Science.gov (United States)

    Koslovsky, Matthew; Arellano, John; Schaefer, Caroline; Feiveson, Alan; Young, Millennia; Lee, Stuart

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Astronaut Corps is a unique occupational cohort for which vast amounts of measures data have been collected repeatedly in research or operational studies pre-, in-, and post-flight, as well as during multiple clinical care visits. In exploratory analyses aimed at generating hypotheses regarding physiological changes associated with spaceflight exposure, such as impaired vision, it is of interest to identify anomalies and trends across these expansive datasets. Multivariate clustering algorithms for repeated measures data may help parse the data to identify homogeneous groups of astronauts that have higher risks for a particular physiological change. However, available clustering methods may not be able to accommodate the complex data structures found in NASA data, since the methods often rely on strict model assumptions, require equally-spaced and balanced assessment times, cannot accommodate missing data or differing time scales across variables, and cannot process continuous and discrete data simultaneously. To fill this gap, we propose a network-based, multivariate clustering algorithm for repeated measures data that can be tailored to fit various research settings. Using simulated data, we demonstrate how our method can be used to identify patterns in complex data structures found in practice.

  19. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    Science.gov (United States)

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures designs are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in both the balanced and unbalanced settings. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with a popular method known to work well in low-dimensional situations, but the new methods show an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  20. On the repeated measures designs and sample sizes for randomized controlled trials.

    Science.gov (United States)

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance (ANCOVA)-type analysis using a pre-defined pair of "pre-post" data, in which pre (baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials.
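
    The qualitative point about sample size reduction can be illustrated with the widely used Frison and Pocock variance factors for designs with repeated pre- and post-randomization measurements under a compound-symmetry correlation. This is a simplified stand-in for the paper's GLMM-based calculations, and every number in the example is an assumption.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sigma, rho, n_pre, n_post, alpha=0.05, power=0.8,
                analysis="ancova"):
    """Approximate sample size per arm under compound-symmetry correlation rho,
    using Frison & Pocock style variance factors for the chosen analysis."""
    post = (1 + (n_post - 1) * rho) / n_post
    if analysis == "post":          # mean of post-randomization measurements
        factor = post
    elif analysis == "change":      # post mean minus pre mean
        factor = post + (1 + (n_pre - 1) * rho) / n_pre - 2 * rho
    elif analysis == "ancova":      # post mean adjusted for pre mean
        factor = post - n_pre * rho**2 / (1 + (n_pre - 1) * rho)
    else:
        raise ValueError(analysis)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * factor * (z * sigma / delta) ** 2)

# Example: detect a 0.4 SD treatment effect with within-subject correlation 0.6.
for pre, post in [(1, 1), (1, 3), (3, 3)]:
    n = n_per_group(delta=0.4, sigma=1.0, rho=0.6, n_pre=pre, n_post=post)
    print(f"{pre} pre / {post} post measurements -> n = {n} per group")
```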

  1. Repeatability and Comparison of Keratometry Values Measured with Potec PRK-6000 Autorefractometer, IOLMaster, and Pentacam

    Directory of Open Access Journals (Sweden)

    Adem Türk

    2014-05-01

    Objectives: To investigate the repeatability and intercompatibility of keratometry values measured with the Potec PRK-6000 autorefractometer, IOLMaster, and Pentacam. Materials and Methods: In this prospective study, consecutive measurements were performed in two different sessions with the three devices on 110 eyes of 55 subjects who had no ocular pathology other than refractive error. The consistency of flat and steep keratometry, average keratometry, and corneal astigmatism values obtained in the two sessions was compared using the intraclass correlation coefficient (ICC). The measurement differences between the devices were also compared statistically. Results: The mean age of the study subjects was 23.05±3.01 (range, 18-30) years. ICC values of average keratometry measurements obtained in the sessions were 0.996 for the Potec PRK-6000 autorefractometer, 0.997 for the IOLMaster, and 0.999 for the Pentacam. Bland-Altman analysis showed high compatibility between the three devices in terms of average keratometry values. However, there were statistically significant differences between the devices for parameters other than corneal astigmatism. Conclusion: The repeatability of the three devices in keratometry measurements was considerably high. However, the devices should not be substituted for one another in keratometry measurements. (Turk J Ophthalmol 2014; 44: 179-83)

  2. Repeatability and Reproducibility of Compression Strength Measurements Conducted According to ASTM E9

    Science.gov (United States)

    Luecke, William E.; Ma, Li; Graham, Stephen M.; Adler, Matthew A.

    2010-01-01

    Ten commercial laboratories participated in an interlaboratory study to establish the repeatability and reproducibility of compression strength tests conducted according to ASTM International Standard Test Method E9. The test employed a cylindrical aluminum AA2024-T351 test specimen. Participants measured elastic modulus and 0.2 % offset yield strength, YS(0.2 % offset), using an extensometer attached to the specimen. The repeatability and reproducibility of the yield strength measurement, expressed as coefficients of variation, were cv_r = 0.011 and cv_R = 0.020. The reproducibility of the test across the laboratories was among the best that has been reported for uniaxial tests. The reported data indicated that using diametrically opposed extensometers, instead of a single extensometer, doubled the precision of the test method. Laboratories that did not lubricate the ends of the specimen measured yield stresses and elastic moduli that were smaller than those measured in laboratories that lubricated the specimen ends. A finite element analysis of the test specimen deformation under frictionless and perfect-friction conditions could not explain the discrepancy, however. The moduli measured from stress-strain data were reanalyzed using a technique that finds the optimal fit range and applies several quality checks to the data. The error in modulus measurements from stress-strain curves generally increased as the fit range decreased to less than 40 % of the stress range.
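
    Repeatability and reproducibility coefficients of variation of the kind reported above can be estimated from the lab-by-replicate table of an interlaboratory study. The sketch below uses a standard one-way variance decomposition (in the spirit of ASTM E691); the lab count, replicate count, and variance components are illustrative assumptions, not the study's data.

```python
import numpy as np

def interlab_cv(y):
    """Repeatability (cv_r) and reproducibility (cv_R) coefficients of variation
    from y with shape (n_labs, n_replicates_per_lab)."""
    n_labs, n_rep = y.shape
    grand = y.mean()
    s_r2 = np.mean(np.var(y, axis=1, ddof=1))                      # within-lab variance
    s_L2 = max(np.var(y.mean(axis=1), ddof=1) - s_r2 / n_rep, 0)   # between-lab variance
    return np.sqrt(s_r2) / grand, np.sqrt(s_r2 + s_L2) / grand

# Illustrative: 10 labs, 5 yield-strength replicates each (MPa).
rng = np.random.default_rng(7)
lab_bias = rng.normal(0, 6, 10)
y = 345 + lab_bias[:, None] + rng.normal(0, 4, (10, 5))
cv_r, cv_R = interlab_cv(y)
print(f"cv_r = {cv_r:.3f}, cv_R = {cv_R:.3f}")
```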

  3. The effect of repeated measurements and working memory on the most comfortable level in the ANL test

    DEFF Research Database (Denmark)

    Brännström, K Jonas; Olsen, Steen Østergaard; Holm, Lucas

    2014-01-01

    interleaved methodology during one session using a non-semantic version. Phonological (PWM) and visuospatial working memory (VSWM) was measured. STUDY SAMPLE: Thirty-two normal-hearing adults. RESULTS: Repeated measures ANOVA, intraclass correlations, and the coefficient of repeatability (CR) were used...

  4. Effect of repeated contact on adhesion measurements involving polydimethylsiloxane structural material

    Science.gov (United States)

    Kroner, E.; Maboudian, R.; Arzt, E.

    2009-09-01

    During the last few years several research groups have focused on the fabrication of artificial gecko inspired adhesives. For mimicking these structures, different polymers are used as structure material, such as polydimethylsiloxanes (PDMS), polyurethanes (PU), and polypropylene (PP). While these polymers can be structured easily and used for artificial adhesion systems, the effects of repeated adhesion testing have never been investigated closely. In this paper we report on the effect of repeated adhesion measurements on the commercially available poly(dimethylsiloxane) polymer kit Sylgard 184 (Dow Corning). We show that the adhesion force decreases as a function of contact cycles. The rate of change and the final value of adhesion are found to depend on the details of the PDMS synthesis and structuring.

  5. Effect of repeated contact on adhesion measurements involving polydimethylsiloxane structural material

    Energy Technology Data Exchange (ETDEWEB)

    Kroner, E; Arzt, E [INM-Leibniz Institute for New Materials, Campus D2 2, 66125 Saarbruecken (Germany); Maboudian, R, E-mail: elmar.kroner@inm-gmbh.de [Department of Chem. Eng., 201 Gilman Hall, University of California, Berkeley, CA 94720-1462 (United States)

    2009-09-15

    During the last few years several research groups have focused on the fabrication of artificial gecko inspired adhesives. For mimicking these structures, different polymers are used as structure material, such as polydimethylsiloxanes (PDMS), polyurethanes (PU), and polypropylene (PP). While these polymers can be structured easily and used for artificial adhesion systems, the effects of repeated adhesion testing have never been investigated closely. In this paper we report on the effect of repeated adhesion measurements on the commercially available poly(dimethylsiloxane) polymer kit Sylgard 184 (Dow Corning). We show that the adhesion force decreases as a function of contact cycles. The rate of change and the final value of adhesion are found to depend on the details of the PDMS synthesis and structuring.

  6. [Repeated measurement of memory with valenced test items: verbal memory, working memory and autobiographic memory].

    Science.gov (United States)

    Kuffel, A; Terfehr, K; Uhlmann, C; Schreiner, J; Löwe, B; Spitzer, C; Wingenfeld, K

    2013-07-01

    Many questions in clinical and/or experimental neuropsychology require that memory tests be repeated several times at relatively short intervals. Studies on the impact of the associated practice and interference effects on the validity of the test results are rare. Moreover, hardly any neuropsychological instruments exist to date that assess memory performance with several parallel versions in which the emotional valence of the test material is also taken into consideration. The aim of the present study was to test whether a working memory test (WST, a digit-span task with neutral or negative distraction stimuli) devised by our workgroup can be used with repeated measurements. This question was also examined for parallel versions of a wordlist learning paradigm and an autobiographical memory test (AMT). Both tests contained stimuli with neutral, positive and negative valence. Twenty-four participants completed the memory testing, including the working memory test and three versions of the wordlist and the AMT, at intervals of a week (measuring points 1-3). The results reveal consistent performance across the three measuring points in the working and autobiographical memory tests. The valence of the stimulus material did not influence memory performance. In the delayed recall of the wordlist, an improvement in memory performance over time was seen. The working memory test presented here and the parallel versions for declarative and autobiographical memory constitute practical, economical instruments for repeated-measures designs. While the WST and AMT are appropriate for study designs with repeated measurements at relatively short intervals, longer intervals might be more favourable for the use of wordlist learning paradigms. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    Science.gov (United States)

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which once corrected can yield a usable thigh volume measurement.

  8. Accuracy and repeatability of an inertial measurement unit system for field-based occupational studies.

    Science.gov (United States)

    Schall, Mark C; Fethke, Nathan B; Chen, Howard; Oyama, Sakiko; Douphrate, David I

    2016-04-01

    The accuracy and repeatability of an inertial measurement unit (IMU) system for directly measuring trunk angular displacement and upper arm elevation were evaluated over eight hours (i) in comparison to a gold standard, optical motion capture (OMC) system in a laboratory setting, and (ii) during a field-based assessment of dairy parlour work. Sample-to-sample root mean square differences between the IMU and OMC system ranged from 4.1° to 6.6° for the trunk and 7.2°-12.1° for the upper arm depending on the processing method. Estimates of mean angular displacement and angular displacement variation (difference between the 90th and 10th percentiles of angular displacement) were observed to change over the eight-hour sampling duration. The IMU system may serve as an acceptable instrument for directly measuring trunk and upper arm postures in field-based occupational exposure assessment studies with long sampling durations. Practitioner Summary: Few studies have evaluated inertial measurement unit (IMU) systems in the field or over long sampling durations. Results of this study indicate that the IMU system evaluated has reasonably good accuracy and repeatability for use in a field setting over a long sampling duration.

  9. An L-band SAR for repeat pass deformation measurements on a UAV platform

    Science.gov (United States)

    Hensley, Scott; Lou, Yunling; Rosen, Paul; Wheeler, Kevin; Zebker, Howard; Madsen, Soren; Miller, Tim; Hoffman, Jim; Farra, Don

    2003-01-01

    We are proposing to develop a miniaturized polarimetric L-band synthetic aperture radar (SAR) for repeat-pass differential interferometric measurements of deformation for rapidly deforming surfaces of geophysical interest, such as volcanoes or earthquakes, that is to be flown on an unmanned aerial vehicle (UAV) or minimally piloted vehicle (MPV). Upon surveying the capabilities and availabilities of such aircraft, the Proteus aircraft and the ALTAIR UAV appear to meet our criteria in terms of payload capabilities, flying altitude, and endurance. To support the repeat-pass deformation capability it is necessary to control the flight track of the aircraft to within a specified 10 m tube, with a goal of 1 m. This requires real-time GPS control of the autopilot, which has not been demonstrated on these aircraft. Based on the Proteus and ALTAIR's altitude of 13.7 km (45,000 ft), we are designing a fully polarimetric L-band radar with 80 MHz bandwidth and a 16 km range swath. The radar will have an active electronic beam steering antenna to achieve the Doppler centroid stability that is necessary for repeat-pass interferometry. This paper presents some of the trade studies for the platform and instrument, and the expected science.

  10. Measurement of bedform migration rates on the Lower Missouri River in Missouri, USA using repeat measurements with a multibeam echosounder

    Science.gov (United States)

    Elliott, Caroline M.; Jacobson, Robert B.

    2016-01-01

    High-resolution repeat multibeam echosounder measurements on the Lower Missouri River near Boonville, Missouri, USA show bedform movement and sand storage patterns over daily to seasonal time scales and a range of discharges. Higher flows are frequently, but not always, associated with larger bedforms, higher bedform movement rates, and higher bedload transport rates. Measurements of the temporal and spatial variability in sand dune sizes, transport rates, and sand storage across the river channel have increased understanding of the dynamics of habitats utilized by benthic organisms over multiple life stages and daily to seasonal time scales.

  11. Specific Measurement of Tethered Running Kinetics and its Relationship to Repeated Sprint Ability

    Directory of Open Access Journals (Sweden)

    Sousa Filipe

    2015-12-01

    Repeated sprint ability has been widely studied by researchers; however, analysis of the relationship between most kinetic variables and the effect of fatigue is still an ongoing process. To search for the best biomechanical parameter to evaluate repeated sprint ability, several kinetic variables were measured in a tethered field running test and compared regarding their sensitivity to fatigue and correlation with time trials in a free running condition. Nine male sprint runners (best average times: 100 m = 10.45 ± 0.07 s; 200 m = 21.36 ± 0.17 s; 400 m = 47.35 ± 1.09 s) completed two test sessions on a synthetic track. Each session consisted of six 35 m sprints interspersed by 10 s rest under tethered field running or free running conditions. Force, power, work, impulse and rate of force development were all directly measured using the sensors of a new tethered running apparatus, and a one-way ANOVA with Scheffé post-hoc test was used to verify differences between sprints (p < 0.05). Pearson product-moment correlation measured the relationship between mechanical variables and free running performance. Total impulse, rate of force development and maximum force did not show significant differences for most sprints. These three variables presented low to moderate correlations with free running performance (r between 0.01 and -0.35). Maximum and mean power presented the strongest correlations with free running performance (r = -0.71 and -0.76, respectively; p < 0.001), followed by mean force (r = -0.61; p < 0.001) and total work (r = -0.50; p < 0.001). It was concluded that under a severe work-to-rest ratio condition, power variables were better suited to evaluating repeated sprint ability than the other studied variables.

  12. Characterisation and measurement of signals generated by DVB-H 'GAP-filler' repeaters.

    Science.gov (United States)

    Baldini, M; Barellini, A; Bogi, L; Licitra, G; Silvi, A M; Zari, A

    2009-12-01

    DVB-H (Digital Video Broadcasting Handheld) is the standard developed by DVB Project and approved by ETSI with the aim of providing the reception of DVB signals even in mobility but also data transfers and multimedia services. The introduction and development of the DVB-H system is still ongoing. In this context, this work focuses on the temporal trend of electromagnetic impact of an urban DVB-H repeater (called 'gap-filler') for exposure assessment purposes; it also describes a method for its measurement by means of narrow band instrumental chains.

  13. Measuring the Activity of Leucine-Rich Repeat Kinase 2: A Kinase Involved in Parkinson's Disease

    Science.gov (United States)

    Lee, Byoung Dae; Li, Xiaojie; Dawson, Ted M.; Dawson, Valina L.

    2015-01-01

    Mutations in the LRRK2 (Leucine-Rich Repeat Kinase 2) gene are the most common cause of autosomal dominant Parkinson's disease. LRRK2 has multiple functional domains including a kinase domain. The kinase activity of LRRK2 is implicated in the pathogenesis of Parkinson's disease. Developing an assay to understand the mechanisms of LRRK2 kinase activity is important for the development of pharmacologic and therapeutic applications. Here, we describe how to measure in vitro LRRK2 kinase activity and its inhibition. PMID:21960214

  14. Iterative Weighted Semiparametric Least Squares Estimation in Repeated Measurement Partially Linear Regression Models

    Institute of Scientific and Technical Information of China (English)

    Ge-mai Chen; Jin-hong You

    2005-01-01

    Consider a repeated measurement partially linear regression model with an unknown parameter vector β. Starting from the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results are generalizations of those in [2] to the case of semiparametric regressions.

  15. Measuring Starlight Deflection during the 2017 Eclipse: Repeating the Experiment that made Einstein Famous

    Science.gov (United States)

    Bruns, Donald

    2016-05-01

    In 1919, astronomers performed an experiment during a solar eclipse, attempting to measure the deflection of stars near the sun, in order to verify Einstein's theory of general relativity. The experiment was very difficult and the results were marginal, but the success made Albert Einstein famous around the world. Astronomers last repeated the experiment in 1973, achieving an error of 11%. In 2017, using amateur equipment and modern technology, I plan to repeat the experiment and achieve a 1% error. The best available star catalog will be used for star positions. Corrections for optical distortion and atmospheric refraction are better than 0.01 arcsec. During totality, I expect 7 or 8 measurable stars down to magnitude 9.5, based on analysis of previous eclipse measurements taken by amateurs. Reference images, taken near the sun during totality, will be used for precise calibration. Preliminary test runs performed during twilight in April 2016 and April 2017 can accurately simulate the sky conditions during totality, providing an accurate estimate of the final uncertainty.

  16. Repeatability and Accuracy of Exoplanet Eclipse Depths Measured with Post-Cryogenic Spitzer

    CERN Document Server

    Ingalls, James G; Carey, S J; Stauffer, John R; Lawrence, Patrick J; Grillmair, Carl J; Buzasi, Derek; Deming, Drake; Diamond-Lowe, Hannah; Evans, Thomas M; Morello, G; Stevenson, Kevin B; Wong, Ian; Capak, Peter; Glaccum, William; Laine, Seppo; Surace, Jason; Storrie-Lombardi, Lisa

    2016-01-01

    We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. We have re-analyzed an existing 4.5 μm dataset, consisting of 10 observations of the XO-3 b system during secondary eclipse, using 7 different techniques for removing correlated noise. We find that, on average, for a given technique the eclipse depth estimate is repeatable from epoch to epoch to within 150 parts per million (ppm). Most techniques derive eclipse depths that do not vary by more than a factor of 2 of the photon noise limit. Nearly all methods accurately assess their own errors: for these methods the individual measurement uncertainties are comparable to the scatter in eclipse depths over the 10-epoch sample. To assess the accuracy of the techniques as well as clarify the difference between instrumental and other sources of measurement error, we have also analyzed a simulated dataset ...

  17. Repeatability and reproducibility of horizontal corneal diameter and anterior corneal power measurements using the Oculus Keratograph 4

    Directory of Open Access Journals (Sweden)

    Khathutshelo P. Mashige

    2016-03-01

    Purpose: To evaluate the repeatability and reproducibility of horizontal corneal diameter (HCD) and anterior corneal power (ACP) measurements obtained with the Oculus Keratograph 4 (OCULUS Optikgeräte GmbH). Methods: These parameters (HCD and ACP) were prospectively measured in quick succession three times in each of the right eyes of 40 healthy subjects, aged 18–28 years, with normal vision (6/6 or better visual acuity) in the first session by a single examiner. Measurements were then repeated in a second session scheduled 1 week later by the same examiner using the same instrument. Repeatability and reproducibility of the HCD and ACP measurements were assessed based on the intra-session and intersession within-subject standard deviation (sw), repeatability (2.77sw), coefficient of variation (CoV) and intra-class correlation coefficient (ICC). Results: Intra-session repeatability and intersession reproducibility of all measured parameters showed a repeatability (2.77sw) of 0.35 mm or less for HCD and 0.35 D or less for ACP, a CoV of 0.30% or less and an ICC of more than 0.9. Conclusion: HCD and ACP measurements obtained using an Oculus Keratograph 4 show good repeatability and reproducibility in healthy eyes; therefore, these parameters can be used for longitudinal follow-up when measured with this device.
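
    The intra-session statistics quoted above (within-subject standard deviation sw, repeatability 2.77sw, and CoV) can be computed as in the following hedged sketch; the simulated corneal-diameter readings and their noise level are assumptions made only for illustration.

```python
import numpy as np

def repeatability_stats(y):
    """Within-subject SD (sw), repeatability (2.77*sw) and CoV (%) from a matrix of
    shape (n_subjects, n_repeats) taken within one session."""
    sw = np.sqrt(np.mean(np.var(y, axis=1, ddof=1)))   # pooled within-subject SD
    return sw, 2.77 * sw, 100 * sw / y.mean()

# Illustrative: three successive horizontal corneal diameter readings (mm) per eye.
rng = np.random.default_rng(6)
true_hcd = rng.normal(11.8, 0.4, 40)                   # 40 right eyes
session = true_hcd[:, None] + rng.normal(0, 0.03, (40, 3))
sw, rep, cov = repeatability_stats(session)
print(f"sw = {sw:.3f} mm, repeatability (2.77*sw) = {rep:.2f} mm, CoV = {cov:.2f}%")
```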

  18. Measuring the Dynamic Soil Response During Repeated Wheeling Using Seismic Methods

    DEFF Research Database (Denmark)

    Keller, Thomas; Carizzon, Marco; Berisso, Feto Esimo

    2013-01-01

    Our understanding of soil deformation processes, especially its dynamics, remains limited. This hampers accurate predictions of the impact of soil management practices such as agricultural field traffic on (physical) soil functions. The main objective of this study was to investigate whether...... seismic measurements could be used to assess the dynamic soil behavior during repeated loading. Moreover, we aimed at linking the velocity of P-waves, Vp, to traditionally measured soil properties associated with soil compaction, namely bulk density (ρb) and penetrometer resistance. A wheeling experiment......, and simulated the evolution of bulk density due to the wheeling using a soil compaction model. The dynamic soil response during loading–unloading–reloading cycles could be well captured with the seismic method. The measured Vp related to bulk density, and the compaction-induced increase in Vp correlated...

  19. Information Contents of a Signal at Repeated Positioning Measurements of the Coordinate Measuring Machine (CMM) by Laser Interferometer

    Directory of Open Access Journals (Sweden)

    Stejskal Tomáš

    2016-10-01

    The contribution of this paper lies in showing how the condition of a coordinate measuring machine (CMM) can be determined from a large number of repeated measurements. The number of repeated measurements exceeds common requirements for determining positioning accuracy. The total offset in the accuracy of spatial positioning consists of partial inaccuracies of the individual axes. Six basic errors may be defined for each axis. In a triaxial set, that translates into 18 errors, to which an offset from the perpendicularity between the axial pairs must be added. Therefore, the combined number of errors in a single position is 21. These errors are systematic and stem from the machine's geometry. In addition, there are accidental errors to account for as well. Accidental errors can be attributed to vibrations, mass inertia, passive resistance, and in part to fluctuations in temperature. A particular class of systematic errors are time-varying errors. The nature of those errors may be reversible, for instance if they result from the influence of temperature or elastic deformation. They can also be irreversible, for example as a result of wear and tear or line clogging, due to a loosened connection, or due to permanent deformation of a part after a collision. Thermal equalization of the machine's parts may also be observed if an insufficient time interval has elapsed since the air-conditioning was turned on. Repeated measurements on a selected axis with a laser interferometer can provide comprehensive information on the CMM condition and also on the machine's interaction with its technical environment.

  20. Specific Measurement of Tethered Running Kinetics and its Relationship to Repeated Sprint Ability.

    Science.gov (United States)

    Sousa, Filipe; Dos Reis, Ivan; Ribeiro, Luiz; Martins, Luiz; Gobatto, Claudio

    2015-12-22

    Repeated sprint ability has been widely studied by researchers; however, analysis of the relationship between most kinetic variables and the effect of fatigue is still an ongoing process. To search for the best biomechanical parameter to evaluate repeated sprint ability, several kinetic variables were measured in a tethered field running test and compared regarding their sensitivity to fatigue and correlation with time trials in a free running condition. Nine male sprint runners (best average times: 100 m = 10.45 ± 0.07 s; 200 m = 21.36 ± 0.17 s; 400 m = 47.35 ± 1.09 s) completed two test sessions on a synthetic track. Each session consisted of six 35 m sprints interspersed by 10 s rest under tethered field running or free running conditions. Force, power, work, impulse and rate of force development were all directly measured using the sensors of a new tethered running apparatus, and a one-way ANOVA with Scheffé post-hoc test was used to verify differences between sprints (p < 0.05). Pearson product-moment correlation measured the relationship between mechanical variables and free running performance. Total impulse, rate of force development and maximum force did not show significant differences for most sprints. These three variables presented low to moderate correlations with free running performance (r between 0.01 and -0.35). Maximum and mean power presented the strongest correlations with free running performance (r = -0.71 and -0.76, respectively; p < 0.001), followed by mean force (r = -0.61; p < 0.001) and total work (r = -0.50; p < 0.001). It was concluded that under a severe work-to-rest ratio condition, power variables were better suited to evaluating repeated sprint ability than the other studied variables.

  1. On the Analysis of a Repeated Measure Design in Genome-Wide Association Analysis

    Directory of Open Access Journals (Sweden)

    Young Lee

    2014-11-01

    Longitudinal data enable detection of the effect of aging/time and, as a repeated measures design, are statistically more efficient than cross-sectional data if the correlations between repeated measurements are not large. In particular, when genotyping cost is more expensive than phenotyping cost, the collection of longitudinal data can be an efficient strategy for genetic association analysis. However, in spite of these advantages, genome-wide association studies (GWAS) with longitudinal data have rarely been analyzed taking this into account. In this report, we calculate the required sample size to achieve 80% power at the genome-wide significance level for both longitudinal and cross-sectional data, and compare their statistical efficiency. Furthermore, we analyzed the GWAS of eight phenotypes with three observations on each individual in the Korean Association Resource (KARE). A linear mixed model allowing for the correlations between observations for each individual was applied to analyze the longitudinal data, and linear regression was used to analyze the first observation on each individual as cross-sectional data. We found 12 novel genome-wide significant disease susceptibility loci that were then confirmed in the Health Examination cohort, as well as some significant interactions between age/sex and SNPs.

  2. Role of Repeat Muscle Compartment Pressure Measurements in Chronic Exertional Compartment Syndrome of the Lower Leg

    Science.gov (United States)

    van Zantvoort, Aniek P. M.; de Bruijn, Johan A.; Winkes, Michiel B.; Hoogeveen, Adwin R.; Teijink, Joep A. W.; Scheltinga, Marc R.

    2017-01-01

    Background: The diagnostic gold standard for diagnosing chronic exertional compartment syndrome (CECS) is a dynamic intracompartmental pressure (ICP) measurement of the muscle. The potential role of a repeat ICP (re-ICP) measurement in patients with persistent lower leg symptoms after surgical decompression or with ongoing symptoms after an earlier normal ICP is unknown. Purpose: To study whether re-ICP measurements in patients with persistent CECS-like symptoms of the lower leg may contribute to the diagnosis of CECS after both surgical decompression and a previously normal ICP measurement. Study Design: Case series; Level of evidence, 4. Methods: Charts of patients who underwent re-ICP measurement of lower leg compartments (anterior [ant], deep posterior [dp], and/or lateral [lat] compartments) between 2001 and 2013 were retrospectively studied. CECS was diagnosed on the basis of generally accepted cutoff pressures for newly onset CECS (Pedowitz criteria: ICP at rest ≥15 mmHg, ≥30 mmHg after 1 minute, or ≥20 mmHg 5 minutes after a provocative test). Factors predicting recurrent CECS after surgery or after a previously normal ICP measurement were analyzed. Results: A total of 1714 ICP measurements were taken in 1513 patients with suspected CECS over a 13-year observation period. In all, 201 (12%) tests were re-ICP measurements for persistent lower leg symptoms. Based on the proposed ICP cutoff values, CECS recurrence was diagnosed in 16 of 62 previously operated compartments (recurrence rate, 26%; 53 patients [64% female]; median age, 24 years; age range, 15-78 years). Recurrence rates were not different among the 3 lower leg CECS compartments (ant-CECS, 17%; dp-CECS, 33%; lat-CECS, 30%; χ2 = 1.928, P = .381). Sex (χ2 = 0.058, P = .810), age (U = 378, z = 1.840, P = .066), bilaterality (χ2 = 0.019, P = .889), and prefasciotomy ICP did not predict recurrence. Re-ICP measurements evaluating 20 compartments with previously normal ICP measurements (15

  3. Role of Repeat Muscle Compartment Pressure Measurements in Chronic Exertional Compartment Syndrome of the Lower Leg.

    Science.gov (United States)

    van Zantvoort, Aniek P M; de Bruijn, Johan A; Winkes, Michiel B; Hoogeveen, Adwin R; Teijink, Joep A W; Scheltinga, Marc R

    2017-06-01

    The diagnostic gold standard for diagnosing chronic exertional compartment syndrome (CECS) is a dynamic intracompartmental pressure (ICP) measurement of the muscle. The potential role of a repeat ICP (re-ICP) measurement in patients with persistent lower leg symptoms after surgical decompression or with ongoing symptoms after an earlier normal ICP is unknown. To study whether re-ICP measurements in patients with persistent CECS-like symptoms of the lower leg may contribute to the diagnosis of CECS after both surgical decompression and a previously normal ICP measurement. Case series; Level of evidence, 4. Charts of patients who underwent re-ICP measurement of lower leg compartments (anterior [ant], deep posterior [dp], and/or lateral [lat] compartments) between 2001 and 2013 were retrospectively studied. CECS was diagnosed on the basis of generally accepted cutoff pressures for newly onset CECS (Pedowitz criteria: ICP at rest ≥15 mmHg, ≥30 mmHg after 1 minute, or ≥20 mmHg 5 minutes after a provocative test). Factors predicting recurrent CECS after surgery or after a previously normal ICP measurement were analyzed. A total of 1714 ICP measurements were taken in 1513 patients with suspected CECS over a 13-year observation period. In all, 201 (12%) tests were re-ICP measurements for persistent lower leg symptoms. Based on the proposed ICP cutoff values, CECS recurrence was diagnosed in 16 of 62 previously operated compartments (recurrence rate, 26%; 53 patients [64% female]; median age, 24 years; age range, 15-78 years). Recurrence rates were not different among the 3 lower leg CECS compartments (ant-CECS, 17%; dp-CECS, 33%; lat-CECS, 30%; χ(2) = 1.928, P = .381). Sex (χ(2) = 0.058, P = .810), age (U = 378, z = 1.840, P = .066), bilaterality (χ(2) = 0.019, P = .889), and prefasciotomy ICP did not predict recurrence. Re-ICP measurements evaluating 20 compartments with previously normal ICP measurements (15 patients [53% female]; mean age, 31 ± 10 years

  4. ±25ppm repeatable measurement of trapezoidal pulses with 5MHz bandwidth

    CERN Document Server

    Arpaia, Pasquale; Cerqueira Bastos, Miguel; Martino, Michele

    2015-01-01

    High-quality measurements of pulses are nowadays widely used in fields such as radars, pulsed lasers, electromagnetic pulse generators, and particle accelerators. Whilst the literature is mainly focused on fast systems for the nanosecond regime with relaxed metrological requirements, this paper addresses the high-performance measurement of slower pulses in the microsecond regime. In particular, an experimental demonstration of a 15 MS/s, ±25 ppm repeatable acquisition system to characterize the flat-top of 3 ms rise-time trapezoidal pulses is given. The system exploits a 5 MHz bandwidth circuit for analogue signal processing based on the concept of flat-top removal. The requirements, as well as the conceptual and physical designs, are illustrated. Simulation results aimed at assessing the circuit performance are also presented. Finally, an experimental case study on the characterization of a pulsed power supply for the klystron modulators of the Compact Linear Collider (CLIC) under study at CERN is reported. In ...

  5. Addressing the need for repeat prostate biopsy: new technology and approaches.

    Science.gov (United States)

    Blute, Michael L; Abel, E Jason; Downs, Tracy M; Kelcz, Frederick; Jarrard, David F

    2015-08-01

    No guidelines currently exist that address the need for rebiopsy in patients with a negative diagnosis of prostate cancer on initial biopsy sample analysis. Accurate diagnosis of prostate cancer in these patients is often complicated by continued elevation of serum PSA levels that are suggestive of prostate cancer, resulting in a distinct management challenge. Following negative initial findings of biopsy sample analysis, total serum PSA levels and serum PSA kinetics are ineffective indicators of a need for a repeat biopsy; therefore, patients suspected of having prostate cancer might undergo several unnecessary biopsy procedures. Several alternative strategies exist for identifying men who might be at risk of prostate cancer despite negative findings of biopsy sample analysis. Use of other serum PSA-related measurements enables more sensitive and specific diagnosis and can be combined with knowledge of clinicopathological features to improve outcomes. Other options include the FDA-approved Progensa(®) test and prostate imaging using MRI. Newer tissue-based assays that measure methylation changes in normal prostate tissue are currently being developed. A cost-effective strategy is proposed in order to address this challenging clinical scenario, and potential directions of future studies in this area are also described.

  6. Designed ankyrin repeat proteins: a new approach to mimic complex antigens for diagnostic purposes?

    Directory of Open Access Journals (Sweden)

    Stefanie Hausammann

    Inhibitory antibodies directed against coagulation factor VIII (FVIII) can be found in patients with acquired and congenital hemophilia A. Such FVIII-inhibiting antibodies are routinely detected by the functional Bethesda assay. However, this assay has a low sensitivity and shows high inter-laboratory variability. Another method to detect antibodies recognizing FVIII is ELISA, but this test does not allow the distinction between inhibitory and non-inhibitory antibodies. Therefore, we aimed at replacing the intricate antigen FVIII with Designed Ankyrin Repeat Proteins (DARPins) mimicking the epitopes of FVIII inhibitors. As a model we used the well-described inhibitory human monoclonal anti-FVIII antibody, Bo2C11, for the selection on DARPin libraries. Two DARPins were selected that bind to the antigen-binding site of Bo2C11 and thus mimic a functional epitope on FVIII. These DARPins inhibited the binding of the antibody to its antigen and restored FVIII activity as determined in the Bethesda assay. Furthermore, the specific DARPins were able to recognize the target antibody in human plasma and could therefore be used to test for the presence of Bo2C11-like antibodies in a large set of hemophilia A patients. These data suggest that our approach might be used to isolate epitopes from different sets of anti-FVIII antibodies in order to develop an ELISA-based screening assay allowing the distinction of inhibitory and non-inhibitory anti-FVIII antibodies according to their antibody signatures.

  7. Impact of repeated measures and sample selection on genome-wide association studies of fasting glucose

    Science.gov (United States)

    Rasmussen-Torvik, Laura J.; Alonso, Alvaro; Li, Man; Kao, Wen; Köttgen, Anna; Yan, Yuer; Couper, David; Boerwinkle, Eric; Bielinski, Suzette J.; Pankow, James S.

    2010-01-01

    Although GWAS have been performed in longitudinal studies, most used only a single trait measure. GWAS of fasting glucose have generally included only normoglycemic individuals. We examined the impact of both repeated measures and sample selection on GWAS in ARIC, a study which obtained four longitudinal measures of fasting glucose and included both individuals with and without prevalent diabetes. The sample included Caucasians and the Affymetrix 6.0 chip was used for genotyping. Sample sizes for GWAS analyses ranged from 8372 (first study visit) to 5782 (average fasting glucose). Candidate SNP analyses with SNPs identified through fasting glucose or diabetes GWAS were conducted in 9133 individuals, including 761 with prevalent diabetes. For a constant sample size, smaller p-values were obtained for the average measure of fasting glucose compared to values at any single visit, and two additional significant GWAS signals were detected. For four candidate SNPs (rs780094, rs10830963, rs7903146, and rs4607517), the strength of association between genotype and glucose differed significantly (p-interaction) between those with and without prevalent diabetes. For five fasting glucose candidate SNPs (rs780094, rs10830963, rs560887, rs4607517, rs13266634) the association with measured fasting glucose was more significant in the smaller sample without prevalent diabetes than in the larger combined sample of those with and without diabetes. This analysis demonstrates the potential utility of averaging trait values in GWAS studies and explores the advantage of using only individuals without prevalent diabetes in GWAS of fasting glucose. PMID:20839289

  8. Simple and Practical Approaches for Upgrading Installed Electronic-Repeater-Based Fiber Systems to Optically Amplified Systems

    Institute of Scientific and Technical Information of China (English)

    Pasu Kaewplung; Wadis Kasantikul

    2003-01-01

    We propose simple and practical approaches to upgrading electronic-repeater-based systems by using optical amplifiers and zero-dispersion wavelength transmission. The possibility of increasing the data rate from 560 Mbit/s to 80 Gbit/s in the 1,318-km-long Thailand-Malaysia system is demonstrated.

  9. A novel PCR-based approach for the detection of the Huntington disease associated trinucleotide repeat expansion.

    Science.gov (United States)

    Panagopoulos, I; Lassen, C; Kristoffersson, U; Aman, P

    1999-01-01

    Huntington disease (HD) is an autosomal dominant neurodegenerative disorder associated with expansions of an unstable CAG trinucleotide repeat in exon 1 of the IT15 gene. In normal individuals, IT15 contains up to 35 CAG repeats, while in affected individuals the repeat length is >36. Polymerase chain reaction (PCR) is used to estimate the number of CAG repeats but may be inefficient in long repeats because of the high C+G content of the HD locus. We present a novel PCR approach for the diagnosis of HD, which permits direct visualization of the amplified products on agarose gel using ethidium bromide. It is based on the methylation-sensitive conversion of C residues to U by bisulfite treatment of single-stranded DNA and subsequent amplification of the sense strand with specific primers. The bisulfite treatment dramatically reduces the C+G content of the region; thus, the high Tm and stable secondary structures are no longer obstacles to PCR. In both normal and affected individuals, UAG repeats (5'-CAG-3' before bisulfite treatment) in the sense strand can easily be amplified and visualized on a gel by ethidium bromide staining. The method has considerable advantages compared with other described PCR-based diagnostic tests for HD.

  10. Graphic Methods for Interpreting Longitudinal Dyadic Patterns From Repeated-Measures Actor-Partner Interdependence Models

    DEFF Research Database (Denmark)

    Perry, Nicholas; Baucom, Katherine; Bourne, Stacia

    2017-01-01

    Researchers commonly use repeated-measures actor–partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal...... patterns estimated in RM-APIM. Interpretation of results from these models largely focuses on the meaning of single-parameter estimates in isolation from all the others. However, considering individual coefficients separately impedes the understanding of how these associations combine to produce...... to improve the understanding and presentation of dyadic patterns of association described by standard RM-APIMs. The current article briefly reviews the conceptual foundations of RM-APIMs, demonstrates how change-as-outcome RM-APIMs and VFDs can aid interpretation of standard RM-APIMs, and provides a tutorial...

  11. Inmate responses to prison-based drug treatment: a repeated measures analysis.

    Science.gov (United States)

    Welsh, Wayne N

    2010-06-01

    Using a sample of 347 prison inmates and general linear modeling (GLM) repeated measures analyses, this paper examined during-treatment responses (e.g., changes in psychological and social functioning) to prison-based TC drug treatment. These effects have rarely been examined in previous studies, and never with a fully multivariate model accounting for within-subjects effects (changes over time), between-subjects effects (e.g., levels of risk and motivation), and within/between-subjects interactions (time × risk × motivation). The results provide evidence of positive inmate change in response to prison TC treatment, but the patterns of results varied depending upon: (a) specific indicators of psychological and social functioning, motivation, and treatment process; (b) the time periods examined (1, 6, and 12 months during treatment); and (c) baseline levels of risk and motivation. Significant interactions between time and type of inmate suggest important new directions for research, theory, and practice in offender-based substance abuse treatment.
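
    A minimal Python sketch of the kind of model described above, fitting within-subject (time) and between-subject (risk, motivation) effects and their interaction with a random intercept per inmate; the variable names and simulated data are hypothetical, and this mixed-model formulation is an analogue of, not identical to, the GLM repeated measures analysis used in the study.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(11)
        n_subj = 120
        times = [1, 6, 12]                     # months in treatment
        df = pd.DataFrame({
            "inmate": np.repeat(np.arange(n_subj), len(times)),
            "month": np.tile(times, n_subj),
            "risk": np.repeat(rng.integers(0, 2, n_subj), len(times)),
            "motivation": np.repeat(rng.integers(0, 2, n_subj), len(times)),
        })
        # simulated functioning score: improves with time, faster for motivated
        # inmates, and is lower overall for high-risk inmates
        df["functioning"] = (50 + 0.4 * df["month"]
                             + 0.3 * df["month"] * df["motivation"]
                             - 2.0 * df["risk"]
                             + rng.normal(0, 5, len(df)))

        model = smf.mixedlm("functioning ~ month * risk * motivation",
                            data=df, groups=df["inmate"]).fit()
        print(model.summary())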

  12. Bayesian latent variable models for hierarchical clustered count outcomes with repeated measures in microbiome studies.

    Science.gov (United States)

    Xu, Lizhen; Paterson, Andrew D; Xu, Wei

    2017-04-01

    Motivated by the multivariate nature of microbiome data with hierarchical taxonomic clusters, counts that are often skewed and zero inflated, and repeated measures, we propose a Bayesian latent variable methodology to jointly model multiple operational taxonomic units within a single taxonomic cluster. This novel method can incorporate both negative binomial and zero-inflated negative binomial responses, and can account for serial and familial correlations. We develop a Markov chain Monte Carlo algorithm that is built on a data augmentation scheme using Pólya-Gamma random variables. Hierarchical centering and parameter expansion techniques are also used to improve the convergence of the Markov chain. We evaluate the performance of our proposed method through extensive simulations. We also apply our method to a human microbiome study.
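
    As a rough, much-simplified illustration of modelling over-dispersed taxon counts (a single OTU, no zero inflation, no latent variables or serial/familial correlation, and maximum likelihood rather than MCMC), the sketch below fits a negative binomial GLM with statsmodels; the column names (count, exposure, depth) and the simulated data are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(12)
        n = 300
        df = pd.DataFrame({"exposure": rng.integers(0, 2, n),
                           "depth": rng.normal(10, 0.3, n)})    # log library size
        mu = np.exp(0.5 + 0.8 * df["exposure"] + 0.2 * (df["depth"] - 10))
        # negative binomial counts with mean mu (numpy's n, p parameterization)
        df["count"] = rng.negative_binomial(n=2, p=2 / (2 + mu))

        fit = smf.glm("count ~ exposure + depth", data=df,
                      family=sm.families.NegativeBinomial(alpha=0.5)).fit()
        print(fit.summary())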

  13. Review of tandem repeat search tools: a systematic approach to evaluating algorithmic performance

    National Research Council Canada - National Science Library

    Lim, Kian Guan; Kwoh, Chee Keong; Hsu, Li Yang; Wirawan, Adrianto

    2013-01-01

    .... Over the last 10-15 years, numerous tools have been developed for searching tandem repeats, but differences in the search algorithms adopted and difficulties with parameter settings have confounded...

  14. Microcomputer-based tests for repeated-measures: Metric properties and predictive validities

    Science.gov (United States)

    Kennedy, Robert S.; Baltzley, Dennis R.; Dunlap, William P.; Wilkes, Robert L.; Kuntz, Lois-Ann

    1989-01-01

    A menu of psychomotor and mental acuity tests was refined. Field applications of such a battery are, for example, a study of the effects of toxic agents or exotic environments on performance readiness, or the determination of fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, and so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, the tests had never been administered together as a battery, nor had they been self-administered. In order to provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.

  15. Characterization of the peripheral blood transcriptome in a repeated measures design using a panel of healthy individuals

    DEFF Research Database (Denmark)

    De Boever, Patrick; Wens, Britt; Forcheh, Anyiawung Chiara

    2014-01-01

    A repeated measures microarray design with 22 healthy, non-smoking volunteers (aging 32. ±. 5. years) was set up to study transcriptome profiles in whole blood samples. The results indicate that repeatable data can be obtained with high within-subject correlation. Probes that could discriminate....... Our study suggests that the blood transcriptome of healthy individuals is reproducible over a time period of several months. © 2013 Elsevier Inc....

  16. Status of a UAVSAR designed for repeat pass interferometry for deformation measurements

    Science.gov (United States)

    Hensley, Scott; Wheeler, Kevin; Sadowy, Greg; Miller, Tim; Shaffer, Scott; Muellerschoen, Ron; Jones, Cathleen; Zebker, Howard; Madsen, Soren; Paul, Rose

    2005-01-01

    NASA's Jet Propulsion Laboratory is currently implementing a reconfigurable polarimetric L-band synthetic aperture radar (SAR), specifically designed to acquire airborne repeat track interferometric (RTI) SAR data, also known as differential interferometric measurements. Differential interferometry can provide key deformation measurements, important for the scientific studies of earthquakes and volcanoes. Using precision real-time GPS and a sensor controlled flight management system, the system will be able to fly predefined paths with great precision. The expected performance of the flight control system will constrain the flight path to be within a 10 m diameter tube about the desired flight track. The radar will be designed to operate on a UAV (Unpiloted Aerial Vehicle) but will initially be demonstrated on a minimally piloted vehicle (MPV), such as the Proteus built by Scaled Composites, or on a NASA Gulfstream III. The radar design is fully polarimetric with an 80 MHz bandwidth (2 m range resolution) and 16 km range swath. The antenna is electronically steered along track to assure that the actual antenna pointing can be controlled independent of the wind direction and speed. Other features supported by the antenna include an elevation monopulse option and a pulse-to-pulse resteering capability that will enable some novel modes of operation. The system will nominally operate at 45,000 ft (13800 m). The program began as an Instrument Incubator Project (IIP) funded by the NASA Earth Science and Technology Office (ESTO).

  17. Repeated measures of serum glucose and insulin in relation to postmenopausal breast cancer.

    Science.gov (United States)

    Kabat, Geoffrey C; Kim, Mimi; Caan, Bette J; Chlebowski, Rowan T; Gunter, Marc J; Ho, Gloria Y F; Rodriguez, Beatriz L; Shikany, James M; Strickler, Howard D; Vitolins, Mara Z; Rohan, Thomas E

    2009-12-01

    Experimental and epidemiological evidence suggests that circulating glucose and insulin may play a role in breast carcinogenesis. However, few cohort studies have examined breast cancer risk in association with glucose and insulin levels, and studies to date have had only baseline measurements of exposure. We conducted a longitudinal study of postmenopausal breast cancer risk using the 6% random sample of women in the Women's Health Initiative clinical trials whose fasting blood samples, provided at baseline and at years 1, 3 and 6, were analyzed for glucose and insulin. In addition, a 1% sample of women in the observational study, who had glucose and insulin measured in fasting blood samples drawn at baseline and in year 3, were included in the analysis. We used Cox proportional hazards models to estimate hazard ratios and 95% confidence intervals for the association of baseline and follow-up measurements of serum glucose and insulin with breast cancer risk. All statistical tests were 2-sided. Among 5,450 women with baseline serum glucose and insulin values, 190 incident cases of breast cancer were ascertained over a median of 8.0 years of follow-up. The highest tertile of baseline insulin, relative to the lowest, was associated with a 2-fold increase in risk in the total population (multivariable hazard ratio 2.22, 95% confidence interval 1.39-3.53) and with a 3-fold increase in risk in women who were not enrolled in the intervention arm of any clinical trial (multivariable hazard ratio 3.15, 95% confidence interval 1.61-6.17). Glucose levels showed no association with risk. Analysis of the repeated measurements supported the results of the baseline analysis. These data suggest that elevated serum insulin levels may be a risk factor for postmenopausal breast cancer.
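
    A minimal sketch of the type of analysis described above: a Cox proportional hazards model for time to breast cancer with a baseline insulin indicator as the exposure, using the lifelines package on simulated data; the variable names and effect sizes are illustrative, not WHI values.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(13)
        n = 2000
        df = pd.DataFrame({"insulin_t3": rng.integers(0, 2, n),   # top vs. bottom tertile
                           "age": rng.normal(63, 7, n)})
        hazard = 0.01 * np.exp(0.7 * df["insulin_t3"] + 0.02 * (df["age"] - 63))
        df["time"] = rng.exponential(1 / hazard)
        df["event"] = (df["time"] < 8).astype(int)                # administrative censoring at 8 years
        df["time"] = df["time"].clip(upper=8)

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        cph.print_summary()      # hazard ratios and 95% confidence intervals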

  18. Repeated Blood Pressure Measurements in Childhood in Prediction of Hypertension in Adulthood.

    Science.gov (United States)

    Oikonen, Mervi; Nuotio, Joel; Magnussen, Costan G; Viikari, Jorma S A; Taittonen, Leena; Laitinen, Tomi; Hutri-Kähönen, Nina; Jokinen, Eero; Jula, Antti; Cheung, Michael; Sabin, Matthew A; Daniels, Stephen R; Raitakari, Olli T; Juonala, Markus

    2016-01-01

    Hypertension may be predicted from childhood risk factors. Repeated observations of abnormal blood pressure in childhood may enhance prediction of hypertension and subclinical atherosclerosis in adulthood compared with a single observation. Participants (1927, 54% women) from the Cardiovascular Risk in Young Finns Study had systolic and diastolic blood pressure measurements performed when aged 3 to 24 years. Childhood/youth abnormal blood pressure was defined as above the 90th or 95th percentile. After a 21- to 31-year follow-up, at the age of 30 to 45 years, hypertension (>140/90 mm Hg or antihypertensive medication) prevalence was found to be 19%. Carotid intima-media thickness was examined, and high-risk intima-media was defined as intima-media thickness >90th percentile or carotid plaques. Prediction of adulthood hypertension and high-risk intima-media was compared between one observation of abnormal blood pressure in childhood/youth and multiple observations by improved Pearson correlation coefficients and area under the receiver operating characteristic curve. When compared with a single measurement, 2 childhood/youth observations improved the correlation for adult systolic blood pressure (r=0.44 versus 0.35). In addition, 2 abnormal childhood/youth blood pressure observations increased the prediction of hypertension in adulthood (0.63 for 2 versus 0.60 for 1 observation, P=0.003). When compared with 2 measurements, a third observation did not provide any significant improvement in correlation or prediction (P always >0.05). A higher number of childhood/youth observations of abnormal blood pressure did not enhance prediction of adult high-risk intima-media thickness. Compared with a single measurement, the prediction of adult hypertension was enhanced by 2 observations of abnormal blood pressure in childhood/youth. © 2015 American Heart Association, Inc.
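
    The sketch below illustrates, on simulated data, how averaging two noisy childhood measurements can raise the area under the receiver operating characteristic curve for predicting adult hypertension compared with a single measurement; all values are hypothetical, not study data.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(14)
        n = 1900
        true_bp = rng.normal(0, 1, n)                       # latent long-term BP level
        visit1 = true_bp + rng.normal(0, 1, n)              # noisy single measurement
        visit2 = true_bp + rng.normal(0, 1, n)
        hypertension = (true_bp + rng.normal(0, 0.8, n)) > 1.0

        auc_single = roc_auc_score(hypertension, visit1)
        auc_two = roc_auc_score(hypertension, (visit1 + visit2) / 2)
        print(f"AUC single visit: {auc_single:.2f}, AUC two-visit mean: {auc_two:.2f}")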

  19. Corneal Topographic and Aberrometric Measurements Obtained with a Multidiagnostic Device in Healthy Eyes: Intrasession Repeatability

    Directory of Open Access Journals (Sweden)

    David P. Piñero

    2017-01-01

    Purpose. To evaluate the intrasession repeatability of corneal curvature, eccentricity, and aberrometric measurements obtained with a multidiagnostic device in healthy eyes. Methods. This study enrolled 107 eyes of 107 patients ranging in age from 23 to 65 years. All of them underwent a complete anterior segment examination with the VX120 system (Visionix-Luneau Technologies, Chartres, France). Three consecutive measurements were obtained. The within-subject standard deviation (Sw), intrasubject precision (1.96×Sw), and intraclass correlation coefficient (ICC) were calculated. Results. All Sw for corneal power measurements were below 0.26 D, with ICC above 0.982. The Sw for corneal astigmatism at different areas (3, 5, and 7 mm) was below 0.21 D, with ICC above 0.913. Concerning the axis of astigmatism, its Sw was below 11.27°, with ICC above 0.975. The Sw and ICC for corneal eccentricity were 0.067 and 0.957, respectively. The Sw and ICC for high-order aberration root mean square (RMS) were 0.048 µm and 0.901, respectively. For 3rd- and 4th-order aberrometric parameters, all Sw were below 0.037 µm and all ICC were higher than 0.84, except for quadrafoil RMS (ICC: 0.689). Conclusions. The multidiagnostic device evaluated is able to provide consistent measurements of corneal power, eccentricity, and third- and fourth-order aberrations in healthy eyes.
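
    The repeatability statistics reported above (within-subject standard deviation Sw, intrasubject precision 1.96×Sw, coefficient of variation, and a one-way random-effects ICC) can be computed from three consecutive measurements per eye as in the sketch below; the simulated corneal power values are illustrative only.

        import numpy as np

        rng = np.random.default_rng(15)
        n_eyes, k = 107, 3
        true_k = rng.normal(43.5, 1.5, n_eyes)                       # true corneal power (D)
        meas = true_k[:, None] + rng.normal(0, 0.15, (n_eyes, k))    # 3 repeats per eye

        within_var = meas.var(axis=1, ddof=1)          # per-eye variance over repeats
        sw = np.sqrt(within_var.mean())
        precision = 1.96 * sw
        cov = 100 * sw / meas.mean()

        # one-way random-effects ICC from between/within mean squares
        bms = k * meas.mean(axis=1).var(ddof=1)
        wms = within_var.mean()
        icc = (bms - wms) / (bms + (k - 1) * wms)

        print(f"Sw={sw:.3f} D, 1.96*Sw={precision:.3f} D, CoV={cov:.2f}%, ICC={icc:.3f}")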

  20. Measurement of repeat effects in Chicago’s criminal social network

    Directory of Open Access Journals (Sweden)

    Paul Kump

    2016-07-01

    The “near-repeat” effect is a well-known criminological phenomenon in which the occurrence of a crime incident gives rise to a temporary elevation of crime risk within close physical proximity to an initial incident. Adopting a social network perspective, we instead define a near repeat in terms of geodesic distance within a criminal social network, rather than spatial distance. Specifically, we report a statistical analysis of repeat effects in arrest data for Chicago during the years 2003–2012. We divide the arrest data into two sets (violent crimes and other crimes) and, for each set, we compare the distributions of time intervals between repeat incidents to theoretical distributions in which repeat incidents occur only by chance. We first consider the case of the same arrestee participating in repeat incidents (“exact repeats”) and then extend the analysis to evaluate repeat risks of those arrestees near one another in the social network. We observe repeat effects that diminish as a function of geodesic distance and time interval, and we estimate typical time scales for repeat crimes in Chicago.

  1. Effects of repeatability measures on results of fMRI sICA: a study on simulated and real resting-state effects.

    Science.gov (United States)

    Remes, Jukka J; Starck, Tuomo; Nikkinen, Juha; Ollila, Esa; Beckmann, Christian F; Tervonen, Osmo; Kiviniemi, Vesa; Silven, Olli

    2011-05-15

    Spatial independent components analysis (sICA) has become a widely applied data-driven method for fMRI data, especially for resting-state studies. These sICA approaches are often based on iterative estimation algorithms, and there are concerns about accuracy due to noise. Repeatability measures such as ICASSO, RAICAR and ARABICA have been introduced as remedies, but information on their effects on estimates is limited. The contribution of this study was to provide more of such information and to test whether the repeatability analyses are necessary. We compared FastICA-based ordinary and repeatability approaches concerning mixing vector estimates. Comparisons included original FastICA, FSL4 Melodic FastICA and original and modified ICASSO. The effects of bootstrapping and convergence threshold were evaluated. The results show that there is only moderate improvement due to repeatability measures, and only in the bootstrapping case. Bootstrapping attenuated power from time courses of resting-state network related ICs at frequencies higher than 0.1 Hz and made subsets of low frequency oscillations more emphasized IC-wise. The convergence threshold did not have a significant role concerning the accuracy of estimates. The performance results suggest that repeatability measures or strict convergence criteria might not be needed in sICA analyses of fMRI data. Consequently, the results in existing sICA fMRI literature are probably valid in this sense. A decreased accuracy of original bootstrapping ICASSO was observed and corrected by using centrotype mixing estimates, but the results warrant thorough evaluations of data-driven methods in general. Also, given the fMRI-specific considerations, further development of sICA methods is strongly encouraged. Copyright © 2010 Elsevier Inc. All rights reserved.
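
    A crude sketch of the idea behind repeatability analysis of ICA: rerun FastICA with different random seeds and quantify how well components from the two runs match. This is not the ICASSO/RAICAR/ARABICA procedure, and the data matrix below is a random placeholder rather than fMRI data.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(16)
        X = rng.standard_normal((200, 1000))     # (time points x voxels) placeholder

        def sources(seed, n_components=10):
            ica = FastICA(n_components=n_components, random_state=seed, max_iter=500)
            return ica.fit_transform(X)          # estimated component time courses

        s_a, s_b = sources(0), sources(1)
        corr = np.abs(np.corrcoef(s_a.T, s_b.T)[:10, 10:])   # cross-run correlations
        stability = corr.max(axis=1).mean()                  # best-match similarity
        print(f"mean best-match |r| across runs: {stability:.2f}")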

  2. Constructing Measure by Repeated Infinite Subdivision

    Institute of Scientific and Technical Information of China (English)

    严质彬

    2005-01-01

    This paper generalizes the method of constructing measure by repeated finite subdivision in fractal geometry to that by infinite subdivision. Two conditions required by the existing method are removed. A measure on the interval [0, 1] is constructed using this generalized method.

  3. Cardiometabolic treatment decisions in patients with type 2 diabetes : the role of repeated measurements and medication burden

    NARCIS (Netherlands)

    Voorham, J.; Haaijer-Ruskamp, F. M.; Wolffenbuttel, B. H. R.; Stolk, R. P.; Denig, P.

    2010-01-01

    Purpose Clinical guidelines for cardiometabolic risk management indicate a simple threshold-based strategy for treatment, but physicians and their patients may be reluctant to modify drug treatment after a single elevated measurement. We determined how repeated measurements of blood pressure, choles

  5. Tracking photon jumps with repeated quantum non-demolition parity measurements.

    Science.gov (United States)

    Sun, L; Petrenko, A; Leghtas, Z; Vlastakis, B; Kirchmair, G; Sliwa, K M; Narla, A; Hatridge, M; Shankar, S; Blumoff, J; Frunzio, L; Mirrahimi, M; Devoret, M H; Schoelkopf, R J

    2014-07-24

    Quantum error correction is required for a practical quantum computer because of the fragile nature of quantum information. In quantum error correction, information is redundantly stored in a large quantum state space and one or more observables must be monitored to reveal the occurrence of an error, without disturbing the information encoded in an unknown quantum state. Such observables, typically multi-quantum-bit parities, must correspond to a special symmetry property inherent in the encoding scheme. Measurements of these observables, or error syndromes, must also be performed in a quantum non-demolition way (projecting without further perturbing the state) and more quickly than errors occur. Previously, quantum non-demolition measurements of quantum jumps between states of well-defined energy have been performed in systems such as trapped ions, electrons, cavity quantum electrodynamics, nitrogen-vacancy centres and superconducting quantum bits. So far, however, no fast and repeated monitoring of an error syndrome has been achieved. Here we track the quantum jumps of a possible error syndrome, namely the photon number parity of a microwave cavity, by mapping this property onto an ancilla quantum bit, whose only role is to facilitate quantum state manipulation and measurement. This quantity is just the error syndrome required in a recently proposed scheme for a hardware-efficient protected quantum memory using Schrödinger cat states (quantum superpositions of different coherent states of light) in a harmonic oscillator. We demonstrate the projective nature of this measurement onto a region of state space with well-defined parity by observing the collapse of a coherent state onto even or odd cat states. The measurement is fast compared with the cavity lifetime, has a high single-shot fidelity and has a 99.8 per cent probability per single measurement of leaving the parity unchanged. In combination with the deterministic encoding of quantum information in cat

  6. Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.

    Science.gov (United States)

    Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M

    2016-02-01

    Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons.
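
    As a simplified stand-in for the two-part random effects models above (ignoring the random effects and heteroscedasticity), the sketch below fits the two parts separately in Python: a logistic model for any drinking versus none, and a gamma GLM with log link for the amount on drinking days; the variable names and simulated data are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 400
        df = pd.DataFrame({"treat": rng.integers(0, 2, n)})
        # semi-continuous outcome: many exact zeros, skewed positive values
        p_drink = 1 / (1 + np.exp(-(-0.2 + 0.4 * df["treat"])))
        drank = rng.uniform(size=n) < p_drink
        df["drinks"] = np.where(drank, rng.gamma(2.0, np.exp(0.5 - 0.3 * df["treat"])), 0.0)
        df["drank"] = (df["drinks"] > 0).astype(int)

        # part 1: logistic model for any drinking vs. none
        part1 = smf.logit("drank ~ treat", data=df).fit(disp=False)

        # part 2: gamma GLM with log link for the positive amounts only
        pos = df[df["drinks"] > 0]
        part2 = smf.glm("drinks ~ treat", data=pos,
                        family=sm.families.Gamma(link=sm.families.links.Log())).fit()

        print(part1.params, part2.params, sep="\n")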

  7. Preliminary evaluation of a micro-based repeated measures testing system

    Science.gov (United States)

    Kennedy, Robert S.; Wilkes, Robert L.; Lane, Norman E.

    1985-01-01

    A need exists for an automated performance test system to study the effects of various treatments which are of interest to the aerospace medical community, i.e., the effects of drugs and environmental stress. The ethics and pragmatics of such assessment demand that repeated measures in small groups of subjects be the customary research paradigm. Test stability, reliability-efficiency and factor structure take on extreme significance; in a program of study by the U.S. Navy, 80 percent of 150 tests failed to meet minimum metric requirements. The best tests are being programmed on a portable microprocessor and administered along with tests in their original formats in order to examine their metric properties in the computerized mode. Twenty subjects have been tested over four replications on a 6.0 minute computerized battery (six tests), which was compared with five paper-and-pencil marker tests. All tests achieved stability within the four test sessions, reliability-efficiencies were high (r greater than .707 for three minutes of testing), and the computerized tests were largely comparable to the paper-and-pencil versions from which they were derived. This computerized performance test system is portable, inexpensive and rugged.

  8. A repeated measures experiment of green exercise to improve self-esteem in UK school children.

    Directory of Open Access Journals (Sweden)

    Katharine Reed

    Exercising in natural, green environments creates greater improvements in adults' self-esteem than exercise undertaken in urban or indoor settings. No comparable data are available for children. The aim of this study was to determine whether so-called 'green exercise' affected changes in self-esteem, enjoyment and perceived exertion in children differently to urban exercise. We assessed cardiorespiratory fitness (20 m shuttle-run) and self-reported physical activity (PAQ-A) in 11 and 12 year olds (n = 75). Each pupil completed two 1.5 mile timed runs, one in an urban and another in a rural environment. Trials were completed one week apart during scheduled physical education lessons allocated using a repeated measures design. Self-esteem was measured before and after each trial; ratings of perceived exertion (RPE) and enjoyment were assessed after completing each trial. We found a significant main effect (F(1,74) = 12.2, p<0.001) for the increase in self-esteem following exercise, but there was no condition by exercise interaction (F(1,74) = 0.13, p = 0.72). There were no significant differences in perceived exertion or enjoyment between conditions. There was a negative correlation (r = -0.26, p = 0.04) between habitual physical activity and RPE during the control condition, which was not evident in the green exercise condition (r = -0.07, p = 0.55). Contrary to previous studies in adults, green exercise did not produce significantly greater increases in self-esteem than the urban exercise condition. Green exercise was enjoyed equally by children with differing levels of habitual physical activity and has the potential to engage less active children in exercise.

  9. Analyzing repeated data collected by mobile phones and frequent text messages. An example of Low back pain measured weekly for 18 weeks

    Directory of Open Access Journals (Sweden)

    Axén Iben

    2012-07-01

    Background: Repeated data collection is desirable when monitoring fluctuating conditions. Mobile phones can be used to gather such data from large groups of respondents by sending and receiving frequently repeated short questions and answers as text messages. The analysis of repeated data involves some challenges. Vital issues to consider are the within-subject correlation, the between measurement occasion correlation and the presence of missing values. The overall aim of this commentary is to describe different methods of analyzing repeated data. It is meant to give an overview for the clinical researcher in order for complex outcome measures to be interpreted in a clinically meaningful way. Methods: A model data set was formed using data from two clinical studies, where patients with low back pain were followed with weekly text messages for 18 weeks. Different research questions and analytic approaches were illustrated and discussed, as well as the handling of missing data. In the applications the weekly outcome "number of days with pain" was analyzed in relation to the patients' "previous duration of pain" (categorized as more or less than 30 days in the previous year). Research questions with appropriate analytical methods: 1: How many days with pain do patients experience? This question was answered with data summaries. 2: What is the proportion of participants "recovered" at a specific time point? This question was answered using logistic regression analysis. 3: What is the time to recovery? This question was answered using survival analysis, illustrated in Kaplan-Meier curves, proportional hazards regression analyses and spline regression analyses. 4: How is the repeatedly measured data associated with baseline (predictor) variables? This question was answered using Generalized Estimating Equations, Poisson regression and mixed linear models analyses. 5: Are there subgroups of patients with similar courses of pain?
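
    One of the approaches listed for question 4, relating the weekly outcome to baseline predictors, can be sketched as a Poisson GEE with an exchangeable working correlation, as below; the column names (days_pain, week, duration_grp, id) and the simulated data are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n_subj, n_weeks = 100, 18
        df = pd.DataFrame({
            "id": np.repeat(np.arange(n_subj), n_weeks),
            "week": np.tile(np.arange(1, n_weeks + 1), n_subj),
            "duration_grp": np.repeat(rng.integers(0, 2, n_subj), n_weeks),
        })
        rate = np.exp(1.2 - 0.03 * df["week"] + 0.3 * df["duration_grp"])
        df["days_pain"] = np.minimum(rng.poisson(rate), 7)   # 0..7 days per week

        model = smf.gee("days_pain ~ week + duration_grp", groups="id", data=df,
                        family=sm.families.Poisson(),
                        cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().summary())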

  10. Repeatability and reproducibility of anterior chamber volume measurements using 3-dimensional corneal and anterior segment optical coherence tomography.

    Science.gov (United States)

    Fukuda, Shinichi; Kawana, Keisuke; Yasuno, Yoshiaki; Oshika, Tetsuro

    2011-03-01

    To evaluate the repeatability and reproducibility of anterior chamber volume (ACV) measurements using swept-source 3-dimensional corneal and anterior segment optical coherence tomography (CAS-OCT) and dual Scheimpflug imaging. Department of Ophthalmology, Institute of Clinical Medicine, University of Tsukuba, Ibaraki, Japan. Nonrandomized clinical trial. Measurements were taken in normal eyes (subject group) and in eyes with primary angle closure (PAC) (patient group). In the subject group, the entire ACV and the central 8.0 mm diameter ACV were measured using CAS-OCT and dual Scheimpflug imaging. In the patient group, the entire ACV and 8.0 mm ACV were measured using CAS-OCT. The coefficient of variation and intraclass correlation coefficient (ICC) were calculated to evaluate repeatability and reproducibility, and the correlation between the 2 devices was assessed. In the subject group, the mean 8.0 mm ACV was 110.14 ± 12.57 (SD) mm³ using CAS-OCT and 114.51 ± 14.69 mm³ using Scheimpflug imaging; there was a significant linear correlation between the two devices (r = 0.878). The mean entire ACV on CAS-OCT was 165.15 ± 29.29 mm³. The ICCs of the 8.0 mm and entire ACV measurements were greater than 0.94. The coefficients of repeatability and reproducibility of the 8.0 mm ACV and entire ACV measurements were less than 5%. In the patient group, the 8.0 mm and entire ACV measurements showed good reproducibility and repeatability. The CAS-OCT method allowed noninvasive measurement of the entire ACV with sufficient repeatability and reproducibility. The 8.0 mm ACV measurements with CAS-OCT and Scheimpflug imaging were comparable. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  11. A New Approach to Measuring Financial Contagion

    OpenAIRE

    Kee-Hong Bae; G. Andrew Karolyi; Rene M. Stulz

    2000-01-01

    This paper proposes a new approach to evaluate contagion in financial markets. Our measure of contagion captures the co-incidence of extreme return shocks across countries within a region and across regions that cannot be explained by linear propagation models of shocks. We characterize the extent of contagion, its economic significance, and its determinants using a multinomial logistic regression model. Applying our approach to daily returns of emerging markets during the 1990s, we find that...

  12. Detecting variable responses in time-series using repeated measures ANOVA: Application to physiologic challenges [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Paul M. Macey

    2016-07-01

    We present an approach to analyzing physiologic time trends recorded during a stimulus by comparing means at each time point using repeated measures analysis of variance (RMANOVA). The approach allows temporal patterns to be examined without an a priori model of expected timing or pattern of response. The approach was originally applied to signals recorded from functional magnetic resonance imaging (fMRI) volumes-of-interest (VOI) during a physiologic challenge, but we have used the same technique to analyze continuous recordings of other physiological signals such as heart rate, breathing rate, and pulse oximetry. For fMRI, the method serves as a complement to whole-brain voxel-based analyses, and is useful for detecting complex responses within pre-determined brain regions, or as a post-hoc analysis of regions of interest identified by whole-brain assessments. We illustrate an implementation of the technique in the statistical software packages R and SAS. VOI time trends are extracted from conventionally preprocessed fMRI images. A time trend of average signal intensity across the VOI during the scanning period is calculated for each subject. The values are scaled relative to baseline periods, and time points are binned. In SAS, the procedure PROC MIXED implements the RMANOVA in a single step. In R, we present one option for implementing RMANOVA with the mixed model function “lme”. Model diagnostics, and predicted means and differences, are best performed with additional libraries and commands in R; we present one example. The ensuing results allow determination of significant overall effects, and time-point specific within- and between-group responses relative to baseline. We illustrate the technique using fMRI data from two groups of subjects who underwent a respiratory challenge. RMANOVA allows insight into the timing of responses and response differences between groups, and so is suited to physiologic testing paradigms eliciting complex
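
    The article implements the RMANOVA in SAS (PROC MIXED) and R (lme); the sketch below is a rough Python analogue using statsmodels' AnovaRM on a long-format table of binned VOI signal values, with hypothetical column names (subject, timebin, signal) and simulated data.

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        rng = np.random.default_rng(3)
        n_subj, n_bins = 20, 12
        df = pd.DataFrame({
            "subject": np.repeat(np.arange(n_subj), n_bins),
            "timebin": np.tile(np.arange(n_bins), n_subj),
        })
        # simulated VOI signal: a transient response peaking mid-challenge
        response = np.sin(np.pi * df["timebin"] / (n_bins - 1))
        df["signal"] = 100 + 2 * response + rng.normal(0, 1, len(df))

        res = AnovaRM(df, depvar="signal", subject="subject", within=["timebin"]).fit()
        print(res)   # F test for an overall effect of time point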

  13. Precision (Repeatability and Reproducibility and Agreement of Corneal Power Measurements Obtained by Topcon KR-1W and iTrace.

    Directory of Open Access Journals (Sweden)

    Yanjun Hua

    To evaluate the repeatability and reproducibility of corneal power measurements obtained by Topcon KR-1W and iTrace, and assess the agreement with measurements obtained by Allegro Topolyzer and IOLMaster. The right eyes of 100 normal subjects were prospectively scanned 3 times using all the 4 devices. Another observer performed additional 3 consecutive scans using the Topcon KR-1W and iTrace in the same session. About one week later, the first observer repeated the measurements using the Topcon KR-1W and iTrace. The steep keratometry (Ks), flat keratometry (Kf), mean keratometry (Km), J0 and J45 were analyzed. Repeatability and reproducibility of measurements were evaluated by the within-subject standard deviation (Sw), coefficient of variation (CoV), test-retest repeatability (2.77Sw), and intraclass correlation coefficient (ICC). Agreements between devices were assessed using Bland-Altman analysis and 95% limits of agreement (LoA). Intraobserver repeatability and interobserver and intersession reproducibility of the Ks, Kf and Km showed a CoV of no more than 0.5%, a 2.77Sw of 0.70 D or less, and an ICC of no less than 0.99. However, J0 and J45 showed poor intraobserver repeatability and interobserver and intersession reproducibility (all ICCs not greater than 0.446). Statistically significant differences existed between Topcon KR-1W and IOLMaster, Topcon KR-1W and iTrace, Topcon KR-1W and Topolyzer, iTrace and Topolyzer, iTrace and IOLMaster for Ks, Kf and Km measurements (all P < 0.05). The mean differences between Topcon KR-1W, iTrace, and the other 2 devices were small. The 95% LoA were approximately 1.0 D to 1.5 D for all measurements. The Ks, Kf and Km obtained by Topcon KR-1W and iTrace showed excellent intraobserver repeatability and interobserver and intersession reproducibility in normal eyes. The agreement between Topcon KR-1W and Topolyzer, Topcon KR-1W and IOLMaster, iTrace and Topolyzer, iTrace and IOLMaster, Topcon KR-1W and iTrace were not

  14. Separating climate-induced mass transfers and instrumental effects from tectonic signal in repeated absolute gravity measurements

    Science.gov (United States)

    Van Camp, M.; Viron, O.; Avouac, J. P.

    2016-05-01

    We estimate the signature of climate-induced mass transfers in repeated absolute gravity measurements based on satellite gravimetric measurements from the Gravity Recovery and Climate Experiment (GRACE) mission. We show results at the global scale and compare them with repeated absolute gravity (AG) time behavior in three zones where AG surveys have been published: Northwestern Europe, Canada, and Tibet. For 10 yearly campaigns, the uncertainties affecting the determination of a linear gravity rate of change range from 3 to 4 nm/s²/a in most cases, in the absence of instrumental artifacts. The results are consistent with what is observed for long-term repeated campaigns. We also discuss the possible artifact that can result from using a short AG survey to determine tectonic effects in a zone of high hydrological variability. We call into question the tectonic interpretation of several gravity changes reported from stations in Tibet, in particular the variation observed prior to the 2015 Gorkha earthquake.

  15. Evaluating measurement accuracy a practical approach

    CERN Document Server

    Rabinovich, Semyon G

    2013-01-01

    The goal of Evaluating Measurement Accuracy: A Practical Approach is to present methods for estimating the accuracy of measurements performed in industry, trade, and scientific research. From developing the theory of indirect measurements to proposing new methods of reduction, transformation, and enumeration, this work encompasses the full range of measurement data processing. It includes many examples that illustrate the application of general theory to typical problems encountered in measurement practice. As a result, the book serves as an inclusive reference work for data processing of all types of measurements: single and multiple, combined and simultaneous, direct (both linear and nonlinear), and indirect (both dependent and independent). It is a working tool for experimental scientists and engineers of all disciplines who work with instrumentation. It is also a good resource for natural science and engineering students and for technicians performing measurements in industry. A key feature of the book is...

  16. Assessment of individual agreements with repeated measurements based on generalized confidence intervals.

    Science.gov (United States)

    Quiroz, Jorge; Burdick, Richard K

    2009-01-01

    Individual agreement between two measurement systems is determined using the total deviation index (TDI) or the coverage probability (CP) criteria as proposed by Lin (2000) and Lin et al. (2002). We used a variance component model as proposed by Choudhary (2007). Using the bootstrap approach of Choudhary (2007) and generalized confidence intervals, we construct bounds on TDI and CP. A simulation study was conducted to assess whether the bounds maintain the stated type I error probability of the test. We also present a computational example to demonstrate the statistical methods described in the paper.
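
    The sketch below illustrates the TDI idea with a plain nonparametric bootstrap upper bound on simulated paired measurements; it is only a conceptual illustration, not the variance-component or generalized-confidence-interval procedure developed in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 80
        truth = rng.normal(10, 2, n)
        y1 = truth + rng.normal(0, 0.5, n)       # measurement system 1
        y2 = truth + rng.normal(0.1, 0.6, n)     # measurement system 2

        def tdi(d, p=0.90):
            # TDI at coverage probability p: p-th quantile of |difference|
            return np.quantile(np.abs(d), p)

        d = y1 - y2
        boot = np.array([tdi(rng.choice(d, size=n, replace=True)) for _ in range(2000)])
        print("TDI(0.90) estimate:", tdi(d))
        print("95% bootstrap upper bound:", np.quantile(boot, 0.95))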

  17. Accounting for uncertainty in volumes of seabed change measured with repeat multibeam sonar surveys

    Science.gov (United States)

    Schimel, Alexandre C. G.; Ierodiaconou, Daniel; Hulands, Lachlan; Kennedy, David M.

    2015-12-01

    Seafloors of unconsolidated sediment are highly dynamic features, eroding or accumulating under the action of tides, waves and currents. Assessing which areas of the seafloor experienced change and measuring the corresponding volumes involved provide insights into these important active sedimentation processes. Computing the difference between Digital Elevation Models (DEMs) obtained from repeat Multibeam Echosounder (MBES) surveys has become a common technique to identify these areas, but the uncertainty in these datasets considerably affects the estimation of the volumes displaced. The two main techniques used to take into account uncertainty in volume estimations are the limitation of calculations to areas experiencing a change in depth beyond a chosen threshold, and the computation of volumetric confidence intervals. However, these techniques are still in their infancy and, as a result, are often crude, seldom used or poorly understood. In this article, we explored a number of possible methodological advances to address this issue, including: (1) using the uncertainty information provided by the MBES data processing algorithm CUBE, (2) adapting fluvial geomorphology techniques for volume calculations using spatially variable thresholds and (3) volumetric histograms. The nearshore seabed off Warrnambool harbour - located on the highly energetic southwest Victorian coast, Australia - was used as a test site. Four consecutive MBES surveys were carried out over a four-month period. The difference between consecutive DEMs revealed an area near the beach experiencing large sediment transfers - mostly erosion - and an area of reef experiencing increasing deposition from the advance of a nearby sediment sheet. The volumes of sediment displaced in these two areas were calculated using the techniques described above, both traditionally and using the suggested improvements. We compared the results and discussed the applicability of the new methodological improvements
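
    The basic DEM-of-difference volume calculation with a minimum change threshold, which the techniques above refine, can be sketched as follows; the grid, cell size and threshold are illustrative values, not those of the Warrnambool surveys.

        import numpy as np

        cell_area = 1.0 * 1.0          # m^2 per grid cell (assumed 1 m grid)
        threshold = 0.15               # m, minimum |depth change| treated as real

        rng = np.random.default_rng(5)
        dem_t1 = rng.normal(-12.0, 0.5, (200, 200))        # placeholder survey 1
        dem_t2 = dem_t1 + rng.normal(0.0, 0.1, (200, 200)) # placeholder survey 2

        dod = dem_t2 - dem_t1                              # DEM of difference
        significant = np.abs(dod) > threshold

        deposition = dod[significant & (dod > 0)].sum() * cell_area
        erosion = -dod[significant & (dod < 0)].sum() * cell_area
        print(f"deposition: {deposition:.1f} m^3, erosion: {erosion:.1f} m^3")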

  18. Repeated Geophysical Surface Measurements to Estimate the Dynamics of Underground Coalfires

    Science.gov (United States)

    Wuttke, M. W.; Kessels, W.; Han, J.; Halisch, M.; Rüter, H.; Lindner, H.

    2009-04-01

    in a range between -130 and 176 nT. The maxima are most likely caused by the conversion of pyrite and marcasite into maghemite, hematite and magnetite. Therefore the identified patches with high magnetic anomalies should have a direct connection to the burning coal in fire zone 18. The fire zone in Wuda has now been surveyed five times, and that in Queergou twice. All the discussed geophysical measurements together allow an integrated interpretation. Each result can be related to the combustion process with a particular likelihood for the vertical projection to the combustion centre. Probability calculations with chosen weight factors for each observation method are discussed. A so-called fire index deduced from the repeated measurements reveals the dynamics of the coal fire.

  19. Heart failure re-admission: measuring the ever shortening gap between repeat heart failure hospitalizations.

    Directory of Open Access Journals (Sweden)

    Jeffrey A Bakal

    Many quality-of-care and risk prediction metrics rely on time to first rehospitalization even though heart failure (HF) patients may undergo several repeat hospitalizations. The aim of this study is to compare repeat hospitalization models. Using a population-based cohort of 40,667 patients, we examined both HF and all-cause re-hospitalizations using up to five years of follow-up. Two models were examined: the gap-time model, which estimates the adjusted time between hospitalizations, and a multistate model, which considered patients to be in one of four states: community-dwelling, in hospital for HF, in hospital for any reason, or dead. The transition probabilities and times were then modeled using patient characteristics and number of repeat hospitalizations. We found that during the five years of follow-up roughly half of the patients returned for a subsequent hospitalization after each repeat hospitalization. Additionally, we noted that the unadjusted time between hospitalizations was reduced ∼40% between each successive hospitalization. After adjustment, each additional hospitalization was associated with a 28 day (95% CI: 22-35) reduction in time spent out of hospital. A similar pattern was seen when considering the four-state model. A large proportion of patients had multiple repeat hospitalizations. Extending the gap between hospitalizations should be an important goal of treatment evaluation.

  20. A Cranial-Sided Approach for Repeated Mitral Periprosthetic Leak After Right Pneumonectomy.

    Science.gov (United States)

    Takahashi, Yosuke; Shibata, Toshihiko; Sasaki, Yasuyuki; Kato, Yasuyuki; Motoki, Manabu; Morisaki, Akimasa; Nishimura, Shinsuke; Hattori, Koji

    2016-03-01

    A 72-year-old man presented with worsening dyspnea on effort. He underwent right pneumonectomy 40 years ago, then mitral valve replacement through a right thoracotomy 8 years ago with repeat surgery to repair a periprosthetic valve leak; the mediastinum was displaced to the right, and the heart was rotated counterclockwise. Transthoracic echocardiography showed periprosthetic valve leak recurrence near the left atrial appendage. We repaired the periprosthetic valve leak through a median sternotomy. Transecting the main pulmonary artery allowed us to widely open the cranial-sided left atrium. We obtained good exposure of the mitral valve, and repaired the periprosthetic valve leak using pledgeted sutures and a pericardial patch.

  1. Splashing Our Way to Playfulness! An Aquatic Playgroup for Young Children with Autism, A Repeated Measures Design

    Science.gov (United States)

    Fabrizi, Sarah E.

    2015-01-01

    This study investigated the effectiveness of an aquatic playgroup on the playfulness of children, ages 2 to 3 with autism spectrum disorder. Using a repeated measures design, we followed 10 children and their caregivers who participated in a 6-week aquatic playgroup in southwest Florida. Four dyads completed the entire 12-week study period. The…

  3. A Correction for the Epsilon Approximate Test in Repeated Measures Designs with Two or More Independent Groups.

    Science.gov (United States)

    Lecoutre, Bruno

    1991-01-01

    The routine epsilon approximate test in repeated measures designs when the condition of circularity is unfulfilled uses an erroneous formula in the case of two or more groups. Because this may lead to underestimation of the deviation from circularity when the subject number is small, a correction is proposed. (Author/SLD)
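
    For reference, the sketch below computes the Greenhouse-Geisser (Box) epsilon from the sample covariance matrix of k repeated measures via orthonormal contrasts; it shows the quantity that epsilon approximate tests adjust by, not the multi-group correction proposed in the article.

        import numpy as np
        from scipy.linalg import null_space

        def gg_epsilon(Y):
            """Y: (n_subjects, k) array of repeated measures."""
            n, k = Y.shape
            S = np.cov(Y, rowvar=False)
            # orthonormal contrasts: (k-1) x k matrix orthogonal to the unit vector
            C = null_space(np.ones((1, k))).T
            M = C @ S @ C.T
            return np.trace(M) ** 2 / ((k - 1) * np.trace(M @ M))

        rng = np.random.default_rng(6)
        # compound-symmetric covariance, so the population epsilon equals 1
        Y = rng.multivariate_normal(np.zeros(4), np.eye(4) + 0.3, size=30)
        print(gg_epsilon(Y))   # near 1 under sphericity, lower when it is violated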

  4. Repeatability of swept-source optical coherence tomography retinal and choroidal thickness measurements in neovascular age-related macular degeneration

    DEFF Research Database (Denmark)

    Hanumunthadu, Daren; Ilginis, Tomas; Restori, Marie

    2017-01-01

    BACKGROUND: The aim was to determine the intrasession repeatability of swept-source optical coherence tomography (SS-OCT)-derived retinal and choroidal thickness measurements in eyes with neovascular age-related macular degeneration (nAMD). METHODS: A prospective study consisting of patients with...

  5. Short term soil erosion dynamics in alpine grasslands - Results from a Fallout Radionuclide repeated-sampling approach

    Science.gov (United States)

    Arata, Laura; Meusburger, Katrin; Zehringer, Markus; Ketterer, Michael E.; Mabit, Lionel; Alewell, Christine

    2016-04-01

    Improper land management and climate change have resulted in accelerated soil erosion rates in Alpine grasslands. To efficiently mitigate and control soil erosion and reduce its environmental impact in Alpine grasslands, reliable and validated methods for comprehensive data generation on its magnitude and spatial extent are mandatory. The use of conventional techniques (e.g. sediment traps, erosion pins or rainfall simulations) may be hindered by the extreme topographic and climatic conditions of the Alps. However, the application of Fallout Radionuclides (FRNs) as soil tracers has already shown promising results in these specific agro-ecosystems. Once deposited on the ground, FRNs strongly bind to fine particles at the surface soil and move across the landscape primarily through physical processes. As such, they provide an effective track of soil and sediment redistribution. So far, applications of FRNs in the Alps include 137Cs (half-life: 30.2 years) and 239+240Pu (239Pu [half-life = 24110 years] and 240Pu [half-life = 6561 years]). To investigate short term (4-5 years) erosion dynamics in the Swiss Alps, the authors applied an FRN repeated-sampling approach. Two study areas in the central Swiss Alps have been investigated: the Urseren Valley (Canton Uri), where significant land use changes occurred in the last centuries, and the Piora Valley (Canton Ticino), where land use change plays a minor role. Soil samples have been collected at potentially erosive sites along the valleys over a period of 4-5 years and measured for 137Cs and 239+240Pu activity. The inventory change between the sampling years indicates high erosion and deposition dynamics at both valleys. High spatial variability of 137Cs activities at all sites has been observed, reflecting the heterogeneous distribution of 137Cs fallout after the Chernobyl power plant accident in 1986. Finally, a new modelling technique to convert the inventory changes to quantitative estimates of soil erosion has

  6. GSTM1 and APE1 genotypes affect arsenic-induced oxidative stress: a repeated measures study

    Directory of Open Access Journals (Sweden)

    Quamruzzaman Quazi

    2007-12-01

    Background: Chronic arsenic exposure is associated with an increased risk of skin, bladder and lung cancers. Generation of oxidative stress may contribute to arsenic carcinogenesis. Methods: To investigate the association between arsenic exposure and oxidative stress, urinary 8-hydroxy-2'-deoxyguanosine (8-OHdG) was evaluated in a cohort of 97 women recruited from an arsenic-endemic region of Bangladesh in 2003. Arsenic exposure was measured in urine, toenails, and drinking water. Drinking water and urine samples were collected on three consecutive days. Susceptibility to oxidative stress was evaluated by genotyping relevant polymorphisms in glutathione S-transferase mu (GSTM1), human 8-oxoguanine glycosylase (hOGG1) and apurinic/apyrimidinic endonuclease (APE1) genes using the Taqman method. Data were analyzed using random effects Tobit regression to account for repeated measures and 8-OHdG values below the detection limit. Results: A consistent negative effect for APE1 was observed across water, toenail and urinary arsenic models. The APE1 148 glu/glu + asp/glu genotype was associated with a decrease in logged 8-OHdG of 0.40 (95% CI -0.73, -0.07) compared to APE1 148 asp/asp. An association between total urinary arsenic and 8-OHdG was observed among women with the GSTM1 null genotype but not in women with GSTM1 positive. Among women with GSTM1 null, a comparison of the second, third, and fourth quartiles of total urinary arsenic to the first quartile resulted in a 0.84 increase (95% CI 0.27, 1.42), a 0.98 increase (95% CI 0.33, 1.66) and a 0.85 increase (95% CI 0.27, 1.44) in logged 8-OHdG, respectively. No effects between 8-OHdG and toenail arsenic or drinking water arsenic were observed. Conclusion: These results suggest the APE1 variant genotype decreases repair of 8-OHdG and that arsenic exposure is associated with oxidative stress in women who lack a functional GSTM1 detoxification enzyme.

  7. Measuring equilibrium models: a multivariate approach

    Directory of Open Access Journals (Sweden)

    Nadji RAHMANIA

    2011-04-01

    This paper presents a multivariate methodology for obtaining measures of unobserved macroeconomic variables. The procedure used is the multivariate Hodrick-Prescott filter, which depends on smoothing parameters. The choice of these parameters is crucial. Our approach is based on consistent estimators of these parameters, depending only on the observed data.
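
    For orientation, the univariate Hodrick-Prescott decomposition is available in statsmodels as sketched below; the multivariate procedure discussed in the paper additionally filters several series jointly and estimates the smoothing parameters from the observed data.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.filters.hp_filter import hpfilter

        rng = np.random.default_rng(7)
        t = np.arange(160)                       # e.g. 40 years of quarterly data
        gdp = 0.005 * t + 0.02 * np.sin(t / 6) + rng.normal(0, 0.01, t.size)
        series = pd.Series(gdp, name="log_gdp")

        cycle, trend = hpfilter(series, lamb=1600)   # lambda = 1600 for quarterly data
        print(trend.tail(), cycle.tail(), sep="\n")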

  8. Pilot study: Assessing repeatability of the EcoWalk platform resistive pressure sensors to measure plantar pressure during barefoot standing

    Science.gov (United States)

    Zequera, Martha; Perdomo, Oscar; Wilches, Carlos; Vizcaya, Pedro

    2013-06-01

    Plantar pressure provides useful information to assess the condition of the feet. These systems have emerged as popular tools in the clinical environment. However, they exhibit measurement errors, and no compensation information is provided by the manufacturer, leading to uncertainty in the measurements. Ten healthy subjects, 5 females and 5 males, were recruited. Lateral load distribution, antero-posterior load distribution, average pressure, contact area, and force were recorded. The aims of this study were to assess the repeatability of the EcoWalk system and identify the range of pressure values observed in the normal foot. The coefficient of repeatability was less than 4% for all parameters considered.

  9. Reliability of near-infrared spectroscopy for measuring biceps brachii oxygenation during sustained and repeated isometric contractions

    Science.gov (United States)

    Muthalib, Makii; Millet, Guillaume Y.; Quaresima, Valentina; Nosaka, Kazunori

    2010-01-01

    We examine the test-retest reliability of biceps brachii tissue oxygenation index (TOI) parameters measured by near-infrared spectroscopy during a 10-s sustained and a 30-repeated (1-s contraction, 1-s relaxation) isometric contraction task at 30% of maximal voluntary contraction (30% MVC) and maximal (100% MVC) intensities. Eight healthy men (23 to 33 yr) were tested on three sessions separated by 3 h and 24 h, and the within-subject reliability of torque and each TOI parameter was determined by Bland-Altman ±2 SD limits of agreement plots and coefficient of variation (CV). No significant (P>0.05) differences between the three sessions were found for mean values of torque and TOI parameters during the sustained and repeated tasks at both contraction intensities. All TOI parameters were within ±2 SD limits of agreement. The CVs for torque integral were similar between the sustained and repeated task at both intensities (4 to 7%); however, the CVs for TOI parameters during the sustained and repeated task were lower for 100% MVC (7 to 11%) than for 30% MVC (22 to 36%). It is concluded that the reliability of the biceps brachii NIRS parameters during both sustained and repeated isometric contraction tasks is acceptable.
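
    The test-retest statistics used above, Bland-Altman limits of agreement and the coefficient of variation, can be computed as in the sketch below for a parameter measured in two sessions; the data are simulated and the variable names are illustrative.

        import numpy as np

        rng = np.random.default_rng(8)
        session1 = rng.normal(60, 8, 8)                 # e.g. TOI (%) in 8 subjects
        session2 = session1 + rng.normal(0, 3, 8)

        diff = session2 - session1
        bias = diff.mean()
        loa = (bias - 2 * diff.std(ddof=1), bias + 2 * diff.std(ddof=1))

        # within-subject CV: per-subject SD over the two sessions / subject mean
        pair_sd = np.std(np.stack([session1, session2]), axis=0, ddof=1)
        cv = 100 * np.mean(pair_sd / np.mean([session1, session2], axis=0))

        print(f"bias {bias:.2f}, limits of agreement {loa[0]:.2f} to {loa[1]:.2f}, CV {cv:.1f}%")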

  10. Comparison of Central Corneal Thickness Measurements by Ultrasonic Pachymetry and Orbscan II Corneal Topography and Evaluation of Ultrasonic Pachymetry Repeatability

    Directory of Open Access Journals (Sweden)

    Semra Tiryaki Demir

    2014-08-01

    Objectives: Comparison of central corneal thickness (CCT) measurements by ultrasonic pachymetry and Orbscan II corneal topography, and evaluation of ultrasonic pachymetry repeatability for the same observer. Materials and Methods: The study included 132, 82, and 80 eyes of 66 patients with primary open-angle glaucoma (POAG), 41 patients with ocular hypertension (OHT), and 40 controls, respectively. All subjects underwent routine ophthalmic examination. Orbscan II (Bausch&Lomb) corneal topography and ultrasonic pachymetry (Nidek Ultrasonic Pachymetry UP-1000) were used for measurement of CCT. ANOVA (Tukey test) was used for variable distribution, the paired sample t-test was used for repeated measurements, and the analyses were done with SPSS 20.0. Results: Mean CCT was 558.9±37.2 µm by ultrasonic pachymetry and 553.4±37 µm by corneal topography. There was a significant difference between the two measurements (p<0.05). CCT was 555±39.2 µm, 564.3±28.4 µm, and 559.7±41.5 µm by ultrasonic pachymetry in POAG, OHT, and control subjects, respectively; CCT was 550.3±38.3 µm, 558.5±28 µm, and 553.2±42.5 µm by Orbscan II corneal topography in POAG, OHT, and control subjects, respectively. There was a significant linear correlation between Orbscan II corneal topography and ultrasonic pachymetry in CCT measurements (r=0.975, p<0.0001). Repeatability of ultrasonic pachymetry for the same observer (ICC value) was 0.990. Conclusion: There is a significant correlation between Orbscan II corneal topography and ultrasonic pachymetry in CCT measurements. These two methods of measurement should not be substituted for each other, since ultrasonic pachymetry measures CCT as greater than Orbscan II corneal topography. Repeatability of ultrasonic pachymetry for the same observer is very high. (Turk J Ophthalmol 2014; 44: 263-7)

  11. The OSIRIS Weight of Evidence approach: ITS for the endpoints repeated-dose toxicity (RepDose ITS).

    Science.gov (United States)

    Tluczkiewicz, Inga; Batke, Monika; Kroese, Dinant; Buist, Harrie; Aldenberg, Tom; Pauné, Eduard; Grimm, Helvi; Kühne, Ralph; Schüürmann, Gerrit; Mangelsdorf, Inge; Escher, Sylvia E

    2013-11-01

    In the FP6 European project OSIRIS, Integrated Testing Strategies (ITSs) for relevant toxicological endpoints were developed to avoid new animal testing and thus to reduce time and costs. The present paper describes the development of an ITS for repeated-dose toxicity, called RepDose ITS, which evaluates the conditions under which in vivo non-guideline studies are reliable. In a tiered approach, three aspects of these "non-guideline" studies are assessed: the documentation of the study (reliability), the quality of the study design (adequacy) and the scope of examination (validity). The reliability is addressed by the method "Knock-out criteria", which consists of four essential criteria for repeated-dose toxicity studies. A second tool, termed QUANTOS (Quality Assessment of Non-guideline Toxicity Studies), evaluates and weights the adequacy of the study by using intra-criterion and inter-criteria weighting. Finally, the Coverage approach calculates a probability that the detected Lowest-Observed-Effect-Level (LOEL) is similar to the LOEL of a guideline study, dependent on the examined targets and organs of the non-guideline study. If the validity and adequacy of the non-guideline study are insufficient for risk assessment, the ITS proposes applying a category approach or the Threshold of Toxicological Concern (TTC) concept, and, only as a last resort, new animal testing.

  12. Repeated testing improves achievement in a blended learning approach for risk competence training of medical students: results of a randomized controlled trial.

    Science.gov (United States)

    Spreckelsen, C; Juenger, J

    2017-09-26

    Adequate estimation and communication of risks is a critical competence of physicians. Due to an evident lack of these competences, effective training addressing risk competence during medical education is needed. Test-enhanced learning has been shown to produce marked effects on achievements. This study aimed to investigate the effect of repeated tests implemented on top of a blended learning program for risk competence. We introduced a blended-learning curriculum for risk estimation and risk communication based on a set of operationalized learning objectives, which was integrated into a mandatory course "Evidence-based Medicine" for third-year students. A randomized controlled trial addressed the effect of repeated testing on achievement as measured by the students' pre- and post-training score (nine multiple-choice items). Basic numeracy and statistical literacy were assessed at baseline. Analysis relied on descriptive statistics (histograms, box plots, scatter plots, and summary of descriptive measures), bootstrapped confidence intervals, analysis of covariance (ANCOVA), and effect sizes (Cohen's d, r) based on adjusted means and standard deviations. All of the 114 students enrolled in the course consented to take part in the study and were assigned to either the intervention or control group (both: n = 57) by balanced randomization. Five participants dropped out due to non-compliance (control: 4, intervention: 1). Both groups profited considerably from the program in general (Cohen's d for overall pre vs. post scores: 2.61). Repeated testing yielded an additional positive effect: while the covariate (baseline score) exhibited no relation to the post-intervention score, F(1, 106) = 2.88, p > .05, there was a significant effect of the intervention (repeated tests scenario) on learning achievement, F(1, 106) = 12.72, p < .05. The blended learning approach can thus be improved significantly by implementing a test-enhanced learning design, namely repeated testing.
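
    A minimal sketch of the kind of ANCOVA reported above, fitted with Python's statsmodels on simulated data (group sizes, effect sizes and the variable names "baseline", "post" and "group" are illustrative assumptions, not the study's data):

    ```python
    # Hypothetical sketch of an ANCOVA: post-test score modelled on group
    # (control vs. repeated testing) with the baseline score as covariate.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    n = 55  # per-group size after dropout (illustrative only)
    df = pd.DataFrame({
        "group": ["control"] * n + ["repeated_testing"] * n,
        "baseline": rng.normal(5, 1.5, 2 * n),
    })
    # Simulate a post score with a weak baseline effect and an intervention effect.
    df["post"] = (4.0 + 0.1 * df["baseline"]
                  + 0.8 * (df["group"] == "repeated_testing")
                  + rng.normal(0, 1, 2 * n))

    model = smf.ols("post ~ baseline + C(group)", data=df).fit()
    print(anova_lm(model, typ=2))  # F-tests for the covariate and the intervention
    print(model.params)            # adjusted group effect (effect sizes would use adjusted means/SDs)
    ```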

  13. Comparison of intraclass correlation coefficient estimates and standard errors between using cross-sectional and repeated measurement data: the Safety Check cluster randomized trial.

    Science.gov (United States)

    Ip, Edward H; Wasserman, Richard; Barkin, Shari

    2011-03-01

    Designing cluster randomized trials in clinical studies often requires accurate estimates of intraclass correlation, which quantifies the strength of correlation between units, such as participants, within a cluster, such as a practice. Published ICC estimates, even when available, often suffer from the problem of wide confidence intervals. Using data from a national, randomized, controlled study concerning violence prevention for children--the Safety Check--we compare the ICC values derived from two approaches: using only baseline data and using both baseline and follow-up data. Using a variance component decomposition approach, the latter method allows flexibility in handling complex data sets. For example, it allows for shifts in the outcome variable over time and for an unbalanced cluster design. Furthermore, we evaluate the large-sample formula for ICC estimates and standard errors using the bootstrap method. Our findings suggest that ICC estimates range from 0.012 to 0.11 for providers within practice and range from 0.018 to 0.11 for families within provider. The estimates derived from the baseline-only and repeated-measurements approaches agree quite well except in cases in which variation over repeated measurements is large. The reductions in the widths of ICC confidence limits from using repeated measurement over baseline only are, respectively, 62% and 42% at the practice and provider levels. The contribution of this paper therefore includes two elements: a methodology for improving the accuracy of ICC estimates, and the reporting of such quantities for pediatric and other researchers who are interested in designing clustered randomized trials similar to the current study.
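
    As a rough illustration of the variance-component route to an ICC described above, the sketch below fits a random-intercept mixed model in Python and forms the ICC from the between- and within-cluster variance components; the cluster label "practice" and all numbers are simulated assumptions, not the Safety Check data:

    ```python
    # Illustrative sketch (not the study's code): ICC from a variance-component
    # decomposition using a random-intercept mixed model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_clusters, n_per = 30, 12
    cluster_effect = rng.normal(0, 0.5, n_clusters)  # between-cluster SD = 0.5
    df = pd.DataFrame({
        "practice": np.repeat(np.arange(n_clusters), n_per),
        "score": np.repeat(cluster_effect, n_per) + rng.normal(0, 1.5, n_clusters * n_per),
    })

    res = smf.mixedlm("score ~ 1", data=df, groups=df["practice"]).fit()
    var_between = res.cov_re.iloc[0, 0]  # between-cluster variance component
    var_within = res.scale               # residual (within-cluster) variance
    icc = var_between / (var_between + var_within)
    print(f"ICC = {icc:.3f}")
    ```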

  14. Repeatability of Volume and Regional Body Composition Measurements of the Lower Limb Using Dual-energy X-ray Absorptiometry

    DEFF Research Database (Denmark)

    Gjorup, Caroline A; Zerahn, Bo; Juul, Sarah

    2016-01-01

    Lower limb lymphedema is a dynamic condition in which tissue composition and volume measurements are affected. Various definitions of lower limb lymphedema exist but volume differences between the limbs are widely used. It is therefore necessary to have a readily available noninvasive measurement...... of agreement on the Bland-Altman plots. These results confirm DXA to be a highly repeatable method for volume and tissue composition measurements of the lower limb. In a population at risk of lymphedema, DXA offers a clinically readily available noninvasive method allowing multiple measurements of volume...... and tissue composition on a routine basis, important for diagnosing, monitoring, managing, and researching lymphedema....

  15. Power analysis for multivariate and repeated measurements designs via SPSS: correction and extension of D'Amico, Neilands, and Zambarano (2001).

    Science.gov (United States)

    Osborne, Jason W

    2006-05-01

    D'Amico, Neilands, and Zambarano (2001) published SPSS syntax to perform power analyses for three complex procedures: ANCOVA, MANOVA, and repeated measures ANOVA. Unfortunately, the published SPSS syntax for performing the repeated measures analysis needed some minor revision in order to perform the analysis correctly. This article presents the corrected syntax that will successfully perform the repeated measures analysis and provides some guidance on modifying the syntax to customize the analysis.

  16. Repeatability and reproducibility of measurements of the suburethral tape location obtained in pelvic floor ultrasound performed with a transvaginal probe

    Directory of Open Access Journals (Sweden)

    Maria Magdalena Dresler

    2017-06-01

    Full Text Available Introduction: Implants used to treat patients with urogynecological conditions are well visible in US examination. The position of the suburethral tape (sling is determined in relation to the urethra or the pubic symphysis. Aim of the study: The study was aimed at assessing the accuracy of measurements determining suburethral tape location obtained in pelvic US examination performed with a transvaginal probe. Material and methods: The analysis covered the results of sonographic measurements obtained according to a standardized technique in women referred for urogynecological diagnostics. Data from a total of 68 patients were used to analyse the repeatability and reproducibility of results obtained on the same day. Results: The intraclass correlation coefficient for the repeatability and reproducibility of the sonographic measurements of suburethral tape location obtained with a transvaginal probe ranged from 0.6665 to 0.9911. The analysis of the measurements confirmed their consistency to be excellent or good. Conclusions: Excellent and good repeatability and reproducibility of the measurements of the suburethral tape location obtained in a pelvic ultrasound performed with a transvaginal probe confirm the test’s validity and usefulness for clinical and academic purposes.

  17. A methodical approach to performance measurement experiments : measure and measurement specification

    OpenAIRE

    Hoeksema, F.W.; Veen, van, A.M.; Beijnum, van, B.J.F.

    1997-01-01

    This report describes a methodical approach to performance measurement experiments. This approach gives a blueprint for the whole trajectory from the notion of performance measures and how to define them via planning, instrumentation and execution of the experiments to interpretation of the results. The first stage of the approach, Measurement Initialisation, has been worked out completely. It is shown that a well-defined system description allows a procedural approach to defining performance...

  18. Measuring segregation: an activity space approach.

    Science.gov (United States)

    Wong, David W S; Shaw, Shih-Lung

    2011-06-01

    While the literature clearly acknowledges that individuals may experience different levels of segregation across their various socio-geographical spaces, most measures of segregation are intended to be used in the residential space. Using spatially aggregated data to evaluate segregation in the residential space has been the norm, and thus individuals' segregation experiences in other socio-geographical spaces are often de-emphasized or ignored. This paper attempts to provide a more comprehensive approach to evaluating segregation beyond the residential space. The entire activity spaces of individuals are taken into account, with individuals serving as the building blocks of the analysis. The measurement principle is based upon the exposure dimension of segregation. The proposed measure reflects the exposure of individuals of a referenced group in a neighborhood to the populations of other groups that are found within the activity spaces of individuals in the referenced group. Using the travel diary data collected from the tri-county area in southeast Florida and the imputed racial-ethnic data, this paper demonstrates how the proposed segregation measurement approach goes beyond just measuring population distribution patterns in the residential space and can provide a more comprehensive evaluation of segregation by considering various socio-geographical spaces.

  19. An Approach to Measuring Software Quality Perception

    Science.gov (United States)

    Hofman, Radoslaw

    Perception measuring and perception management is an emerging approach in the area of product management. Cognitive, psychological, behavioral and neurological theories, tools and methods are being employed for a better understanding of the mechanisms of a consumer's attitude and decision processes. Software is also defined as a product; however, this kind of product is significantly different from all other products. Software products are intangible, and it is difficult to trace their characteristics, which are strongly dependent on a dynamic context of use.

  20. Repeatability of swept-source optical coherence tomography retinal and choroidal thickness measurements in neovascular age-related macular degeneration.

    Science.gov (United States)

    Hanumunthadu, Daren; Ilginis, Tomas; Restori, Marie; Sagoo, Mandeep S; Tufail, Adnan; Balaggan, Kamaljit S; Patel, Praveen J

    2017-05-01

    The aim was to determine the intrasession repeatability of swept-source optical coherence tomography (SS-OCT)-derived retinal and choroidal thickness measurements in eyes with neovascular age-related macular degeneration (nAMD). A prospective study consisting of patients with active nAMD enrolled in the Distance of Choroid Study at Moorfields Eye Hospital, London. Patients underwent three 12×9 mm macular raster scans using the deep range imaging (DRI) OCT-1 SS-OCT (Topcon) device in a single imaging session. Retinal and choroidal thicknesses were calculated for the ETDRS macular subfields. Repeatability was calculated according to methods described by Bland and Altman. 39 eyes of 39 patients with nAMD were included with a mean (±SD) age of 73.9 (±7.2) years. The mean (±SD) retinal thickness of the central macular subfield was 225.7 μm (±12.4 μm). The repeatability of this subfield, expressed as a percentage of the mean central macular subfield thickness, was 23.2%. The percentage repeatability of the other macular subfields ranged from 13.2% to 28.7%. The intrasession coefficient of repeatability of choroidal thickness of the central macular subfield was 57.2 μm, with a mean choroidal thickness (±SD) of 181 μm (±15.8 μm). This study suggests that a change of >23.2% in retinal thickness and >57.2 μm in choroidal thickness in the central macular subfield is required to distinguish true clinical change from measurement variability when using the DRI OCT-1 device to manage patients with nAMD.
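
    For readers unfamiliar with the Bland-Altman repeatability statistic quoted above, here is a minimal sketch computed on simulated replicate scans (the eye count, scan count and noise levels are assumptions, not the study's data):

    ```python
    # Minimal sketch of a Bland-Altman coefficient of repeatability:
    # CR = 1.96 * sqrt(2) * within-subject SD from replicate scans (simulated data).
    import numpy as np

    rng = np.random.default_rng(2)
    true_thickness = rng.normal(225, 12, 39)                      # one value per eye (μm)
    scans = true_thickness[:, None] + rng.normal(0, 9, (39, 3))   # three replicate scans per eye

    within_subject_var = scans.var(axis=1, ddof=1).mean()         # pooled within-eye variance
    coeff_repeatability = 1.96 * np.sqrt(2 * within_subject_var)
    percent_repeatability = 100 * coeff_repeatability / scans.mean()
    print(f"CR = {coeff_repeatability:.1f} um ({percent_repeatability:.1f}% of the mean)")
    ```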

  1. A Multifunctional Frontloading Approach for Repeated Recycling of a Pressure-Controlled AFM Micropipette.

    Directory of Open Access Journals (Sweden)

    Phillip Roder

    Full Text Available Fluid force microscopy combines the positional accuracy and force sensitivity of an atomic force microscope (AFM) with nanofluidics via a microchanneled cantilever. However, adequate loading and cleaning procedures for such AFM micropipettes are required for various application situations. Here, a new frontloading procedure is described for an AFM micropipette functioning as a force- and pressure-controlled microscale liquid dispenser. This frontloading procedure seems especially attractive when using target substances featuring high costs or low available amounts. Here, the AFM micropipette could be filled from the tip side with liquid from a previously applied droplet with a volume of only a few μL using a short low-pressure pulse. The liquid-loaded AFM micropipettes could then be applied for experiments in air or liquid environments. AFM micropipette frontloading was evaluated with the well-known organic fluorescent dye rhodamine 6G and the AlexaFluor647-labeled antibody goat anti-rat IgG as an example of a larger biological compound. After micropipette usage, specific cleaning procedures were tested. Furthermore, a storage method is described with which the AFM micropipettes could be stored for a few hours up to several days without drying out or clogging of the microchannel. In summary, the rapid, versatile and cost-efficient frontloading and cleaning procedure for the repeated usage of a single AFM micropipette is beneficial for various application situations, from specific surface modifications through to local manipulation of living cells, and provides simpler and faster handling for already established fluid force microscopy experiments.

  2. A Multifunctional Frontloading Approach for Repeated Recycling of a Pressure-Controlled AFM Micropipette.

    Science.gov (United States)

    Roder, Phillip; Hille, Carsten

    2015-01-01

    Fluid force microscopy combines the positional accuracy and force sensitivity of an atomic force microscope (AFM) with nanofluidics via a microchanneled cantilever. However, adequate loading and cleaning procedures for such AFM micropipettes are required for various application situations. Here, a new frontloading procedure is described for an AFM micropipette functioning as a force- and pressure-controlled microscale liquid dispenser. This frontloading procedure seems especially attractive when using target substances featuring high costs or low available amounts. Here, the AFM micropipette could be filled from the tip side with liquid from a previously applied droplet with a volume of only a few μL using a short low-pressure pulse. The liquid-loaded AFM micropipettes could then be applied for experiments in air or liquid environments. AFM micropipette frontloading was evaluated with the well-known organic fluorescent dye rhodamine 6G and the AlexaFluor647-labeled antibody goat anti-rat IgG as an example of a larger biological compound. After micropipette usage, specific cleaning procedures were tested. Furthermore, a storage method is described with which the AFM micropipettes could be stored for a few hours up to several days without drying out or clogging of the microchannel. In summary, the rapid, versatile and cost-efficient frontloading and cleaning procedure for the repeated usage of a single AFM micropipette is beneficial for various application situations, from specific surface modifications through to local manipulation of living cells, and provides simpler and faster handling for already established fluid force microscopy experiments.

  3. Comparison of Central Corneal Thickness Measurements by Ultrasonic Pachymetry and Orbscan II Corneal Topography and Evaluation of Ultrasonic Pachymetry Repeatability

    OpenAIRE

    Semra Tiryaki Demir; Mahmut Odabaşı; Mehmet Ersin Oba; Ayşe Burcu Dirim; Efe Can; Orhan Kara

    2014-01-01

    Objectives: Comparison of central corneal thickness (CCT) measurements by ultrasonic pachymetry and Orbscan II corneal topography and evaluation of ultrasonic pachymetry repeatability for same observer. Materials and Methods: The study included 132, 82, and 80 eyes of 66 patients with primary open-angle glaucoma (POAG), 41 patients with ocular hypertension (OHT), and 40 controls, respectively. All subjects were subjected to routine ophthalmic examination. Orbscan II (Bausch&Lomb) ...

  4. Comparison of Repeated Measurement Design and Mixed Models in Evaluation of the Entonox Effect on Labor Pain

    Directory of Open Access Journals (Sweden)

    Nasim Karimi

    2017-01-01

    Full Text Available Background & objectives: In many medical studies, the response variable is measured repeatedly over time to evaluate the treatment effect; this is known as a longitudinal study. A common analysis method for this type of data is repeated measures ANOVA, which uses only one correlation structure, and the results are not valid if that correlation structure is inappropriate. To avoid this problem, a convenient alternative is mixed models. The aim of this study was therefore to compare mixed and repeated measurement models in examining the effect of Entonox on labor pain. Methods: This experimental study was designed to compare the effect of Entonox and oxygen inhalation on pain relief between two groups. Data were analyzed using repeated measurement and mixed models with different correlation structures. Selection and comparison of proper correlation structures were performed using the Akaike information criterion, the Bayesian information criterion and the restricted log-likelihood. Data were analyzed using SPSS-22. Results: All variables, including analgesia method, labor duration of the first and second stages, and time, were significant in these tests. In the mixed model, heterogeneous first-order autoregressive, first-order autoregressive, heterogeneous Toeplitz and unstructured correlation structures were recognized as the best structures. Also, all variables were significant in these structures. The unstructured variance-covariance matrix was recognized as the worst structure, and labor duration of the first and second stages was not significant in this structure. Conclusions: This study showed that Entonox inhalation has a significant effect on pain relief in primiparous women, and this was confirmed by all of the models.
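
    The contrast drawn above between a classical repeated measures ANOVA and a mixed model can be sketched on simulated long-format data as follows; the variable names (subject, time, group, pain) are hypothetical, and the mixed model shown uses only a random intercept rather than the richer correlation structures compared in the study:

    ```python
    # Illustrative contrast of the two analyses on simulated long-format data:
    # repeated-measures ANOVA vs. a random-intercept linear mixed model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(3)
    n_subj, n_time = 40, 4
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_time),
        "time": np.tile(np.arange(n_time), n_subj),
        "group": np.repeat(rng.integers(0, 2, n_subj), n_time),  # 0 = oxygen, 1 = Entonox
    })
    subj_effect = np.repeat(rng.normal(0, 1, n_subj), n_time)
    df["pain"] = 7 - 0.5 * df["time"] - 1.0 * df["group"] + subj_effect + rng.normal(0, 1, len(df))

    # Classical repeated-measures ANOVA (within-subject factor only).
    rm_res = AnovaRM(df, depvar="pain", subject="subject", within=["time"]).fit()
    print(rm_res.anova_table)

    # Mixed model with a random intercept per subject; AIC computed from the log-likelihood.
    mixed = smf.mixedlm("pain ~ time + group", data=df, groups=df["subject"]).fit(reml=False)
    k = len(mixed.fe_params) + 2   # fixed effects + random-intercept variance + residual variance
    print(mixed.summary(), "AIC =", -2 * mixed.llf + 2 * k)
    ```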

  5. Repeated Microsphere Delivery for Serial Measurement of Regional Blood Perfusion in the Chronically Instrumented, Conscious Canine

    Science.gov (United States)

    Bartoli, Carlo R.; Okabe, Kazunori; Akiyama, Ichiro; Coull, Brent; Godleski, John J.

    2008-01-01

    INTRODUCTION For chronic, repeated hemodynamic studies in conscious dogs, we designed and tested a chronically instrumented canine, microsphere delivery model. The goals of this study were to investigate the accuracy of repeated estimations of blood perfusion using fluorescent-labeled microspheres and to develop and validate a chronic preparation that permits consecutive estimations in the same conscious animal over an extended protocol. METHODS Via thoracotomy, 9 dogs were instrumented with left atrial appendage and aortic vascular access catheters connected to subcutaneous vascular access ports (VAPs). Four animals received 7 serial injections of 1.6 million 15μm microspheres (total: 11.2 million), and five animals received 8 serial injections of 2.25 million microspheres (total: 18 million) over the course of 11 or 18 weeks. RESULTS All catheters have remained bidirectionally patent during protocol for 14.9±0.8 (Mean±SEM) weeks. Sphere accumulation did not significantly alter global myocardial (p=0.69, p=0.25), renal (p=0.92, p=0.12), hepatic (p=0.84, p=0.32), or splenic (p=0.33, p=0.70) blood perfusion in either set of animals. CONCLUSIONS Catheters remained bidirectionally patent for months, did not interfere with the hemodynamic responses of the preparation, and allowed repeat percutaneous injection of microspheres and withdrawal of reference arterial blood from within conscious canines. Eight serial injections totaling 18 million microspheres over 18 weeks did not alter regional myocardial, hepatic, renal, or splenic blood flow. This dependable, chronic, percutaneous arterial access preparation provides a means for examining acute and long-term effects of pathophysiological, pharmaceutical, and environmental influences on regional arterial blood perfusion in conscious, large animals. PMID:17632127

  6. Measuring Aseismic Slip through Characteristically Repeating Earthquakes at the Mendocino Triple Junction, Northern California

    Science.gov (United States)

    Materna, K.; Taira, T.; Burgmann, R.

    2016-12-01

    The Mendocino Triple Junction (MTJ), at the transition point between the San Andreas fault system, the Mendocino Transform Fault, and the Cascadia Subduction Zone, undergoes rapid tectonic deformation and produces more large (M>6.0) earthquakes than any region in California. Most of the active faults of the triple junction are located offshore, making it difficult to characterize both seismic slip and aseismic creep. In this work, we study aseismic creep rates near the MTJ using characteristically repeating earthquakes (CREs) as indicators of creep rate. CREs are generally interpreted as repeated failures of the same seismic patch within an otherwise creeping fault zone; as a consequence, the magnitude and recurrence time of the CREs can be used to determine a fault's creep rate through empirically calibrated scaling relations. Using seismic data from 2010-2016, we identify CREs as recorded by an array of eight 100-Hz PBO borehole seismometers deployed in the Cape Mendocino area. For each event pair with epicenters less than 30 km apart, we compute the cross-spectral coherence of 20 seconds of data starting one second before the P-wave arrival. We then select pairs with high coherence in an appropriate frequency band, which is determined uniquely for each event pair based on event magnitude, station distance, and signal-to-noise ratio. The most similar events (with median coherence above 0.95 at two or more stations) are selected as CREs and then grouped into CRE families, and each family is used to infer a local creep rate. On the Mendocino Transform Fault, we find relatively high creep rates of >5 cm/year that increase closer to the Gorda Ridge. Closer to shore and to the MTJ itself, we find many families of repeaters on and off the transform fault with highly variable creep rates, indicative of the complex deformation that takes place there.

  7. Persistent repeated measurements by magnetic resonance spectroscopy demonstrate minimal hepatic encephalopathy: a case report.

    Science.gov (United States)

    Scheau, C; Popa, G A; Ghergus, A E; Preda, E M; Capsa, R A; Lupescu, I G

    2013-09-15

    Minimal Hepatic Encephalopathy (MHE), previously referred to as infraclinical or subclinical, is a precursor in the development of clinical hepatic encephalopathy (HE). MHE is demonstrated through neuropsychological testing in the absence of clinical evidence of HE, with patients showing only mild cognitive impairment. The neuropsychological tests employed consist of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) and the portosystemic encephalopathy (PSE) test score. Unfortunately, there are numerous situations in which these tests prove unreliable: inexperienced investigators, the patient's poor education, vision problems or concurrent central nervous system disease, all of which may delay or divert the correct diagnosis.

  8. Accuracy and repeatability of direct ciliary sulcus diameter measurements by full-scale 50-megahertz ultrasound biomicroscopy

    Institute of Scientific and Technical Information of China (English)

    LI De-jiao; WANG Ning-li; CHEN Shu; LI Shu-ning; MU Da-peng; WANG Tao

    2009-01-01

    Background Phakic intraocular lens (pIOL) implantation has been a popular means for the treatment of high ametropia. Measurement of the ciliary sulcus diameter is important for determining pIOL size, but until now no ideal system has been able to measure it directly. The present study aimed to evaluate the accuracy, repeatability and reproducibility of direct sulcus diameter measurements obtained by full-scale 50-megahertz (MHz) ultrasound biomicroscopy (UBM). Methods A fresh cadaver human eye, with a scale marker inserted through the posterior chamber plane from the 3 o'clock to the 9 o'clock meridian, and 30 randomly selected eyes from 30 normal subjects were scanned by full-scale 50-MHz UBM in the horizontal meridional scan plane. The distance between the scales and the whole length of the marker inside the cadaver eye were measured by the same observer using the "built-in" measurement tools, and the indicating error of the instrument was calculated. Reproducibility of the measurement was evaluated in 30 eyes by 2 operators using the Bland and Altman plot test. Repeatability was evaluated from 10 successive eyes randomly selected from the 30 eyes by one operator. Results On a scale of 1 mm, the greatest indicating error was 40 μm; the mean largest indicating error of the 1 mm scale from the 10 images was (26±14) μm; on a scale of 11 mm, the greatest indicating error was 70 μm; the error rate was 0.64%. The mean length of the needle inside the eye across the 10 images was 11.05 mm, with a mean indicating error of 47 μm; the average error rate was 0.43%. For ciliary sulcus diameter measurements in vivo, the coefficient of variation was 0.38%; the coefficients of repeatability for intra-observer and inter-observer measurements were 1.99% and 2.55%, respectively. The limits of agreement for intra-observer and inter-observer measurement were -0.41 mm to 0.48 mm and -0.59 mm to 0.58 mm, respectively. Conclusion Full-scale 50-MHz UBM can provide a highly accurate and repeatable means for direct
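
    A hedged sketch of the agreement statistics used above (Bland-Altman limits of agreement and a coefficient of variation from paired readings), computed here on simulated observer data rather than the UBM measurements:

    ```python
    # Bland-Altman limits of agreement and a coefficient of variation for paired
    # sulcus-diameter readings; all values are simulated, not the study's data.
    import numpy as np

    rng = np.random.default_rng(4)
    true_diam = rng.normal(11.0, 0.4, 30)                 # mm, one eye per subject (simulated)
    observer_a = true_diam + rng.normal(0, 0.12, 30)
    observer_b = true_diam + rng.normal(0, 0.12, 30)

    diff = observer_a - observer_b
    sd_diff = diff.std(ddof=1)
    loa_low, loa_high = diff.mean() - 1.96 * sd_diff, diff.mean() + 1.96 * sd_diff
    # Within-subject SD from paired differences is sd(diff)/sqrt(2); CV expresses it as % of the mean.
    cv = 100 * (sd_diff / np.sqrt(2)) / np.concatenate([observer_a, observer_b]).mean()
    print(f"Limits of agreement: {loa_low:.2f} to {loa_high:.2f} mm; CV = {cv:.2f}%")
    ```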

  9. One Approach for Dynamic L-lysine Modelling of Repeated Fed-batch Fermentation

    Directory of Open Access Journals (Sweden)

    Kalin Todorov

    2007-03-01

    Full Text Available This article deals with the establishment of a dynamic unstructured model of a variable-volume fed-batch fermentation process with intensive droppings for L-lysine production. The presented approach includes the following main procedures: description of the process by generalized stoichiometric equations; preliminary data processing and calculation of the specific rates for the main kinetic variables; identification of the specific rates as second-order non-linear dynamic models; establishment and optimisation of a dynamic model of the process; and simulation studies. MATLAB is used as the research environment.
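
    The record describes an unstructured dynamic model of a variable-volume fed-batch process. A generic sketch of such a model (Monod kinetics and simple mass balances integrated with SciPy) is shown below; the kinetic constants, feed settings and state variables are illustrative assumptions and not the authors' identified model:

    ```python
    # Generic (not the authors') unstructured fed-batch sketch: biomass X, substrate S,
    # product P and broth volume V with Monod kinetics, integrated with SciPy.
    import numpy as np
    from scipy.integrate import solve_ivp

    MU_MAX, KS, YXS, YPX = 0.25, 0.5, 0.5, 0.3   # illustrative kinetic constants
    F_IN, S_FEED = 0.05, 200.0                   # feed rate (L/h) and feed concentration (g/L)

    def fed_batch(t, y):
        X, S, P, V = y
        mu = MU_MAX * S / (KS + S)               # Monod specific growth rate
        dX = mu * X - (F_IN / V) * X             # growth minus dilution by the feed
        dS = -(mu / YXS) * X + (F_IN / V) * (S_FEED - S)
        dP = YPX * mu * X - (F_IN / V) * P
        dV = F_IN
        return [dX, dS, dP, dV]

    sol = solve_ivp(fed_batch, (0, 40), [1.0, 10.0, 0.0, 2.0])
    print("final biomass, substrate, product, volume:", sol.y[:, -1].round(2))
    ```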

  10. Approaches towards airport economic performance measurement

    Directory of Open Access Journals (Sweden)

    Ivana STRYČEKOVÁ

    2011-01-01

    Full Text Available The paper aims to assess how economic benchmarking is being used by airports as a means of performance measurement and comparison among major international airports in the world. The study focuses on current benchmarking practices and methods, taking into account the different factors according to which it is efficient to benchmark airport performance. The methods considered are mainly data envelopment analysis and stochastic frontier analysis; other approaches used by airports for economic benchmarking are also discussed. The main objective of this article is to evaluate the efficiency of airports and to answer some open questions involving economic benchmarking of airports.

  11. Approaches to measuring entanglement in chemical magnetometers.

    Science.gov (United States)

    Tiersch, M; Guerreschi, G G; Clausen, J; Briegel, H J

    2014-01-01

    Chemical magnetometers are radical pair systems such as solutions of pyrene and N,N-dimethylaniline (Py-DMA) that show magnetic field effects in their spin dynamics and their fluorescence. We investigate the existence and decay of quantum entanglement in free geminate Py-DMA radical pairs and discuss how entanglement can be assessed in these systems. We provide an entanglement witness and propose possible observables for experimentally estimating entanglement in radical pair systems with isotropic hyperfine couplings. As an application, we analyze how the field dependence of the entanglement lifetime in Py-DMA could in principle be used for magnetometry and illustrate the propagation of measurement errors in this approach.

  12. Measurement repeatability of tibial tuberosity-trochlear groove offset distance in red fox (Vulpes vulpes) cadavers

    DEFF Research Database (Denmark)

    Miles, James Edward; Jensen, Bente Rona; Kirpensteijn, Jolle

    2013-01-01

    Objective-To describe CT image reconstruction criteria for measurement of the tibial tuberosity-trochlear groove (TT-TG) offset distance, evaluate intra- and inter-reconstruction repeatability, and identify key sources of error in the measurement technique, as determined in vulpine hind limbs. An...... and applied criteria. The TT-TG offset distance has potential as an objective assessment of alignment of the distal portion of the quadriceps mechanism; its use as an aid in case selection for corrective femoral osteotomy among dogs with medial patellar luxation warrants investigation....

  13. New approach to energy loss measurements

    CERN Document Server

    Trzaska, W H; Alanko, T; Mutterer, M; Raeisaenen, J; Tjurin, G; Wojdyr, M

    2002-01-01

    A new approach to energy loss measurements is proposed. In the same experiment, the electronic stopping force (power) in gold, nickel, carbon, polycarbonate and Havar for ⁴⁰Ar, ²⁸Si, ¹⁶O, ⁴He and ¹H ions in the energy range 0.12-11 MeV/u has been measured. In this paper we give the full results for gold, nickel, and carbon and for ⁴⁰Ar, ¹⁶O, ⁴He and ¹H ions. Good agreement of the measured stopping force values for light ions with literature data is interpreted as a positive test of the experimental technique. The same technique used with heavy ions yields agreement with the published data only for energies above 1 MeV/u. At lower energies we observe a progressively increasing discrepancy. This discrepancy is removed completely as soon as we neglect pulse height defect compensation. This observation makes us believe that the majority of the published results, as well as semi-empirical calculations based on them (like the popular SRIM), may be in error...

  14. A methodical approach to performance measurement experiments : measure and measurement specification

    NARCIS (Netherlands)

    Hoeksema, F.W.; Veen, van der J.T.; Beijnum, van B.J.F.

    1997-01-01

    This report describes a methodical approach to performance measurement experiments. This approach gives a blueprint for the whole trajectory from the notion of performance measures and how to define them via planning, instrumentation and execution of the experiments to interpretation of the results.

  15. A Methodical Approach to Performance Measurement Experiments: Measure and Measurement Specification

    NARCIS (Netherlands)

    Hoeksema, F.W.; van der Veen, Johan (CTIT); van Beijnum, Bernhard J.F.

    1997-01-01

    This report describes a methodical approach to performance measurement experiments. This approach gives a blueprint for the whole trajectory from the notion of performance measures and how to define them via planning, instrumentation and execution of the experiments to interpretation of the results.

  16. Intergenic and repeat transcription in human, chimpanzee and macaque brains measured by RNA-Seq.

    Directory of Open Access Journals (Sweden)

    Augix Guohua Xu

    Full Text Available Transcription is the first step connecting genetic information with an organism's phenotype. While expression of annotated genes in the human brain has been characterized extensively, our knowledge about the scope and the conservation of transcripts located outside of the known genes' boundaries is limited. Here, we use high-throughput transcriptome sequencing (RNA-Seq) to characterize the total non-ribosomal transcriptome of human, chimpanzee, and rhesus macaque brain. In all species, only 20-28% of non-ribosomal transcripts correspond to annotated exons and 20-23% to introns. By contrast, transcripts originating within intronic and intergenic repetitive sequences constitute 40-48% of the total brain transcriptome. Notably, some repeat families show elevated transcription. In non-repetitive intergenic regions, we identify and characterize 1,093 distinct regions highly expressed in the human brain. These regions are conserved at the RNA expression level across the primates studied and at the DNA sequence level across mammals. A large proportion of these transcripts (20%) represents 3'UTR extensions of known genes and may play roles in alternative microRNA-directed regulation. Finally, we show that while transcriptome divergence between species increases with evolutionary time, intergenic transcripts show more expression differences among species and exons show less. Our results show that many yet uncharacterized evolutionarily conserved transcripts exist in the human brain. Some of these transcripts may play roles in transcriptional regulation and contribute to evolution of human-specific phenotypic traits.

  17. A Description of Quasar Variability Measured Using Repeated SDSS and POSS Imaging

    CERN Document Server

    MacLeod, Chelsea L; Sesar, Branimir; de Vries, Wim; Kochanek, Christopher S; Kelly, Brandon C; Becker, Andrew C; Lupton, Robert H; Hall, Patrick B; Richards, Gordon T; Anderson, Scott F; Schneider, Donald P

    2011-01-01

    We provide a quantitative description and statistical interpretation of the optical continuum variability of quasars. The Sloan Digital Sky Survey (SDSS) has obtained repeated imaging in five UV-to-IR photometric bands for 34,727 spectroscopically confirmed quasars. About 10,000 quasars have an average of 60 observations in each band obtained over a decade along stripe 82 (S82), whereas the remaining ~25,000 have 2-3 observations due to scan overlaps. The observed time lags span the range from a day to almost 10 years, and constrain quasar variability at rest-frame time lags of up to 4 years, and at rest-frame wavelengths from 1000A to 6000A. We publicly release a user-friendly catalog of quasars from the SDSS Data Release 7 that have been observed at least twice in SDSS or once in both SDSS and the Palomar Observatory Sky Survey, and we use it to analyze the ensemble properties of quasar variability. Based on a damped random walk (DRW) model defined by a characteristic time scale and an asymptotic variabilit...

  18. Linear systems a measurement based approach

    CERN Document Server

    Bhattacharyya, S P; Mohsenizadeh, D N

    2014-01-01

    This brief presents recent results obtained on the analysis, synthesis and design of systems described by linear equations. It is well known that linear equations arise in most branches of science and engineering as well as social, biological and economic systems. The novelty of this approach is that no models of the system are assumed to be available, nor are they required. Instead, a few measurements made on the system can be processed strategically to directly extract design values that meet specifications without constructing a model of the system, implicitly or explicitly. These new concepts are illustrated by applying them to linear DC and AC circuits, mechanical, civil and hydraulic systems, signal flow block diagrams and control systems. These applications are preliminary and suggest many open problems. The results presented in this brief are the latest effort in this direction and the authors hope these will lead to attractive alternatives to model-based design of engineering and other systems.

  19. Estimation of spatial patterns of urban air pollution over a 4-week period from repeated 5-min measurements

    Science.gov (United States)

    Gillespie, Jonathan; Masey, Nicola; Heal, Mathew R.; Hamilton, Scott; Beverland, Iain J.

    2017-02-01

    Determination of intra-urban spatial variations in air pollutant concentrations for exposure assessment requires substantial time and monitoring equipment. The objective of this study was to establish if short-duration measurements of air pollutants can be used to estimate longer-term pollutant concentrations. We compared 5-min measurements of black carbon (BC) and particle number (PN) concentrations made once per week on 5 occasions, with 4 consecutive 1-week average nitrogen dioxide (NO2) concentrations at 18 locations at a range of distances from busy roads in Glasgow, UK. 5-min BC and PN measurements (averaged over the two 5-min periods at the start and end of a week) explained 40-80%, and 7-64% respectively, of spatial variation in the intervening 1-week NO2 concentrations for individual weeks. Adjustment for variations in background concentrations increased the percentage of explained variation in the bivariate relationship between the full set of NO2 and BC measurements over the 4-week period from 28% to 50% prior to averaging of repeat measurements. The averages of five 5-min BC and PN measurements made over 5 weeks explained 75% and 33% respectively of the variation in average 1-week NO2 concentrations over the same period. The relatively high explained variation observed between BC and NO2 measured on different time scales suggests that, with appropriate steps to correct or average out temporal variations, repeated short-term measurements can be used to provide useful information on longer-term spatial patterns for these traffic-related pollutants.
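
    The explained variation reported above comes from site-level regressions of weekly NO2 on averaged short-term measurements; a minimal sketch of such a regression on simulated site data follows (all concentrations and the site count are assumptions, not the Glasgow measurements):

    ```python
    # Minimal sketch of a site-level regression: weekly NO2 regressed on averaged
    # short-term BC measurements (all values simulated).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n_sites = 18
    bc_5min_avg = rng.lognormal(mean=0.5, sigma=0.4, size=n_sites)    # averaged 5-min BC per site
    no2_weekly = 10 + 8 * bc_5min_avg + rng.normal(0, 3, n_sites)     # 1-week NO2 per site

    model = sm.OLS(no2_weekly, sm.add_constant(bc_5min_avg)).fit()
    print(f"R^2 = {model.rsquared:.2f}")  # share of spatial variation in NO2 explained by BC
    ```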

  20. Variation in repeated mouth-opening measurements in head and neck cancer patients with and without trismus.

    Science.gov (United States)

    Jager-Wittenaar, H; Dijkstra, P U; Vissink, A; van Oort, R P; Roodenburg, J L N

    2009-01-01

    Trismus after head and neck cancer treatment may severely limit mandibular functioning. Interventions aimed at reducing trismus can only be evaluated when the amount of variation associated with these measurements is known. The aim of this study was to analyse the variation in mouth-opening measurements in patients treated for head and neck cancer, with and without trismus. Maximal mouth opening was measured in 120 patients in two sessions of three repeated measurements by one observer. To analyse the influence of interobserver variation on mouth-opening measurements a subgroup of 30 patients was measured by a second observer. The standard deviation of the six measurements per patient was used as the variation in measurements of maximal mouth opening. No significant difference was found in maximal mouth opening in patients with (n=33) or without (n=87) trismus. The interobserver intraclass correlation coefficient (ICC) was 0.98. Intraobserver ICC and intersession ICC reliabilities both were 0.99. The variation in the mean values of the three measurements was only slightly smaller than the variation of the single measurements. Variation in maximal mouth opening in patients with trismus does not differ from variation in maximal mouth opening in patients without trismus. Interobserver variation is limited.

  1. The use of a measure of acute irritation to predict the outcome of repeated usage of hand soap products.

    Science.gov (United States)

    Williams, C; Wilkinson, M; McShane, P; Pennington, D; Fernandez, C; Pierce, S

    2011-06-01

    Healthcare-associated infection is an important worldwide problem that could be reduced by better hand hygiene practice. However, an increasing number of healthcare workers are experiencing irritant contact dermatitis of the hands as a result of repeated hand washing. This may lead to a reduced level of compliance with regard to hand hygiene. To assess whether a measure of acute irritation by hand soaps could predict the effects of repeated usage over a 2-week period. In a double-blind, randomized comparison study, the comparative irritation potential of four different hand soaps was assessed over a 24-h treatment period. The effect of repeated hand washing with the hand soap products over a 2-week period in healthy adult volunteers on skin barrier function was then determined by assessment of transepidermal water loss (TEWL), epidermal hydration and a visual assessment using the Hand Eczema Severity Index (HECSI) at days 0, 7 and 14. A total of 121 subjects from the 123 recruited completed phase 1 of the study. All four products were seen to be significantly different from each other in terms of the irritant reaction observed and all products resulted in a significantly higher irritation compared with the no-treatment control. Seventy-nine of the initial 121 subjects were then enrolled into the repeated usage study. A statistically significant worsening of the clinical condition of the skin as measured by HECSI was seen from baseline to day 14 in those subjects repeatedly washing their hands with two of the four soap products (products C and D) with P-values of 0·02 and 0·01, respectively. Subclinical assessment of the skin barrier function by measuring epidermal hydration was significantly increased from baseline to day 7 after repeated hand washing with products A, B and D but overall no significant change was seen in all four products tested by day 14. A statistically significant increase in TEWL at day 14 was seen for product A (P = 0·02) indicating a

  2. A DESCRIPTION OF QUASAR VARIABILITY MEASURED USING REPEATED SDSS AND POSS IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    MacLeod, Chelsea L.; Ivezic, Zeljko; Becker, Andrew C.; Anderson, Scott F. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Sesar, Branimir [Division of Physics, Mathematics and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); De Vries, Wim [Department of Physics, University of California, One Shields Ave, Davis, CA 95616 (United States); Kochanek, Christopher S. [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Kelly, Brandon C. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Lupton, Robert H. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Hall, Patrick B. [Department of Physics and Astronomy, York University, Toronto, ON M3J 1P3 (Canada); Richards, Gordon T. [Department of Physics, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104 (United States); Schneider, Donald P. [Department of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States)

    2012-07-10

    We provide a quantitative description and statistical interpretation of the optical continuum variability of quasars. The Sloan Digital Sky Survey (SDSS) has obtained repeated imaging in five UV-to-IR photometric bands for 33,881 spectroscopically confirmed quasars. About 10,000 quasars have an average of 60 observations in each band obtained over a decade along Stripe 82 (S82), whereas the remaining ~25,000 have 2-3 observations due to scan overlaps. The observed time lags span the range from a day to almost 10 years, and constrain quasar variability at rest-frame time lags of up to 4 years, and at rest-frame wavelengths from 1000 Å to 6000 Å. We publicly release a user-friendly catalog of quasars from the SDSS Data Release 7 that have been observed at least twice in SDSS or once in both SDSS and the Palomar Observatory Sky Survey, and we use it to analyze the ensemble properties of quasar variability. Based on a damped random walk (DRW) model defined by a characteristic timescale and an asymptotic variability amplitude that scale with the luminosity, black hole mass, and rest wavelength for individual quasars calibrated in S82, we can fully explain the ensemble variability statistics of the non-S82 quasars such as the exponential distribution of large magnitude changes. All available data are consistent with the DRW model as a viable description of the optical continuum variability of quasars on timescales of ~5-2000 days in the rest frame. We use these models to predict the incidence of quasar contamination in transient surveys such as those from the Palomar Transient Factory and Large Synoptic Survey Telescope.
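
    The damped random walk used above is an Ornstein-Uhlenbeck process defined by a characteristic timescale and an asymptotic amplitude; below is a hedged simulation sketch of an irregularly sampled DRW light curve (the parameter values are illustrative, not fitted to SDSS data):

    ```python
    # Damped random walk (Ornstein-Uhlenbeck) light curve with characteristic
    # timescale tau and asymptotic amplitude SF_inf; values are illustrative only.
    import numpy as np

    rng = np.random.default_rng(6)
    tau, sf_inf = 200.0, 0.2                      # days, mag
    sigma2 = sf_inf**2 / 2.0                      # asymptotic variance of the process
    times = np.sort(rng.uniform(0, 3000, 60))     # irregular sampling over ~8 years

    mag = np.empty_like(times)
    mag[0] = rng.normal(0, np.sqrt(sigma2))
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        decay = np.exp(-dt / tau)                 # exact AR(1) update for irregular spacing
        mag[i] = mag[i - 1] * decay + rng.normal(0, np.sqrt(sigma2 * (1 - decay**2)))

    print(mag[:5].round(3))                       # first few simulated magnitude offsets
    ```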

  3. Chemical genetic approach identifies microtubule affinity-regulating kinase 1 as a leucine-rich repeat kinase 2 substrate.

    Science.gov (United States)

    Krumova, Petranka; Reyniers, Lauran; Meyer, Marc; Lobbestael, Evy; Stauffer, Daniela; Gerrits, Bertran; Muller, Lionel; Hoving, Sjouke; Kaupmann, Klemens; Voshol, Johannes; Fabbro, Doriano; Bauer, Andreas; Rovelli, Giorgio; Taymans, Jean-Marc; Bouwmeester, Tewis; Baekelandt, Veerle

    2015-07-01

    Mutations in leucine-rich repeat kinase 2 (LRRK2) are the most common cause of autosomal-dominant forms of Parkinson's disease. LRRK2 is a modular, multidomain protein containing 2 enzymatic domains, including a kinase domain, as well as several protein-protein interaction domains, pointing to a role in cellular signaling. Although enormous efforts have been made, the exact pathophysiologic mechanisms of LRRK2 are still not completely known. In this study, we used a chemical genetics approach to identify LRRK2 substrates from mouse brain. This approach allows the identification of substrates of 1 particular kinase in a complex cellular environment. Several of the identified peptides are involved in the regulation of microtubule (MT) dynamics, including microtubule-associating protein (MAP)/microtubule affinity-regulating kinase 1 (MARK1). MARK1 is a serine/threonine kinase known to phosphorylate MT-binding proteins such as Tau, MAP2, and MAP4 at KXGS motifs leading to MT destabilization. In vitro kinase assays and metabolic-labeling experiments in living cells confirmed MARK1 as an LRRK2 substrate. Moreover, we also showed that LRRK2 and MARK1 are interacting in eukaryotic cells. Our findings contribute to the identification of physiologic LRRK2 substrates and point to a potential mechanism explaining the reported effects of LRRK2 on neurite morphology.

  4. Repeatability of Cone Spacing Measures in Eyes With Inherited Retinal Degenerations

    National Research Council Canada - National Science Library

    Zayit-Soudry, Shiri; Sippl-Swezey, Nicolas; Porco, Travis C; Lynch, Stephanie K; Syed, Reema; Ratnam, Kavitha; Menghini, Moreno; Roorda, Austin J; Duncan, Jacque L

    2015-01-01

    ...)-derived cone spacing measures in eyes with inherited retinal degenerations (IRD) and in normal eyes. Twenty IRD patients and 10 visually normal subjects underwent AOSLO imaging at two visits separated by no more than 1 month...

  5. Repeatability of Choroidal Thickness Measurements on Enhanced Depth Imaging Optical Coherence Tomography Using Different Posterior Boundaries.

    Science.gov (United States)

    Vuong, Vivian S; Moisseiev, Elad; Cunefare, David; Farsiu, Sina; Moshiri, Ala; Yiu, Glenn

    2016-09-01

    To assess the reliability of manual choroidal thickness measurements by comparing different posterior boundary definitions of the choroidal-scleral junction on enhanced depth imaging optical coherence tomography (EDI-OCT). Reliability analysis. Two graders marked the choroidal-scleral junction with segmentation software using different posterior boundaries: (1) the outer border of the choroidal vessel lumen, (2) the outer border of the choroid stroma, and (3) the inner border of the sclera, to measure the vascular choroidal thickness (VCT), stromal choroidal thickness (SCT), and total choroidal thickness (TCT), respectively. Measurements were taken at 0.5-mm intervals from 1.5 mm nasal to 1.5 mm temporal to the fovea, and averaged continuously across the central 3 mm of the macula. Intraclass correlation coefficient (ICC) and coefficient of reliability (CR) were compared to assess intergrader and intragrader reliability. Choroidal thickness measurements varied significantly with different posterior boundaries (P < .05), and reliability was lower where choroidal-scleral junction visibility was poor. Choroidal thickness measurements are more reproducible when measured to the border of the choroid stroma (SCT) than to the vascular lumen (VCT) or sclera (TCT).

  6. The Repeatability Assessment of Three-Dimensional Capsule-Intraocular Lens Complex Measurements by Means of High-Speed Swept-Source Optical Coherence Tomography

    Science.gov (United States)

    Chang, Pingjun; Li, Jin; Savini, Giacomo; Huang, Jinhai; Huang, Shenghai; Zhao, Yinying; Liao, Na; Lin, Lei; Yu, Xiaoyu; Zhao, Yun-e

    2015-01-01

    Purpose To rebuild the three-dimensional (3-D) model of the anterior segment by high-speed swept-source optical coherence tomography (SSOCT) and evaluate the repeatability of measurement for the parameters of the capsule-intraocular lens (C-IOL) complex. Methods Twenty-two pseudophakic eyes from 22 patients were enrolled. Three continuous SSOCT measurements were performed in all eyes, and the tomograms obtained were used for 3-D reconstruction. The output data were used to evaluate the measurement repeatability. The parameters included postoperative aqueous depth (PAD), the area and diameter of the anterior capsule opening (Area and D), IOL tilt (IOL-T), and horizontal, vertical, and space decentration of the IOL, the anterior capsule opening, and the IOL-anterior capsule opening. Results PAD, IOL-T, Area, D, and all decentration measurements showed high repeatability. Repeated measures analysis showed there was no statistically significant difference among the three continuous measurements (all P > .05). Pearson correlation analysis showed high correlation between each pair of measurements (all r > 0.90, P < 0.001). ICCs were more than 0.9 for all parameters. The 95% LoAs of all parameters were narrow across the three measurements, again indicating high repeatability. Conclusion SSOCT can serve as a new method for 3-D measurement of the C-IOL complex after cataract surgery. This method presented high repeatability in measuring the parameters of the C-IOL complex. PMID:26600254

  7. Repeatability of measurements: Non-Hermitian observables and quantum Coriolis force

    Science.gov (United States)

    Gardas, Bartłomiej; Deffner, Sebastian; Saxena, Avadh

    2016-08-01

    A noncommuting measurement transfers, via the apparatus, information encoded in a system's state to the external "observer." Classical measurements determine properties of physical objects. In the quantum realm, the very same notion restricts the recording process to orthogonal states as only those are distinguishable by measurements. Therefore, even a possibility to describe physical reality by means of non-Hermitian operators should volens nolens be excluded as their eigenstates are not orthogonal. Here, we show that non-Hermitian operators with real spectra can be treated within the standard framework of quantum mechanics. Furthermore, we propose a quantum canonical transformation that maps Hermitian systems onto non-Hermitian ones. Similar to classical inertial forces this map is accompanied by an energetic cost, pinning the system on the unitary path.

  8. Mediation of the Relationship between Maternal Phthalate Exposure and Preterm Birth by Oxidative Stress with Repeated Measurements across Pregnancy.

    Science.gov (United States)

    Ferguson, Kelly K; Chen, Yin-Hsiu; VanderWeele, Tyler J; McElrath, Thomas F; Meeker, John D; Mukherjee, Bhramar

    2017-03-01

    Mediation analysis is useful for understanding mechanisms and has been used minimally in the study of the environment and disease. We examined mediation of the association between phthalate exposure during pregnancy and preterm birth by oxidative stress. This nested case-control study of preterm birth (n = 130 cases, 352 controls) included women who delivered in Boston, Massachusetts, from 2006 through 2008. Phthalate metabolites and 8-isoprostane, an oxidative stress biomarker, were measured in urine from three visits in pregnancy. We applied four counterfactual mediation methods: method 1, utilizing exposure and mediator averages; method 2, using averages but allowing for an exposure-mediator interaction; method 3, incorporating longitudinal measurements of the exposure and mediator; and method 4, using longitudinal measurements and allowing for an exposure-mediator interaction. We observed mediation of the associations between phthalate metabolites and overall preterm birth by 8-isoprostane, with the greatest estimated proportion mediated observed for spontaneous preterm births specifically. Fully utilizing repeated measures of the exposure and mediator improved precision of indirect (i.e., mediated) effect estimates, and including an exposure-mediator interaction increased the estimated proportion mediated. For example, for mono(2-ethyl-carboxy-propyl) phthalate (MECPP), a metabolite of di(2-ethylhexyl) phthalate (DEHP), the percent of the total effect mediated by 8-isoprostane increased from 47% to 60% with inclusion of an exposure-mediator interaction term, in reference to a total adjusted odds ratio of 1.67 or 1.48, respectively. This demonstrates mediation of the phthalate-preterm birth relationship by oxidative stress, and the utility of complex regression models in capturing mediated associations when repeated measures of exposure and mediator are available and an exposure-mediator interaction may exist.
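
    As a simplified illustration of the mediation logic described above, the sketch below uses the product-of-coefficients approach with linear models on simulated data; it is not the counterfactual estimator used in the study, and a continuous stand-in outcome replaces the binary preterm birth endpoint:

    ```python
    # Simplified mediation sketch: exposure -> mediator -> outcome, with the indirect
    # effect taken as the product of coefficients (all data simulated).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 400
    df = pd.DataFrame({"phthalate": rng.normal(0, 1, n)})
    df["isoprostane"] = 0.5 * df["phthalate"] + rng.normal(0, 1, n)                  # exposure -> mediator
    df["outcome"] = 0.3 * df["isoprostane"] + 0.2 * df["phthalate"] + rng.normal(0, 1, n)

    a = smf.ols("isoprostane ~ phthalate", df).fit().params["phthalate"]             # exposure -> mediator path
    fit_y = smf.ols("outcome ~ phthalate + isoprostane", df).fit()
    b, direct = fit_y.params["isoprostane"], fit_y.params["phthalate"]               # mediator path, direct effect
    indirect = a * b
    print(f"proportion mediated ≈ {indirect / (indirect + direct):.2f}")
    ```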

  9. An L-band SAR for repeat pass deformation measurements on a UAV platform

    Science.gov (United States)

    Wheeler, Kevin; Hensley, Scott; Lou, Yunling

    2004-01-01

    We are proposing to develop a miniaturized polarimetric L-band synthetic aperture radar (SAR) for repeat-pass differential interferometric measurements of deformation for rapidly deforming surfaces of geophysical interest, such as volcanoes or earthquakes, that is to be flown on an unmanned aerial vehicle (UAV) or minimally piloted vehicle (MPV).

  10. Repeated measurements of P retention in ponies fed rations with various Ca:P ratios

    NARCIS (Netherlands)

    van Doorn, D A; Schaafstra, F J W C; Wouterse, H; Everts, H; Estepa, J C; Aguilera-Tejero, E; Beynen, A C

    2014-01-01

    This study addresses the question of whether feeding rations rich in P for a period of up to 42 d induces a positive P balance in adult ponies. Biochemical bone markers and parathyroid hormone (PTH; intact as well as whole PTH) were measured to obtain clues as to the effect of P loading on bone metabolism.

  11. Repeated measures of urinary oxidative stress biomarkers during pregnancy and preterm birth.

    Science.gov (United States)

    Ferguson, Kelly K; McElrath, Thomas F; Chen, Yin-Hsiu; Loch-Caruso, Rita; Mukherjee, Bhramar; Meeker, John D

    2015-02-01

    The purpose of this study was to investigate oxidative stress as a mechanism of preterm birth in human subjects; we examined associations between urinary biomarkers of oxidative stress that were measured at multiple time points during pregnancy and preterm birth. This nested case-control study included 130 mothers who delivered preterm and 352 mothers who delivered term who were originally recruited as part of an ongoing prospective birth cohort at Brigham and Women's Hospital. Two biomarkers that included 8-hydroxydeoxyguanosine (8-OHdG) and 8-isoprostane were measured in urine samples that were collected at up to 4 time points (median 10, 18, 26, and 35 weeks) during gestation. Urinary concentrations of 8-isoprostane and 8-OHdG decreased and increased, respectively, as pregnancy progressed. Average levels of 8-isoprostane across pregnancy were associated with increased odds of spontaneous preterm birth (adjusted odds ratio, 6.25; 95% confidence interval, 2.86-13.7), and associations were strongest with levels measured later in pregnancy. Average levels of 8-OHdG were protective against overall preterm birth (adjusted odds ratio, 0.19; 95% confidence interval, 0.10-0.34), and there were no apparent differences in the protective effect in cases of spontaneous preterm birth compared with cases of placental origin. Odds ratios for overall preterm birth were more protective in association with urinary 8-OHdG concentrations that were measured early in pregnancy. Maternal oxidative stress may be an important contributor to preterm birth, regardless of subtype and timing of exposure during pregnancy. The 2 biomarkers that were measured in the present study had opposite associations with preterm birth; an improved understanding of what each represents may help to identify more precisely important mechanisms in the pathway to preterm birth.

  12. Characterization of fetal growth by repeated ultrasound measurements in the wild guinea pig (Cavia aperea).

    Science.gov (United States)

    Schumann, K; Guenther, A; Göritz, F; Jewgenow, K

    2014-08-01

    Fetal growth during pregnancy has previously been studied in the domesticated guinea pig (Cavia aperea f. porcellus) after dissecting pregnant females, but there are no studies describing the fetal growth in their wild progenitor, the wild guinea pig (C. aperea). In this study, 50 pregnancies of wild guinea pig sows were investigated using modern ultrasound techniques. The two most common fetal growth parameters (biparietal diameter [BPD] and crown-rump-length [CRL]) and uterine position were measured. Data revealed similar fetal growth patterns in the wild guinea pig and domesticated guinea pig in the investigated gestation period, although they differ in reproductive milestones such as gestation length (average duration of pregnancy 68 days), average birth weight, and litter mass. In this study, pregnancy lasted on average 60.2 days with a variance of less than a day (0.96 days). The measured fetal growth parameters are strongly correlated with each other (R = 0.91) in the wild guinea pig.

  13. Extended fluctuation theorems for repeated measurements and feedback within Hamiltonian framework

    Energy Technology Data Exchange (ETDEWEB)

    Lahiri, Sourabh, E-mail: sourabhlahiri@gmail.com [Laboratoire de Physico-Chimie Théorique, ESPCI, 10 rue Vauquelin, F-75231 Paris (France); Jayannavar, A.M. [Institute of Physics, Sachivalaya Marg, Bhubaneswar 751005 (India)

    2016-04-29

    We derive the extended fluctuation theorems in presence of multiple measurements and feedback, when the system is governed by Hamiltonian dynamics. We use only the forward phase space trajectories in the derivation. However, to obtain an expression for the efficacy parameter, we must necessarily use the notion of reverse trajectory. Our results show that the correction term appearing in the exponent of the extended fluctuation theorems is non-unique, whereas the physical meaning of the efficacy parameter is unique. - Highlights: • Extended Fluctuation Theorems under multiple measurements and feedback have been derived using Hamiltonian dynamics. • We prove the theorems without using the notion of reverse trajectory. • We show that the correction terms are not unique. • The efficacy parameter is shown to have a unique physical meaning.

  14. Composite tube and plate manufacturing repeatability as determined by precision measurements of thermal strain

    Science.gov (United States)

    Riddle, Lenn A.; Tucker, James R.; Bluth, A. Marcel

    2013-09-01

    Composite materials often carry the reputation of demonstrating high variability in critical material properties. The JWST telescope metering structure is fabricated of several thousand separate composite piece parts. The stringent dimensional stability requirements on the metering structure require that the critical thermal strain response of every composite piece be verified either at the billet or piece part level. JWST is a unique composite space structure in that it has required the manufacturing of several hundred composite billets that cover many lots of prepreg and many years of fabrication. The flight billet thermal expansion acceptance criteria limit the coefficient of thermal expansion (CTE) to a tolerance ranging between +/-0.014 ppm/K and +/-0.04 ppm/K around a prescribed nominal when measured from 293 K down to 40 K. The different tolerance values represent different material forms, including flat plates and different tube cross-section dimensions. A precision measurement facility was developed that could measure at the required accuracy and at a pace that supported the composite part fabrication rate. The test method and facility are discussed, and the results of a statistical process analysis of the flight composite billets are surveyed.

  15. Are exhaled nitric oxide measurements using the portable NIOX MINO repeatable?

    Directory of Open Access Journals (Sweden)

    Raza Abid

    2010-04-01

    Full Text Available Abstract Background Exhaled nitric oxide is a non-invasive marker of airway inflammation and a portable analyser, the NIOX MINO (Aerocrine AB, Solna, Sweden), is now available. This study aimed to assess the reproducibility of the NIOX MINO measurements across age, sex and lung function for both absolute and categorical exhaled nitric oxide values in two distinct groups of children and teenagers. Methods Paired exhaled nitric oxide readings were obtained from 494 teenagers, aged 16-18 years, enrolled in an unselected birth cohort and 65 young people, aged 6-17 years, with asthma enrolled in an interventional asthma management study. Results The birth cohort participants showed a high degree of variability between first and second exhaled nitric oxide readings (mean intra-participant difference 1.37 ppb, 95% limits of agreement -7.61 to 10.34 ppb), although there was very close agreement when values were categorised as low, normal, intermediate or high (kappa = 0.907). Conclusions The reproducibility of exhaled nitric oxide is poor for absolute values but acceptable when values are categorised as low, normal, intermediate or high in children and teenagers. One measurement is therefore sufficient when using categorical exhaled nitric oxide values to direct asthma management but a mean of at least two measurements is required for absolute values.
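
    The repeatability statistics quoted in this record (mean intra-participant difference with 95% limits of agreement, and kappa for agreement of categorised values) can be computed directly from paired readings. Below is a minimal sketch on simulated pairs; the cut-points used to categorise readings are placeholders, not the thresholds used in the NIOX MINO study.

```python
# Sketch: Bland-Altman limits of agreement and Cohen's kappa for paired
# exhaled nitric oxide readings (simulated data, illustrative cut-points).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
first = rng.lognormal(mean=3.0, sigma=0.6, size=494)            # ppb, simulated
second = first + rng.normal(loc=1.0, scale=4.5, size=first.size)

# Bland-Altman style agreement for absolute values.
diff = second - first
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"mean difference {bias:.2f} ppb, 95% limits of agreement "
      f"{bias - loa:.2f} to {bias + loa:.2f} ppb")

# Agreement after categorising readings (hypothetical cut-points in ppb).
cuts = [0, 20, 35, 50, np.inf]
cat1 = np.digitize(first, cuts)
cat2 = np.digitize(second, cuts)
print(f"kappa for categorised values: {cohen_kappa_score(cat1, cat2):.3f}")
```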

  16. Reliability of plasma lipopolysaccharide-binding protein (LBP) from repeated measures in healthy adults.

    Science.gov (United States)

    Citronberg, Jessica S; Wilkens, Lynne R; Lim, Unhee; Hullar, Meredith A J; White, Emily; Newcomb, Polly A; Le Marchand, Loïc; Lampe, Johanna W

    2016-09-01

    Plasma lipopolysaccharide-binding protein (LBP), a measure of internal exposure to bacterial lipopolysaccharide, has been associated with several chronic conditions and may be a marker of chronic inflammation; however, no studies have examined the reliability of this biomarker in a healthy population. We examined the temporal reliability of LBP measured in archived samples from participants in two studies. In Study 1, 60 healthy participants had blood drawn at two time points: baseline and follow-up (either three, six, or nine months). In Study 2, 24 individuals had blood drawn three to four times over a seven-month period. We measured LBP in archived plasma by ELISA. Test-retest reliability was estimated by calculating the intraclass correlation coefficient (ICC). Plasma LBP concentrations showed moderate reliability in Study 1 (ICC 0.60, 95% CI 0.43-0.75) and Study 2 (ICC 0.46, 95% CI 0.26-0.69). Restricting the follow-up period improved reliability. In Study 1, the reliability of LBP over a three-month period was 0.68 (95% CI: 0.41-0.87). In Study 2, the ICC of samples taken ≤7 days apart was 0.61 (95% CI 0.29-0.86). Plasma LBP concentrations demonstrated moderate test-retest reliability in healthy individuals, with reliability improving over a shorter follow-up period.
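
    Test-retest reliability of this kind is usually summarised with a one-way random-effects intraclass correlation coefficient, which can be computed from the between- and within-subject mean squares. The sketch below does this by hand on simulated paired draws; it illustrates ICC(1) only, not the study's exact estimator or confidence-interval method.

```python
# Sketch: one-way random-effects ICC(1) from repeated biomarker measurements.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, k = 60, 2                                        # subjects, repeats
true = rng.normal(10, 2, n_subjects)                         # stable subject level
data = true[:, None] + rng.normal(0, 1.5, (n_subjects, k))   # repeated draws

grand = data.mean()
subj_means = data.mean(axis=1)
# Between- and within-subject mean squares from a one-way ANOVA decomposition.
ms_between = k * ((subj_means - grand) ** 2).sum() / (n_subjects - 1)
ms_within = ((data - subj_means[:, None]) ** 2).sum() / (n_subjects * (k - 1))
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1) = {icc1:.2f}")
```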

  17. Repeated serum creatinine measurement in primary care: Not all patients have chronic renal failure.

    Science.gov (United States)

    Gentille Lorente, Delicia; Gentille Lorente, Jorge; Salvadó Usach, Teresa

    2015-01-01

    To assess the prevalence of kidney failure in patients from a primary care centre in a basic healthcare district with laboratory availability allowing serum creatinine measurements. An observational descriptive cross-sectional study. A basic healthcare district serving 23,807 people aged ≥ 18 years. Prevalence of kidney failure among 17,240 patients having at least one laboratory measurement available was 8.5% (mean age 77.6 ± 12.05 years). In 33.2% of such patients an occult kidney failure was found (98.8% were women). Prevalence of chronic kidney failure among 10,011 patients having at least 2 laboratory measurements available (≥ 3 months apart) was 5.5% with mean age being 80.1 ± 10.0 years (most severely affected patients were those aged 75 to 84); 59.7% were men and 76.3% of cases were in stage 3. An occult kidney failure was found in 5.3% of patients with women being 86.2% of them (a glomerular filtration rate<60 ml/min was estimated for plasma creatinine levels of 0.9 mg/dl or higher). Comparison of present findings to those previously reported demonstrates the need for further studies on the prevalence of overall (chronic and acute) kidney failure in Spain in order to estimate the real scope of the disease. Primary care physicians play a critical role in disease detection, therapy, control and recording (in medical records). MDRD equation is useful and practical to estimate glomerular filtration rate. Copyright © 2015 The Authors. Published by Elsevier España, S.L.U. All rights reserved.

  18. Prediction Accuracy in Multivariate Repeated-Measures Bayesian Forecasting Models with Examples Drawn from Research on Sleep and Circadian Rhythms

    Directory of Open Access Journals (Sweden)

    Clark Kogan

    2016-01-01

    Full Text Available In study designs with repeated measures for multiple subjects, population models capturing within- and between-subjects variances enable efficient individualized prediction of outcome measures (response variables) by incorporating individuals' response data through Bayesian forecasting. When measurement constraints preclude reasonable levels of prediction accuracy, additional (secondary) response variables measured alongside the primary response may help to increase prediction accuracy. We investigate this for the case of substantial between-subjects correlation between primary and secondary response variables, assuming negligible within-subjects correlation. We show how to determine the accuracy of primary response predictions as a function of secondary response observations. Given measurement costs for primary and secondary variables, we determine the number of observations that produces, with minimal cost, a fixed average prediction accuracy for a model of subject means. We illustrate this with estimation of subject-specific sleep parameters using polysomnography and wrist actigraphy. We also consider prediction accuracy in an example time-dependent, linear model and derive equations for the optimal timing of measurements to achieve, on average, the best prediction accuracy. Finally, we examine an example involving a circadian rhythm model and show numerically that secondary variables can improve individualized predictions in this time-dependent nonlinear model as well.

  19. Prediction Accuracy in Multivariate Repeated-Measures Bayesian Forecasting Models with Examples Drawn from Research on Sleep and Circadian Rhythms.

    Science.gov (United States)

    Kogan, Clark; Kalachev, Leonid; Van Dongen, Hans P A

    2016-01-01

    In study designs with repeated measures for multiple subjects, population models capturing within- and between-subjects variances enable efficient individualized prediction of outcome measures (response variables) by incorporating individuals response data through Bayesian forecasting. When measurement constraints preclude reasonable levels of prediction accuracy, additional (secondary) response variables measured alongside the primary response may help to increase prediction accuracy. We investigate this for the case of substantial between-subjects correlation between primary and secondary response variables, assuming negligible within-subjects correlation. We show how to determine the accuracy of primary response predictions as a function of secondary response observations. Given measurement costs for primary and secondary variables, we determine the number of observations that produces, with minimal cost, a fixed average prediction accuracy for a model of subject means. We illustrate this with estimation of subject-specific sleep parameters using polysomnography and wrist actigraphy. We also consider prediction accuracy in an example time-dependent, linear model and derive equations for the optimal timing of measurements to achieve, on average, the best prediction accuracy. Finally, we examine an example involving a circadian rhythm model and show numerically that secondary variables can improve individualized predictions in this time-dependent nonlinear model as well.
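
    For a model of subject means with known within- and between-subjects variances, the Bayesian forecast of an individual's level is a precision-weighted blend of the population mean and that individual's observed mean, and its posterior variance shows directly how prediction accuracy improves with each added observation. The snippet below is a minimal sketch of that standard normal-normal updating; the variance values are illustrative and not taken from the paper.

```python
# Sketch: Bayesian forecasting of a subject-specific mean under a
# normal-normal model with known variance components (illustrative values).
import numpy as np

def forecast_subject_mean(obs, pop_mean, between_var, within_var):
    """Posterior mean and variance of a subject's true level given `obs`
    repeated measurements, a population mean, and variance components."""
    n = len(obs)
    precision = n / within_var + 1.0 / between_var
    post_var = 1.0 / precision
    post_mean = post_var * (np.sum(obs) / within_var + pop_mean / between_var)
    return post_mean, post_var

rng = np.random.default_rng(4)
pop_mean, between_var, within_var = 100.0, 25.0, 16.0
subject_true = rng.normal(pop_mean, np.sqrt(between_var))
obs = rng.normal(subject_true, np.sqrt(within_var), size=5)

for n in range(1, len(obs) + 1):
    m, v = forecast_subject_mean(obs[:n], pop_mean, between_var, within_var)
    print(f"n={n}: prediction {m:.1f}, prediction SD {np.sqrt(v):.2f}")
```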

  20. Listening levels of teenage iPod users: does measurement approach matter?

    Directory of Open Access Journals (Sweden)

    Nicole C. Haines

    2012-01-01

    Full Text Available The main objective of this study was to determine the influence of background noise levels and measurement approach on user-selected listening levels (USLLs) chosen by teenaged MP3 player users. It was hypothesized that the presence of background noise would (i) increase the USLL across all measurement approaches, (ii) result in no significant USLL differences between survey reports, objective lab measures or calibrated self-report field measures, and (iii) cause no interaction effect between level of background noise and measurement approach. There were two independent variables in this study: the level of background noise and measurement approach. The first independent variable, level of background noise, had two levels: quiet and transportation noise. The second independent variable, measurement approach, had three levels: survey, objective in-ear lab measurement and calibrated self-report field measurement. The dependent variable was ear canal A-weighted sound pressure level (dBA SPL). A 2 × 3 repeated-measures ANOVA was used to determine the significance of the main and interaction effects. USLLs increased in the presence of background noise, regardless of the measurement approach used. However, the listening levels estimated by the participants using the survey and self-report field measure were significantly lower than those recorded using in-ear laboratory measurements by 9.6 and 3.3 dBA respectively. In-ear laboratory measures yielded the highest listening levels. Higher listening levels were observed in the presence of background noise for all measurement approaches. It appears that subjects' survey responses underestimate true listening levels in comparison to self-report calibrated field measures, and that both underestimate listening levels measured in the laboratory setting. More research in this area is warranted to determine whether measurement techniques can be refined and adjusted to accurately reflect real-world listening preferences.

  1. Listening levels of teenage iPod users: does measurement approach matter?

    Science.gov (United States)

    Haines, Nicole C; Hodgetts, William E; Ostevik, Amberley V; Rieger, Jana M

    2012-01-09

    The main objective of this study was to determine the influence of background noise levels and measurement approach on user-selected listening levels (USLLs) chosen by teenaged MP3 player users. It was hypothesized that the presence of background noise would (i) increase the USLL across all measurement approaches, (ii) result in no significant USLL differences between survey reports, objective lab measures or calibrated self-report field measures, and (iii) cause no interaction effect between level of background noise and measurement approach. There were two independent variables in this study: the level of background noise and measurement approach. The first independent variable, level of background noise, had two levels: quiet and transportation noise. The second independent variable, measurement approach, had three levels: survey, objective in-ear lab measurement and calibrated self-report field measurement. The dependent variable was ear canal A-weighted sound pressure level (dBA SPL). A 2 × 3 repeated-measures ANOVA was used to determine the significance of the main and interaction effects. USLLs increased in the presence of background noise, regardless of the measurement approach used. However, the listening levels estimated by the participants using the survey and self-report field measure were significantly lower than those recorded using in-ear laboratory measurements by 9.6 and 3.3 dBA respectively. In-ear laboratory measures yielded the highest listening levels. Higher listening levels were observed in the presence of background noise for all measurement approaches. It appears that subjects' survey responses underestimate true listening levels in comparison to self-report calibrated field measures, and that both underestimate listening levels measured in the laboratory setting. More research in this area is warranted to determine whether measurement techniques can be refined and adjusted to accurately reflect real-world listening preferences.
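
    A two-way repeated-measures ANOVA such as the 2 × 3 design described above (noise level × measurement approach, both within subject) can be fitted with statsmodels' AnovaRM on long-format data. The following is a minimal sketch on simulated listening levels; the subject count and effect sizes are made up for illustration.

```python
# Sketch: 2 x 3 repeated-measures ANOVA (noise level x measurement approach)
# on simulated user-selected listening levels, using statsmodels' AnovaRM.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(5)
subjects = np.arange(30)
noise_levels = ["quiet", "transport"]
approaches = ["survey", "lab", "field"]

rows = []
for s in subjects:
    base = rng.normal(75, 5)                      # subject's typical level, dBA
    for noise in noise_levels:
        for approach in approaches:
            level = (base
                     + (4 if noise == "transport" else 0)
                     + {"survey": -9.6, "lab": 0.0, "field": -3.3}[approach]
                     + rng.normal(0, 2))
            rows.append({"subject": s, "noise": noise,
                         "approach": approach, "usll": level})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="usll", subject="subject",
              within=["noise", "approach"]).fit()
print(res)
```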

  2. The clinical applicability of a daily summary of patients' self-reported postoperative pain-A repeated measure analysis.

    Science.gov (United States)

    Wikström, Lotta; Eriksson, Kerstin; Fridlund, Bengt; Nilsson, Mats; Årestedt, Kristofer; Broström, Anders

    2017-03-23

    (i) To determine whether a central tendency, the median, based on patients' self-rated pain is a clinically applicable daily measure to show patients' postoperative pain on the first day after major surgery, and (ii) to determine the number of self-ratings required for the calculation of this measure. Perioperative pain traits in medical records are difficult to overview. The clinical applicability of a daily documented summarising measure of patients' self-rated pain scores is little explored. A repeated measure design was carried out at three Swedish county hospitals. Associations between the measures were analysed with nonparametric statistical methods; systematic and individual group changes were analysed separately. Measure I: pain scores at rest and activity postoperative day 1; measure II: retrospective average pain from postoperative day 1. The sample consisted of 190 general surgery patients and 289 orthopaedic surgery patients with a mean age of 65; 56% were men. Forty-four percent had a pre-operative daily intake of analgesia, and 77% used postoperative opioids. A range of 4-9 pain scores seems to be eligible for the calculation of the daily measures of pain. Rank correlations for individual median scores, based on four ratings, vs. retrospective self-rated average pain, were moderate and strengthened with increased numbers of ratings. A systematic group change towards a higher level of reported retrospective pain was significant. The median values were clinically applicable daily measures. The risk of obtaining a higher value than was recalled by patients seemed to be low. Applicability increased with increased frequency of self-rated pain scores and with high-quality pain assessments. The documenting of daily median pain scores at rest and during activity could constitute the basis for obtaining patients' experiences by showing their pain severity trajectories. The measures could also be an important key to predicting postoperative health
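
    The daily summary examined in this record is simply the median of each patient's repeated pain ratings, compared against a single retrospective rating with a rank correlation and tested for a systematic shift. The snippet below sketches those calculations on simulated numeric-rating-scale scores; the number of patients, ratings per day, and effect sizes are made up.

```python
# Sketch: daily median of repeated self-rated pain scores, its rank
# correlation with a retrospective rating, and a test for a systematic
# shift toward higher recalled pain (simulated data).
import numpy as np
from scipy.stats import spearmanr, wilcoxon

rng = np.random.default_rng(6)
n_patients, n_ratings = 200, 6                 # ratings on postoperative day 1
true_pain = rng.uniform(1, 8, n_patients)
ratings = np.clip(np.rint(true_pain[:, None]
                          + rng.normal(0, 1.5, (n_patients, n_ratings))), 0, 10)
retrospective = np.clip(np.rint(true_pain + rng.normal(0.5, 1.5, n_patients)), 0, 10)

daily_median = np.median(ratings, axis=1)
rho, p = spearmanr(daily_median, retrospective)
print(f"Spearman rho, daily median vs. retrospective rating: {rho:.2f} (p={p:.3g})")

stat, p_shift = wilcoxon(retrospective, daily_median)
print(f"systematic difference (Wilcoxon signed-rank): p={p_shift:.3g}")
```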

  3. Understanding how adherence goals promote adherence behaviours: a repeated measure observational study with HIV seropositive patients

    Directory of Open Access Journals (Sweden)

    Jones Gareth

    2012-08-01

    Full Text Available Abstract Background The extent to which patients follow treatments as prescribed is pivotal to treatment success. An exceptionally high level (> 95%) of HIV medication adherence is required to suppress viral replication and protect the immune system, and a similarly high level (> 80%) of adherence has also been suggested in order to benefit from prescribed exercise programmes. However, in clinical practice, adherence to both often falls below the desirable level. This project aims to investigate a wide range of psychological and personality factors that may lead to adherence/non-adherence to medical treatment and exercise programmes. Methods HIV positive patients who are referred to the physiotherapist-led 10-week exercise programme as part of the standard care are continuously recruited. Data on social cognitive variables (attitude, intention, subjective norms, self-efficacy, and outcome beliefs about the goal and specific behaviours), selected personality factors, perceived quality of life, physical activity, self-reported adherence and physical assessment are collected at baseline, at the end of the exercise programme and again 3 months later. The project incorporates objective measures of both exercise (attendance log and improvement in physical measures such as improved fitness level, weight loss, improved circumferential anthropometric measures) and medication adherence (verified by non-invasive hair analysis). Discussion The novelty of this project comes from two key aspects, complemented with objective information on exercise and medication adherence. The project assesses beliefs about both the underlying goal, such as following prescribed treatment, and about the specific behaviours, such as undertaking the exercise or taking the medication, using both implicit and explicit assessments of patients' beliefs and attitudes. We predict that (i) the way people think about the underlying goal of their treatments explains medication and exercise adherence.

  4. Repeated measures of inflammation and oxidative stress biomarkers in preeclamptic and normotensive pregnancies.

    Science.gov (United States)

    Ferguson, Kelly K; Meeker, John D; McElrath, Thomas F; Mukherjee, Bhramar; Cantonwine, David E

    2017-05-01

    Preeclampsia is a prevalent and enigmatic disease, in part characterized by poor remodeling of the spiral arteries. However, preeclampsia does not always clinically present when remodeling has failed to occur. Hypotheses surrounding the "second hit" that is necessary for the clinical presentation of the disease focus on maternal inflammation and oxidative stress. Yet, the studies to date that have investigated these factors have used cross-sectional study designs or small study populations. In the present study, we sought to explore longitudinal trajectories, beginning early in gestation, of a panel of inflammation and oxidative stress markers in women who went on to have preeclamptic or normotensive pregnancies. We examined 441 subjects from the ongoing LIFECODES prospective birth cohort, which included 50 mothers who experienced preeclampsia and 391 mothers with normotensive pregnancies. Participants provided urine and plasma samples at 4 time points during gestation (median, 10, 18, 26, and 35 weeks) that were analyzed for a panel of oxidative stress and inflammation markers. Oxidative stress biomarkers included 8-isoprostane and 8-hydroxydeoxyguanosine. Inflammation biomarkers included C-reactive protein, the cytokines interleukin-1β, -6, and -10, and tumor necrosis factor-α. We created Cox proportional hazard models to calculate hazard ratios based on time of preeclampsia diagnosis in association with biomarker concentrations at each of the 4 study visits. In adjusted models, hazard ratios of preeclampsia were significantly elevated in association with inflammation biomarkers that were measured at visit 2 (median, 18 weeks; hazard ratios, 1.31-1.83, in association with an interquartile range increase in biomarker). Hazard ratios at this time point were the most elevated for C-reactive protein, for interleukin-1β, -6, and -10, and for the oxidative stress biomarker 8-isoprostane (hazard ratio, 1.68; 95% confidence interval, 1.14-2.48) compared to other time points. Hazard ratios for
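
    Hazard ratios per interquartile-range increase in a biomarker, as reported at each study visit here, come from a proportional hazards model in which the biomarker has been rescaled by its IQR. The sketch below uses the lifelines library on simulated data; the variable names, covariate, follow-up times, and effect sizes are illustrative only.

```python
# Sketch: Cox proportional hazards model giving the hazard ratio of an
# outcome per IQR increase in a visit-specific biomarker (simulated data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 441
biomarker = rng.lognormal(0.0, 0.7, n)                 # skewed biomarker level
iqr = np.percentile(biomarker, 75) - np.percentile(biomarker, 25)
biomarker_iqr = biomarker / iqr

# Simulated time-to-event data with administrative censoring.
hazard_scale = np.exp(0.4 * biomarker_iqr)
event_time = rng.exponential(30.0 / hazard_scale)
observed = event_time < 25                             # events before censoring
time = np.minimum(event_time, 25)

df = pd.DataFrame({"time": time, "event": observed.astype(int),
                   "biomarker_iqr": biomarker_iqr,
                   "maternal_age": rng.normal(31, 5, n)})   # example covariate

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
hr = np.exp(cph.params_["biomarker_iqr"])
print(f"hazard ratio per IQR increase in biomarker: {hr:.2f}")
cph.print_summary()
```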

  5. C-reactive Protein: Repeated Measurements will Improve Dialysis Patient Care.

    Science.gov (United States)

    Cobo, Gabriela; Qureshi, Abdul Rashid; Lindholm, Bengt; Stenvinkel, Peter

    2016-01-01

    Systemic inflammation is a common feature in the uremic phenotype and associates with poor outcomes. The awareness regarding the importance of inflammation assessment in chronic kidney disease (CKD) patients has risen in recent years, and despite the development of novel biomarkers, C-reactive protein (CRP) is still the most measured inflammatory parameter. Notwithstanding the possible weak points of CRP determination, this biomarker has proven useful both for guidance in clinical practice and for risk estimation. In addition, regular determination of CRP among dialysis patients has been associated with better outcomes in different dialysis facilities. Because persistent inflammation may be a silent reflection of various pathophysiologic alterations in CKD, it is crucial that inflammatory markers are regularly monitored and therapeutic attempts be made to target this inflammation.

  6. Repeated heart rate measurement and cardiovascular outcomes in left ventricular systolic dysfunction.

    Science.gov (United States)

    Hamill, Victoria; Ford, Ian; Fox, Kim; Böhm, Michael; Borer, Jeffrey S; Ferrari, Roberto; Komajda, Michel; Steg, Philippe Gabriel; Tavazzi, Luigi; Tendera, Michal; Swedberg, Karl

    2015-10-01

    Elevated resting heart rate is associated with increased cardiovascular risk, particularly in patients with left ventricular systolic dysfunction. Heart rate is not monitored routinely in these patients. We hypothesized that routine monitoring of heart rate would increase its prognostic value in patients with left ventricular systolic dysfunction. We analyzed the relationship between heart rate measurements and a range of adverse cardiovascular outcomes, including hospitalization for worsening heart failure, in the pooled placebo-treated patients from the morBidity-mortality EvAlUaTion of the If inhibitor ivabradine in patients with coronary disease and left ventricULar dysfunction (BEAUTIFUL) trial and Systolic Heart failure treatment with the If inhibitor ivabradine (SHIFT) Trial, using standard and time-varying covariate Cox proportional hazards models. By adjusting for other prognostic factors, models were fitted for baseline heart rate alone or for time-updated heart rate (latest heart rate) alone or corrected for baseline heart rate or for immediate previous time-updated heart rate. Baseline heart rate was strongly associated with all outcomes apart from hospitalization for myocardial infarction. Time-updated heart rate increased the strengths of associations for all outcomes. Adjustment for baseline heart rate or immediate previous time-updated heart rate modestly reduced the prognostic importance of time-updated heart rate. For hospitalization for worsening heart failure, each 5 beats/min increase in baseline heart rate and time-updated heart rate was associated with a 15% (95% confidence interval, 12-18) and 22% (confidence interval, 19-40) increase in risk, respectively. Even after correction, the prognostic value of time-updated heart rate remained greater. In patients with left ventricular systolic dysfunction, time-updated heart rate is more strongly related with adverse cardiovascular outcomes than baseline heart rate. Heart rate should be measured to
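
    The comparison of baseline versus time-updated heart rate described above is the classic use of a time-varying covariate in a Cox model: follow-up is split into intervals and the latest heart rate measurement applies within each interval. Below is a minimal sketch using lifelines' CoxTimeVaryingFitter on simulated long-format data; the data layout, visit schedule, and effect size are assumptions for illustration.

```python
# Sketch: Cox model with a time-updated covariate (latest heart rate),
# using lifelines' CoxTimeVaryingFitter on simulated long-format data.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(8)
rows = []
for pid in range(500):
    hr = rng.normal(75, 10)                     # baseline heart rate, bpm
    start = 0.0
    for visit in range(4):                      # yearly visits update heart rate
        stop = start + 1.0
        risk = 0.05 * np.exp(0.03 * (hr - 70))  # higher HR -> higher hazard
        event = rng.random() < risk
        rows.append({"id": pid, "start": start, "stop": stop,
                     "heart_rate": hr, "event": int(event)})
        if event:
            break
        start = stop
        hr += rng.normal(0, 5)                  # heart rate drifts between visits
df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
hr_per_5bpm = np.exp(5 * ctv.params_["heart_rate"])
print(f"hazard ratio per 5 bpm increase in time-updated heart rate: {hr_per_5bpm:.2f}")
```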

  7. Psychological impact and recovery after involvement in a patient safety incident: a repeated measures analysis

    Science.gov (United States)

    Van Gerven, Eva; Bruyneel, Luk; Panella, Massimiliano; Euwema, Martin; Sermeus, Walter; Vanhaecht, Kris

    2016-01-01

    Objective To examine individual, situational and organisational aspects that influence psychological impact and recovery of a patient safety incident on physicians, nurses and midwives. Design Cross-sectional, retrospective surveys of physicians, midwives and nurses. Setting 33 Belgian hospitals. Participants 913 clinicians (186 physicians, 682 nurses, 45 midwives) involved in a patient safety incident. Main outcome measures The Impact of Event Scale was used to retrospectively measure psychological impact of the safety incident at the time of the event and compare it with psychological impact at the time of the survey. Results Individual, situational as well as organisational aspects influenced psychological impact and recovery of a patient safety incident. Psychological impact is higher when the degree of harm for the patient is more severe, when healthcare professionals feel responsible for the incident and among female healthcare professionals. Impact of degree of harm differed across clinicians. Psychological impact is lower among more optimistic professionals. Overall, impact decreased significantly over time. This effect was more pronounced for women and for those who feel responsible for the incident. The longer ago the incident took place, the stronger impact had decreased. Also, higher psychological impact is related with the use of a more active coping and planning coping strategy, and is unrelated to support seeking coping strategies. Rendered support and a support culture reduce psychological impact, whereas a blame culture increases psychological impact. No associations were found with job experience and resilience of the health professional, the presence of a second victim support team or guideline and working in a learning culture. Conclusions Healthcare organisations should anticipate on providing their staff appropriate and timely support structures that are tailored to the healthcare professional involved in the incident and to the specific

  8. Repeat-until-success quantum repeaters

    Science.gov (United States)

    Bruschi, David Edward; Barlow, Thomas M.; Razavi, Mohsen; Beige, Almut

    2014-09-01

    We propose a repeat-until-success protocol to improve the performance of probabilistic quantum repeaters. Conventionally, these rely on passive static linear-optics elements and photodetectors to perform Bell-state measurements (BSMs) with a maximum success rate of 50%. This is a strong impediment for entanglement swapping between distant quantum memories. Every time a BSM fails, entanglement needs to be redistributed between the corresponding memories in the repeater link. The key ingredients of our scheme are repeatable BSMs. Under ideal conditions, these turn probabilistic quantum repeaters into deterministic ones. Under realistic conditions, our protocol too might fail. However, using additional threshold detectors now allows us to improve the entanglement generation rate by almost an order of magnitude, at a nominal distance of 1000 km, compared to schemes that rely on conventional BSMs. This improvement is sufficient to make the performance of our scheme comparable to the expected performance of some deterministic quantum repeaters.

  9. Repeat, Low Altitude Measurements of Vegetation Status and Biomass Using Manned Aerial and UAS Imagery in a Piñon-Juniper Woodland

    Science.gov (United States)

    Krofcheck, D. J.; Lippitt, C.; Loerch, A.; Litvak, M. E.

    2015-12-01

    Measuring the above ground biomass of vegetation is a critical component of any ecological monitoring campaign. Traditionally, the biomass of vegetation has been measured with an allometric-based approach. However, this approach is time-consuming, labor-intensive, and extremely expensive to conduct over large scales, and consequently is cost-prohibitive at the landscape scale. Furthermore, in semi-arid ecosystems characterized by vegetation with inconsistent growth morphologies (e.g., piñon-juniper woodlands), even ground-based conventional allometric approaches are often challenging to execute consistently across individuals and through time, increasing the difficulty of the required measurements and consequently the accuracy of the resulting products. To constrain the uncertainty associated with these campaigns, and to expand the extent of our measurement capability, we made repeat measurements of vegetation biomass in a semi-arid piñon-juniper woodland using structure-from-motion (SfM) techniques. We used high-spatial resolution overlapping aerial images and high-accuracy ground control points, collected from both manned aircraft and multi-rotor UAS platforms, to generate a digital surface model (DSM) for our experimental region. We extracted high-precision canopy volumes from the DSM and compared these to the vegetation allometric data to generate high-precision canopy volume models. We used these models to predict the drivers of allometric equations for Pinus edulis and Juniperus monosperma (canopy height, diameter at breast height, and root collar diameter). Using this approach, we successfully accounted for the carbon stocks in standing live and standing dead vegetation across a 9 ha region, which contained 12.6 Mg/ha of standing dead biomass, with good agreement to our field plots. Here we present the initial results from an object oriented workflow which aims to automate the biomass estimation process of tree crown delineation and volume calculation, and partition

  10. A novel AX+/BX- paradigm to assess fear learning and safety-signal processing with repeated-measure designs.

    Science.gov (United States)

    Kazama, Andy M; Schauder, Kimberly B; McKinnon, Michael; Bachevalier, Jocelyne; Davis, Michael

    2013-04-15

    One of the core symptoms of anxiety disorders, such as post-traumatic stress disorder, is the failure to overcome feelings of danger despite being in a safe environment. This deficit likely stems from an inability to fully process safety signals, which are cues in the environment that enable healthy individuals to over-ride fear in aversive situations. Studies examining safety signal learning in rodents, humans, and non-human primates currently rely on between-groups designs. Because repeated-measure designs reduce the number of subjects required, and facilitate a broader range of safety signal studies, the current project sought to develop a repeated-measures safety-signal learning paradigm in non-human primates. Twelve healthy rhesus macaques of both sexes received three rounds of auditory fear-potentiated startle training and testing using an AX+/BX- design with all visual cues. Cue AX was paired with an aversive blast of air, whereas the same X cue in compound with another B cue (BX) signaled the absence of an air blast. Hence, cue B served as a safety signal. Once animals consistently discriminated between the aversive (AX+) and safe (BX-) cues, measured by greater startle amplitude in the presence of AX vs. BX, they were tested for conditioned inhibition by eliciting startle in the presence of a novel ambiguous combined cue (AB). Similar to previous AX+/BX- studies, healthy animals rapidly learned to discriminate between the AX+ and BX- cues as well as demonstrate conditioned inhibition in the presence of the combined AB cue (i.e. lower startle amplitude in the presence of AB vs. AX). Additionally, animals performed consistently across three rounds of testing using three new cues each time. The results validate this novel method that will serve as a useful tool for better understanding the mechanisms for the regulation of fear and anxiety. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Hepatic 18F-FDG Uptake Measurements on PET/MR: Impact of Volume of Interest Location on Repeatability

    Directory of Open Access Journals (Sweden)

    Liran Domachevsky

    2017-01-01

    Full Text Available Background. To investigate same day 18F-FDG (Fluorodeoxyglucose) PET (Positron Emission Tomography)/MR (Magnetic Resonance) test-retest repeatability of Standardized Uptake Value measurements normalized for body weight (SUV) and lean body mass (SUL) in different locations in the liver. Methods. This prospective study was IRB approved with written informed consent obtained. 35 patients (20 women and 15 men, 61±11.2 years) who underwent a whole-body 18F-FDG PET/MR followed by liver-dedicated contrast-enhanced 18F-FDG PET/MR were included. SUV/L max, mean, and peak were measured inferior to, superior to, and at the right portal vein and in the left lobe of the liver. The coefficient of variation (CV) and intraclass correlation coefficient (ICC) were calculated and Bland-Altman plots were obtained. Results. The variability for SUV/L measurements was lowest inferior to the portal vein (<9.2%) followed by measurements performed at the level of the portal vein (<14.6%). Conclusion. The area inferior to the portal vein is the most reliable location for hepatic 18F-FDG uptake measurements on PET/MR.

  12. Magnetic Resonance Spectroscopy in Patients with Insomnia: A Repeated Measurement Study.

    Science.gov (United States)

    Spiegelhalder, Kai; Regen, Wolfram; Nissen, Christoph; Feige, Bernd; Baglioni, Chiara; Riemann, Dieter; Hennig, Jürgen; Lange, Thomas

    2016-01-01

    Chronic insomnia is one of the most prevalent central nervous system disorders. It is characterized by increased arousal levels, however, the neurobiological causes and correlates of hyperarousal in insomnia remain to be further determined. In the current study, magnetic resonance spectroscopy was used in the morning and evening in a well-characterized sample of 20 primary insomnia patients (12 females; 8 males; 42.7 ± 13.4 years) and 20 healthy good sleepers (12 females; 8 males; 44.1 ± 10.6 years). The most important inhibitory and excitatory neurotransmitters of the central nervous system, γ-aminobutyric acid (GABA) and glutamate/glutamine (Glx), were assessed in the anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (DLPFC). The primary hypothesis, a diurnal effect on GABA levels in patients with insomnia, could not be confirmed. Moreover, the current results did not support previous findings of altered GABA levels in individuals with insomnia. Exploratory analyses, however, suggested that GABA levels in the ACC may be positively associated with habitual sleep duration, and, thus, reduced GABA levels may be a trait marker of objective sleep disturbances. Moreover, there was a significant GROUP x MEASUREMENT TIME interaction effect on Glx in the DLPFC with increasing Glx levels across the day in the patients but not in the control group. Therefore, Glx levels may reflect hyperarousal at bedtime in those with insomnia. Future confirmatory studies should include larger sample sizes to investigate brain metabolites in different subgroups of insomnia.

  13. Using high-dynamic-range digital repeat photography to measure plant phenology in a subarctic mire.

    Science.gov (United States)

    Garnello, A.; Dye, D. G.; Bogle, R.; Vogel, J.; Saleska, S. R.; Crill, P. M.

    2015-12-01

    A novel Visual Imaging System (VIS) was designed and deployed in a subarctic mire (68° 20' N, 19° 03'E) aimed at cataloging plant biological changes (phenology) and analyzing seasonal color shifts in relation to micrometeorological data along the summer growing season: June-November, 2015. The VIS is designed as a tower-based, solar-powered, automated phenology camera (Phenocam) that collects red, green, blue (RGB) and near-infrared (NIR) landscape images in High Dynamic Range (HDR) with fully programmable temporal resolution. HDR composite images are made through combining a series of rapid-capture photos with incremental increases of exposure times and a fixed focus, minimizing the spatial and visual data lost from shadows or from the over-saturation of light. This visual record of ecosystem phenology stages (Phenophases) is being used to (1) investigate vegetation-dependent spectral indices; (2) establish a cross-year comparison record of Phenophase seasonality; (3) investigate meteorological-dependent vegetation Phenophases; (4) provide ground-truthing measurements that enhance broader spatial-scale remote sensing analyses of subarctic wetlands.

  14. A Finite Mixture of Nonlinear Random Coefficient Models for Continuous Repeated Measures Data.

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R; Zopluoglu, Cengiz

    2016-09-01

    Nonlinear random coefficient models (NRCMs) for continuous longitudinal data are often used for examining individual behaviors that display nonlinear patterns of development (or growth) over time in measured variables. As an extension of this model, this study considers the finite mixture of NRCMs that combine features of NRCMs with the idea of finite mixture (or latent class) models. The efficacy of this model is that it allows the integration of intrinsically nonlinear functions where the data come from a mixture of two or more unobserved subpopulations, thus allowing the simultaneous investigation of intra-individual (within-person) variability, inter-individual (between-person) variability, and subpopulation heterogeneity. Effectiveness of this model to work under real data analytic conditions was examined by executing a Monte Carlo simulation study. The simulation study was carried out using an R routine specifically developed for the purpose of this study. The R routine used maximum likelihood with the expectation-maximization algorithm. The design of the study mimicked the output obtained from running a two-class mixture model on task completion data.
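
    A full finite mixture of nonlinear random coefficient models is estimated with maximum likelihood and the EM algorithm, as the record describes. As a rough two-stage stand-in for the latent-class idea, one can fit per-subject growth parameters and then cluster them with an EM-fitted Gaussian mixture; the sketch below does this on simulated repeated measures and is only an approximation, not the authors' R routine or their intrinsically nonlinear model.

```python
# Sketch (two-stage approximation): estimate per-subject growth parameters
# from repeated measures, then identify latent subpopulations with a
# Gaussian mixture fitted by EM (simulated data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
n_subjects, n_times = 200, 6
times = np.arange(n_times, dtype=float)

# Two unobserved classes with different intercepts and slopes.
labels = rng.binomial(1, 0.4, n_subjects)
intercepts = np.where(labels == 1, 10.0, 4.0) + rng.normal(0, 1, n_subjects)
slopes = np.where(labels == 1, 0.5, 2.0) + rng.normal(0, 0.2, n_subjects)
y = intercepts[:, None] + slopes[:, None] * times + rng.normal(0, 1, (n_subjects, n_times))

# Stage 1: per-subject least-squares slope and intercept.
coeffs = np.array([np.polyfit(times, yi, 1) for yi in y])   # columns: slope, intercept

# Stage 2: EM-fitted Gaussian mixture over the subject-level coefficients.
gm = GaussianMixture(n_components=2, random_state=0).fit(coeffs)
assigned = gm.predict(coeffs)
print("estimated class means (slope, intercept):")
print(gm.means_)
print("class sizes:", np.bincount(assigned))
```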

  15. Breath acidification in adolescent runners exposed to atmospheric pollution: A prospective, repeated measures observational study

    Directory of Open Access Journals (Sweden)

    Van Sickle David

    2008-03-01

    Full Text Available Abstract Background Vigorous outdoor exercise during an episode of air pollution might cause airway inflammation. The purpose of this study was to examine the effects of vigorous outdoor exercise during peak smog season on breath pH, a biomarker of airway inflammation, in adolescent athletes. Methods We measured breath pH both pre- and post-exercise on ten days during peak smog season in 16 high school athletes engaged in daily long-distance running in a downwind suburb of Atlanta. The association of post-exercise breath pH with ambient ozone and particulate matter concentrations was tested with linear regression. Results We collected 144 pre-exercise and 146 post-exercise breath samples from 16 runners (mean age 14.9 years, 56% male). Median pre-exercise breath pH was 7.58 (interquartile range: 6.90 to 7.86) and did not change significantly after exercise. We observed no significant association between ambient ozone or particulate matter and post-exercise breath pH. However, both pre- and post-exercise breath pH were strikingly low in these athletes when compared to a control sample of 14 relatively sedentary healthy adults and to published values of breath pH in healthy subjects. Conclusion Although we did not observe an acute effect of air pollution exposure during exercise on breath pH, breath pH was surprisingly low in this sample of otherwise healthy long-distance runners. We speculate that repetitive vigorous exercise may induce airway acidification.

  16. SMOS Measurements Preliminary Validation: Objectives and Approach

    Science.gov (United States)

    Sabia, Roberto; Gourrion, Jerome; Gabarró, Carolina; Talone, Marco; Portabella, Marcos; Ballabrera, Joaquim; Lopez de Aretxabaleta, Alfredo; Camps, Adriano; Monerris, Alessandra; Font, Jordi

    2010-05-01

    strategy to mitigate the scene-dependent bias found in the SMOS measurements. The comparison of TB departures distributions will be performed within specific classes, aiming at reducing the degrees of freedom of the measurement. Namely, the data will be sorted according to the incidence angle, the wind speed, the across-track distance, the radiometric accuracy and the spatial resolution. Concerning SSS, in turn, the proposed activities will involve inter-comparisons with various external salinity sources. As a further classification, external sources can be distinguished into data coming from models and data collected in-situ. The validation strategy foresees the comparison of SSS misfit (retrieved SSS minus ground-truth SSS) distributions within specific classes. This will be performed sorting geographical areas (different oceans, different zonal frames) and geophysical conditions (e.g. low/high surface temperature, wind speed and SSS conditions). Specific comparisons with in-situ data coming from oceanographic cruises transects and from VOS (Voluntary Observatory Ships) are foreseen, as well as against moored buoys, profilers, and drifters. These data will be arranged in specific match-up datasets, to properly organize the spatio-temporal collocation of the SMOS and in-situ measurements. The possibility of using model solutions for validation will also be considered. Model data are obtained from hindcast simulations from available prediction systems. Concerning the salinity retrieval inversion scheme, efforts will be devoted to the optimization of both the GMF (Geophysical Model Function) and the minimization cost function. With the increase of data availability, the semi-empirical GMF in the ocean salinity Level 2 operational processor will be improved, in particular the roughness-dependent TB term. The introduction of non-linear relationships in the semi-empirical roughness model is a likely extension of this formulation. The prospective approach is to develop, at a

  17. Repeatability of corticospinal and spinal measures during lengthening and shortening contractions in the human tibialis anterior muscle.

    Directory of Open Access Journals (Sweden)

    Jamie Tallent

    Full Text Available UNLABELLED: Elements of the human central nervous system (CNS) constantly oscillate. In addition, there are also methodological factors and changes in muscle mechanics during dynamic muscle contractions that threaten the stability and consistency of transcranial magnetic stimulation (TMS) and peripheral nerve stimulation (PNS) measures. PURPOSE: To determine the repeatability of TMS and PNS measures during lengthening and shortening muscle actions in the intact human tibialis anterior. METHODS: On three consecutive days, 20 males performed lengthening and shortening muscle actions at 15, 25, 50 and 80% of maximal voluntary contraction (MVC). The amplitude of the Motor Evoked Potentials (MEPs) produced by TMS was measured at rest and during muscle contraction at 90° of ankle joint position. MEPs were normalised to Mmax determined with PNS. The corticospinal silent period was recorded at 80% MVC. Hoffman reflexes (H-reflexes) at 10% isometric and 25% shortening and lengthening MVCs, and V-waves during MVCs, were also evoked on each of the three days. RESULTS: With the exception of MEPs evoked at 80% shortening MVC, all TMS-derived measures showed good reliability (ICC = 0.81-0.94) from days 2 to 3. Confidence intervals (CI, 95%) were lower between days 2 and 3 when compared to days 1 and 2. MEPs significantly increased at rest from days 1 to 2 (P = 0.016) and days 1 to 3 (P = 0.046). The H-reflex during dynamic muscle contraction was reliable across the three days (ICC = 0.76-0.84). V-waves (shortening, ICC = 0.77; lengthening, ICC = 0.54) and the H-reflex at 10% isometric MVC (ICC = 0.66) were generally less reliable over the three days. CONCLUSION: Although it is well known that measures of the intact human CNS exhibit moment-to-moment fluctuations, careful experimental arrangements make it possible to obtain consistent and repeatable measurements of corticospinal and spinal excitability in the actively lengthening and shortening human tibialis anterior muscle.

  18. Are There Linguistic Markers of Suicidal Writing That Can Predict the Course of Treatment? A Repeated Measures Longitudinal Analysis.

    Science.gov (United States)

    Brancu, Mira; Jobes, David; Wagner, Barry M; Greene, Jeffrey A; Fratto, Timothy A

    2016-07-02

    The purpose of this pilot study was to predict resolution of suicidal ideation and risk over the course of therapy among suicidal outpatients (N = 144) using a novel method for analyzing Self- versus Relationally oriented qualitative written responses to the Suicide Status Form (SSF). A content analysis software program was used to extract word counts, and a repeated measures longitudinal design was implemented to assess improvement over time. Patients with primarily Relationally focused word counts were more likely to have a quicker suicide risk resolution than those with more Self-focused word counts (6-7 sessions versus 17-18 sessions). Implications of these data are discussed, including the potential for enhancing treatment outcomes using this method with individuals entering treatment.

  19. Boolean approach to dichotomic quantum measurement theories

    Science.gov (United States)

    Nagata, K.; Nakamura, T.; Batle, J.; Abdalla, S.; Farouk, A.

    2017-02-01

    Recently, a new measurement theory based on truth values was proposed by Nagata and Nakamura [Int. J. Theor. Phys. 55, 3616 (2016)], that is, a theory where the results of measurements are either 0 or 1. The standard measurement theory accepts a hidden variable model for a single Pauli observable. Hence, we can introduce a classical probability space for the measurement theory in this particular case. Additionally, we discuss in the present contribution the fact that projective measurement theories (the results of which are either +1 or -1) imply the Bell, Kochen, and Specker (BKS) paradox for a single Pauli observable. To justify our assertion, we present the BKS theorem in almost all the two-dimensional states by using a projective measurement theory. As an example, we present the BKS theorem in two-dimensions with white noise. Our discussion provides new insight into the quantum measurement problem by using this measurement theory based on the truth values.

  20. A Measurement Approach for Process Improvement | Woherem ...

    African Journals Online (AJOL)

    LBS Management Review ... Many organisations today have embarked on a process improvement or re-engineering project. ... are used in the measurement of software processes, REFINE is used for the measurement of business processes.

  1. Evaluating measurement accuracy a practical approach

    CERN Document Server

    Rabinovich, Semyon G

    2017-01-01

    This book presents a systematic and comprehensive exposition of the theory of measurement accuracy and provides solutions that fill significant and long-standing gaps in the classical theory. It eliminates the shortcomings of the classical theory by including methods for estimating accuracy of single measurements, the most common type of measurement. The book also develops methods of reduction and enumeration for indirect measurements, which do not require Taylor series and produce a precise solution to this problem. It produces grounded methods and recommendations for summation of errors. The monograph also analyzes and critiques two foundation metrological documents, the International Vocabulary of Metrology (VIM) and the Guide to the Expression of Uncertainty in Measurement (GUM), and discusses directions for their revision. This new edition adds a step-by-step guide on how to evaluate measurement accuracy and recommendations on how to calculate systematic error of multiple measurements. There is also an e...

  2. Quantum repeated games revisited

    CERN Document Server

    Frackiewicz, Piotr

    2011-01-01

    We present a scheme for playing quantum repeated 2x2 games based on Marinatto and Weber's approach to quantum games. As a potential application, we study the twice repeated Prisoner's Dilemma game. We show that results not available in the classical game can be obtained when the game is played in the quantum way. Before we present our idea, we comment on the previous scheme of playing quantum repeated games.

  3. A returns based approach to performance measurement

    OpenAIRE

    Casselman, Graham Andrew

    2005-01-01

    This paper applies the performance measurement technique first developed by Sharpe (1992) to a selected pool of Canadian investment funds. Sharpe's technique differed from traditional performance measurement methods in two areas. The first being that Sharpe's method included distinct investing styles as asset classes, resulting in an expanded asset class universe. The second being his method of measuring asset class exposures. With a constrained regression equation, Sharpe was a...

  4. Association between proton magnetic resonance spectroscopy measurements and CAG repeat number in patients with spinocerebellar ataxias 2, 3, or 6.

    Directory of Open Access Journals (Sweden)

    Po-Shan Wang

    Full Text Available The aim of this study was to correlate magnetic resonance spectroscopy (MRS) measurements, including that for the N-acetyl aspartate (NAA)/creatine (Cr) ratio in the vermis (denoted V-NAA), right cerebellar hemisphere (R-NAA), and left (L-NAA) cerebellar hemisphere, with the clinical scale for the assessment and rating of ataxia (SARA) score for patients with spinocerebellar ataxia (SCA) types 2, 3, and 6. A total of 24 patients with SCA2, 48 with SCA3, and 16 with SCA6 were recruited; 12 patients with SCA2, 43 with SCA3, and 8 with SCA6 underwent detailed magnetic resonance neuroimaging. Forty-four healthy, age-matched individuals without history of neurologic disease served as control subjects. V-NAA and patient age were used to calculate the predicted age at which a patient with SCA2 or SCA3 would reach an onset V-NAA value. Results showed the following: the NAA/Cr ratio decreased with increasing age in patients with SCA but not in control subjects; the SARA score increased progressively with age and duration of illness; V-NAA showed a better correlation with SARA score than R-NAA in patients with SCA2 or SCA3; the ratio of age to V-NAA correlated well with CAG repeat number; the retrospectively predicted age of onset for SCA2 and SCA3 was consistent with patient-reported age of onset; R-NAA showed a better correlation with SARA score than V-NAA in patients with SCA6; V-NAA and R-NAA correlated with clinical severity (SARA score) in patients with SCA. The correlation between CAG repeat number and age could be expressed as a simple linear function, which might explain previous observations claiming that the greater the CAG repeat number, the earlier the onset of illness and the faster the disease progression. These findings support the use of MRS values to predict age of disease onset and to retrospectively evaluate the actual age of disease onset in SCA.

  5. Association between proton magnetic resonance spectroscopy measurements and CAG repeat number in patients with spinocerebellar ataxias 2, 3, or 6.

    Science.gov (United States)

    Wang, Po-Shan; Chen, Hung-Chieh; Wu, Hsiu-Mei; Lirng, Jiing-Feng; Wu, Yu-Te; Soong, Bing-Wen

    2012-01-01

    The aim of this study was to correlate magnetic resonance spectroscopy (MRS) measurements, including that for the N-acetyl aspartate (NAA)/creatine (Cr) ratio in the vermis (denoted V-NAA), right cerebellar hemisphere (R-NAA), and left (L-NAA) cerebellar hemisphere, with the clinical scale for the assessment and rating of ataxia (SARA) score for patients with spinocerebellar ataxia (SCA) types 2, 3, and 6. A total of 24 patients with SCA2, 48 with SCA3, and 16 with SCA6 were recruited; 12 patients with SCA2, 43 with SCA3, and 8 with SCA6 underwent detailed magnetic resonance neuroimaging. Forty-four healthy, age-matched individuals without history of neurologic disease served as control subjects. V-NAA and patient age were used to calculate the predicted age at which a patient with SCA2 or SCA3 would reach an onset V-NAA value. Results showed the following: the NAA/Cr ratio decreased with increasing age in patients with SCA but not in control subjects; the SARA score increased progressively with age and duration of illness; V-NAA showed a better correlation with SARA score than R-NAA in patients with SCA2 or SCA3; the ratio of age to V-NAA correlated well with CAG repeat number; the retrospectively predicted age of onset for SCA2 and SCA3 was consistent with patient-reported age of onset; R-NAA showed a better correlation with SARA score than V-NAA in patients with SCA6; V-NAA and R-NAA correlated with clinical severity (SARA score) in patients with SCA. The correlation between CAG repeat number and age could be expressed as a simple linear function, which might explain previous observations claiming that the greater the CAG repeat number, the earlier the onset of illness and the faster the disease progression. These findings support the use of MRS values to predict age of disease onset and to retrospectively evaluate the actual age of disease onset in SCA.
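
    The "simple linear function" relating CAG repeat number to an age-related MRS quantity can be illustrated with an ordinary least-squares fit that is then inverted to give a retrospective onset-age prediction. The sketch below is purely schematic: the values, the assumed onset V-NAA threshold, and the form age/V-NAA ≈ a + b·CAG are placeholders, not the paper's fitted model.

```python
# Sketch: expressing the relation between CAG repeat number and an
# age-related MRS quantity as a simple linear function, then inverting it
# to retrospectively predict age of onset (simulated values, schematic only).
import numpy as np

rng = np.random.default_rng(9)
cag = rng.integers(38, 55, 40)                              # CAG repeats (simulated)
age_to_vnaa = 60 - 0.8 * cag + rng.normal(0, 2, cag.size)   # hypothetical age/V-NAA ratio

# Fit the "simple linear function" age/V-NAA ~ a + b * CAG.
b, a = np.polyfit(cag, age_to_vnaa, 1)
print(f"fitted linear relation: age/V-NAA ≈ {a:.1f} + ({b:.2f}) * CAG")

# For a new patient, predict the ratio from CAG and convert back to an age
# once an onset V-NAA value is assumed (placeholder value below).
onset_vnaa = 0.9
new_cag = 46
predicted_ratio = a + b * new_cag
predicted_onset_age = predicted_ratio * onset_vnaa
print(f"predicted onset age for CAG={new_cag}: {predicted_onset_age:.1f} years")
```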

  6. Diffusion-weighted magnetic resonance imaging for assessment of lung lesions: repeatability of the apparent diffusion coefficient measurement

    Energy Technology Data Exchange (ETDEWEB)

    Bernardin, L.; Douglas, N.H.M.; Collins, D.J.; Giles, S.L.; O' Flynn, E.A.M.; Orton, M.; DeSouza, N.M. [Institute of Cancer Research and Royal Marsden NHS Foundation Trust, CRUK and EPSRC Cancer Imaging Centre, Surrey (United Kingdom)

    2014-02-15

    To establish repeatability of apparent diffusion coefficients (ADCs) acquired from free-breathing diffusion-weighted magnetic resonance imaging (DW-MRI) in malignant lung lesions and investigate effects of lesion size, location and respiratory motion. Thirty-six malignant lung lesions (eight patients) were examined twice (1- to 5-h interval) using T1-weighted, T2-weighted and axial single-shot echo-planar DW-MRI (b = 100, 500, 800 s/mm{sup 2}) during free-breathing. Regions of interest around target lesions on computed b = 800 s/mm{sup 2} images by two independent observers yielded ADC values from maps (pixel-by-pixel fitting using all b values and a mono-exponential decay model). Intra- and inter-observer repeatability was assessed per lesion, per patient and by lesion size (> or <2 cm) or location. ADCs were similar between observers (mean ± SD, 1.15 ± 0.28 x 10{sup -3} mm{sup 2}/s, observer 1; 1.15 ± 0.29 x 10{sup -3} mm{sup 2}/s, observer 2). Intra-observer coefficients of variation of the mean [median] ADC per lesion and per patient were 11 % [11.4 %], 5.7 % [5.7 %] for observer 1 and 9.2 % [9.5 %], 3.9 % [4.7 %] for observer 2 respectively; inter-observer values were 8.9 % [9.3 %] (per lesion) and 3.0 % [3.7 %] (per patient). Inter-observer coefficient of variation (CoV) was greater for lesions <2 cm (n = 20) compared with >2 cm (n = 16) (10.8 % vs 6.5 % ADC{sub mean}, 11.3 % vs 6.7 % ADC{sub median}) and for mid (n = 14) vs apical (n = 9) or lower zone (n = 13) lesions (13.9 %, 2.7 %, 3.8 % respectively ADC{sub mean}; 14.2 %, 2.8 %, 4.7 % respectively ADC{sub median}). Free-breathing DW-MRI of whole lung achieves good intra- and inter-observer repeatability of ADC measurements in malignant lung tumours. (orig.)

  7. Accuracy and repeatability of quantitative fluoroscopy for the measurement of sagittal plane translation and finite centre of rotation in the lumbar spine.

    Science.gov (United States)

    Breen, Alexander; Breen, Alan

    2016-07-01

    Quantitative fluoroscopy (QF) was developed to measure intervertebral mechanics in vivo and has been found to have high repeatability and accuracy for the measurement of intervertebral rotations. However, sagittal plane translation and finite centre of rotation (FCR) are potential measures of stability but have not yet been fully validated for current QF. This study investigated the repeatability and accuracy of QF for measuring these variables. Repeatability was assessed from L2-S1 in 20 human volunteers. Accuracy was investigated using 10 consecutive measurements from each of two pairs of linked and instrumented dry human vertebrae as reference; one which tilted without translation and one which translated without tilt. Intra- and inter-observer repeatability for translation was 1.1 mm or less (SEM), with fair to substantial reliability (ICC 0.533-0.998). Intra-observer repeatability of FCR location for intervertebral rotations of 5° and above ranged from 1.5 mm to 1.8 mm (SEM), with moderate to substantial reliability (ICC 0.626-0.988). Inter-observer repeatability for FCR ranged from 1.2 mm to 5.7 mm, also with moderate to substantial reliability (ICC 0.621-0.878). Reliability was substantial (ICC>0.81) for 10/16 measures for translation and 5/8 for FCR location. Accuracy for translation was 0.1 mm (fixed centre) and 2.2 mm (moveable centre), with an FCR error of 0.3 mm (x) and 0.4 mm (y) (fixed centre). This technology was found to have a high level of accuracy and, with a few exceptions, moderate to substantial repeatability for the measurement of translation and FCR from fluoroscopic motion sequences.
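
    The SEM and ICC figures quoted above are standard repeatability statistics. As a rough illustration of how such values can be computed from repeated readings, here is a one-way random-effects ICC with its associated SEM; the paper may well have used a different ICC model, and the data below are simulated rather than the study's measurements.

```python
import numpy as np

def icc_and_sem(data):
    """One-way random-effects ICC(1,1) and standard error of
    measurement (SEM) for an (n_subjects x k_repeats) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)             # between subjects
    msw = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))   # within subjects
    icc = (msb - msw) / (msb + (k - 1) * msw)
    sem = np.sqrt(msw)
    return icc, sem

# Hypothetical repeated translation measurements (mm), 2 readings per level
rng = np.random.default_rng(0)
true = rng.normal(2.0, 1.5, size=20)                      # 20 intervertebral levels
readings = true[:, None] + rng.normal(0, 0.8, size=(20, 2))
icc, sem = icc_and_sem(readings)
print(f"ICC = {icc:.3f}, SEM = {sem:.2f} mm")
```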

  8. Personality Measurement in Children: A Dimensional Approach

    Science.gov (United States)

    Eysenck, H. J.; And Others

    1970-01-01

    Reported are the results of a personality inventory measuring the personality dimensions of extraversion, neuroticism, psychoticism, and lying, which was administered to over 3,000 school children. (KW)

  9. Realistic Approach for Phasor Measurement Unit Placement

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2015-01-01

    This paper presents a realistic cost-effective model for optimal placement of phasor measurement units (PMUs) for complete observability of a power system considering practical cost implications. The proposed model considers hidden or otherwise unaccounted practical costs involved in PMU...

  10. Measurement of Surface Displacement and Deformation of Mass Movements Using Least Squares Matching of Repeat High Resolution Satellite and Aerial Images

    Directory of Open Access Journals (Sweden)

    Misganu Debella-Gilo

    2012-01-01

    Full Text Available Displacement and deformation are fundamental measures of Earth surface mass movements such as glacier flow, rockglacier creep and rockslides. Ground-based methods of monitoring such mass movements can be costly, time consuming and limited in spatial and temporal coverage. Remote sensing techniques, here matching of repeat optical images, are increasingly used to obtain displacement and deformation fields. Strain rates are usually computed in a post-processing step based on the gradients of the measured velocity field. This study explores the potential of automatically and directly computing velocity, rotation and strain rates on Earth surface mass movements simultaneously from the matching positions and the parameters of the geometric transformation models using the least squares matching (LSM) approach. The procedures are exemplified using bi-temporal high resolution satellite and aerial images of glacier flow, rockglacier creep and land sliding. The results show that LSM matches the images and computes longitudinal strain rates, transverse strain rates and shear strain rates reliably with mean absolute deviations in the order of 10−4 (one level of significance below the measured values) as evaluated on stable grounds. The LSM also improves the accuracy of displacement estimation of the pixel-precision normalized cross-correlation by over 90% under ideal (simulated) circumstances and by about 25% for real multi-temporal images of mass movements.
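
    For orientation, the pixel-precision normalized cross-correlation matching that LSM is benchmarked against can be sketched as a simple block search; this is a generic illustration with synthetic images, not the authors' implementation, and the window and search sizes are arbitrary.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_patch(img1, img2, row, col, half=8, search=5):
    """Pixel-precision displacement of the patch centred at (row, col)
    in img1, found by exhaustive search in a +/-search window of img2."""
    ref = img1[row - half:row + half + 1, col - half:col + half + 1]
    best, best_dr, best_dc = -np.inf, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = img2[row + dr - half:row + dr + half + 1,
                        col + dc - half:col + dc + half + 1]
            score = ncc(ref, cand)
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc, best

# Synthetic example: shift a random image by (2, 3) pixels
rng = np.random.default_rng(1)
img1 = rng.random((64, 64))
img2 = np.roll(np.roll(img1, 2, axis=0), 3, axis=1)
print(match_patch(img1, img2, 32, 32))   # expected displacement (2, 3)
```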

  11. The use of portable NIRS to measure muscle oxygenation and haemodynamics during a repeated sprint running test.

    Science.gov (United States)

    Jones, Ben; Hesford, Catherine M; Cooper, Chris E

    2013-01-01

    Portable near-infrared spectroscopy (NIRS) devices were originally developed for use in exercise and sports science by Britton Chance in the 1990s (the RunMan and microRunman series). However, only recently, with the development of more robust and wireless systems, has routine use in elite sport become possible. As with the medical use of NIRS, finding applications of the technology that are relevant to practitioners is the key issue. One option is to use NIRS to track exercise training-induced adaptations in muscle. Portable NIRS devices enable monitoring during the normal 'field' routines used to assess fitness, such as repeated sprint shuttle tests. Knowledge about the acute physiological responses to these specific tests has practical applications within team sport training prescription, where development of both central and peripheral determinants of high-intensity intermittent exercise needs to be considered. The purpose of this study was to observe NIRS-detected parameters during a repeat sprint test. We used the PortaMon, a two-wavelength spatially resolved NIR spectrometer manufactured by Artinis Inc., to assess NIR changes in the gastrocnemius muscle of both the left and right leg during high-intensity running. Six university standard rugby players were assessed (age 20 ± 1.5 years; height 183 ± 1.0 cm; weight 89.4 ± 5.8 kg; body fat 12.2 ± 3.0%); the subjects completed nine repeated shuttle runs, which incorporated forward, backward and change of direction movements. Individual sprint time, total time to complete the test, blood lactate response (BL), heart rate values (HR) and haemoglobin variables (ΔHHb, ΔtHb, ΔHbO2 and ΔTSI%) were measured. Total time to complete the test was 260 ± 20 s, final blood lactate was 14.3 ± 2.8 mM, and maximal HR 182 ± 5 bpm. NIRS variables displayed no differences between right and left legs. During the test, the group-averaged data showed a clear decrease in HbO2 (max. decrease 11.41 ± 4.95 μM), increase in HHb

  12. The quantum measurement approach to particle oscillations

    CERN Document Server

    Anastopoulos, C

    2010-01-01

    The apparent LSND and MiniBooNE anomalies in neutrino oscillations are usually attributed to physics beyond the Standard Model. It is, however, possible that they may be an artefact of the theoretical treatment of particle oscillations that ignores fine points of quantum measurement theory relevant to the experiments. In this paper, we construct a rigorous measurement-theoretic framework for the description of particle oscillations, employing no assumptions extrinsic to quantum theory. The formalism leads to a non-standard oscillation formula; at low energy it predicts an `anomalous' oscillation wavelength, while at high energy it differs from the standard expression by a factor of 2. The key novelties in the formalism are the treatment of a particle's time of arrival at the detector as a genuine quantum observable, the theoretical precision in the definition of quantum probabilities, and the detailed modeling of the measurement process. The article also contains an extensive critical review of existing theore...

  13. Integral, measure and derivative a unified approach

    CERN Document Server

    Shilov, G E

    2012-01-01

    This graduate-level textbook and monograph defines the functions of a real variable through consistent use of the Daniell scheme, offering a rare and useful alternative to customary approaches. The treatment can be understood by any reader with a solid background in advanced calculus, and it features many problems with hints and answers. "The exposition is fresh and sophisticated," declared Sci-Tech Book News, "and will engage the interest of accomplished mathematicians." Part one is devoted to the integral, moving from the Riemann integral and step functions to a general theory, and obta

  14. Extended Reconstruction Approaches for Saturation Measurements Using Reserved Quantization Indices

    DEFF Research Database (Denmark)

    Li, Peng; Arildsen, Thomas; Larsen, Torben

    2012-01-01

    This paper proposes a reserved quantization indices method for saturated measurements in compressed sensing. The existing approaches tailored for the saturation effect do not provide a way to identify saturated measurements, which is mandatory in practical implementations. We introduce a method using reserved quantization indices to mark saturated measurements, which is applicable to current quantizer models. Two extended approaches based on the proposed method have been investigated and compared to the existing approaches. The investigation shows that saturated measurements can be identified by reserved quantization indices without adding extra hardware resources while maintaining a comparable reconstruction quality to the existing approaches.
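
    The core idea, marking saturated (clipped) measurements with a codeword that is set aside for that purpose, can be illustrated with a toy uniform quantizer; the bit depth, saturation level and handling of flagged samples below are assumptions made only for illustration, not the scheme evaluated in the paper.

```python
import numpy as np

def quantize_with_reserved_index(y, n_bits=4, sat_level=1.0):
    """Uniform quantizer on [-sat_level, sat_level] whose highest index is
    reserved to flag saturated measurements instead of silently clipping them."""
    n_levels = 2 ** n_bits
    reserved = n_levels - 1                      # codeword sacrificed for the flag
    step = 2 * sat_level / (n_levels - 2)        # regular indices 0 .. n_levels-2
    idx = np.round((y + sat_level) / step).astype(int)
    idx = np.clip(idx, 0, n_levels - 2)
    idx = np.where(np.abs(y) >= sat_level, reserved, idx)   # flag saturated samples
    return idx, reserved

def dequantize(idx, reserved, n_bits=4, sat_level=1.0):
    """Reconstruct values; saturated samples come back as NaN so a CS
    reconstruction could treat them separately (e.g. as inequality constraints)."""
    n_levels = 2 ** n_bits
    step = 2 * sat_level / (n_levels - 2)
    y_hat = idx * step - sat_level
    return np.where(idx == reserved, np.nan, y_hat)

rng = np.random.default_rng(2)
y = np.append(rng.normal(0, 0.4, size=8), [1.3, -2.0])   # last two samples saturate
idx, reserved = quantize_with_reserved_index(y)
print(idx)
print(dequantize(idx, reserved))
```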

  15. Approaches to measuring cultural diversity in recreation

    Science.gov (United States)

    Chieh-Lu Li; James D. Absher; Yi-Chung Hsu; Alan R. Graefe

    2008-01-01

    Measuring cultural diversity in recreation has become an important topic because of the increasing coverage of and interest in ethnicity and cross-cultural aspects of recreation. Introducing theories and methods from established disciplines other than leisure studies/recreation and park studies is necessary to understand this important issue. In this article, we first...

  16. Virtual destination image: a new measurement approach

    NARCIS (Netherlands)

    R. Govers (Robert); F.M. Go (Frank); K. Kumar (Kuldeep)

    2007-01-01

    This study utilized enabling internet and computerized content analysis technologies to measure destination image from a phenomenographic post-positivist perspective. In an online survey, respondents were asked to describe their image of one of seven case study destinations that they had

  17. A NEW APPROACH FOR MEASURING CORPORATE REPUTATION

    Directory of Open Access Journals (Sweden)

    Percy Marquina Feldman

    2014-01-01

    Full Text Available This study describes the concept of corporate reputation and reviews some of the major issues that arise when measuring it. It then suggests a new measurement index, and its advantages and disadvantages are pointed out. The consistency of the seven key variables of the resulting indicator is described through the results of a factor analysis and correlations. Finally, the indicator is put to the test by gathering perceptions of corporate reputation from 1,500 individuals for 69 companies belonging to 15 different industrial sectors in Peru. The results indicate that the best-performing companies on the proposed index variables are not necessarily those of greatest interest to the study sample, and the most widely remembered companies are not necessarily those that enjoy the greatest corporate reputation. Managerial implications for organizations in the process of managing and monitoring the dimensions of this key asset are also discussed.

  18. Repeated-sprint cycling does not induce respiratory muscle fatigue in active adults: measurements from the powerbreathe® inspiratory muscle trainer.

    Science.gov (United States)

    Minahan, Clare; Sheehan, Beth; Doutreband, Rachel; Kirkwood, Tom; Reeves, Daniel; Cross, Troy

    2015-03-01

    This study examined respiratory muscle strength using the POWERbreathe® inspiratory muscle trainer (i.e., 'S-Index') before and after repeated-sprint cycling for comparison with maximal inspiratory pressure (MIP) values obtained during a Mueller maneuver. The S-Index was measured during six trials across two sessions using the POWERbreathe® and MIP was measured during three trials in a single session using a custom-made manometer in seven recreationally active adults. Global respiratory muscle strength was measured using both devices before and after the performance of sixteen 6-s sprints on a cycle ergometer. Intraclass correlation coefficients for the POWERbreathe® S-Index indicated excellent (p Repeated-sprint cycling had no effect on respiratory muscle strength as measured by the POWERbreathe® (p > 0.99) and during the Mueller maneuver (p > 0.99). The POWERbreathe® S-Index is a moderately reliable, but not equivalent, measure of MIP determined during a Mueller maneuver. Furthermore, repeated-sprint cycling does not induce globalized respiratory muscle fatigue in recreationally active adults. Key points: (1) the S-Index as measured by the POWERbreathe® is a reliable measure of respiratory muscle strength; (2) the S-Index does not accurately reflect maximal inspiratory pressure obtained from a Mueller maneuver; (3) repeated-sprint cycling does not induce respiratory muscle fatigue as measured by the POWERbreathe® and the manometer.

  19. Product lifecycle approach to cascade impaction measurements.

    Science.gov (United States)

    Tougas, Terrence P; Christopher, Dave; Mitchell, Jolyon; Lyapustina, Svetlana; Van Oort, Michiel; Bauer, Richard; Glaab, Volker

    2011-03-01

    Over the lifecycle of an orally inhaled product (OIP), multi-stage cascade impactor (CI) measurements are used for different purposes and to address different questions. Full-resolution CIs can provide important information during product development and are widely used but are time- and resource-intensive, highly variable, and suboptimal for OIP quality control (QC) testing. By contrast, Efficient Data Analysis (EDA) combined with Abbreviated Impactor Measurement (AIM) systems, pertinent either for QC or, possibly, for the adult human respiratory tract (pHRT), has been introduced for OIP performance assessment during and post-development. This article summarizes available evidence and discusses a strategy for using either abbreviated or full-resolution CI systems depending on the purpose of the measurement, such that adequate, accurate, and efficient testing of aerodynamic particle size distribution (APSD) of OIPs can be achieved throughout the lifecycle of a product. Under these proposals, a comprehensive testing program should initially be conducted by full-resolution CI in OIP development to ascertain the product's APSD. Subsequently, correlations should be established from the selected AIM CIs to the corresponding full-resolution system, ideally developing specifications common to both techniques. In the commercial phase, it should be possible to release product using AIM/EDA, keeping the full-resolution CI for investigations, change control, and trouble-shooting, thus optimizing resources for APSD characterization throughout the product lifecycle. If an in vitro-in vivo relationship is established and clinically relevant sizes are known, an AIM-pHRT could serve as a quick indicator that clinically relevant fractions have not changed, and could also assist in the management of post-approval changes. © 2011 American Association of Pharmaceutical Scientists

  20. NEW APPROACHES ON REVENUE RECOGNITION AND MEASUREMENT

    Directory of Open Access Journals (Sweden)

    Cristina-Aurora, BUNEA-BONTAȘ

    2014-11-01

    Full Text Available Revenue is an important indicator to users of financial statements in assessing an entity's financial performance and position. International Financial Reporting Standard 15 Revenue from Contracts with Customers (IFRS 15), issued in May 2014, provides a robust framework for addressing revenue issues. The standard establishes principles for reporting useful information to users of financial statements about the nature, amount, timing and uncertainty of revenue and cash flows arising from an entity's contracts with customers. This article outlines the basic principles that an entity must apply to measure and recognise revenue and the related cash flows.

  1. Ambient particulate air pollution and circulating antioxidant enzymes: A repeated-measure study in healthy adults in Beijing, China.

    Science.gov (United States)

    Wu, Shaowei; Wang, Bin; Yang, Di; Wei, Hongying; Li, Hongyu; Pan, Lu; Huang, Jing; Wang, Xin; Qin, Yu; Zheng, Chanjuan; Shima, Masayuki; Deng, Furong; Guo, Xinbiao

    2016-01-01

    The association of systemic antioxidant activity with ambient air pollution has been unclear. A panel of 40 healthy college students underwent repeated blood collection on 12 occasions under three exposure scenarios before and after relocating from a suburban area to an urban area in Beijing, China in 2010-2011. We measured various air pollutants including fine particles (PM2.5) and determined circulating levels of the antioxidant enzymes extracellular superoxide dismutase (EC-SOD) and glutathione peroxidase 1 (GPX1) in the laboratory. An interquartile range increase of 63.4 μg/m³ in the 3-day PM2.5 moving average was associated with a 6.3% (95% CI: 0.6, 12.4) increase in EC-SOD and a 5.5% (95% CI: 1.3, 9.8) increase in GPX1. Several PM2.5 chemical constituents, including negative ions (nitrate and chloride) and metals (e.g., iron and strontium), were consistently associated with increases in EC-SOD and GPX1. Our results support activation of circulating antioxidant enzymes following exposure to particulate air pollution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. A method to construct a points system to predict cardiovascular disease considering repeated measures of risk factors

    Science.gov (United States)

    Carbayo-Herencia, Julio Antonio; Vigo, Maria Isabel; Gil-Guillén, Vicente Francisco

    2016-01-01

    Current predictive models for cardiovascular disease based on points systems use the baseline situation of the risk factors as independent variables. These models do not take into account the variability of the risk factors over time. Predictive models for other types of disease also exist that do consider the temporal variability of a single biological marker in addition to the baseline variables. However, due to their complexity these other models are not used in daily clinical practice. Bearing in mind the clinical relevance of these issues and that cardiovascular diseases are the leading cause of death worldwide we show the properties and viability of a new methodological alternative for constructing cardiovascular risk scores to make predictions of cardiovascular disease with repeated measures of the risk factors and retaining the simplicity of the points systems so often used in clinical practice (construction, statistical validation by simulation and explanation of potential utilization). We have also applied the system clinically upon a set of simulated data solely to help readers understand the procedure constructed. PMID:26893963
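
    Points systems of this kind are usually built by converting regression coefficients into integers, dividing each coefficient times a reference increment by a fixed constant (the Sullivan/Framingham approach). The snippet below sketches that conversion with made-up coefficients and categories; it is not the authors' repeated-measures construction.

```python
import numpy as np

def coefficients_to_points(betas, reference_increments, base_constant):
    """Convert regression coefficients to an integer points system
    (Sullivan/Framingham style): points_i = round(beta_i * increment_i / B)."""
    betas = np.asarray(betas, dtype=float)
    inc = np.asarray(reference_increments, dtype=float)
    return np.rint(betas * inc / base_constant).astype(int)

# Hypothetical Cox/logistic coefficients for three risk factors (made up)
betas = [0.052, 0.63, 0.41]            # per-unit log hazard ratios
increments = [10.0, 1.0, 1.0]          # e.g. 10 mmHg SBP, smoking yes/no, diabetes yes/no
B = 0.052 * 5                          # constant: the risk carried by 5 units of factor 1
points = coefficients_to_points(betas, increments, B)
print(dict(zip(["SBP +10 mmHg", "smoker", "diabetes"], points)))
```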

  3. Are diagnostic criteria for acute malnutrition affected by hydration status in hospitalized children? A repeated measures study

    Directory of Open Access Journals (Sweden)

    Fegan Gregory

    2011-09-01

    Full Text Available Abstract Introduction Dehydration and malnutrition commonly occur together among ill children in developing countries. Dehydration (change in total body water is known to alter weight. Although muscle tissue has high water content, it is not known whether mid-upper arm circumference (MUAC may be altered by changes in tissue hydration. We aimed to determine whether rehydration alters MUAC, MUAC Z score (MUACz, weight-for-length Z-score (WFLz and classification of nutritional status among hospitalised Kenyan children admitted with signs of dehydration. Study procedure We enrolled children aged from 3 months to 5 years admitted to a rural Kenyan district hospital with clinical signs compatible with dehydration, and without kwashiorkor. Anthropometric measurements were taken at admission and repeated after 48 hours of treatment, which included rehydration by WHO protocols. Changes in weight observed during this period were considered to be due to changes in hydration status. Results Among 325 children (median age 11 months the median weight gain (rehydration after 48 hours was 0.21 kg, (an increase of 2.9% of admission body weight. Each 1% change in weight was associated with a 0.40 mm (95% CI: 0.30 to 0.44 mm, p Conclusion MUAC is less affected by dehydration than WFLz and is therefore more suitable for nutritional assessment of ill children. However, both WFLz and MUAC misclassify SAM among dehydrated children. Nutritional status should be re-evaluated following rehydration, and management adjusted accordingly.

  4. The OSIRIS Weight of Evidence approach: ITS for the endpoints repeated-dose toxicity (RepDose ITS)

    NARCIS (Netherlands)

    Tluczkiewicz, I.; Batke, M.; Kroese, D.; Buist, H.; Aldenberg, T.; Pauné, E.; Grimm, H.; Kühne, R.; Schüürmann, G.; Mangelsdorf, I.; Escher, S.E.

    2013-01-01

    In the FP6 European project OSIRIS, Integrated Testing Strategies (ITSs) for relevant toxicological endpoints were developed to avoid new animal testing and thus to reduce time and costs. The present paper describes the development of an ITS for repeated-dose toxicity called RepDose ITS which evalua

  5. State of Modern Measurement Approaches in Social Work Research Literature

    Science.gov (United States)

    Unick, George J.; Stone, Susan

    2010-01-01

    The need to develop measures that tap into constructs of interest to social work, refine existing measures, and ensure that measures function adequately across diverse populations of interest is critical. Item response theory (IRT) is a modern measurement approach that is increasingly seen as an essential tool in a number of allied professions.…
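
    As a concrete reminder of what an IRT model provides, the two-parameter logistic (2PL) item response function below gives the probability of endorsing an item as a function of the latent trait, together with the item information; the parameter values are purely illustrative.

```python
import numpy as np

def item_response_2pl(theta, a, b):
    """2PL item response function: P(endorse | theta) for an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                    # latent trait scores
p = item_response_2pl(theta, a=1.5, b=0.0)
info = (1.5 ** 2) * p * (1 - p)                  # Fisher information of the item
print(p.round(3))
print(info.round(3))
```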

  6. Diabetic Foot Prevention: Repeatability of the Loran Platform Plantar Pressure and Load Distribution Measurements in Nondiabetic Subjects during Bipedal Standing—A Pilot Study

    Directory of Open Access Journals (Sweden)

    Martha Zequera

    2011-01-01

    Full Text Available This study was designed to assess the repeatability of the Loran Platform and evaluate the variability of plantar pressure and postural balance during barefoot standing in nondiabetic subjects, for future diabetic foot clinical evaluation. Measurements were taken for eight nondiabetic subjects (4 females, 4 males, aged 47±7.2 years) who had no musculoskeletal symptoms. Five variables were measured with the platform in the barefoot standing position. Ten measurements were taken using two different techniques for feet and posture positioning, during three sessions, once a week. For most measurements, no significant effect over time was found with Student's t-test (P<.000125). The ANOVA test of statistical significance confirmed that measurement differences between subjects showed higher variations than measurements taken from the same subject (P<.001). The measurements taken by the Loran Platform system were found to be repeatable.

  7. Gasificaton Transport: A Multiphase CFD Approach & Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Dimitri Gidaspow; Veeraya Jiradilok; Mayank Kashyap; Benjapon Chalermsinsuwan

    2009-02-14

    The objective of this project was to develop predictive theories for the dispersion and mass transfer coefficients and to measure them in the turbulent fluidization regime, using existing facilities. A second objective was to use our multiphase CFD tools to suggest optimized gasifier designs consistent with aims of Future Gen. We have shown that the kinetic theory based CFD codes correctly compute: (1) Dispersion coefficients; and (2) Mass transfer coefficients. Hence, the kinetic theory based CFD codes can be used for fluidized bed reactor design without any such inputs. We have also suggested a new energy efficient method of gasifying coal and producing electricity using a molten carbonate fuel cell. The principal product of this new scheme is carbon dioxide which can be converted into useful products such as marble, as is done very slowly in nature. We believe this scheme is a lot better than the canceled FutureGen, since the carbon dioxide is safely sequestered.

  8. Repeated measurements of blood lactate concentration as a prognostic marker in horses with acute colitis evaluated with classification and regression trees (CART) and random forest analysis.

    Science.gov (United States)

    Petersen, M B; Tolver, A; Husted, L; Tølbøll, T H; Pihl, T H

    2016-07-01

    The objective of this study was to investigate the prognostic value of single and repeated measurements of blood l-lactate (Lac) and ionised calcium (iCa) concentrations, packed cell volume (PCV) and plasma total protein (TP) concentration in horses with acute colitis. A total of 66 adult horses admitted with acute colitis (2 mmol/L (sensitivity, 0.72; specificity, 0.8). In conclusion, blood lactate concentration measured at admission and repeated 6 h later aided the prognostic evaluation of horses with acute colitis in this population with a very high mortality rate. This should allow clinicians to give a more reliable prognosis for the horse.
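
    A classification tree of the kind referred to (CART) can be grown from admission and 6-h lactate values with standard tools; the snippet below uses scikit-learn on simulated data purely to show the mechanics, and neither the variable distributions nor the outcome model are taken from the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 66
lac_admission = rng.gamma(shape=2.0, scale=2.0, size=n)        # mmol/L, simulated
lac_6h = lac_admission * rng.uniform(0.6, 1.2, size=n)         # simulated 6-h value
# Simulated outcome: higher and persistent lactate -> higher risk of non-survival
risk = 1 / (1 + np.exp(-(0.4 * lac_admission + 0.6 * lac_6h - 4)))
died = rng.random(n) < risk

X = np.column_stack([lac_admission, lac_6h])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, died)
print(export_text(tree, feature_names=["lac_admission", "lac_6h"]))
```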

  9. Cardiac output measurement in newborn infants using the ultrasonic cardiac output monitor: an assessment of agreement with conventional echocardiography, repeatability and new user experience.

    Science.gov (United States)

    Patel, Neil; Dodsworth, Melissa; Mills, John F

    2011-05-01

    To assess (1) agreement between the ultrasonic cardiac output monitor (USCOM) 1A device for measurement of cardiac output in newborn infants and conventional echocardiography (ECHO), (2) repeatability of USCOM measurements and (3) agreement between novice and expert users of the USCOM. A prospective observational study. The Neonatal Unit at the Royal Children's Hospital, Melbourne, Australia. 56 term and near-term infants, with no evidence of structural or functional cardiovascular disease, or haemodynamic shunts. Agreement between ECHO and USCOM was assessed by paired measurements of ventricular outputs by a single experienced user. Repeatability was assessed using five repeated measurements in 10 infants. Agreement between five novices and one expert user was assessed by paired USCOM measurements over 30 training measurements. Agreement between USCOM and ECHO for left ventricular output (LVO) was (bias, ±limits of agreement, mean % error): 14, ±108 ml/kg/min, 43%, and for right ventricular output (RVO): -59, ±160 ml/kg/min, 57%. Intra-observer repeatability was 6.7% for USCOM LVO and 3.6% for ECHO LVO. After five training measurements, the mean difference between USCOM measures of LVO by novice and expert users was less than 50 ml/kg/min, but with variability. Repeatability of USCOM measures is high in newborn infants. New users can be trained quickly, but with high inter-user variability. Agreement between USCOM and conventional ECHO is broad, and worse for RVO than for LVO. Further studies are required to assess the ability of the device to detect clinically significant changes in infant cardiac output.
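
    The bias, limits of agreement and mean percentage error quoted above are standard Bland-Altman quantities. A small sketch of how they can be computed from paired USCOM and ECHO readings follows; the numbers are simulated, and the percentage-error convention (1.96 × SD of the differences over the mean output) is one common choice, not necessarily the one used in the paper.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias, 95% limits of agreement and mean percentage error
    for two paired measurement methods."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    pct_error = 100 * 1.96 * diff.std(ddof=1) / np.mean((a + b) / 2)
    return bias, (bias - half_width, bias + half_width), pct_error

rng = np.random.default_rng(4)
echo = rng.normal(220, 40, size=56)             # LVO, ml/kg/min (simulated)
uscom = echo + rng.normal(14, 55, size=56)      # simulated paired USCOM readings
bias, limits, pct = bland_altman(uscom, echo)
print(f"bias = {bias:.0f} ml/kg/min, LoA = ({limits[0]:.0f}, {limits[1]:.0f}), % error = {pct:.0f}%")
```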

  10. A multivariate approach in measuring innovation performance

    Directory of Open Access Journals (Sweden)

    Elżbieta Roszko-Wójtowicz

    2016-12-01

    Full Text Available The goal of this research is to propose a procedure for innovativeness measurement, taking the Summary Innovation Index methodology as a starting point. In the contemporary world, innovative activity is perceived as a source of competitiveness and economic growth. New products, utility models, trademarks and creative projects are an important element of present socio-economic reality. In particular, the authors focus on the selection and application of multivariable statistical analysis to distinguish the factors influencing the innovativeness of EU economies to the highest degree. The result of the quantitative analyses is a linear ordering of EU countries by the level of their innovativeness based on the reduced set of diagnostic variables. The rating was compared with the outcome presented in the Innovation Union Scoreboard (IUS) with its Summary Innovation Index (SII). The analysis conducted shows a convergence between the authors' results and existing ratings of innovativeness. Nevertheless, the main conclusion is that the methodology of innovativeness assessment remains an open issue and requires further research. In particular, such research should first and foremost concentrate on deeper verification of a small set of variables that have the strongest impact on innovativeness. It is in both the economic and the social interest to get a clear picture of the driving forces of innovativeness.

  11. Effects of CAG repeat length, HTT protein length and protein context on cerebral metabolism measured using magnetic resonance spectroscopy in transgenic mouse models of Huntington's disease.

    Science.gov (United States)

    Jenkins, Bruce G; Andreassen, Ole A; Dedeoglu, Alpaslan; Leavitt, Blair; Hayden, Michael; Borchelt, David; Ross, Christopher A; Ferrante, Robert J; Beal, M Flint

    2005-10-01

    Huntington's disease is a neurodegenerative illness caused by expansion of CAG repeats at the N-terminal end of the protein huntingtin. We examined longitudinal changes in brain metabolite levels using in vivo magnetic resonance spectroscopy in five different mouse models. There was a large (>50%) exponential decrease in N-acetyl aspartate (NAA) with time in both striatum and cortex in mice with 150 CAG repeats (R6/2 strain). There was a linear decrease restricted to striatum in N171-82Q mice with 82 CAG repeats. Both the exponential and linear decreases of NAA were paralleled in time by decreases in neuronal area measured histologically. Yeast artificial chromosome transgenic mice with 72 CAG repeats, but low expression levels, had less striatal NAA loss than the N171-82Q mice (15% vs. 43%). We evaluated the effect of gene context in mice with an approximate 146 CAG repeat on the hypoxanthine phosphoribosyltransferase gene (HPRT). HPRT mice developed an obese phenotype in contrast to weight loss in the R6/2 and N171-82Q mice. These mice showed a small striatal NAA loss (21%), and a possible increase in brain lipids detectable by magnetic resonance (MR) spectroscopy and decreased brain water T1. Our results indicate profound metabolic defects that are strongly affected by CAG repeat length, as well as gene expression levels and protein context.

  12. Natural language in measuring user emotions: A qualitative approach to quantitative survey-based emotion measurement

    NARCIS (Netherlands)

    Tonetto, L.M.; Desmet, P.M.A.

    2012-01-01

    This paper presents an approach to developing surveys that measure user experiences with the use of natural everyday language. The common approach to develop questionnaires that measure experience is to translate theoretical factors into verbal survey items. This theory-based approach can impair the

  13. The Effects of Group Leader Learning Style on Student Knowledge Gain in a Leadership Camp Setting: A Repeated-Measures Experiment

    Science.gov (United States)

    Brown, Nicholas R.; Terry, Robert, Jr.

    2013-01-01

    Many state FFA associations conduct summer camps focusing on leadership and personal development for FFA members. Interestingly, little research has been conducted on the impact or outcomes of these common activities. The purpose of this split-plot factorial repeated-measures experiment was to assess the level of campers' learning of the…

  14. Power and sample size for the S:T repeated measures design combined with a linear mixed-effects model allowing for missing data.

    Science.gov (United States)

    Tango, Toshiro

    2017-02-13

    Tango (Biostatistics 2016) proposed a new repeated measures design called the S:T repeated measures design, combined with generalized linear mixed-effects models and sample size calculations for a test of the average treatment effect that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. In this article, we present formulas for calculating power and sample sizes for a test of the average treatment effect allowing for missing data within the framework of the S:T repeated measures design with a continuous response variable combined with a linear mixed-effects model. Examples are provided to illustrate the use of these formulas.
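
    Tango's formulas are specific to the S:T design and its mixed-effects model. As a simpler illustration of the ingredients involved (effect size, number of repeated measures, within-subject correlation), here is the classical sample-size formula for comparing two groups on the average of T equally correlated post-randomization measures under compound symmetry; it is an assumption-laden sketch, not the formula derived in the paper.

```python
from scipy.stats import norm

def n_per_arm(delta, sigma, rho, t_measures, alpha=0.05, power=0.80):
    """Sample size per arm for comparing two group means of the average of
    T equally correlated repeated measures (compound symmetry), two-sided test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    var_of_mean = sigma ** 2 * (1 + (t_measures - 1) * rho) / t_measures
    return 2 * var_of_mean * (z_a + z_b) ** 2 / delta ** 2

# Illustrative values: detect a 5-point difference, SD 12, rho 0.5, T = 3
print(round(n_per_arm(delta=5, sigma=12, rho=0.5, t_measures=3)))
```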

  15. Predictors and Variability of Repeat Measurements of Urinary Phenols and Parabens in a Cohort of Shanghai Women and Men

    Science.gov (United States)

    Buckley, Jessie P.; Yang, Gong; Liao, Linda M.; Satagopan, Jaya; Calafat, Antonia M.; Matthews, Charles E.; Cai, Qiuyin; Ji, Bu-Tian; Cai, Hui; Wolff, Mary S.; Rothman, Nathaniel; Zheng, Wei; Xiang, Yong-Bing; Shu, Xiao-Ou; Gao, Yu-Tang; Chow, Wong-Ho

    2014-01-01

    , under certain circumstances, among women. Citation: Engel LS, Buckley JP, Yang G, Liao LM, Satagopan J, Calafat AM, Matthews CE, Cai Q, Ji BT, Cai H, Engel SM, Wolff MS, Rothman N, Zheng W, Xiang YB, Shu XO, Gao YT, Chow WH. 2014. Predictors and variability of repeat measurements of urinary phenols and parabens in a cohort of Shanghai women and men. Environ Health Perspect 122:733–740; http://dx.doi.org/10.1289/ehp.1306830 PMID:24659570

  16. A review of the reporting and handling of missing data in cohort studies with repeated assessment of exposure measures

    Directory of Open Access Journals (Sweden)

    Karahalios Amalia

    2012-07-01

    Full Text Available Abstract Background Retaining participants in cohort studies with multiple follow-up waves is difficult. Commonly, researchers are faced with the problem of missing data, which may introduce biased results as well as a loss of statistical power and precision. The STROBE guidelines von Elm et al. (Lancet, 370:1453-1457, 2007; Vandenbroucke et al. (PLoS Med, 4:e297, 2007 and the guidelines proposed by Sterne et al. (BMJ, 338:b2393, 2009 recommend that cohort studies report on the amount of missing data, the reasons for non-participation and non-response, and the method used to handle missing data in the analyses. We have conducted a review of publications from cohort studies in order to document the reporting of missing data for exposure measures and to describe the statistical methods used to account for the missing data. Methods A systematic search of English language papers published from January 2000 to December 2009 was carried out in PubMed. Prospective cohort studies with a sample size greater than 1,000 that analysed data using repeated measures of exposure were included. Results Among the 82 papers meeting the inclusion criteria, only 35 (43% reported the amount of missing data according to the suggested guidelines. Sixty-eight papers (83% described how they dealt with missing data in the analysis. Most of the papers excluded participants with missing data and performed a complete-case analysis (n = 54, 66%. Other papers used more sophisticated methods including multiple imputation (n = 5 or fully Bayesian modeling (n = 1. Methods known to produce biased results were also used, for example, Last Observation Carried Forward (n = 7, the missing indicator method (n = 1, and mean value substitution (n = 3. For the remaining 14 papers, the method used to handle missing data in the analysis was not stated. Conclusions This review highlights the inconsistent reporting of missing data in cohort studies and the continuing

  17. A review of the reporting and handling of missing data in cohort studies with repeated assessment of exposure measures.

    Science.gov (United States)

    Karahalios, Amalia; Baglietto, Laura; Carlin, John B; English, Dallas R; Simpson, Julie A

    2012-07-11

    Retaining participants in cohort studies with multiple follow-up waves is difficult. Commonly, researchers are faced with the problem of missing data, which may introduce biased results as well as a loss of statistical power and precision. The STROBE guidelines von Elm et al. (Lancet, 370:1453-1457, 2007); Vandenbroucke et al. (PLoS Med, 4:e297, 2007) and the guidelines proposed by Sterne et al. (BMJ, 338:b2393, 2009) recommend that cohort studies report on the amount of missing data, the reasons for non-participation and non-response, and the method used to handle missing data in the analyses. We have conducted a review of publications from cohort studies in order to document the reporting of missing data for exposure measures and to describe the statistical methods used to account for the missing data. A systematic search of English language papers published from January 2000 to December 2009 was carried out in PubMed. Prospective cohort studies with a sample size greater than 1,000 that analysed data using repeated measures of exposure were included. Among the 82 papers meeting the inclusion criteria, only 35 (43%) reported the amount of missing data according to the suggested guidelines. Sixty-eight papers (83%) described how they dealt with missing data in the analysis. Most of the papers excluded participants with missing data and performed a complete-case analysis (n=54, 66%). Other papers used more sophisticated methods including multiple imputation (n=5) or fully Bayesian modeling (n=1). Methods known to produce biased results were also used, for example, Last Observation Carried Forward (n=7), the missing indicator method (n=1), and mean value substitution (n=3). For the remaining 14 papers, the method used to handle missing data in the analysis was not stated. This review highlights the inconsistent reporting of missing data in cohort studies and the continuing use of inappropriate methods to handle missing data in the analysis. Epidemiological journals
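
    To make the contrast between complete-case analysis and principled alternatives concrete, the snippet below compares a complete-case mean with a simple multiple-imputation estimate on simulated data in which missingness depends on an observed covariate (missing at random). The use of scikit-learn's IterativeImputer is purely illustrative and is not a recommendation drawn from the review.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)                       # fully observed covariate
y = 2.0 + 1.5 * x + rng.normal(size=n)       # exposure with true mean 2.0
# Missing at random: y is more likely missing when x is large
y_obs = y.copy()
y_obs[rng.random(n) < 1 / (1 + np.exp(-x))] = np.nan

print("complete-case mean:", round(float(np.nanmean(y_obs)), 2))

# Multiple imputation: impute several times and average the estimates
means = []
for m in range(5):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imp.fit_transform(np.column_stack([x, y_obs]))
    means.append(completed[:, 1].mean())
print("multiple-imputation mean:", round(float(np.mean(means)), 2), "(true mean 2.0)")
```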

  18. Neuro-mechanical determinants of repeated treadmill sprints - Usefulness of an “hypoxic to normoxic recovery” approach

    Science.gov (United States)

    Girard, Olivier; Brocherie, Franck; Morin, Jean-Benoit; Millet, Grégoire P.

    2015-01-01

    To improve our understanding of the limiting factors during repeated sprinting, we manipulated hypoxia severity during an initial set and examined the effects on performance and associated neuro-mechanical alterations during a subsequent set performed in normoxia. On separate days, 13 active males performed eight 5-s sprints (recovery = 25 s) on an instrumented treadmill in either normoxia near sea-level (SL; FiO2 = 20.9%), moderate (MH; FiO2 = 16.8%) or severe normobaric hypoxia (SH; FiO2 = 13.3%) followed, 6 min later, by four 5-s sprints (recovery = 25 s) in normoxia. Throughout the first set, along with distance covered [larger sprint decrement score in SH (−8.2%) compared to SL (−5.3%) and MH (−7.2%); P sprint of the subsequent normoxic set, the distance covered (99.6, 96.4, and 98.3% of sprint 1 in SL, MH, and SH, respectively), the main kinetic (mean vertical, horizontal, and resultant forces) and kinematic (contact time and step frequency) variables as well as surface electromyogram of quadriceps and plantar flexor muscles were fully recovered, with no significant difference between conditions. Despite differing hypoxic severity levels during sprints 1–8, performance and neuro-mechanical patterns did not differ during the four sprints of the second set performed in normoxia. In summary, under the circumstances of this study (participant background, exercise-to-rest ratio, hypoxia exposure), sprint mechanical performance and neural alterations were largely influenced by the hypoxia severity in an initial set of repeated sprints. However, hypoxia had no residual effect during a subsequent set performed in normoxia. Hence, the recovery of performance and associated neuro-mechanical alterations was complete after resting for 6 min near sea level, with a similar fatigue pattern across conditions during subsequent repeated sprints in normoxia. PMID:26441679

  19. Chemonucleolysis technique. New oblique approach requires no measurements.

    Science.gov (United States)

    Romy, M

    1986-01-01

    The author describes a new technique for intradiscal therapy that eliminates the need for measurements. The new technique for entering the lumbar disk for discolysis from the oblique approach is described as simple, accurate and safe.

  20. Multidimensional poverty: an alternative measurement approach for the United States?

    Science.gov (United States)

    Waglé, Udaya R

    2008-06-01

    International poverty research has increasingly underscored the need to use multidimensional approaches to measure poverty. Largely embraced in Europe and elsewhere, this has not had much impact on the way poverty is measured in the United States. In this paper, I use a comprehensive multidimensional framework including economic well-being, capability, and social inclusion to examine poverty in the US. Data from the 2004 General Social Survey support the interconnectedness among these poverty dimensions, indicating that the multidimensional framework utilizing a comprehensive set of information provides a compelling value added to poverty measurement. The suggested demographic characteristics of the various categories of the poor are somewhat similar between this approach and other traditional approaches. But the more comprehensive and accurate measurement outcomes from this approach help policymakers target resources at the specific groups.

  1. Analyzing repeated data collected by mobile phones and frequent text messages. An example of Low back pain measured weekly for 18 weeks

    DEFF Research Database (Denmark)

    Axén, Iben; Bodin, Lennart; Kongsted, Alice

    2012-01-01

    ABSTRACT: BACKGROUND: Repeated data collection is desirable when monitoring fluctuating conditions. Mobile phones can be used to gather such data from large groups of respondents by sending and receiving frequently repeated short questions and answers as text messages. The analysis of repeated data...... for the clinical researcher in order for complex outcome measures to be interpreted in a clinically meaningful way. METHODS: A model data set was formed using data from two clinical studies, where patients with low back pain were followed with weekly text messages for 18 weeks. Different research questions...... questions with appropriate analytical methods 1: How many days with pain do patients experience? This question was answered with data summaries. 2: What is the proportion of participants "recovered" at a specific time point? This question was answered using logistic regression analysis. 3: What is the time...

  2. Evaluation, including effects of storage and repeated freezing and thawing, of a method for measurement of urinary creatinine

    DEFF Research Database (Denmark)

    Garde, A H; Hansen, Åse Marie; Kristiansen, J

    2003-01-01

    The aims of this study were to elucidate to what extent storage and repeated freezing and thawing influenced the concentration of creatinine in urine samples and to evaluate the method for determination of creatinine in urine. The creatinine method was based on the well-known Jaffe's reaction and...

  3. Heart rate variability and DNA methylation levels are altered after short-term metal fume exposure among occupational welders: a repeated-measures panel study

    OpenAIRE

    2014-01-01

    Background: In occupational settings, boilermakers are exposed to high levels of metallic fine particulate matter (PM2.5) generated during the welding process. The effect of welding PM2.5 on heart rate variability (HRV) has been described, but the relationship between PM2.5, DNA methylation, and HRV is not known. Methods: In this repeated-measures panel study, we recorded resting HRV and measured DNA methylation levels in transposable elements Alu and long interspersed nuclear element-1 (LINE...

  4. Repeated measures dose-finding design with time-trend detection in the presence of correlated toxicity data.

    Science.gov (United States)

    Yin, Jun; Paoletti, Xavier; Sargent, Daniel J; Mandrekar, Sumithra J

    2017-08-01

    Phase I trials are designed to determine the safety, tolerability, and recommended phase 2 dose of therapeutic agents for subsequent testing. The dose-finding paradigm has thus traditionally focused on identifying the maximum tolerable dose of an agent or combination therapy under the assumption that there is a non-decreasing relationship between dose-toxicity and dose-efficacy. The dose is typically determined based on the probability of severe toxicity observed during the first treatment cycle. A novel endpoint, the total toxicity profile, was previously developed to account for the multiple toxicity types and grades experienced in the first cycle. More recently, this was extended to a repeated measures design based on the total toxicity profile to account for longitudinal toxicities over multiple treatment cycles in the absence of within-patient correlation. In this work, we propose to extend the design in the presence of within-patient correlation. Furthermore, we provide a framework to detect a toxicity time trend (toxicity increasing, decreasing, or stable) over multiple treatment cycles. We utilize a linear mixed model in the Bayesian framework, with the addition of Bayesian risk functions for decision-making in dose assignment. The performance of this design was evaluated using simulation studies and real data from a phase I trial. We demonstrated that using available toxicity data from all cycles of treatment improves the accuracy of maximum tolerated dose identification and allows for the detection of a time trend. The performance is consistent regardless of the strength of the within-patient correlation. In addition, the use of a quasi-continuous total toxicity profile score significantly increased the power to detect time trends compared to when binary data only were used. The increased interest in molecularly targeted agents and immunotherapies in oncology necessitates innovative phase I study designs. Our proposed framework provides a tool to tackle

  5. Neuro-mechanical determinants of repeated treadmill sprints - Usefulness of an ‘hypoxic to normoxic recovery’ approach

    Directory of Open Access Journals (Sweden)

    Olivier eGIRARD

    2015-09-01

    Full Text Available To improve our understanding of the limiting factors during repeated sprinting, we manipulated hypoxia severity during an initial set and examined the effects on performance and associated neuro-mechanical alterations during a subsequent set performed in normoxia. On separate days, thirteen active males performed eight 5-s sprints (recovery = 25 s) on an instrumented treadmill in either normoxia near sea-level (SL; FiO2 = 20.9%), moderate (MH; FiO2 = 16.8%) or severe normobaric hypoxia (SH; FiO2 = 13.3%) followed, 6 min later, by four 5-s sprints (recovery = 25 s) in normoxia. Throughout the first set, along with distance covered [larger sprint decrement score in SH (-8.2%) compared to SL (-5.3%) and MH (-7.2%); P<0.05], changes in contact time, step frequency and root mean square activity (surface electromyography) of the quadriceps (rectus femoris) muscle in SH exceeded those in SL and MH (P<0.05). During the first sprint of the subsequent normoxic set, the distance covered (99.6%, 96.4% and 98.3% of sprint 1 in SL, MH and SH, respectively), the main kinetic (mean vertical, horizontal and resultant forces) and kinematic (contact time and step frequency) variables as well as surface electromyogram of quadriceps and plantar flexor muscles were fully recovered, with no significant difference between conditions. Despite differing hypoxic severity levels during sprints 1 to 8, performance and neuro-mechanical patterns did not differ during the four sprints of the second set performed in normoxia. In summary, under the circumstances of this study (participant background, exercise-to-rest ratio, hypoxia exposure), sprint mechanical performance and neural alterations were largely influenced by the hypoxia severity in an initial set of repeated sprints. However, hypoxia had no residual effect during a subsequent set performed in normoxia. Hence, the recovery of performance and associated neuro-mechanical alterations was complete after resting for 6 min near sea level, with a

  6. Bayesian hierarchical joint modeling of repeatedly measured continuous and ordinal markers of disease severity: Application to Ugandan diabetes data.

    Science.gov (United States)

    Buhule, O D; Wahed, A S; Youk, A O

    2017-08-22

    Modeling of correlated biomarkers jointly has been shown to improve the efficiency of parameter estimates, leading to better clinical decisions. In this paper, we employ a joint modeling approach to a unique diabetes dataset, where blood glucose (continuous) and urine glucose (ordinal) measures of disease severity for diabetes are known to be correlated. The postulated joint model assumes that the outcomes are from distributions that are in the exponential family and hence modeled as multivariate generalized linear mixed effects model associated through correlated and/or shared random effects. The Markov chain Monte Carlo Bayesian approach is used to approximate posterior distribution and draw inference on the parameters. This proposed methodology provides a flexible framework to account for the hierarchical structure of the highly unbalanced data as well as the association between the 2 outcomes. The results indicate improved efficiency of parameter estimates when blood glucose and urine glucose are modeled jointly. Moreover, the simulation studies show that estimates obtained from the joint model are consistently less biased and more efficient than those in the separate models. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Young investigator challenge: Atypia of undetermined significance in thyroid FNA: Standardized terminology without standardized management--a closer look at repeat FNA and quality measures.

    Science.gov (United States)

    Brandler, Tamar C; Aziz, Mohamed S; Coutsouvelis, Constantinos; Rosen, Lisa; Rafael, Oana C; Souza, Fabiola; Jelloul, Fatima-Zahra; Klein, Melissa A

    2016-01-01

    The Bethesda system (TBS) for the reporting of thyroid cytopathology established the category of atypia of undetermined significance (AUS) with a 7% target rate and a 5% to 15% implied malignancy risk. Recent literature has reported a broad range of AUS rates, subsequent malignancy rates, and discrepant results from repeat fine-needle aspiration (FNA) versus surgical follow-up. Therefore, this study examined AUS data from the Hofstra North Shore-LIJ School of Medicine to determine the best clinical follow-up. Thyroid aspirates interpreted as AUS in 2012-2014 at the Hofstra North Shore-LIJ School of Medicine were collected. Repeat FNA and surgical follow-up data were tabulated to establish AUS, secondary AUS (diagnosed upon repeat FNA follow-up of a primary FNA AUS diagnosis), atypia of undetermined significance/malignancy (AUS:M) ratios (according to the TBS categories), and malignancy rates for AUS. The AUS rate was 8.5% (976/11,481), and there was follow-up data for 545 cases. The AUS:M ratio was 2.0. Repeat FNA was performed for 281 cases; 57 proceeded to surgical intervention. Repeat FNA reclassified 71.17% of the cases. The malignancy rates for AUS cases proceeding directly to surgery and for those receiving a surgical intervention after a repeat AUS diagnosis were 33.33% and 43.75%, respectively. Repeat FNA resulted in definitive diagnostic reclassification for 67.61% of primary AUS cases and reduced the number of patients triaged to surgery, with 56.58% of the cases recategorized as benign. Cases undergoing surgery after repeat AUS had a higher malignancy rate than those going straight to surgery, and this emphasizes the value of repeat FNA in selecting surgical candidates. In addition, this study highlights the utility of AUS rate monitoring as a quality measure that has contributed to the ability of the Hofstra North Shore-LIJ School of Medicine to adhere closely to TBS recommendations. © 2016 American Cancer Society.

  8. Pragmatic Approach for Multistage Phasor Measurement Unit Placement

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thoegersen, Poul

    2016-01-01

    Effective phasor measurement unit (PMU) placement is a key to the implementation of efficient and economically feasible wide area measurement systems in modern power systems. This paper proposes a pragmatic approach for cost-effective stage-wise deployment of PMUs while considering realistic...

  9. The effects of repeated-sprint training on field-based fitness measures: a meta-analysis of controlled and non-controlled trials.

    Science.gov (United States)

    Taylor, Jonathan; Macpherson, Tom; Spears, Iain; Weston, Matthew

    2015-06-01

    Repeated-sprint training appears to be an efficient and practical means for the simultaneous development of different components of fitness relevant to team sports. Our objective was to systematically review the literature and meta-analyse the effect of repeated-sprint training on a selection of field-based measures of athletic performance, i.e. counter-movement jump, 10 m sprint, 20 m sprint, 30 m sprint, repeated-sprint ability and high-intensity intermittent running performance. The SPORTDiscus, PubMed, MEDLINE and Web of Science databases were searched for original research articles. Search terms included 'repeated-sprint training', 'sprint training', 'aerobic endurance', 'repeated-sprint ability', 'counter-movement jump' and 'sprint performance'. Inclusion criteria included intervention consisting of a series of ≤10 s sprints with ≤60 s recovery; trained participants; intervention duration of 2-12 weeks; field-based fitness measures; running- or cycling-based intervention; published up to, and including, February 2014. Our final dataset included six trials for counter-movement jump (two controlled trials), eight trials for 10 m sprint, four trials for 20 m sprint (three controlled trials), two trials for 30 m sprint, eight trials for repeated-sprint ability and three trials for high-intensity intermittent running performance. Analyses were conducted using comprehensive meta-analysis software. Uncertainty in the meta-analysed effect of repeated-sprint training was expressed as 95% confidence limits (CL), along with the probability that the true value of the effect was trivial, beneficial or harmful. Magnitude-based inferences were based on standardised thresholds for small, moderate and large changes of 0.2, 0.6 and 1.2 standard deviations, respectively. Repeated-sprint training had a likely small beneficial effect in non-controlled counter-movement jump trials (effect size 0.33; 95% CL ±0.30), with a possibly moderate beneficial effect in controlled
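
    The pooled effects reported above come from meta-analysing standardised effect sizes across trials. A compact random-effects (DerSimonian-Laird) pooling routine is sketched below with made-up trial effects and variances; the review itself used comprehensive meta-analysis software and magnitude-based inference, so this is only a generic illustration of the pooling step.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) from per-trial
    effect sizes and their sampling variances."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1 / v
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-trial variance
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical standardised effects from six repeated-sprint training trials
effects = [0.35, 0.20, 0.50, 0.10, 0.45, 0.30]
variances = [0.04, 0.06, 0.05, 0.03, 0.08, 0.05]
print(dersimonian_laird(effects, variances))
```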

  10. MEASURING INFLATION THROUGH STOCHASTIC APPROACH TO INDEX NUMBERS FOR PAKISTAN

    Directory of Open Access Journals (Sweden)

    Zahid Asghar

    2010-09-01

    Full Text Available This study attempts to estimate the rate of inflation in Pakistan through the stochastic approach to index numbers, which provides not only a point estimate but also a confidence interval for the rate of inflation. There are two types of approaches to index number theory, namely the functional economic approaches and the stochastic approach. The attraction of the stochastic approach is that it estimates the rate of inflation within a framework in which uncertainty and statistical ideas play a major role. We have used the extended stochastic approach to index numbers for measuring inflation, allowing for systematic changes in relative prices. We use CPI data covering the period July 2001--March 2008 for Pakistan.
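
    In its simplest, unweighted form the stochastic approach treats each commodity's log price relative as an independent reading on a common inflation rate, so the point estimate is the mean log price change and a confidence interval follows from its standard error. A minimal sketch with invented price relatives is shown below; the paper's extended approach additionally allows for systematic changes in relative prices and uses expenditure weights.

```python
import numpy as np
from scipy import stats

def stochastic_inflation(price_relatives, conf=0.95):
    """Unweighted stochastic-approach estimate of inflation: mean of the
    log price relatives with a t-based confidence interval (in percent)."""
    x = np.log(np.asarray(price_relatives, dtype=float))
    n = len(x)
    mean, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.5 + conf / 2, df=n - 1)
    to_pct = lambda z: 100 * (np.exp(z) - 1)
    return to_pct(mean), (to_pct(mean - t * se), to_pct(mean + t * se))

# Hypothetical year-on-year price relatives (p_t / p_{t-1}) for eight items
relatives = [1.08, 1.12, 1.05, 1.15, 1.09, 1.07, 1.11, 1.10]
point, ci = stochastic_inflation(relatives)
print(f"inflation = {point:.1f}%, 95% CI = ({ci[0]:.1f}%, {ci[1]:.1f}%)")
```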

  11. A learning based approach for green software measurements

    OpenAIRE

    Dahab, Sarah; Maag, Stephane; Bagnato, Alessandra; Almeida Da Silva, Marcos Aurelio

    2016-01-01

    Measuring specific software quality requirements in a continuous way and at runtime all along the development processes is crucial. Moreover, considering principles of measurement theory, it is still very complex to integrate green metrics in a common standardized and autonomous framework. In our approach, we propose an automated solution based on continuous analysis of SW green measurements, using at runtime a machine learning algorithm. The method allows to suggest a...

  12. A new measure-correlate-predict approach for resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Landberg, L. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark); Madsen, H. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    In order to find reasonable candidate sites for wind farms, it is of great importance to be able to calculate the wind resource at potential sites. One way to solve this problem is to measure wind speed and direction at the site and use these measurements to predict the resource. If the measurements at the potential site cover less than, e.g., one year, which will most likely be the case, it is not possible to get a reliable estimate of the long-term resource using this approach. If long-term measurements from, e.g., a nearby meteorological station are available, however, then statistical methods can be used to find a relation between the measurements at the site and at the meteorological station. This relation can then be used to transform the long-term measurements to the potential site, and the resource can be calculated using the transformed measurements. Here, a varying-coefficient model, estimated using local regression, is applied in order to establish a relation between the measurements. The approach is evaluated using measurements from two sites, located approximately two kilometres apart, and the results show that the resource in this case can be predicted accurately, although this approach has serious shortcomings. (au)
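
    The measure-correlate-predict idea can be sketched in a few lines. The snippet below uses a kernel-weighted local linear regression as a simplified stand-in for the varying-coefficient model described in the report; the concurrent and long-term wind speed series are simulated toy data.

```python
import numpy as np

def mcp_local_linear(ref_concurrent, site_concurrent, ref_long_term, bandwidth=2.0):
    """Kernel-weighted local linear regression of concurrent site wind speeds on
    reference-station speeds, applied to the long-term reference series. This is
    a simplified stand-in for the varying-coefficient model used in the paper."""
    x = np.asarray(ref_concurrent, float)
    y = np.asarray(site_concurrent, float)
    predictions = []
    for u in np.asarray(ref_long_term, float):
        w = np.sqrt(np.exp(-0.5 * ((x - u) / bandwidth) ** 2))  # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x), x - u])           # local linear design
        beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
        predictions.append(beta[0])                             # fitted site speed at u
    return np.array(predictions)

# Toy data: one year of concurrent measurements, then four long-term reference speeds
rng = np.random.default_rng(0)
ref = rng.weibull(2.0, 365) * 8.0
site = 0.85 * ref + 0.5 + rng.normal(0.0, 0.8, ref.size)
print(mcp_local_linear(ref, site, [3.0, 6.0, 9.0, 12.0]).round(2))
```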

  13. Evolutionary dynamics of leucine-rich repeat receptor-like kinases and related genes in plants:A phylogenomic approach

    Institute of Scientific and Technical Information of China (English)

    Tao Shi; Hongwen Huang; Michael J.Sanderson; Frans E.Tax

    2014-01-01

    Leucine-rich repeat (LRR) receptor-like kinases (RLKs), evolutionarily related LRR receptor-like proteins (RLPs) and receptor-like cytoplasmic kinases (RLCKs) have important roles in plant signaling, and their gene subfamilies are large, with a complicated history of gene duplication and loss. In three pairs of closely related lineages, Arabidopsis thaliana and A. lyrata (Arabidopsis), Lotus japonicus and Medicago truncatula (Legumes), and Oryza sativa ssp. japonica and O. sativa ssp. indica (Rice), we find that LRR RLKs comprise the largest group of these LRR-related subfamilies, while the related RLCKs represent the smallest group. In addition, comparison of orthologs indicates a high frequency of reciprocal gene loss in the LRR RLK/LRR RLP/RLCK subfamilies. Furthermore, pairwise comparisons show that reciprocal gene loss is often associated with lineage-specific duplication(s) in the alternative lineage. Last, analysis of genes in A. thaliana involved in development revealed that most are highly conserved orthologs without species-specific duplication in the two Arabidopsis species and originated from older Arabidopsis-specific or rosid-specific duplications. We discuss potential pitfalls related to functional prediction for genes that have undergone frequent turnover (duplications, losses, and domain architecture changes), and conclude that prediction based on phylogenetic relationships will likely outperform prediction based on sequence similarity alone.

  14. Deployment Repeatability

    Science.gov (United States)

    2016-04-01

    controlled to great precision, but in a Cubesat, there may be no attitude determination at all. Such a Cubesat might treat sun angle and tumbling rates as... could be sensitive to small differences in motor controller timing. In these cases, the analyst might choose to model the entire deployment path, with... knowledge of the material damage model or motor controller timing precision. On the other hand, if many repeated and environmentally representative

  15. Evaluation of Multiple Imputation in Missing Data Analysis: An Application on Repeated Measurement Data in Animal Science

    Directory of Open Access Journals (Sweden)

    Gazel Ser

    2015-12-01

    The purpose of this study was to evaluate the performance of the multiple imputation method, from the perspective of the general linear mixed model, when the missing-observation structure is missing at random or missing completely at random. The data consisted of a total of 77 Norduz ram lambs at 7 months of age. After slaughter, pH values measured at five different time points were taken as the dependent variable. In addition, hot carcass weight, muscle glycogen level and fasting duration were included as independent variables in the model. Starting from the dependent variable without missing observations, two missing-observation structures, Missing Completely at Random (MCAR) and Missing at Random (MAR), were created by deleting observations at certain rates (10% and 25%). Complete data sets were then obtained from the data sets with missing observations using multiple imputation (MI). The results obtained by applying the general linear mixed model to the MI-completed data sets were compared with the results for the complete data. The mixed models applied to the complete data and to the MI data sets selected the same covariance structures, and the parameter estimates and standard errors were close to those from the complete data. In conclusion, this study shows that reliable information can be obtained from the mixed model when MI is chosen as the imputation method, for both missing-observation structures and both missingness rates.
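
    The general workflow (delete at random, impute multiply, fit a mixed model, compare with the complete-data fit) can be sketched as follows. This is not the study's own code: scikit-learn's IterativeImputer and statsmodels' MixedLM are used as stand-ins for the MI routine and the general linear mixed model, the data are simulated, and all variable names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)

# Simulated long-format data: repeated pH measurements on 77 lambs (names illustrative)
times = [0, 1, 3, 6, 24]
df = pd.DataFrame({"lamb": np.repeat(np.arange(77), len(times)),
                   "time": np.tile(times, 77)})
df["ph"] = 6.8 - 0.03 * df["time"] + rng.normal(0, 0.15, len(df))

# Delete 10% of the outcome completely at random (MCAR)
mcar = df.copy()
mcar.loc[rng.choice(len(df), int(0.10 * len(df)), replace=False), "ph"] = np.nan

# Multiple imputation: m stochastic imputations, a mixed model fitted to each, estimates averaged
slopes = []
for m in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    filled = mcar.copy()
    filled[["time", "ph"]] = imputer.fit_transform(mcar[["time", "ph"]])
    slopes.append(smf.mixedlm("ph ~ time", filled, groups=filled["lamb"]).fit().params["time"])

complete_slope = smf.mixedlm("ph ~ time", df, groups=df["lamb"]).fit().params["time"]
print("complete-data slope:", round(complete_slope, 4),
      "| pooled MI slope:", round(float(np.mean(slopes)), 4))
```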

  16. A novel approach to propagate flavivirus infectious cDNA clones in bacteria by introducing tandem repeat sequences upstream of virus genome.

    Science.gov (United States)

    Pu, Szu-Yuan; Wu, Ren-Huang; Tsai, Ming-Han; Yang, Chi-Chen; Chang, Chung-Ming; Yueh, Andrew

    2014-07-01

    Despite tremendous efforts to improve the methodology for constructing flavivirus infectious cDNAs, the manipulation of flavivirus cDNAs remains a difficult task in bacteria. Here, we successfully propagated DNA-launched type 2 dengue virus (DENV2) and Japanese encephalitis virus (JEV) infectious cDNAs by introducing seven repeats of the tetracycline-response element (7×TRE) and a minimal cytomegalovirus (CMVmin) promoter upstream of the viral genome. Insertion of the 7×TRE-CMVmin sequence upstream of the DENV2 or JEV genome decreased the cryptic E. coli promoter (ECP) activity of the viral genome in bacteria, as measured using fusion constructs containing DENV2 or JEV segments and the reporter gene Renilla luciferase in an empty vector. The growth kinetics of recombinant viruses derived from DNA-launched DENV2 and JEV infectious cDNAs were similar to those of parental viruses. Similarly, RNA-launched DENV2 infectious cDNAs were generated by inserting 7×TRE-CMVmin, five repeats of the GAL4 upstream activating sequence, or five repeats of BamHI linkers upstream of the DENV2 genome. All three tandem repeat sequences decreased the ECP activity of the DENV2 genome in bacteria. Notably, 7×TRE-CMVmin stabilized RNA-launched JEV infectious cDNAs and reduced the ECP activity of the JEV genome in bacteria. The growth kinetics of recombinant viruses derived from RNA-launched DENV2 and JEV infectious cDNAs displayed patterns similar to those of the parental viruses. These results support a novel methodology for constructing flavivirus infectious cDNAs, which will facilitate research in virology, viral pathogenesis and vaccine development of flaviviruses and other RNA viruses. © 2014 The Authors.

  17. Measuring students’ approaches to learning in different clinical rotations

    Directory of Open Access Journals (Sweden)

    Emilia Ova

    2012-11-01

    Background: Many studies have explored approaches to learning in medical school, mostly in the classroom setting. In the clinical setting, students face different conditions that may affect their learning. Understanding students' approaches to learning is important for improving learning in the clinical setting. The aim of this study was to evaluate the Study Process Questionnaire (SPQ) as an instrument for measuring clinical learning in medical education and to show whether learning approaches vary between rotations. Methods: All students involved in this survey were undergraduates in their clinical phase. The SPQ was adapted to the clinical setting and was distributed in the last week of the clerkship rotation. A longitudinal study was also conducted to explore changes in learning approaches. Results: Two hundred and nine students participated in this study (response rate 82.0%). The SPQ findings supported a two-factor solution involving deep and surface approaches. These two factors accounted for 45.1% and 22.5%, respectively, of the variance. The relationships between the two scales and their subscales showed the internal consistency and factorial validity of the SPQ to be comparable with previous studies. The clinical students in this study had higher scores for deep learning. The small longitudinal study showed small changes in approaches to learning across rotation placements, but these were not statistically significant. Conclusions: The SPQ was found to be a valid instrument for measuring approaches to learning among clinical students. More students used a deep approach than a surface approach. Changes of approach did not clearly occur with different clinical rotations.

  18. Phylogenomic approaches to common problems encountered in the analysis of low copy repeats: The sulfotransferase 1A gene family example

    Directory of Open Access Journals (Sweden)

    Benner Steven A

    2005-03-01

    Background: Blocks of duplicated genomic DNA sequence longer than 1000 base pairs are known as low copy repeats (LCRs). Identified by their sequence similarity, LCRs are abundant in the human genome, and are interesting because they may represent recent adaptive events, or potential future adaptive opportunities, within the human lineage. Sequence analysis tools are needed, however, to decide whether these interpretations are likely, whether a particular set of LCRs represents nearly neutral drift creating junk DNA, or whether the appearance of LCRs reflects assembly error. Here we investigate an LCR family containing the sulfotransferase (SULT) 1A genes involved in drug metabolism, cancer, hormone regulation, and neurotransmitter biology as a first step for defining the problems that those tools must manage. Results: Sequence analysis here identified a fourth sulfotransferase gene, which may be transcriptionally active, located on human chromosome 16. Four regions of genomic sequence containing the four human SULT1A paralogs defined a new LCR family. The stem hominoid SULT1A progenitor locus was identified by comparative genomics involving complete human and rodent genomes, and a draft chimpanzee genome. SULT1A expansion in hominoid genomes was followed by positive selection acting on specific protein sites. This episode of adaptive evolution appears to be responsible for the dopamine sulfonation function of some SULT enzymes. Each of the conclusions that this bioinformatic analysis generated using data of uncertain reliability (such as that from the chimpanzee genome sequencing project) has been confirmed experimentally or by a "finished" chromosome 16 assembly, both of which were published after the submission of this manuscript. Conclusion: SULT1A genes expanded from one to four copies in hominoids during intra-chromosomal LCR duplications, including (apparently) one after the divergence of chimpanzees and humans. Thus, LCRs may

  19. A Quantitative Approach for Measuring Technological Forecasting Capability

    OpenAIRE

    Ayhan, Mustafa Batuhan; Oztemel, Ercan

    2013-01-01

    Successful technological forecasting is important for investing scarce funds in emerging technologies. A generic model to measure the success of forecasting overall technological change, called the degree of Technological Forecasting Capability, is introduced in this paper. It measures the success rate of forecasts in manufacturing processes based on four important aspects of a manufacturing system: Flow Time, Quantity/Day, Scrap Ratio, and New Investment Revenue. The proposed approach has been verif...

  20. Repeatability and Reproducibility of Retinal Neuronal and Axonal Measures on Spectral-Domain Optical Coherence Tomography in Patients with Cognitive Impairment

    Directory of Open Access Journals (Sweden)

    Edwin Hong-Teck Loh

    2017-08-01

    Background: With increasing interest in determining whether measurement of retinal neuronal structure with spectral-domain optical coherence tomography (SD-OCT) is useful for assessing the neurodegenerative process in cognitive decline and the development of dementia, it is important to evaluate whether SD-OCT measurements are repeatable and reproducible in these patients. Methods: This is a retrospective cohort study. Patients with Alzheimer's disease (AD) or mild cognitive impairment (MCI) with no change in global clinical dementia rating (CDR) score at 1-year follow-up were eligible to be included. Ganglion cell-inner plexiform layer (GC-IPL) and retinal nerve fiber layer (RNFL) parameters were measured with SD-OCT at baseline, 6-month, and 1-year follow-up visits. At baseline, SD-OCT scans were repeated to assess intra-visit repeatability of the SD-OCT measurement. SD-OCT measurements over the three visits were used to assess inter-visit reproducibility. We calculated intraclass correlation coefficients (ICC) and coefficients of variation (CoVs). Results: We included 32 patients with stable AD and 29 patients with stable MCI in the final analysis. For GC-IPL measures, the average intra-visit ICC was 0.969 (range: 0.948–0.985) and the CoV was 1.81% (range: 1.14–2.40%), while the average inter-visit ICC was 0.968 (range: 0.941–0.985) and the CoV was 1.91% (range: 1.24–2.32%). The average ICC and CoV of intra-visit RNFL measures were 0.965 (range: 0.937–0.986) and 2.32% (range: 1.34–2.90%), respectively. The average ICC and CoV of inter-visit RNFL measures were 0.927 (range: 0.845–0.961) and 3.83% (range: 2.71–5.25%), respectively. Conclusion: Both GC-IPL and RNFL measurements had good intra-visit repeatability and inter-visit reproducibility over 1 year in elderly patients with no decline in cognitive function, suggesting that SD-OCT is a reliable tool for assessing the neurodegenerative process over time.
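
    The two statistics reported here, the ICC and the within-subject CoV, are easy to compute from a subjects-by-scans table. The sketch below uses a one-way random-effects ICC(1,1) because the abstract does not state which ICC form was used, and the thickness values are invented for illustration.

```python
import numpy as np

def icc_oneway(repeats):
    """One-way random-effects ICC(1,1) for a subjects x scans array. This is a
    simpler ICC variant; the abstract does not state which form was used."""
    x = np.asarray(repeats, float)
    n, k = x.shape
    ms_between = k * ((x.mean(axis=1) - x.mean()) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def cov_percent(repeats):
    """Mean within-subject coefficient of variation, in percent."""
    x = np.asarray(repeats, float)
    return 100.0 * np.mean(x.std(axis=1, ddof=1) / x.mean(axis=1))

# Hypothetical GC-IPL thickness (micrometres): rows = patients, columns = repeated scans
scans = np.array([[82.1, 81.7], [76.4, 77.0], [88.9, 88.2], [79.5, 80.1], [84.0, 83.3]])
print(round(icc_oneway(scans), 3), round(cov_percent(scans), 2))
```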

  1. Measuring the health of populations: the veil of ignorance approach.

    Science.gov (United States)

    Pinto-Prades, José-Luis; Abellán-Perpiñán, José-María

    2005-01-01

    We report the results from two surveys designed to explore whether an application of Harsanyi's principle of choice from behind a veil of ignorance (VEI) can be used to measure the health of populations. This approach was tentatively recommended by Murray et al. (Bull. World Health Organ 2000; 78: 981-994; Summary Measures of Population Health: Concepts, Ethics, Measurement and Applications, WHO, 2002) as an appropriate way of constructing summary measures of population health (SMPH) for comparative purposes. The operationalization of the VEI approach used in this paper was suggested by Nord (Summary Measures of Population Health: Concepts, Ethics, Measurement and Applications, WHO, 2002). We test whether VEI and person trade-off (PTO) methods generate similar quality-of-life weights. In addition, we compare VEI and PTO weights with individual utilities estimated by means of the conventional standard gamble (SG) and a variation of it we call the double gamble (DG). Finally, psychometric properties such as feasibility, reliability, and consistency are examined. Our main findings are as follows: (1) the VEI and PTO approaches generate very different weights; (2) it seems that differences between PTO and VEI are not due to the 'rule of rescue'; (3) the VEI resembled a DG more than a classical SG; (4) PTO, VEI, and DG exhibited good feasibility, reliability and consistency.

  2. Repeatability and Reproducibility of Retinal Nerve Fiber Layer Parameters Measured by Scanning Laser Polarimetry with Enhanced Corneal Compensation in Normal and Glaucomatous Eyes

    Directory of Open Access Journals (Sweden)

    Mirian Ara

    2015-01-01

    Objective. To assess the intrasession repeatability and intersession reproducibility of peripapillary retinal nerve fiber layer (RNFL) thickness parameters measured by scanning laser polarimetry (SLP) with enhanced corneal compensation (ECC) in healthy and glaucomatous eyes. Methods. One randomly selected eye of 82 healthy individuals and 60 glaucoma subjects was evaluated. Three scans were acquired during the first visit to evaluate intravisit repeatability. A different operator obtained two additional scans within 2 months after the first session to determine intervisit reproducibility. The intraclass correlation coefficient (ICC), coefficient of variation (COV), and test-retest variability (TRT) were calculated for all SLP parameters in both groups. Results. ICCs ranged from 0.920 to 0.982 for intravisit measurements and from 0.910 to 0.978 for intervisit measurements. The temporal-superior-nasal-inferior-temporal (TSNIT) average was the highest (0.967 and 0.946) in normal eyes, while the nerve fiber indicator (NFI; 0.982) and inferior average (0.978) yielded the best ICC in glaucomatous eyes for intravisit and intervisit measurements, respectively. All COVs were under 10% in both groups, except NFI. The TSNIT average had the lowest COV (2.43%) in either type of measurement. Intervisit TRT ranged from 6.48 to 12.84. Conclusions. The reproducibility of peripapillary RNFL measurements obtained with SLP-ECC was excellent, indicating that SLP-ECC is sufficiently accurate for monitoring glaucoma progression.

  3. Repeatability and Reproducibility of Retinal Nerve Fiber Layer Parameters Measured by Scanning Laser Polarimetry with Enhanced Corneal Compensation in Normal and Glaucomatous Eyes.

    Science.gov (United States)

    Ara, Mirian; Ferreras, Antonio; Pajarin, Ana B; Calvo, Pilar; Figus, Michele; Frezzotti, Paolo

    2015-01-01

    To assess the intrasession repeatability and intersession reproducibility of peripapillary retinal nerve fiber layer (RNFL) thickness parameters measured by scanning laser polarimetry (SLP) with enhanced corneal compensation (ECC) in healthy and glaucomatous eyes. One randomly selected eye of 82 healthy individuals and 60 glaucoma subjects was evaluated. Three scans were acquired during the first visit to evaluate intravisit repeatability. A different operator obtained two additional scans within 2 months after the first session to determine intervisit reproducibility. The intraclass correlation coefficient (ICC), coefficient of variation (COV), and test-retest variability (TRT) were calculated for all SLP parameters in both groups. ICCs ranged from 0.920 to 0.982 for intravisit measurements and from 0.910 to 0.978 for intervisit measurements. The temporal-superior-nasal-inferior-temporal (TSNIT) average was the highest (0.967 and 0.946) in normal eyes, while nerve fiber indicator (NFI; 0.982) and inferior average (0.978) yielded the best ICC in glaucomatous eyes for intravisit and intervisit measurements, respectively. All COVs were under 10% in both groups, except NFI. TSNIT average had the lowest COV (2.43%) in either type of measurement. Intervisit TRT ranged from 6.48 to 12.84. The reproducibility of peripapillary RNFL measurements obtained with SLP-ECC was excellent, indicating that SLP-ECC is sufficiently accurate for monitoring glaucoma progression.

  4. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Campos, E [Argonne National Lab. (ANL), Argonne, IL (United States); Sisterson, Douglas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-12-01

    : 1) identifying how the uncertainty of individual ARM measurements is currently expressed, 2) identifying a consistent approach to measurement uncertainty, and then 3) reclassifying ARM instrument measurement uncertainties in a common framework.

  5. Repeatability of Brain Volume Measurements Made with the Atlas-based Method from T1-weighted Images Acquired Using a 0.4 Tesla Low Field MR Scanner.

    Science.gov (United States)

    Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru

    2016-10-11

    An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of measured volumes using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image - mean volume in each subject)/(mean volume in each subject)]. The average percentage change was calculated as the percentage change in the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
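
    The percentage-change definition quoted in the abstract is simple to apply directly. The sketch below implements that formula for one region of interest; the volumes are hypothetical, and averaging of absolute values is an assumption (the abstract does not say whether signed or absolute changes were averaged).

```python
import numpy as np

def average_percentage_change(first_scan, second_scan):
    """Implements the percentage-change definition quoted above:
    100 * (volume from the first segmented image - subject mean) / subject mean,
    averaged here as an absolute value across subjects (the abstract does not
    say whether absolute values were used)."""
    v1 = np.asarray(first_scan, float)
    v2 = np.asarray(second_scan, float)
    subject_mean = (v1 + v2) / 2.0
    return np.abs(100.0 * (v1 - subject_mean) / subject_mean).mean()

# Hypothetical grey-matter volumes (ml) from two same-day scans of five subjects
print(round(average_percentage_change([612, 598, 645, 630, 585],
                                      [608, 601, 640, 634, 582]), 3))
```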

  6. An approach to measure wheelchair stability. Concept and benefits.

    Science.gov (United States)

    Stefanov, Dimitar H; Pasco, Damien

    2014-01-01

    Wheelchair stability depends on the user's body characteristics, which can significantly shift the original center of mass in cases of limb amputation, severe skeletal deformities or obesity. The center of gravity may also change with the installation of additional devices such as oxygen cylinders or ventilators on the wheelchair. Therefore, quantitative evaluation and prediction of the behavior of the user-wheelchair system in a variety of static and dynamic situations is essential for the user's safety and for the optimal tuning of the human-wheelchair system. In this paper we discuss an approach to wheelchair stability assessment that requires only two inclinations and weight measurements. We also discuss the algorithm associated with the procedure, based on the reaction forces at the contact points of the wheels measured by load cells. Further, the paper analyses the influence of errors in the measurement of the input parameters on the output results and demonstrates that the proposed approach possesses high accuracy. The advantage of the proposed approach is the use of a reliable procedure based on three simple steps and five weight measurements with four independent load scales, which may lead to the design of an affordable and accurate measurement system.
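
    The core computation behind such load-cell approaches is a static moment balance: the planar centre of gravity is the reaction-force-weighted average of the wheel contact coordinates. The sketch below shows only that step, under made-up geometry and loads; it is not the paper's full three-step, two-inclination procedure.

```python
import numpy as np

def centre_of_gravity(contact_xy, reaction_forces):
    """Planar centre of gravity of the user-wheelchair system from the vertical
    reaction forces at the wheel contact points (static moment balance only;
    the geometry and loads below are made-up numbers, not the paper's data)."""
    xy = np.asarray(contact_xy, float)
    f = np.asarray(reaction_forces, float)
    return (xy * f[:, None]).sum(axis=0) / f.sum()

# Contact points (m): rear-left, rear-right, front-left, front-right; loads in N
contacts = [(0.00, 0.00), (0.00, 0.56), (0.42, 0.05), (0.42, 0.51)]
loads = [420.0, 410.0, 95.0, 90.0]
print(centre_of_gravity(contacts, loads).round(3))
```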

  7. Library Network Statistics and Performance Measures: Approaches and Issues

    Directory of Open Access Journals (Sweden)

    John Carlo Bertot

    2001-07-01

    Library network statistics and performance measures are important indicators of the use, uses, and users of the networked services that libraries offer their patrons. This article focuses on three efforts to develop and standardize library network statistics and performance measures. In particular, the article discusses, compares, and contrasts selected aspects of the International Standards Organization (ISO), U.S. public library network statistics, and Association of Research Libraries (ARL) efforts. The three approaches attempt to capture, describe, and present library networked activities in similar ways through similar approaches – yet they differ in key areas. It is important to note that there are a number of national and international efforts underway that continue to research the library network statistics and performance measure environment.

  8. Fixed-flexion knee radiography using a new positioning device produced highly repeatable measurements of joint space width: ELSA-Brasil Musculoskeletal Study (ELSA-Brasil MSK).

    Science.gov (United States)

    Telles, Rosa Weiss; Costa-Silva, Luciana; Machado, Luciana A C; Reis, Rodrigo Citton Padilha Dos; Barreto, Sandhi Maria

    To describe the performance of a non-fluoroscopic fixed-flexion PA radiographic protocol with a new positioning device, developed for the assessment of knee osteoarthritis (OA) in Brazilian Longitudinal Study of Adult Health Musculoskeletal Study (ELSA-Brasil MSK). A test-retest design including 19 adults (38 knee images) was conducted. Feasibility of the radiographic protocol was assessed by image quality parameters and presence of radioanatomic alignment according to intermargin distance (IMD) values. Repeatability was assessed for IMD and joint space width (JSW) measured at three different locations. Approximately 90% of knee images presented excellent quality. Frequencies of nearly perfect radioanatomic alignment (IMD ≤1mm) ranged from 29% to 50%, and satisfactory alignment was found in up to 71% and 76% of the images (IMD ≤1.5mm and ≤1.7mm, respectively). Repeatability analyses yielded the following results: IMD [SD of mean difference=1.08; coefficient of variation (%CV)=54.68%; intraclass correlation coefficient (ICC) (95%CI)=0.59 (0.34-0.77)]; JSW [SD of mean difference=0.34-0.61; %CV=4.48%-9.80%; ICC (95%CI)=0.74 (0.55-0.85)-0.94 (0.87-0.97)]. Adequately reproducible measurements of IMD and JSW were found in 68% and 87% of the images, respectively. Despite the difficulty in achieving consistent radioanatomic alignment between subsequent radiographs in terms of IMD, the protocol produced highly repeatable JSW measurements when these were taken at midpoint and 10mm from the medial extremity of the medial tibial plateau. Therefore, measurements of JSW at these locations can be considered adequate for the assessment of knee OA in ELSA-Brasil MSK. Copyright © 2016. Published by Elsevier Editora Ltda.

  9. Alternative Measuring Approaches in Gamma Scanning on Spent Nuclear Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Sihm Kvenangen, Karen

    2007-06-15

    In the future, the demand for energy is predicted to grow, and more countries plan to utilize nuclear energy as their source of electric energy. This gives rise to many important issues connected to nuclear energy, such as finding methods that can verify that spent nuclear fuel has been handled safely and used in ordinary power-producing cycles as stated by the operators. Gamma-ray spectroscopy is one method used for identification and verification of spent nuclear fuel. In the specific gamma-ray spectroscopy method called gamma scanning, the gamma radiation from the fission products Cs-137, Cs-134 and Eu-154 is measured in a spent fuel assembly. From the results, conclusions can be drawn about the fuel's characteristics. This degree project examines the possibilities of using alternative measuring approaches with the gamma scanning method. The focus is on examining how to increase the quality of the measured data. How to decrease the measuring time, as compared with the present measuring strategy, has also been investigated. The main part of the study comprises computer simulations of gamma scanning measurements. The simulations have been validated with actual measurements on spent nuclear fuel at the central interim storage facility, Clab. The results show that, concerning the quality of the measured data, the conventional strategy is preferable, but with other starting positions and more optimized equipment. When focusing on the time aspect, the helical measuring strategy can be an option, but this needs further investigation.

  10. Complete clinical responses to cancer therapy caused by multiple divergent approaches: a repeating theme lost in translation

    Directory of Open Access Journals (Sweden)

    Coventry BJ

    2012-05-01

    Brendon J Coventry, Martin L Ashdown; Discipline of Surgery, University of Adelaide, Royal Adelaide Hospital and Faculty of Medicine, University of Melbourne, Australia. Abstract: Over 50 years of cancer therapy history reveals that complete clinical responses (CRs) from remarkably divergent forms of therapy (e.g., chemotherapy, radiotherapy, surgery, vaccines, autologous cell transfers, cytokines, monoclonal antibodies) for advanced solid malignancies occur with an approximately similar frequency of 5%–10%. This has remained frustratingly almost static. However, CRs usually underpin strong, durable 5-year patient survival. How can this apparent paradox be explained? Over some 20 years, the realization that (1) chronic inflammation is intricately associated with cancer, and (2) the immune system is delicately balanced between responsiveness to and tolerance of cancer, provides a greatly significant insight into ways cancer might be more effectively treated. In this review, divergent aspects from the largely segmented literature and recent conferences are drawn together to provide observations revealing some emerging reasoning, in terms of "final common pathways" of cancer cell damage, immune stimulation, and auto-vaccination events, ultimately leading to cancer cell destruction. Created from this is a unifying, overarching concept to explain why multiple approaches to cancer therapy can provide complete responses at almost equivalent rates. This "missing" aspect provides a reasoned explanation for what has been, and is being, increasingly reported in the mainstream literature – that inflammatory and immune responses appear intricately associated with, if not causative of, complete responses induced by divergent forms of cancer therapy. Curiously, whether by chemotherapy, radiation, surgery, or other means, therapy-induced cell injury results, leaving inflammation and immune system stimulation as a final common denominator across all of these mechanisms of cancer

  11. Repeated-Sprint Cycling Does Not Induce Respiratory Muscle Fatigue in Active Adults: Measurements from The Powerbreathe® Inspiratory Muscle Trainer

    Directory of Open Access Journals (Sweden)

    Clare Minahan, Beth Sheehan, Rachel Doutreband, Tom Kirkwood, Daniel Reeves, Troy Cross

    2015-03-01

    This study examined respiratory muscle strength using the POWERbreathe® inspiratory muscle trainer (i.e., the 'S-Index') before and after repeated-sprint cycling, for comparison with maximal inspiratory pressure (MIP) values obtained during a Mueller maneuver. The S-Index was measured during six trials across two sessions using the POWERbreathe®, and MIP was measured during three trials in a single session using a custom-made manometer, in seven recreationally active adults. Global respiratory muscle strength was measured using both devices before and after the performance of sixteen 6-s sprints on a cycle ergometer. Intraclass correlation coefficients indicated excellent reliability for the POWERbreathe® S-Index and for MIP during the Mueller maneuver (> 0.99). The POWERbreathe® S-Index is a moderately reliable, but not equivalent, measure of MIP determined during a Mueller maneuver. Furthermore, repeated-sprint cycling does not induce globalized respiratory muscle fatigue in recreationally active adults.

  12. Repeatability of measurements of packed cell volume and egg count as indicators of endoparasite load and their relationship with sheep productivity.

    Science.gov (United States)

    Bekele, T; Kasali, O B; Rege, J E

    1991-12-01

    Monthly measurements of packed cell volume (PCV) and nematode and trematode eggs per gram (EPG) were made in Ethiopian highland sheep at Debre Berhan, Dejen, Deneba, Tulu Meko and Wereilu from June 1988 to December 1989. High frequencies of low PCV, high nematode EPG and high trematode EPG were found at Tulu Meko. Among the productivity traits examined, body condition scores and live-weights were significantly (P < 0.05) associated with differences in PCV and nematode and trematode EPG levels at most sites. The lambing interval was, however, not significantly (P > 0.05) affected by these variables. Monthly repeatabilities of PCV, body weight and body condition scores were 0.44 ± 0.01, 0.71 ± 0.01 and 0.35 ± 0.01, respectively, while those of nematode (0.09 ± 0.01) and trematode EPGs (0.20 ± 0.02) were much lower. The high repeatability for PCV indicates that it was less affected by the variable factors influencing egg output, and hence it could be utilized in conjunction with nematode and trematode EPG levels for endoparasite monitoring. Repeatability of the lambing interval across parities was 0.43 ± 0.14.

  13. Relationship between measures of aerobic fitness, speed and repeated sprint ability in full and part time youth soccer players.

    Science.gov (United States)

    Gibson, N; Currie, J; Johnston, R; Hill, J

    2013-02-01

    The aim of the study was to investigate the relationship between repeated sprint ability (RSA) involving changes in direction, short linear sprinting, and aerobic capacity in young elite soccer players. A secondary aim was to assess any differences in performance on these assessments between players of different age groups. Thirty-two male adolescent soccer players belonging to the same elite club academy were assessed for RSA, comprising 6 x 40 m efforts interspersed with 25 s recovery, linear sprinting speed over 15 m, and aerobic capacity via the YYIE2 assessment. There was a significant correlation between performance in the YYIE2 and RSA total time, RSA fastest sprint and RSA percentage decrement (r = -0.71, -0.53, and -0.52, respectively). Assessments of RSA over 40 m incorporating changes of direction appear to be significantly correlated with YYIE2 performance in young elite-level soccer players. In addition, older players performed significantly better in the YYIE2 assessment and RSA protocol, but not in short linear sprinting. These results have implications for the design of assessment protocols for young elite soccer players of different ages.

  14. Very long Detection Times after High and repeated intake of Heroin and Methadone, measured in Oral Fluid

    Directory of Open Access Journals (Sweden)

    Vindenes V.

    2014-12-01

    When detection times for psychoactive drugs in oral fluid are reported, they are most often based on therapeutic doses administered in clinical studies. Repeated ingestion of high doses, as seen in drug abuse, is however likely to cause positive samples for extended time periods. Findings of drugs of abuse in oral fluid might lead to negative sanctions, and knowledge of the detection times of these drugs is important to ensure correct interpretation. The aim of this study was to investigate the detection times of opioids in oral fluid. Twenty-five patients with a history of heavy drug abuse admitted to a detoxification ward were included. Oral fluid and urine were collected daily and, if the patient gave consent, a blood sample was drawn during the first five days after admission. Morphine, codeine and/or 6-monoacetyl morphine (6-MAM) were found in oral fluid and/or urine from 20 patients. The maximum detection times in oral fluid for codeine, morphine and 6-MAM were 1, 3 and 8 days, respectively. Positive oral fluid samples were interspersed with negative samples, mainly for concentrations around the cut-off. Elimination curves for methadone in oral fluid were obtained for two subjects, and the detection times were 5 and 8 days. Oral fluid is likely to become a good method for the detection of drug abuse in the future.

  15. Methods for measuring denitrification: Diverse approaches to a difficult problem

    DEFF Research Database (Denmark)

    Groffman, Peter M.; Altabet, Mark A.; Böhlke, J. K.

    2006-01-01

    , and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments...... based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows...... for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass...

  16. The reliability of repeated TMS measures in older adults and in patients with subacute and chronic stroke

    Directory of Open Access Journals (Sweden)

    Heidi M. Schambra

    2015-09-01

    The reliability of transcranial magnetic stimulation (TMS) measures in healthy older adults and stroke patients has been insufficiently characterized. We determined whether common TMS measures could reliably evaluate change in individuals and in groups using the smallest detectable change (SDC), or could tell subjects apart using the intraclass correlation coefficient (ICC). We used a single-rater test-retest design in older healthy, subacute stroke, and chronic stroke subjects. At twice-daily sessions on two consecutive days, we recorded resting motor threshold, test stimulus intensity, recruitment curves, short-interval intracortical inhibition and facilitation, and long-interval intracortical inhibition. Using variances estimated from a random effects model, we calculated the SDC and ICC for each TMS measure. For all TMS measures in all groups, SDCs for single subjects were large; only with modest group sizes did the SDCs become low. Thus, while these TMS measures cannot be reliably used as a biomarker to detect individual change, they can reliably detect change exceeding measurement noise in moderate-sized groups. For several of the TMS measures, ICCs were universally high, suggesting that they can reliably discriminate between subjects. Though most TMS measures have sufficient reliability in particular contexts, work establishing their validity, responsiveness, and clinical relevance is still needed.
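
    A common way to obtain the SDC at individual and group level is shown below. This sketch uses the simpler paired-difference formulation of the standard error of measurement rather than the random-effects variance estimates used in the study, so the numbers are only illustrative.

```python
import numpy as np

def smallest_detectable_change(test, retest, group_size=1):
    """SDC from a simple test-retest design: SEM from the SD of the paired
    differences, SDC = 1.96 * sqrt(2) * SEM, shrinking with sqrt(n) for
    group-level change. The paper itself derives variances from a random
    effects model, so results will differ."""
    diffs = np.asarray(retest, float) - np.asarray(test, float)
    sem = diffs.std(ddof=1) / np.sqrt(2.0)
    return 1.96 * np.sqrt(2.0) * sem / np.sqrt(group_size)

# Hypothetical resting motor thresholds (% maximum stimulator output), 6 subjects
test, retest = [42, 55, 48, 61, 50, 45], [44, 53, 49, 63, 48, 46]
print(round(smallest_detectable_change(test, retest), 2),                 # single subject
      round(smallest_detectable_change(test, retest, group_size=20), 2))  # group of 20
```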

  17. Measuring and monitoring IT using a balanced scorecard approach.

    Science.gov (United States)

    Gash, Deborah J; Hatton, Todd

    2007-01-01

    Ensuring that the information technology department is aligned with the overall health system strategy and is performing at a consistently high level is a priority at Saint Luke's Health System in Kansas City, Mo. The information technology department of Saint Luke's Health System has been using the balanced scorecard approach described in this article to measure and monitor its performance for four years. This article will review the structure of the IT department's scorecard; the categories and measures used; how benchmarks are determined; how linkage to the organizational scorecard is made; how results are reported; how changes are made to the scorecard; and tips for using a scorecard in other IT departments.

  18. Network Complexity Measures. An Information-Theoretic Approach.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    2015-04-01

    Quantitative graph analysis using structural indices has been intricate in the sense that it often remains unclear which structural graph measure is the most suitable one, see [1, 12, 13]. In general, quantitative graph analysis deals with quantifying the structural information of networks by using a measurement approach [5]. A special problem thereof is to characterize a graph quantitatively, that is, to determine a measure that captures structural features of a network meaningfully. Various classical structural graph measures have been used to tackle this problem [13]. A fruitful approach using information-theoretic [21] and statistical methods is to quantify the structural information content of a graph [1, 8, 18]. In this note, we sketch some classical information measures. We also briefly address the question of which kinds of measures capture structural information uniquely. This relates to determining the discrimination power (also called uniqueness) of a graph measure, that is, the ability of the measure to discriminate non-isomorphic graphs structurally. [1] D. Bonchev. Information Theoretic Indices for Characterization of Chemical Structures. Research Studies Press, Chichester, 1983. [5] M. Dehmer and F. Emmert-Streib. Quantitative Graph Theory. Theory and Applications. CRC Press, 2014. [8] M. Dehmer, M. Grabner, and K. Varmuza. Information indices with high discriminative power for graphs. PLoS ONE, 7:e31214, 2012. [12] F. Emmert-Streib and M. Dehmer. Exploring statistical and population aspects of network complexity. PLoS ONE, 7:e34523, 2012. [13] F. Harary. Graph Theory. Addison Wesley Publishing Company, 1969. Reading, MA, USA. [18] A. Mowshowitz. Entropy and the complexity of the graphs I: An index of the relative complexity of a graph. Bull. Math. Biophys., 30:175–204, 1968. [21] C. E. Shannon and W. Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.
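
    One of the simplest classical information-theoretic indices of this kind is the Shannon entropy of a graph's degree distribution. The sketch below computes it with networkx for a regular and a heterogeneous graph; it is offered as a generic illustration, not as any of the specific measures discussed in the note.

```python
import math
import networkx as nx

def degree_distribution_entropy(graph):
    """Shannon entropy of the degree distribution, one classical
    information-theoretic index of structural complexity (not necessarily
    any of the specific measures surveyed in the note)."""
    degrees = [d for _, d in graph.degree()]
    n = len(degrees)
    probs = [degrees.count(k) / n for k in set(degrees)]
    return -sum(p * math.log2(p) for p in probs)

regular = nx.cycle_graph(12)                          # every vertex has degree 2
scale_free = nx.barabasi_albert_graph(12, 2, seed=1)  # heterogeneous degrees
print(degree_distribution_entropy(regular),
      round(degree_distribution_entropy(scale_free), 3))
```

    The regular cycle graph has zero entropy (all degrees equal), while the heterogeneous graph scores higher, illustrating how such indices order structures by complexity.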

  19. Approach for measuring the angle of hallux valgus

    Directory of Open Access Journals (Sweden)

    Jin Zhou

    2013-01-01

    Materials and Methods: Fifteen age-, body weight-, and height-matched male students were included, and those with foot disorders, deformities, or injuries were excluded from the study. The dorsal protrusions of the first metatarsal and the hallux were marked by palpation by three experienced observers; their standing barefoot model was then collected with a three-dimensional laser scanning system. The angle of hallux valgus (AoH) was defined in the X-Y plane as the angle between the line joining the marks at the centre of the head and the centre of the base of the metatarsal shaft and the line connecting the marks at the centre of the metatarsal head and the hallux. The same procedure was repeated a week later. In addition, other measures based on the footprint, the outline, and radiography were also available for comparison. Paired t-tests, linear regression, and reliability analysis were applied for statistical analysis, with a significance level of 0.05 and 95% confidence intervals. Results: There were no significant differences recorded between the new method and the radiographic method (P = 0.069). The AoH was superior to the footprint and outline methods and displayed a relatively high correlation with the radiographic method (r = 0.94, r² = 0.89). Moreover, both the inter- and intraobserver reliabilities of this method were shown to be good. Conclusion: This new method can be used for hallux valgus inspection and evaluation.
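
    The angle definition above reduces to a vector calculation once the landmark coordinates are available. The sketch below implements that X-Y-plane angle; the coordinates are invented purely for illustration.

```python
import numpy as np

def hallux_valgus_angle(base_xy, head_xy, hallux_xy):
    """Angle (degrees) in the X-Y plane between the metatarsal shaft line
    (base -> head) and the head -> hallux line, from digitised landmark
    coordinates. The coordinates used below are invented for illustration."""
    base, head, hallux = (np.asarray(p, float)[:2] for p in (base_xy, head_xy, hallux_xy))
    v1, v2 = head - base, hallux - head
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

print(round(hallux_valgus_angle((0.0, 0.0), (62.0, 8.0), (92.0, 20.0)), 1))  # ~14 degrees
```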

  20. A transfer function approach to measuring cell inheritance

    Directory of Open Access Journals (Sweden)

    Errington Rachel J

    2011-02-01

    Background: The inheritance of cellular material between parent and daughter cells during mitosis is highly influential in defining the properties of the cell and therefore the population lineage. This is of particular relevance when studying cell population evolution to assess the impact of a disease or the perturbation due to a drug treatment. The usual technique for investigating inheritance is time-lapse microscopy with an appropriate biological marker; however, this is time consuming and the number of inheritance events captured is too low to be statistically meaningful. Results: Here we demonstrate the use of a high-throughput fluorescence measurement technique, e.g. flow cytometry, to measure the fluorescence from quantum dot markers which can be used to target particular cellular sites. By relating the fluorescence intensity measured at two time intervals to a transfer function, we are able to deconvolve the inheritance of cellular material during mitosis. To demonstrate our methodology we use CdTe/ZnS quantum dots to measure the ratio of endosomes inherited by the two daughter cells during mitosis in the U2-OS human osteosarcoma cell line. The ratio determined is in excellent agreement with results obtained previously using a more complex and computationally intensive bespoke stochastic model. Conclusions: The use of a transfer function approach allows us to utilise high-throughput measurement of large cell populations to derive statistically relevant measurements of the inheritance of cellular material. This approach can be used to measure the inheritance of organelles, proteins, etc., and also particles introduced to cells for drug delivery.

  1. Digital approach for measuring dentin translucency in forensic age estimation

    Directory of Open Access Journals (Sweden)

    Simranjit Singh

    2013-01-01

    Background: Dentin translucency is best suited for age estimation, not only in terms of accuracy but also in terms of simplicity. Conventionally, translucency has been measured using calipers. Computer-based methods have been proposed for the same purpose, although these required the use of custom-built software programs. Objectives: The objectives of the study were to use a simple digital method to measure dentinal translucency on sectioned teeth and to compare digital measurements to conventionally obtained translucency measurements. Materials and Methods: Fifty extracted permanent teeth were collected and sectioned to 250 μm. Translucency measurements were obtained using the digital method and compared with those obtained using a caliper. Results: Correlation coefficients of translucency measurements with age were statistically significant for both methods (P < 0.001) and marginally higher for the conventional approach (r = 0.4671). Application of the derived linear regression equations to an independent sample (n = 10) revealed a similar ability of both methods to assess age to within ±5 years of the actual age. Conclusion: The translucency measurements obtained by the two methods were very similar, with no clear superiority of one method over the other. Hence, further large-scale studies are warranted to determine which method is more reliable for estimating age.
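
    The "derived linear regression equation" workflow mentioned above can be sketched as follows. The translucency lengths, ages and resulting coefficients are hypothetical and are not the study's actual regression; the sketch only shows how such an equation would be derived and applied to a new measurement.

```python
import numpy as np

# Hypothetical training sample: digital translucency lengths (mm) and known ages (years)
translucency = np.array([3.1, 4.2, 5.0, 5.8, 6.9, 7.5, 8.3])
age = np.array([24.0, 31.0, 38.0, 44.0, 55.0, 61.0, 70.0])

# Derive the linear regression age = a + b * translucency
b, a = np.polyfit(translucency, age, 1)

def estimate_age(translucency_mm):
    """Apply the derived regression equation to a new digital measurement."""
    return a + b * translucency_mm

print(f"age = {a:.1f} + {b:.1f} x translucency; estimate for 6.0 mm: {estimate_age(6.0):.0f} years")
```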

  2. The near-infrared spectroscopy-derived deoxygenated haemoglobin breaking-point is a repeatable measure that demarcates exercise intensity domains.

    Science.gov (United States)

    Iannetta, Danilo; Qahtani, Ahmad; Mattioni Maturana, Felipe; Murias, Juan Manuel

    2017-09-01

    A breaking-point in the near-infrared spectroscopy (NIRS)-derived deoxygenated haemoglobin ([HHb]) profile towards the end of a ramp incremental (RI) cycling test has been associated with the respiratory compensation point (RCP). Despite the physiological value of this measure, its repeatability remains unknown. The aim was to examine the repeatability of the [HHb] breaking-point ([HHb]BP) and its association with the RCP during an RI cycling test. A repeated measures design was performed on 11 males (30.5±8.4 years; 76.5±8.4 kg) and 4 females (30.5±5.9 years; 61.9±4.4 kg). Gas exchange and NIRS [HHb] data were collected during RI tests performed on two different days separated by 48 h. The [HHb]BP and the RCP were determined and compared for each trial. The [HHb]BP and the RCP occurred at the same VO2 in test 1 and test 2 ([HHb]BP: 3.49±0.52 L min(-1) test 1; 3.48±0.45 L min(-1) test 2; RCP: 3.38±0.40 L min(-1) test 1; 3.38±0.44 L min(-1) test 2) (P>0.05). The VO2 associated with the [HHb]BP and the VO2 at the RCP were not significantly different from each other in either test 1 or test 2 (P>0.05). Neither test 1 nor test 2 showed a significant mean error between the VO2 at the [HHb]BP and the RCP using Bland & Altman plots. The [HHb]BP is a repeatable measure that consistently occurs towards the end of an RI test. The association between the [HHb]BP and the RCP reinforces the idea that these parameters may share a similar mechanistic basis. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
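
    The Bland & Altman comparison referred to above amounts to computing the mean bias and limits of agreement of the paired differences. A minimal sketch, using made-up VO2 values rather than the study's data:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two paired measures."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

vo2_hhb_bp = [3.1, 3.6, 3.4, 3.9, 3.2, 3.7]  # VO2 (L/min) at the [HHb] breaking-point
vo2_rcp    = [3.0, 3.5, 3.5, 3.8, 3.2, 3.6]  # VO2 (L/min) at the RCP
print(bland_altman(vo2_hhb_bp, vo2_rcp))
```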

  3. Cross-Layer Approach using k-NN Based Adaptive Modulation Coding (AMC and Incremental Redundancy Hybrid Automatic Repeat Request (IR-HARQ for MIMO

    Directory of Open Access Journals (Sweden)

    J. Sofia Priya Dharshini

    2014-09-01

    In MIMO technology, a cross-layer design enhances the spectral efficiency, reliability and throughput of the network. In this paper, a cross-layer approach using k-NN based Adaptive Modulation Coding (AMC) and Incremental Redundancy Hybrid Automatic Repeat Request (IR-HARQ) is proposed for MIMO systems. The proposed cross-layer approach connects the physical layer and the data link layer to enhance the performance of the MIMO network. The coded symbols are transmitted over MIMO fading channels at the physical layer on a frame-by-frame basis using Space Time Block Coding (STBC). The receiver computes the signal-to-noise ratio (SNR) and feeds it back to the AMC controller. The controller selects a suitable modulation and coding scheme (MCS) for the next transmission using a k-NN classifier, a supervised learning algorithm. IR-HARQ is utilized at the data link layer to regulate packet retransmissions. The results show that the proposed technique has better performance in terms of throughput, BER and spectral efficiency.
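
    The k-NN based AMC step can be illustrated in isolation. The sketch below maps a feedback SNR to an MCS index with a k-NN classifier; the training SNRs and MCS labels are invented, and the IR-HARQ and STBC stages of the proposed scheme are not modelled.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training set: feedback SNR (dB) labelled with the MCS index that met
# the target error rate in previous frames (both columns are invented for illustration)
snr_db = np.array([[2.0], [4.0], [6.0], [8.0], [11.0], [14.0], [17.0], [20.0], [24.0], [28.0]])
mcs_index = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])

knn = KNeighborsClassifier(n_neighbors=3).fit(snr_db, mcs_index)

def select_mcs(feedback_snr_db):
    """AMC controller step: map the SNR fed back by the receiver to the MCS for the
    next frame; IR-HARQ at the link layer would then handle residual packet errors."""
    return int(knn.predict([[feedback_snr_db]])[0])

print(select_mcs(9.5), select_mcs(22.0))
```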

  4. Attribute measure recognition approach and its applications to emitter recognition

    Institute of Scientific and Technical Information of China (English)

    GUAN Xin; HE You; YI Xiao

    2005-01-01

    This paper studies the emitter recognition problem. A new recognition method based on attribute measures for emitter recognition is put forward, and the steps of the method are presented. The approach to determining the weight coefficients is also discussed. Moreover, considering the temporal redundancy of emitter information detected by a multi-sensor system, this new recognition method is generalized to multi-sensor systems, and a method based on the combination of attribute measures and D-S evidence theory is proposed. The implementation of D-S reasoning is always constrained by the basic probability assignment function; constructing the basic probability assignment function from attribute measures in a multi-sensor recognition system is therefore presented. Examples of recognizing the emitter purpose and system are selected to demonstrate the proposed method. Experimental results show that the performance of this new method is accurate and effective.
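
    The D-S fusion step mentioned above relies on Dempster's rule of combination. Below is a textbook implementation of that rule for two sensors; it does not reproduce the paper's attribute-measure construction of the basic probability assignments, and the masses shown are invented.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments given
    as dicts mapping frozenset hypotheses to masses (a textbook implementation,
    not the attribute-measure BPA construction proposed in the paper)."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Two sensors assign mass to candidate emitter types A and B (and to their union)
E = frozenset
sensor1 = {E({"A"}): 0.6, E({"B"}): 0.1, E({"A", "B"}): 0.3}
sensor2 = {E({"A"}): 0.5, E({"B"}): 0.2, E({"A", "B"}): 0.3}
print(dempster_combine(sensor1, sensor2))
```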

  5. Metropolitan digital library services evaluation: Measures and approaches

    Institute of Scientific and Technical Information of China (English)

    LIU; Wei; LOU; Xiangying; ZHANG; Chunjing

    2008-01-01

    This paper reviews evaluation models and measurements for metropolitan digital library activities. Through a literature review and historical research, the authors argue that the evaluation of the digital library (DL) is still at a research stage and not yet of value to the real achievement of the DL in operation. Because of the variety of understandings of the digital library and the complexity of the technical factors, we put forward a set of reference models, measurements and approaches that combine the various strands of research on evaluation theory and practice in the digital library area. The authors primarily discuss, and conclude with, a digital library evaluation model and a measurement index system oriented to the requirements of the world's metropolitan libraries.

  6. Directionality switchable gain stabilized linear repeater

    Science.gov (United States)

    Ota, Takayuki; Ohmachi, Tadashi; Aida, Kazuo

    2004-10-01

    We propose a new approach to realizing a bidirectional linear repeater suitable for future optical Internet networks and for fault location in a repeater chain with OTDR. The proposed approach is a linear repeater of simple configuration whose directionality is rearranged dynamically by an electrical control signal. The repeater is composed of a magneto-optical switch, a circulator, a dynamically gain-stabilized unidirectional EDFA, and control circuits. The repeater directionality is rearranged in as little as 0.1 ms by an electrical control pulse. It is experimentally confirmed that OTDR with the directionality-switchable repeater is feasible for a repeater chain. The detailed design and performance of the repeater are also discussed, including the multi-pass interference (MPI) which may arise in the proposed repeater, the effect of the MPI on SNR degradation of the repeater chain, and the feed-forward EDFA gain control circuit.

  7. Technology and education: First approach for measuring temperature with Arduino

    Science.gov (United States)

    Carrillo, Alejandro

    2017-04-01

    This poster session presents some ideas and approaches for understanding the concepts of thermal equilibrium, temperature and heat in order to build a harmonious and responsible relationship between people and nature, emphasizing the interaction between science and technology without neglecting the relationship with the environment and society, i.e. an approach to sustainability. The development of practices that involve the use of modern, easily accessible and low-cost technology to measure temperature is proposed. We believe that the Arduino microcontroller and some temperature sensors can open the doors of innovation to carry out such practices. In this work we present some results of simple practices presented to a population of students aged 16 to 17 years. The practices in this proposal are: the zeroth law of thermodynamics and the concept of temperature, calibration of thermometers, and measurement of temperature during the heating and cooling of three different substances under the same physical conditions. Finally, the students are asked to make an application that involves measuring temperature and other physical parameters. Some suggestions are: determine the temperature at which we eat certain foods, measure the temperature difference between different rooms of a house, identify housing constructions that favour optimal conditions, measure the temperature of different regions, measure temperature through different colour filters, relate solar activity and UV, and propose applications to understand current problems such as global warming. It is concluded that the Arduino practices and electrical sensors broaden the cultural horizon of the students while awakening their interest in understanding their operation, basic physics and its applications from a modern perspective.
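
    A classroom setup of this kind typically logs the sensor readings on a computer. The sketch below is one hedged possibility for the PC side: it assumes an Arduino sketch that prints one temperature reading per line over USB serial (the port name, baud rate and message format are assumptions, not part of the original proposal), and reads it with the pyserial library.

```python
import serial  # pyserial

# Assumptions: an Arduino sketch prints one temperature reading (degrees C) per line
# over USB serial, e.g. from an LM35 or DS18B20; the port name below is hypothetical.
PORT, BAUD = "/dev/ttyACM0", 9600

with serial.Serial(PORT, BAUD, timeout=2) as arduino:
    readings = []
    while len(readings) < 10:                   # log ten samples
        line = arduino.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                            # timeout or empty line
        try:
            readings.append(float(line))
        except ValueError:
            pass                                # skip malformed lines
    print("mean temperature:", sum(readings) / len(readings), "degC")
```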

  8. Bayesian adjustment for covariate measurement errors: a flexible parametric approach.

    Science.gov (United States)

    Hossain, Shahadut; Gustafson, Paul

    2009-05-15

    In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately. Rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement errors or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well. (c) 2009 John Wiley & Sons, Ltd.

  9. Variation in repeated mouth-opening measurements in head and neck cancer patients with and without trismus

    NARCIS (Netherlands)

    Jager-Wittenaar, H.; Dijkstra, P. U.; Vissink, A.; van Oort, R. P.; Roodenburg, J. L. N.

    2009-01-01

    Trismus after head and neck cancer treatment may severely limit mandibular functioning. Interventions aimed at reducing trismus can only be evaluated when the amount of variation associated with these measurements is known. The aim of this Study was to analyse the variation in mouth-opening measurem

  10. Repeated measurements of mite and pet allergen levels in house dust over a time period of 8 years

    NARCIS (Netherlands)

    Antens, C. J. M.; Oldenwening, M.; Wolse, A.; Gehring, U.; Smit, H. A.; Aalberse, R. C.; Kerkhof, M.; Gerritsen, J.; de Jongste, J. C.; Brunekreef, B.

    2006-01-01

    Background Studies of the association between indoor allergen exposure and the development of allergic diseases have often measured allergen exposure at one point in time. Objective We investigated the variability of house dust mite (Der p 1, Der f 1) and cat (Fel d 1) allergen in Dutch homes over a

  14. An Algebraic Approach to Unital Quantities and their Measurement

    Science.gov (United States)

    Domotor, Zoltan; Batitsky, Vadim

    2016-06-01

    The goals of this paper fall into two closely related areas. First, we develop a formal framework for deterministic unital quantities in which measurement unitization is understood to be a built-in feature of quantities rather than a mere annotation of their numerical values with convenient units. We introduce this idea within the setting of certain ordered semigroups of physical-geometric states of classical physical systems. States are assumed to serve as truth makers of metrological statements about quantity values. A unital quantity is presented as an isomorphism from the target system's ordered semigroup of states to that of positive reals. This framework allows us to include various derived and variable quantities, encountered in engineering and the natural sciences. For illustration and ease of presentation, we use the classical notions of length, time, electric current and mean velocity as primordial examples. The most important application of the resulting unital quantity calculus is in dimensional analysis. Second, in evaluating measurement uncertainty due to the analog-to-digital conversion of the measured quantity's value into its measuring instrument's pointer quantity value, we employ an ordered semigroup framework of pointer states. Pointer states encode the measuring instrument's indiscernibility relation, manifested by not being able to distinguish the measured system's topologically proximal states. Once again, we focus mainly on the measurement of length and electric current quantities as our motivating examples. Our approach to quantities and their measurement is strictly state-based and algebraic in flavor, rather than that of a representationalist-style structure-preserving numerical assignment.

  15. Effects of ambient air pollution on functional status in patients with chronic congestive heart failure: a repeated-measures study

    Directory of Open Access Journals (Sweden)

    Phillips Russell S

    2007-09-01

    Full Text Available Abstract Background Studies using administrative data report a positive association between ambient air pollution and the risk of hospitalization for congestive heart failure (HF). Circulating levels of B-type natriuretic peptide (BNP) are directly associated with cardiac hemodynamics and symptom severity in patients with HF and, therefore, serve as a marker of functional status. We tested the hypothesis that BNP levels would be positively associated with short-term changes in ambient pollution levels among 28 patients with chronic stable HF and impaired systolic function. Methods BNP was measured in whole blood at 0, 6, and 12 weeks. We used linear mixed models to evaluate the association between fine particulate matter (PM2.5), carbon monoxide, sulfur dioxide, nitrogen dioxide, ozone, and black carbon and log(BNP). Lags of 0 to 3 days were considered in separate models. We calculated the intraclass correlation coefficient and within-subject coefficient of variation as measures of reproducibility. Results We found no association between any pollutant and measures of BNP at any lag. For example, a 10 μg/m3 increase in PM2.5 was associated with a 0.8% (95% CI: -16.4, 21.5; p = 0.94) increase in BNP on the same day. The within-subject coefficient of variation was 45% on the natural scale and 9% on the log scale. Conclusion These results suggest that serial BNP measurements are unlikely to be useful in a longitudinal study of air pollution-related acute health effects. The magnitude of expected ambient air pollution health effects appears small in relation to the considerable within-person variability in BNP levels in this population.
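
    The analysis described above (a linear mixed model of log(BNP) on pollutant levels with repeated measurements per patient) can be sketched with statsmodels; the data frame, column names and simulated values below are placeholders, not the study data.

```python
# Rough sketch of the repeated-measures analysis: log(BNP) regressed on
# same-day PM2.5 with a random intercept per subject.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(28), 3),            # 28 patients x 3 visits
    "pm25": rng.normal(10, 3, 28 * 3),                  # µg/m3, lag 0
    "bnp": rng.lognormal(mean=5, sigma=0.8, size=28 * 3),
})
df["log_bnp"] = np.log(df["bnp"])

model = smf.mixedlm("log_bnp ~ pm25", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
# A 10 µg/m3 increase in PM2.5 corresponds to roughly
# (exp(10 * beta_pm25) - 1) * 100 percent change in BNP.
```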

  16. A technique for conditioning and calibrating force-sensing resistors for repeatable and reliable measurement of compressive force.

    Science.gov (United States)

    Hall, Rick S; Desmoulin, Geoffrey T; Milner, Theodore E

    2008-12-01

    Miniature sensors that could measure forces applied by the fingers and hand without interfering with manual dexterity or range of motion would have considerable practical value in ergonomics and rehabilitation. In this study, techniques have been developed to use inexpensive pressure-sensing resistors (FSRs) to accurately measure compression force. The FSRs are converted from pressure-sensing to force-sensing devices. The effects of nonlinear response properties and dependence on loading history are compensated by signal conditioning and calibration. A fourth-order polynomial relating the applied force to the current voltage output and a linearly weighted sum of prior outputs corrects for sensor hysteresis and drift. It was found that prolonged (>20 h) shear force loading caused sensor gain to change by approximately 100%. Shear loading also had the effect of eliminating shear force effects on sensor output, albeit only in the direction of shear loading. By applying prolonged shear loading in two orthogonal directions, the sensors were converted into pure compression sensors. Such preloading of the sensor is, therefore, required prior to calibration. After prolonged shear loading and calibration, the error in compression force was consistently small, making the sensors suitable for ergonomics, rehabilitation and industrial design applications where measurements of finger and hand force are needed.
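
    A rough illustration of the calibration idea described above: applied force modelled as a fourth-order polynomial of the current sensor voltage plus a linearly weighted sum of prior outputs, fitted by least squares. The synthetic voltage trace, the number of history terms and the "true" force model are assumptions for demonstration only.

```python
# Fit force = poly4(V_t) + weighted sum of the previous outputs, to compensate
# hysteresis and drift, using ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n, n_hist = 500, 5
v = np.clip(np.cumsum(rng.normal(0, 0.05, n)) + 1.5, 0.1, 3.0)   # sensor voltage
f_true = 2.0 * v**2 + 0.5 * v + 0.3 * np.convolve(v, np.ones(n_hist), "same")

# Design matrix: [1, V, V^2, V^3, V^4, V_{t-1}, ..., V_{t-n_hist}]
rows = []
for t in range(n_hist, n):
    rows.append(np.concatenate(([1.0],
                                [v[t] ** k for k in range(1, 5)],
                                v[t - n_hist:t][::-1])))
X = np.array(rows)
y = f_true[n_hist:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
f_hat = X @ coef
print("RMS calibration error:", np.sqrt(np.mean((f_hat - y) ** 2)))
```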

  17. Repeated measures of body mass index and C-reactive protein in relation to all-cause mortality and cardiovascular disease

    DEFF Research Database (Denmark)

    O'Doherty, Mark G; Jørgensen, Torben; Borglykke, Anders

    2014-01-01

    Obesity has been linked with elevated levels of C-reactive protein (CRP), and both have been associated with increased risk of mortality and cardiovascular disease (CVD). Previous studies have used a single 'baseline' measurement and such analyses cannot account for possible changes in these which...... body mass index (BMI) and CRP with all-cause mortality and CVD. Being overweight (≥25-obese (≥30-....79-0.94) and 0.80 (0.72-0.89). A similar relationship was found, but only for overweight in Glostrup, HR (95 % CI) 0.88 (0.76-1.02); and moderately obese in Tromsø, HR (95 % CI) 0.79 (0.62-1.01). Associations were not evident between repeated measures of BMI and CVD. Conversely, increasing CRP concentrations...

  18. A New Approach in Measuring Local Migration and Population

    Directory of Open Access Journals (Sweden)

    Zongli Tang

    2008-12-01

    and report results of a promising pilot application to Massachusetts. This model operationalizes Ravenstein’s classic “push-pull” paradigm, which posits that local migration is determined by the area’s relative attractiveness or a compound function of distinct factors that push migrants out of the area or pull them in. The attraction factors and changes are measured using varied data sources, including decennial census migration flow data and data on group quarters and school enrollments. This model yields timely population estimates with accuracy superior to the corresponding estimates based on the Census Bureau’s methodology. Such results warrant further applications to test and refine this promising approach.

  19. An Electrostatics Approach to the Determination of Extremal Measures

    Energy Technology Data Exchange (ETDEWEB)

    Meinguet, Jean [Universite Catholique de Louvain, Institut Mathematique, Chemin du Cyclotron 2 (Belgium)], E-mail: meinguet@anma.ucl.ac.be

    2000-12-15

    One of the most important aspects of the minimal energy (or induced equilibrium) problem in the presence of an external field - sometimes referred to as the Gauss variation problem - is the determination of the support of its solution (the so-called extremal measure associated with the field). A simple electrostatic interpretation is presented here, which is apparently new and anyway suggests a novel, rather systematic approach to the solution. By way of illustration, the classical results for Jacobi, Laguerre and Freud weights are explicitly recovered by this alternative method.

  20. A statistical approach designed for finding mathematically defined repeats in shotgun data and determining the length distribution of clone-inserts

    DEFF Research Database (Denmark)

    Zhong, Lan; Zhang, Kunlin; Huang, Xiangang

    2003-01-01

    The large amount of repeats, especially high copy repeats, in the genomes of higher animals and plants makes whole genome assembly (WGA) quite difficult. In order to solve this problem, we tried to identify repeats and mask them prior to assembly even at the stage of genome survey. It is known...... that repeats of different copy number have different probabilities of appearance in shotgun data, so based on this principle, we constructed a statistical model and inferred criteria for mathematically defined repeats (MDRs) at different shotgun coverages. According to these criteria, we developed software......-inserts using our model. In our simulated genomes of human and rice, the length distribution of repeats is different, so their optimal length distributions of clone-inserts were not the same. Thus with optimal length distribution of clone-inserts, a given genome could be assembled better at lower coverage...

  1. A nuclear data approach for the Hubble constant measurements

    Directory of Open Access Journals (Sweden)

    Pritychenko Boris

    2017-01-01

    Full Text Available An extraordinary number of Hubble constant measurements challenges physicists with selection of the best numerical value. The standard U.S. Nuclear Data Program (USNDP) codes and procedures have been applied to resolve this issue. The nuclear data approach has produced the most probable or recommended Hubble constant value of 67.2(69) (km/sec)/Mpc. This recommended value is based on the last 20 years of experimental research and includes contributions from different types of measurements. The present result implies (14.55 ± 1.51) × 10⁹ years as a rough estimate for the age of the Universe. The complete list of recommended results is given and possible implications are discussed.
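
    The USNDP evaluation procedures are more elaborate than this, but the core idea of combining independent measurements into a recommended value can be sketched as an uncertainty-weighted (inverse-variance) average; the H0 values and uncertainties below are illustrative placeholders, not the data set used in the paper.

```python
# Inverse-variance weighted mean with internal/external uncertainties, in the
# spirit of nuclear-data evaluation; the inputs are made-up examples.
import numpy as np

h0 = np.array([67.4, 73.0, 69.8, 67.7])        # km/s/Mpc (placeholder values)
unc = np.array([0.5, 1.0, 0.8, 1.2])

w = 1.0 / unc**2
h0_rec = np.sum(w * h0) / np.sum(w)
u_int = np.sqrt(1.0 / np.sum(w))               # internal uncertainty
chi2 = np.sum(w * (h0 - h0_rec) ** 2) / (len(h0) - 1)
u_ext = u_int * np.sqrt(chi2)                  # external (scatter-scaled)

print(f"recommended H0 = {h0_rec:.2f} +/- {max(u_int, u_ext):.2f} km/s/Mpc")
```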

  2. Measuring core inflation in India: An asymmetric trimmed mean approach

    Directory of Open Access Journals (Sweden)

    Naresh Kumar Sharma

    2015-12-01

    Full Text Available The paper seeks to obtain an optimal asymmetric trimmed mean-based core inflation measure in the class of trimmed mean measures when the distribution of price changes is leptokurtic and skewed to the right for any given period. Several estimators based on asymmetric trimmed mean approach are constructed and estimates generated by use of these estimators are evaluated on the basis of certain established empirical criteria. The paper also provides the method of trimmed mean expression “in terms of percentile score.” This study uses 69 monthly price indices which are constituent components of Wholesale Price Index for the period, April 1994 to April 2009, with 1993–1994 as the base year. Results of the study indicate that an optimally trimmed estimator is found when we trim 29.5% from the left-hand tail and 20.5% from the right-hand tail of the distribution of price changes.
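
    A compact sketch of the estimator described above: an asymmetric trimmed mean that discards 29.5% of the sorted price changes from the left tail and 20.5% from the right tail. The synthetic price-change sample is illustrative, and the WPI component weights, which the paper would use in the sort-and-trim step, are omitted for simplicity.

```python
# Asymmetric trimmed mean of monthly component price changes (unweighted).
import numpy as np

def asymmetric_trimmed_mean(price_changes, trim_left=0.295, trim_right=0.205):
    x = np.sort(np.asarray(price_changes, dtype=float))
    n = len(x)
    lo = int(np.floor(n * trim_left))
    hi = n - int(np.floor(n * trim_right))
    return x[lo:hi].mean()

rng = np.random.default_rng(7)
changes = rng.standard_t(4, size=69) * 0.6 + 0.4   # leptokurtic, skewed-ish sample
print("core inflation estimate:", round(asymmetric_trimmed_mean(changes), 3))
```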

  3. A nuclear data approach for the Hubble constant measurements

    Science.gov (United States)

    Pritychenko, Boris

    2017-09-01

    An extraordinary number of Hubble constant measurements challenges physicists with selection of the best numerical value. The standard U.S. Nuclear Data Program (USNDP) codes and procedures have been applied to resolve this issue. The nuclear data approach has produced the most probable or recommended Hubble constant value of 67.2(69) (km/sec)/Mpc. This recommended value is based on the last 20 years of experimental research and includes contributions from different types of measurements. The present result implies (14.55 ± 1.51) × 10⁹ years as a rough estimate for the age of the Universe. The complete list of recommended results is given and possible implications are discussed.

  4. A nuclear data approach for the Hubble constant measurements

    Energy Technology Data Exchange (ETDEWEB)

    Pritychenko, B. [Brookhaven National Laboratory (BNL), Upton, NY (United States)

    2015-06-09

    An extraordinary number of Hubble constant measurements challenges physicists with selection of the best numerical value. The standard U.S. Nuclear Data Program (USNDP) codes and procedures have been applied to resolve this issue. The nuclear data approach has produced the most probable or recommended Hubble constant value of 67.00(770) (km/sec)/Mpc. This recommended value is based on the last 25 years of experimental research and includes contributions from different types of measurements. The present result implies (14.6±1.7) x 10⁹ years as a rough estimate for the age of the Universe. The complete list of recommended results is given and possible implications are discussed.

  6. A new approach in measuring graduate employability skills

    Science.gov (United States)

    Zakaria, Mohd Hafiz; Yatim, Bidin; Ismail, Suzilah

    2014-06-01

    Globalization makes graduate recruitment more complex because employers believe that a holistic workforce is the key to an organization's success. Currently, although graduates are said to possess specific skills, they still lack employability skills, and this leads to increased training costs for governments and employers. Therefore, graduates' employability skills should be evaluated before they enter the job market. In this study, a valid and reliable instrument embedding a new approach to measuring employability skills was developed using a Situational Judgment Test (SJT). The instrument comprises twelve (12) items measuring communication skill, professional ethics and morality, entrepreneurial skill, critical thinking in problem solving and personal quality. The instrument's validity was established through expert opinion and its reliability (in terms of stability) was assessed with the chi-square test for homogeneity. Generally, the instrument is beneficial to graduates, employers, government agencies, universities, and workforce recruitment agencies when evaluating the level of employability skills.

  7. Duct Leakage Repeatability Testing

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sherman, Max [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-01-01

    Duct leakage often needs to be measured to demonstrate compliance with requirements or to determine energy or Indoor Air Quality (IAQ) impacts. Testing is often done using standards such as ASTM E1554 (ASTM 2013) or California Title 24 (California Energy Commission 2013 & 2013b), but there are several choices of methods available within the accepted standards. Determining which method to use or not use requires an evaluation of those methods in the context of the particular needs. Three factors that are important considerations are the cost of the measurement, the accuracy of the measurement and the repeatability of the measurement. The purpose of this report is to evaluate the repeatability of the three most significant measurement techniques using data from the literature and recently obtained field data. We also briefly discuss the first two factors. The main question to be answered by this study is whether the differences in repeatability between these test methods are large enough to indicate that any of them is so poor that it should be excluded from consideration as an allowed procedure in codes and standards.

  8. Repeated-measure validation of craniofacial metrics from three-dimensional surface scans: application to medical genetics

    Science.gov (United States)

    Lauer, Eric A.; Corner, Brian D.; Li, Peng; Beecher, Robert M.; Deutsch, Curtis

    2002-03-01

    Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).

  9. New Approach for Measured Surface Localization Based on Umbilical Points

    Science.gov (United States)

    Xiao, Xiao-Ping; Yin, Ming; Heng, Liang; Yin, Guo-Fu; Li, Zi-Sheng

    2017-09-01

    Measured surface localization (MSL) is one of the key essentials for the assessment of form error in precision manufacturing. Current research on MSL has focused on the search for the corresponding relation between two surfaces, the performance improvement of localization algorithms and the uncertainty analysis of localization. However, low efficiency, the limitations of localization algorithms and the mismatch of multiple similar feature points in the absence of prior knowledge are common disadvantages of MSL. In order to match feature points quickly and fulfil MSL efficiently, this paper presents a new localization approach for measured surfaces by extracting the generic umbilical points and estimating their single complex variables, describing methods for matching the ambiguous relations at umbilics, presenting the initial localization process for a pair of matched points, and refining MSL on the basis of the closest points obtained for some measured points by an improved directed projection method. In addition, the proposed algorithm is simulated on two different types of surfaces, two different localization types and multiple similar surfaces, and is also tested on part of a machined B-spline surface and a bottle mould with no prior knowledge; finally, the initial and accurate rigid-body transformation matrices, the localization errors between the two surfaces and the execution time are obtained. The experimental results show that the proposed method is feasible, more accurate in localization and high in efficiency. The proposed research can not only improve the accuracy and performance of form error assessment, but also provide an effective guideline for the integration of different types of measured surfaces.

  10. Validation and repeatability of the Limb X Film Measuring Plate

    Institute of Scientific and Technical Information of China (English)

    卫小春; 柴旭峰; 许趁心

    2008-01-01

    Objective To study the efficacy and repeatability of the Limb X Film Measuring Plate (LXMP) in clinical X-ray measurement of limb lengths and angles. Methods The LXMP was designed and manufactured for testing. Three known points in each of 9 templates were taken as gold standards and compared with the measurements of their projected points on the X-ray film to test the efficacy of the LXMP. Twenty-one patients with knee osteoarthritis were selected. The lengths and angles of their lower limbs were measured twice with the help of the LXMP by one observer at different time intervals to test intra-observer repeatability, and two observers each measured the films once at the same time to test inter-observer repeatability. The correlation coefficients, mean absolute differences between the repeated measurements, and error rates were calculated. Results The points on the LXMP could be seen clearly on the X-ray film. Validation tests showed that the differences in angle measurement between the gold standards and the measurements of the projected points on the X-ray film were insignificant, with mean absolute differences of 0.10°-0.21°, error rates of 0.12%-3.15% and r close to 1. The differences in length measurement were also insignificant, with mean absolute differences of 0.05-0.16 cm, error rates of 0.12%-0.28% and r close to 1. The intra-observer repeatability tests for both angles and lengths showed that r was close to 1, with mean absolute differences of 0.55° and 0.1-0.37 cm and error rates of 0.31%-0.97%, while the inter-observer repeatability tests showed that r was close to 1, with mean absolute differences of 0.39° and 0.05-0.13 cm and error rates of 0.16%-0.35%. Conclusion The Limb X Film Measuring Plate can be used to measure limb lengths and angles accurately, with good efficacy and repeatability.
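
    The repeatability statistics reported above (correlation coefficient, mean absolute difference and error rate between repeated measurements) can be computed with a few lines of Python; the angle readings below are invented placeholders, not data from the study.

```python
# Correlation, mean absolute difference and percentage error rate between two
# sets of repeated measurements (same or different observers).
import numpy as np

def repeatability(m1, m2):
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    r = np.corrcoef(m1, m2)[0, 1]
    mad = np.mean(np.abs(m1 - m2))
    error_rate = np.mean(np.abs(m1 - m2) / ((m1 + m2) / 2)) * 100
    return r, mad, error_rate

angles_t1 = [172.0, 168.5, 175.2, 179.8, 181.0]   # degrees, first reading
angles_t2 = [172.4, 168.1, 175.0, 180.3, 180.6]   # degrees, repeat reading
r, mad, err = repeatability(angles_t1, angles_t2)
print(f"r={r:.3f}, mean absolute difference={mad:.2f}, error rate={err:.2f}%")
```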

  11. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp Homan

    2013-06-01

    Full Text Available Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left superior temporal gyrus (STG) and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  13. A Statistical Approach Designed for Finding Mathematically Defined Repeats in Shotgun Data and Determining the Length Distribution of Clone-Inserts

    Institute of Scientific and Technical Information of China (English)

    Lan Zhong; Kunlin Zhang; Xiangang Huang; Peixiang Ni; Yujun Han; Kai Wang; Jun Wang; Songgang Li

    2003-01-01

    The large amount of repeats, especially high copy repeats, in the genomes of higher animals and plants makes whole genome assembly (WGA) quite difficult. In order to solve this problem, we tried to identify repeats and mask them prior to assembly even at the stage of genome survey. It is known that repeats of different copy number have different probabilities of appearance in shotgun data, so based on this principle, we constructed a statistical model and inferred criteria for mathematically defined repeats (MDRs) at different shotgun coverages. According to these criteria, we developed software MDRmasker to identify and mask MDRs in shotgun data. With repeats masked prior to assembly, the speed of assembly was increased with lower error probability. In addition, clone-insert size affects the accuracy of repeat assembly and scaffold construction. We also designed length distribution of clone-inserts using our model. In our simulated genomes of human and rice, the length distribution of repeats is different, so their optimal length distributions of clone-inserts were not the same. Thus with optimal length distribution of clone-inserts, a given genome could be assembled better at lower coverage.

  14. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Smedslund Geir

    2013-02-01

    Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS) (pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, 96 to 71 (73) for fatigue, 57 to 51 (48) for disease activity, 59 to 44 (45) for self-care, and 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
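
    A standard textbook relationship (not necessarily the exact calculation used in this study) for how averaging k correlated repeated measurements per patient reduces the required group size in a two-arm comparison is n_k = n_1 · (1 + (k − 1)ρ)/k, where ρ is the within-patient correlation between repeats; the sketch below applies it for an assumed standardized effect size and correlation.

```python
# Required sample size per group when each patient contributes k correlated
# measurements that are averaged; effect_size is the standardized difference.
from scipy.stats import norm

def n_per_group(effect_size, k=1, rho=0.5, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n1 = 2 * (z / effect_size) ** 2                 # single measurement per patient
    return n1 * (1 + (k - 1) * rho) / k

for k in range(1, 6):
    print(k, round(n_per_group(effect_size=0.5, k=k, rho=0.5)))
```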

  15. The Effects of Repeat Testing, Malingering, and Traumatic Brain Injury on Computerized Measures of Visuospatial Memory Span.

    Science.gov (United States)

    Woods, David L; Wyma, John M; Herron, Timothy J; Yund, E W

    2015-01-01

    Spatial span tests (SSTs) such as the Corsi Block Test (CBT) and the SST of the Wechsler Memory Scale are widely used to assess deficits in spatial working memory. We conducted three experiments to evaluate the test-retest reliability and clinical sensitivity of a new computerized spatial span test (C-SST) that incorporates psychophysical methods to improve the precision of spatial span measurement. In Experiment 1, we analyzed C-SST test-retest reliability in 49 participants who underwent three test sessions at weekly intervals. Intraclass correlation coefficients (ICC) were higher for a psychophysically derived mean span (MnS) metric (0.83) than for the maximal span and total correct metrics used in traditional spatial-span tests. Response times (ReTs) also showed high ICCs (0.93) that correlated negatively with MnS scores and correlated positively with response-time latencies from other tests of processing speed. Learning effects were insignificant. Experiment 2 examined the performance of Experiment 1 participants when instructed to feign symptoms of traumatic brain injury (TBI): 57% showed abnormal MnS z-scores. A MnS z-score cutoff of 3.0 correctly classified 36% of simulated malingerers and 91% of the subgroup of 11 control participants with abnormal spans. Malingerers also made more substitution errors than control participants with abnormal spans (sensitivity = 43%, specificity = 91%). In addition, malingerers showed no evidence of ReT slowing, in contrast to significant abnormalities seen on other malingered tests of processing speed. As a result, differences between ReT z-scores and z-scores on other processing speed tests showed very high sensitivity and specificity in distinguishing malingering and control participants with either normal or abnormal spans. Experiment 3 examined C-SST performance in a group of patients with predominantly mild TBI: neither MnS nor ReT z-scores showed significant group-level abnormalities. The C-SST improves the

  16. Measuring and understanding soil water repellency through novel interdisciplinary approaches

    Science.gov (United States)

    Balshaw, Helen; Douglas, Peter; Doerr, Stefan; Davies, Matthew

    2017-04-01

    Food security and production is one of the key global issues faced by society. It has become ever more essential to work the land efficiently, through better soil management and agronomy, whilst protecting the environment from air and water pollution. The failure of soil to absorb water - soil water repellency - can lead to major environmental problems such as increased overland flow and soil erosion, poor uptake of agricultural chemicals and increased risk of groundwater pollution due to the rapid transfer of contaminants and nutrient leaching through uneven wetting and preferential flow pathways. Understanding the causes of soil hydrophobicity is essential for the development of effective methods for its amelioration, supporting environmental stability and food security. Organic compounds deposited on soil mineral or aggregate surfaces have long been recognised as a major factor in causing soil water repellency. It is widely accepted that the main groups of compounds responsible are long-chain acids, alkanes and other organic compounds with hydrophobic properties. However, when reapplied to sands and soils, the degree of water repellency induced by these compounds and mixtures varied widely with compound type, amount and mixture, in a seemingly unpredictable way. Our research to date involves two new approaches for studying soil wetting. 1) We challenge the theoretical basis of current interpretations of measured water/soil contact angles. Much past and current discussion invokes the Wenzel and Cassie-Baxter models to explain anomalously high contact angles for organics on soils; however, here we propose that these anomalously high measured contact angles are a consequence of measuring a water drop on an irregular non-planar surface rather than of the thermodynamic factors underlying the Cassie-Baxter and Wenzel models. In our analysis we have successfully used a much simpler geometric approach for non-flat surfaces such as soil. 2) Fluorescent and phosphorescent

  17. The effect of technical replicate (repeats) on Nix Pro Color Sensor™ measurement precision for meat: A case-study on aged beef colour stability.

    Science.gov (United States)

    Holman, Benjamin W B; Collins, Damian; Kilgannon, Ashleigh K; Hopkins, David L

    2017-09-04

    The Nix Pro Colour Sensor™ (NIX) can potentially be used to measure meat colour, but procedural guidelines that assure measurement reproducibility and repeatability (precision) must first be established. Technical replicate number (r) will minimise response variation, measurable as the standard error of the predicted mean (SEM), and contribute to improved precision. Consequently, we aimed to explore the effects of r on NIX precision when measuring aged beef colour (colorimetrics; L*, a*, b*, hue and chroma values). Each colorimetric SEM declined with increasing r, indicating improved precision, and followed a diminishing rate of improvement that allowed us to recommend r=7 for meat colour studies using the NIX. This definition was based on practical limitations and a* variability, as additional r would be required if other colorimetrics or advanced levels of precision are necessary. Beef ageing and display period, holding temperature, loin and sampled portion were also found to contribute to colorimetric variation, but were incorporated within our definition of r. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
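
    The diminishing-returns argument behind choosing r = 7 can be illustrated with the usual SEM proportional to 1/sqrt(r) relationship; the between-replicate standard deviation of a* used below is a made-up placeholder, and the study's SEM came from a fitted model rather than this simple formula.

```python
# Show how the standard error of the mean shrinks with more technical
# replicates, and how the incremental gain fades after the first few.
import numpy as np

sd_a_star = 1.2                       # assumed between-replicate SD of a*
for r in range(1, 11):
    sem = sd_a_star / np.sqrt(r)      # SEM shrinks as 1/sqrt(r)
    gain = 0.0 if r == 1 else (sd_a_star / np.sqrt(r - 1) - sem)
    print(f"r={r:2d}  SEM={sem:.3f}  improvement over r-1={gain:.3f}")
```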

  18. Automated genotyping of dinucleotide repeat markers

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Hoffman, E.P. [Carnegie Mellon Univ., Pittsburgh, PA (United States)]|[Univ. of Pittsburgh, PA (United States)

    1994-09-01

    The dinucleotide repeats (i.e., microsatellites) such as CA-repeats are a highly polymorphic, highly abundant class of PCR-amplifiable markers that have greatly streamlined genetic mapping experimentation. It is expected that over 30,000 such markers (including tri- and tetranucleotide repeats) will be characterized for routine use in the next few years. Since only size determination, and not sequencing, is required to determine alleles, in principle, dinucleotide repeat genotyping is easily performed on electrophoretic gels, and can be automated using DNA sequencers. Unfortunately, PCR stuttering with these markers generates not one band for each allele, but a pattern of bands. Since closely spaced alleles must be disambiguated by human scoring, this poses a key obstacle to full automation. We have developed methods that overcome this obstacle. Our model is that the observed data is generated by arithmetic superposition (i.e., convolution) of multiple allele patterns. By quantitatively measuring the size of each component band, and exploiting the unique stutter pattern associated with each marker, closely spaced alleles can be deconvolved; this unambiguously reconstructs the "true" allele bands, with stutter artifact removed. We used this approach in a system for automated diagnosis of (X-linked) Duchenne muscular dystrophy; four multiplexed CA-repeats within the dystrophin gene were assayed on a DNA sequencer. Our method accurately detected small variations in gel migration that shifted the allele size estimate. In 167 nonmutated alleles, 89% (149/167) showed no size variation, 9% (15/167) showed 1 bp variation, and 2% (3/167) showed 2 bp variation. We are currently developing a library of dinucleotide repeat patterns; together with our deconvolution methods, this library will enable fully automated genotyping of dinucleotide repeats from sizing data.
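
    A toy version of the deconvolution idea described above: model the observed lane profile as a convolution of allele weights with a marker-specific stutter pattern and recover the underlying allele bands with non-negative least squares. The stutter proportions, bin layout and profile are invented for illustration and are not the authors' library of patterns.

```python
# Recover "true" allele bands from a stutter-contaminated profile with NNLS.
import numpy as np
from scipy.optimize import nnls

n_bins = 20
stutter = np.array([0.05, 0.15, 1.0])      # -2, -1, 0 repeat units (marker-specific)

# Build the convolution matrix A so that observed = A @ allele_weights
A = np.zeros((n_bins, n_bins))
for j in range(n_bins):
    for k, s in enumerate(stutter):
        i = j - (len(stutter) - 1 - k)     # stutter bands run below the true allele
        if 0 <= i < n_bins:
            A[i, j] = s

true_alleles = np.zeros(n_bins)
true_alleles[[8, 10]] = [1.0, 0.8]         # heterozygote, alleles two bins apart
noise = 0.01 * np.abs(np.random.default_rng(0).normal(size=n_bins))
observed = A @ true_alleles + noise

recovered, _ = nnls(A, observed)
print("recovered allele bins:", np.where(recovered > 0.2)[0])
```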

  19. Measuring energy efficiency in economics: Shadow value approach

    Science.gov (United States)

    Khademvatani, Asgar

    For decades, academic scholars and policy makers have commonly applied a simple average measure, energy intensity, for studying energy efficiency. In contrast, we introduce a distinctive marginal measure called energy shadow value (SV) for modeling energy efficiency drawn on economic theory. This thesis demonstrates energy SV advantages, conceptually and empirically, over the average measure recognizing marginal technical energy efficiency and unveiling allocative energy efficiency (energy SV to energy price). Using a dual profit function, the study illustrates how treating energy as quasi-fixed factor called quasi-fixed approach offers modeling advantages and is appropriate in developing an explicit model for energy efficiency. We address fallacies and misleading results using average measure and demonstrate energy SV advantage in inter- and intra-country energy efficiency comparison. Energy efficiency dynamics and determination of efficient allocation of energy use are shown through factors impacting energy SV: capital, technology, and environmental obligations. To validate the energy SV, we applied a dual restricted cost model using KLEM dataset for the 35 US sectors stretching from 1958 to 2000 and selected a sample of the four sectors. Following the empirical results, predicted wedges between energy price and the SV growth indicate a misallocation of energy use in stone, clay and glass (SCG) and communications (Com) sectors with more evidence in the SCG compared to the Com sector, showing overshoot in energy use relative to optimal paths and cost increases from sub-optimal energy use. The results show that energy productivity is a measure of technical efficiency and is void of information on the economic efficiency of energy use. Decomposing energy SV reveals that energy, capital and technology played key roles in energy SV increases helping to consider and analyze policy implications of energy efficiency improvement. Applying the marginal measure, we also

  20. Measuring e-Government Maturity: A meta-synthesis approach

    Directory of Open Access Journals (Sweden)

    Chaushi Agron

    2015-12-01

    Full Text Available Many governments in the world have created e-government initiatives including developed and developing countries. In order to better understand e-government evolution, different maturity models have been developed by many authors. In this paper the most cited e-government maturity models are analyzed using the meta-synthesis approach. As a result, five stages of e-government maturity are identified. The comparative results show the supported stages by each e-government initiative as important elements in the decision making process. This paper is attempting to show that although there are many models for measuring e-government maturity, they all converge on one common model. The contribution of this paper is in simplifying work for researchers when choosing the right maturity model.

  1. Hydrometeor classification from polarimetric radar measurements: a clustering approach

    Directory of Open Access Journals (Sweden)

    J. Grazioli

    2015-01-01

    Full Text Available A data-driven approach to the classification of hydrometeors from measurements collected with polarimetric weather radars is proposed. In a first step, the optimal number of hydrometeor classes (nopt) that can be reliably identified from a large set of polarimetric data is determined. This is done by means of an unsupervised clustering technique guided by criteria related both to data similarity and to spatial smoothness of the classified images. In a second step, the nopt clusters are assigned to the appropriate hydrometeor class by means of human interpretation and comparisons with the output of other classification techniques. The main innovation in the proposed method is the unsupervised part: the hydrometeor classes are not defined a priori, but they are learned from data. The approach is applied to data collected by an X-band polarimetric weather radar during two field campaigns (from which about 50 precipitation events are used in the present study). Seven hydrometeor classes (nopt = 7) have been found in the data set, and they have been identified as light rain (LR), rain (RN), heavy rain (HR), melting snow (MS), ice crystals/small aggregates (CR), aggregates (AG), and rimed-ice particles (RI).
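
    A simplified stand-in for the unsupervised step described above, using k-means on standardized polarimetric variables and a silhouette criterion to pick the number of classes (the paper's criterion combines data similarity with spatial smoothness, which is not reproduced here); the feature names and synthetic radar gates are assumptions.

```python
# Choose a number of hydrometeor clusters from polarimetric features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Fake gates: [Zh (dBZ), Zdr (dB), Kdp (deg/km), RhoHV]
X = np.vstack([
    rng.normal([25, 0.8, 0.3, 0.98], [5, 0.3, 0.2, 0.01], (300, 4)),   # rain-like
    rng.normal([15, 0.2, 0.0, 0.97], [4, 0.2, 0.1, 0.01], (300, 4)),   # snow-like
    rng.normal([45, 2.5, 1.5, 0.93], [5, 0.5, 0.5, 0.02], (300, 4)),   # heavy rain
])
Xs = StandardScaler().fit_transform(X)

scores = {}
for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
    scores[k] = silhouette_score(Xs, labels)
best_k = max(scores, key=scores.get)
print("selected number of hydrometeor classes:", best_k)
```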

  2. A robust polynomial fitting approach for contact angle measurements.

    Science.gov (United States)

    Atefi, Ehsan; Mann, J Adin; Tavana, Hossein

    2013-05-14

    Polynomial fitting to the drop profile offers an alternative to well-established drop shape techniques for contact angle measurements from sessile drops without a need for liquid physical properties. Here, we evaluate the accuracy of contact angles resulting from fitting polynomials of various orders to drop profiles in a Cartesian coordinate system, over a wide range of contact angles. We develop a differentiator mask to automatically find the range of required number of pixels from a drop profile over which a stable contact angle is obtained. The polynomial order that results in the longest stable regime and returns the lowest standard error and the highest correlation coefficient is selected to determine drop contact angles. We find that, unlike previous reports, a single polynomial order cannot be used to accurately estimate a wide range of contact angles and that a larger order polynomial is needed for drops with larger contact angles. Our method accurately returns contact angles over a wide range using a fourth-order polynomial. We show that this approach returns dynamic contact angles with less than 0.7° error as compared to ADSA-P, for the solid-liquid systems tested. This new approach is a powerful alternative to drop shape techniques for estimating contact angles of drops regardless of drop symmetry and without a need for liquid properties.
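
    The core idea can be sketched as fitting a polynomial to the drop profile near the contact point and reading the contact angle from the tangent slope at the baseline; the synthetic spherical-cap profile, the pixel window and the fixed polynomial order below are assumptions and omit the paper's differentiator mask and order-selection step.

```python
# Estimate a contact angle by fitting a polynomial to a drop profile near the
# contact line and taking the tangent slope there.
import numpy as np

theta_true = np.deg2rad(70)                      # synthetic drop with a 70 deg angle
R = 100.0                                        # radius of the spherical cap (px)
x = np.linspace(0, R * np.sin(theta_true), 200)  # from contact point toward apex
y = np.sqrt(R**2 - (x - R * np.sin(theta_true))**2) - R * np.cos(theta_true)

window = 60                                      # pixels used near the contact line
coeffs = np.polyfit(x[:window], y[:window], deg=4)
slope = np.polyval(np.polyder(coeffs), x[0])     # dy/dx at the contact point
theta_est = np.degrees(np.arctan(slope))
print(f"estimated contact angle: {theta_est:.1f} deg (true 70.0)")
```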

  3. An integrated approach to biomonitoring exposure to styrene and styrene-(7,8)-oxide using a repeated measurements sampling design.

    Science.gov (United States)

    Fustinoni, S; Campo, L; Manini, P; Buratti, M; Waidyanatha, S; De Palma, G; Mutti, A; Foa, V; Colombi, A; Rappaport, S M

    2008-09-01

    The aim of this work was to investigate urinary analytes and haemoglobin and albumin adducts as biomarkers of exposure to airborne styrene (Sty) and styrene-(7,8)-oxide (StyOX) and to evaluate the influence of smoking habit and genetic polymorphism of the metabolic enzymes GSTM1 and GSTT1 on these biomarkers. We obtained three or four air and urine samples from each exposed worker (eight reinforced plastics workers and 13 varnish workers), one air and one urine sample from each of 22 control workers (automobile mechanics), and one blood sample from all subjects. Median levels of exposure to Sty and StyOX, respectively, were 18.2 mg m(-3) and 133 microg m(-3) for reinforced plastics workers, 3.4 mg m(-3) and 12 microg m(-3) for varnish workers, and <0.3 mg m(-3) and <5 microg m(-3) for controls. Urinary levels of styrene, mandelic acid, phenylglyoxylic acid, phenylglycine (PHG), 4-vinylphenol (VP) and mercapturic acids (M1+M2), as well as cysteinyl adducts of serum albumin (but not those of haemoglobin), were significantly associated with exposure status (controls

  4. Lumbar spine and pelvic posture between standing and sitting: a radiologic investigation including reliability and repeatability of the lumbar lordosis measure.

    Science.gov (United States)

    De Carvalho, Diana E; Soave, David; Ross, Kim; Callaghan, Jack P

    2010-01-01

    Sitting has been identified as a cause of mechanical low back pain. The purpose of this study was to use plain film x-rays to measure lumbar spine and pelvic posture differences between standing and sitting. Eight male subjects were radiographed standing and sitting in an automobile seat. Measures of lumbar lordosis, intervertebral disk angles, lumbosacral angle, lumbosacral lordosis, and sacral tilt were completed. One-way analysis of variance (alpha = .05) was conducted on the variables stated above. A Bland-Altman analysis was conducted to assess agreement and repeatability of the lumbar lordosis angle using 2 raters. Lumbar lordosis values in standing (average, 63 degrees +/- 15 degrees) and sacral inclination (average, 43 degrees +/- 10 degrees) decreased by 43 degrees and 44 degrees, respectively, in sitting. Intervertebral joint angles in sitting underwent substantial flexion (L1/L2-5 degrees [+/-3 degrees], L2/L3-7 degrees [+/-3 degrees], L3/L4-8 degrees [+/-3 degrees], L4/L5-13 degrees [+/-3 degrees], and L5/S1-4 degrees [+/-10 degrees]). Measures of lumbar lordosis; intervertebral disk angles between L2/L3, L3/L4, and L4/L5; lumbosacral lordosis; lumbosacral angle; and sacral tilt were significantly decreased between standing and sitting. These changes in posture should be investigated because they may play a role in preventing injury and low back pain. Copyright 2010 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.

  5. Differentiated cell behavior: a multiscale approach using measure theory.

    Science.gov (United States)

    Colombi, Annachiara; Scianna, Marco; Tosin, Andrea

    2015-11-01

    This paper deals with the derivation of a collective model of cell populations out of an individual-based description of the underlying physical particle system. By looking at the spatial distribution of cells in terms of time-evolving measures, rather than at individual cell paths, we obtain an ensemble representation stemming from the phenomenological behavior of the single component cells. In particular, as a key advantage of our approach, the scale of representation of the system, i.e., microscopic/discrete vs. macroscopic/continuous, can be chosen a posteriori according only to the spatial structure given to the aforesaid measures. The paper focuses in particular on the use of different scales based on the specific functions performed by cells. A two-population hybrid system is considered, where cells with a specialized/differentiated phenotype are treated as a discrete population of point masses while unspecialized/undifferentiated cell aggregates are represented by a continuous approximation. Numerical simulations and analytical investigations emphasize the role of some biologically relevant parameters in determining the specific evolution of such a hybrid cell system.

  6. Fourier transform approach in modulation technique of experimental measurements.

    Science.gov (United States)

    Khazimullin, M V; Lebedev, Yu A

    2010-04-01

    An application of the Fourier transform approach in the modulation technique of experimental studies is considered. This method has obvious advantages compared with the traditional lock-in amplifier technique: a simple experimental setup, quickly available information on all the required harmonics, and high speed of data processing using the fast Fourier transform algorithm. A computationally simple, fast and accurate Fourier coefficients interpolation (FCI) method has been implemented to obtain useful information from the harmonics of a multimode signal. Our analysis shows that in this case the FCI method has a systematic error (bias) in the estimation of signal parameters, which becomes essential for short data sets. Hence, a new differential Fourier coefficients interpolation (DFCI) method has been suggested, which is less sensitive to the presence of several modes in a signal. The analysis has been confirmed by simulations and by measurements of quartz wedge birefringence by means of a photoelastic modulator. The obtained bias, noise level, and measuring speed are comparable to, and even better than, the lock-in amplifier technique. Moreover, the presented DFCI method is expected to be a promising candidate for use in actively developing imaging systems based on the modulation technique, which require fast digital signal processing of large data sets.
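
    A bare-bones illustration of replacing a lock-in amplifier with FFT-based harmonic extraction; the FCI/DFCI interpolation refinements described above are not reproduced, and the sampling parameters and signal are illustrative.

```python
# Recover the amplitudes of the modulation harmonics of a digitized signal
# with an FFT (frequencies chosen to land exactly on FFT bins).
import numpy as np

fs, f_mod, T = 100_000.0, 1_000.0, 0.1           # sample rate, modulation freq, duration
t = np.arange(0, T, 1 / fs)
signal = 0.8 * np.sin(2 * np.pi * f_mod * t) + 0.3 * np.sin(2 * np.pi * 2 * f_mod * t)
signal += 0.05 * np.random.default_rng(0).normal(size=t.size)

window = np.hanning(t.size)
spectrum = np.fft.rfft(signal * window)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for n in (1, 2):                                 # first and second harmonics
    idx = np.argmin(np.abs(freqs - n * f_mod))
    amp = 2 * np.abs(spectrum[idx]) / np.sum(window)
    print(f"harmonic {n}: estimated amplitude {amp:.3f}")
```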

  7. Constructing three emotion knowledge tests from the invariant measurement approach

    Directory of Open Access Journals (Sweden)

    Ana R. Delgado

    2017-09-01

    Full Text Available Background Psychological constructionist models like the Conceptual Act Theory (CAT) postulate that complex states such as emotions are composed of basic psychological ingredients that are more clearly respected by the brain than basic emotions. The objective of this study was the construction and initial validation of Emotion Knowledge measures from the CAT frame by means of an invariant measurement approach, the Rasch Model (RM). Psychological distance theory was used to inform item generation. Methods Three EK tests—emotion vocabulary (EV), close emotional situations (CES) and far emotional situations (FES)—were constructed and tested with the RM in a community sample of 100 females and 100 males (age range: 18–65), both separately and conjointly. Results It was corroborated that data-RM fit was sufficient. Then, the effect of type of test and emotion on Rasch-modelled item difficulty was tested. Significant effects of emotion on EK item difficulty were found, but the only statistically significant difference was that between “happiness” and the remaining emotions; neither type of test, nor interaction effects on EK item difficulty were statistically significant. The testing of gender differences was carried out after corroborating that differential item functioning (DIF) would not be a plausible alternative hypothesis for the results. No statistically significant sex-related differences were found in EV, CES, FES, or total EK. However, the sign of d indicates that female participants were consistently better than male ones, a result that will be of interest for future meta-analyses. Discussion The three EK tests are ready to be used as components of a higher-level measurement process.

  8. Technique of measuring the echo for RF repeaters in single frequency networks

    Institute of Scientific and Technical Information of China (English)

    陈哲; 宋葛

    2011-01-01

    The same-frequency retransmission mode of a repeater suffers from mutual coupling between the receiving and transmitting antennas: while receiving the signal from the main base station, the receiving antenna also picks up the signal emitted by the repeater's transmitting antenna. Against the technical background of isolation tuning for same-frequency RF repeaters, this paper presents a method of measuring the echo for RF repeaters in single frequency networks. The method uses a CAZAC code for a fast initial estimate of the echo channel characteristics, and an adaptive LMS algorithm to further track changes in the channel. After MATLAB simulation, the method was implemented with an FPGA on a digital hardware platform, and the system was tested with the digital television signal CMMB as the source; the results show that this measurement method is accurate and practical.
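
    The adaptive part of the method can be sketched as an LMS filter tracking the echo (feedback) channel from a known reference sequence; the CAZAC-based initial estimate is replaced here by a zero initialization, and the channel taps, step size and signals are made up for illustration.

```python
# LMS estimation of the echo channel between a repeater's transmit and
# receive antennas, given a known transmitted reference sequence.
import numpy as np

rng = np.random.default_rng(5)
N, taps, mu = 5000, 8, 0.01
echo_channel = np.array([0.6, 0.0, -0.25, 0.1, 0.0, 0.05, 0.0, 0.02])

tx = np.sign(rng.normal(size=N))                  # known transmitted reference
rx = np.convolve(tx, echo_channel, mode="full")[:N] + 0.01 * rng.normal(size=N)

w = np.zeros(taps)                                # adaptive estimate of the echo
for n in range(taps, N):
    x = tx[n - taps + 1:n + 1][::-1]              # most recent samples, newest first
    e = rx[n] - w @ x                             # error between received and model
    w += 2 * mu * e * x                           # LMS update

print("estimated echo taps:", np.round(w, 3))
```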

  9. DESIGNING COMPANY PERFORMANCE MEASUREMENT SYSTEM USING BALANCE SCORECARD APPROACH

    Directory of Open Access Journals (Sweden)

    Cecep Mukti Soleh

    2015-05-01

    Full Text Available This research aimed to design how to measure company performance by using the balance scorecard approach in the coal transportation services industry. Depth interviews were used to obtain qualitative data for the determination of strategic objectives, key performance indicators, strategic initiatives, and in-charge units for each balance scorecard perspective, while quantitative data were obtained from weighting through questionnaires and analyzed using paired comparison to identify the perspective that most affected the performance of the company. To measure the achievement of corporate performance, each KPI used (1) a scoring system with higher-is-better, lower-is-better and precise-is-better methods; (2) a traffic light system with green, yellow and red for identification of target achievement. The results show that, among the balance scorecard perspectives, the greatest influences on the overall performance of the company are the customer perspective (31%), the financial perspective (29%), internal business processes (21%), and learning and growth (19%). Keywords: balance scorecard, paired comparison, coal transportation service
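
    The weighting step can be sketched as deriving perspective weights from a paired-comparison matrix by the geometric-mean method; the matrix entries below are illustrative (chosen so that the weights land near the reported percentages) and are not the questionnaire data.

```python
# Derive perspective weights from a pairwise-comparison matrix.
import numpy as np

perspectives = ["customer", "financial", "internal process", "learning & growth"]
# A[i, j] > 1 means perspective i is judged more important than perspective j.
A = np.array([
    [1.00, 1.10, 1.50, 1.60],
    [0.90, 1.00, 1.40, 1.50],
    [0.67, 0.71, 1.00, 1.10],
    [0.63, 0.67, 0.90, 1.00],
])

gm = np.prod(A, axis=1) ** (1 / A.shape[1])   # row geometric means
weights = gm / gm.sum()
for name, w in zip(perspectives, weights):
    print(f"{name:18s} {w:.2f}")
```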

  10. Calculation of measurement uncertainty in quantitative analysis of genetically modified organisms using intermediate precision--a practical approach.

    Science.gov (United States)

    Zel, Jana; Gruden, Kristina; Cankar, Katarina; Stebih, Dejan; Blejec, Andrej

    2007-01-01

    Quantitative characterization of nucleic acids is becoming a frequently used method in routine analysis of biological samples, one use being the detection of genetically modified organisms (GMOs). Measurement uncertainty is an important factor to be considered in these analyses, especially where precise thresholds are set in regulations. Intermediate precision, defined as a measure between repeatability and reproducibility, is a parameter describing the real situation in laboratories dealing with quantitative aspects of molecular biology methods. In this paper, we describe the top-down approach to calculating measurement uncertainty, using intermediate precision, in routine GMO testing of food and feed samples. We illustrate its practicability in defining compliance of results with regulations. The method described is also applicable to other molecular methods for a variety of laboratory diagnostics where quantitative characterization of nucleic acids is needed.
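
    In essence, the top-down calculation turns a pooled standard deviation obtained under intermediate-precision conditions into an expanded uncertainty. A minimal sketch with invented control-sample data follows (the 0.9 % figure refers to the EU labelling threshold and is used here only to illustrate the compliance decision):

        import numpy as np

        # Hypothetical GMO quantification results (% GM content) for one control sample,
        # measured on different days (intermediate-precision conditions)
        results = np.array([
            [0.92, 0.95, 0.89],   # day 1
            [1.02, 0.97, 0.99],   # day 2
            [0.88, 0.91, 0.94],   # day 3
            [1.05, 1.00, 0.96],   # day 4
        ])

        mean = results.mean()
        s_ip = results.std(ddof=1)   # pooled SD under intermediate precision (simplified top-down estimate)
        u_rel = s_ip / mean          # relative standard uncertainty
        U_rel = 2 * u_rel            # expanded uncertainty, coverage factor k = 2 (~95 %)

        measured = 0.85              # a routine sample result, % GM content
        upper = measured * (1 + U_rel)
        print(f"u_rel = {u_rel:.3f}, U_rel = {U_rel:.3f}")
        print(f"result: {measured:.2f} % +/- {measured * U_rel:.2f} % (k = 2)")
        print("below the 0.9 % threshold" if upper < 0.9 else "cannot be declared below the 0.9 % threshold")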

  11. All-photonic quantum repeaters

    Science.gov (United States)

    Azuma, Koji; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2015-01-01

    Quantum communication holds promise for unconditionally secure transmission of secret messages and faithful transfer of unknown quantum states. Photons appear to be the medium of choice for quantum communication. Owing to photon losses, robust quantum communication over long lossy channels requires quantum repeaters. It is widely believed that a necessary and highly demanding requirement for quantum repeaters is the existence of matter quantum memories. Here we show that such a requirement is, in fact, unnecessary by introducing the concept of all-photonic quantum repeaters based on flying qubits. In particular, we present a protocol based on photonic cluster-state machine guns and a loss-tolerant measurement equipped with local high-speed active feedforwards. We show that, with such all-photonic quantum repeaters, the communication efficiency scales polynomially with the channel distance. Our result paves a new route towards quantum repeaters with efficient single-photon sources rather than matter quantum memories. PMID:25873153

  12. Seasonal variation in objectively measured physical activity, sedentary time, cardio-respiratory fitness and sleep duration among 8–11 year-old Danish children: a repeated-measures study

    DEFF Research Database (Denmark)

    Hjorth, Mads F.; Chaput, Jean-Philippe; Michaelsen, Kim;

    2013-01-01

    BACKGROUND: Understanding fluctuations in lifestyle indicators is important to identify relevant time periods to intervene in order to promote a healthy lifestyle; however, objective assessment of multiple lifestyle indicators has never been done using a repeated-measures design. The primary aim...... was, therefore, to examine between-season and within-week variation in physical activity, sedentary behaviour, cardio-respiratory fitness and sleep duration among 8–11 year-old children. METHODS: A total of 1021 children from nine Danish schools were invited to participate and 834 accepted. Due...

  13. Measuring functional connectivity in stroke: Approaches and considerations.

    Science.gov (United States)

    Siegel, Joshua S; Shulman, Gordon L; Corbetta, Maurizio

    2017-08-01

    Recent research has demonstrated the importance of global changes to the functional organization of brain network following stroke. Resting functional magnetic resonance imaging (R-fMRI) is a non-invasive tool that enables the measurement of functional connectivity (FC) across the entire brain while placing minimal demands on the subject. For these reasons, it is a uniquely appealing tool for studying the distant effects of stroke. However, R-fMRI studies rely on a number of premises that cannot be assumed without careful validation in the context of stroke. Here, we describe strategies to identify and mitigate confounds specific to R-fMRI research in cerebrovascular disease. Five main topics are discussed: (a) achieving adequate co-registration of lesioned brains, (b) identifying and removing hemodynamic lags in resting BOLD, (c) identifying other vascular disruptions that affect the resting BOLD signal, (d) selecting an appropriate control cohort, and (e) acquiring sufficient fMRI data to reliably identify FC changes. For each topic, we provide guidelines for steps to improve the interpretability and reproducibility of FC-stroke research. We include a table of confounds and approaches to identify and mitigate each. Our recommendations extend to any research using R-fMRI to study diseases that might alter cerebrovascular flow and dynamics or brain anatomy.
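
    As one concrete example for point (b), a hemodynamic lag can be indexed by the lag at which a regional BOLD time series correlates most strongly with the global (mean) signal; the sketch below uses simulated data and an assumed TR, and is not the authors' pipeline.

        import numpy as np

        rng = np.random.default_rng(6)
        tr, n_vol = 2.0, 300                       # repetition time (s), number of volumes
        global_sig = rng.standard_normal(n_vol)
        true_lag = 3                               # region lags the global signal by 3 TRs (6 s)
        region = np.roll(global_sig, true_lag) + 0.5 * rng.standard_normal(n_vol)

        a = (region - region.mean()) / region.std()
        b = (global_sig - global_sig.mean()) / global_sig.std()
        xcorr = np.correlate(a, b, mode="full") / n_vol
        lags = np.arange(-(n_vol - 1), n_vol)
        plausible = np.abs(lags) <= 5              # restrict to physiologically plausible lags (+/- 5 TRs)
        best = lags[plausible][np.argmax(xcorr[plausible])]
        print(f"estimated hemodynamic lag: {best} TRs = {best * tr:.1f} s")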

  14. Within-Subject Associations between Mood Dimensions and Non-exercise Activity: An Ambulatory Assessment Approach Using Repeated Real-Time and Objective Data.

    Science.gov (United States)

    Reichert, Markus; Tost, Heike; Reinhard, Iris; Zipf, Alexander; Salize, Hans-Joachim; Meyer-Lindenberg, Andreas; Ebner-Priemer, Ulrich W

    2016-01-01

    A physically active lifestyle has been related to positive health outcomes and high life expectancy, but the psychological mechanisms underlying the maintenance of physical activity are rarely investigated. Tremendous technological progress yielding sophisticated methodological approaches, i.e., ambulatory assessment, has recently enabled the study of these mechanisms in everyday life. In practice, accelerometers allow physical activity to be monitored continuously and objectively, and their combination with e-diaries makes it feasible to repeatedly assess mood states in real time and real life and to relate them to physical activity. This state-of-the-art methodology comes with several advantages, such as bypassing the systematic distortions of retrospective methods, avoiding distortions seen in laboratory settings, and providing an objective physical activity assessment. Most importantly, ambulatory assessment studies make it possible to analyze how physical activity and mood wax and wane within persons over time, in contrast to existing studies on physical activity and mood, which have mostly investigated between-person associations. However, there are very few studies on how mood dimensions (i.e., feeling well, energetic, and calm) drive non-exercise activity (NEA; such as climbing stairs) within persons. Recent reviews argued that some of these studies have methodological limitations, e.g., scarcely representative samples, short study periods, physical activity assessment via self-reports, and low sampling frequencies. To overcome these limitations, we conducted an ambulatory assessment study in a community-based sample of 106 adults over 1 week. Participants were asked to report mood ratings on e-diaries and to wear an accelerometer in daily life. We conducted multilevel analyses to investigate whether mood predicted NEA, which was defined as the mean acceleration within the 10-min interval directly following an e-diary assessment. Additionally, we analyzed the effects of NEA on different time frames
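
    The within-person question ("does momentary mood predict the subsequent 10-minute acceleration?") is naturally cast as a mixed (multilevel) model with a random intercept per participant. A minimal sketch with simulated data; variable names and effect sizes are illustrative assumptions, not the study's results.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated stand-in for the e-diary + accelerometer data (106 adults, repeated prompts)
        rng = np.random.default_rng(1)
        n_subj, n_obs = 106, 40
        subject = np.repeat(np.arange(n_subj), n_obs)
        valence, energetic, calm = (rng.normal(size=n_subj * n_obs) for _ in range(3))
        u = rng.normal(scale=0.5, size=n_subj)[subject]        # person-level random intercept
        nea = 0.3 * energetic + 0.1 * valence + u + rng.normal(size=n_subj * n_obs)

        df = pd.DataFrame(dict(subject=subject, valence=valence,
                               energetic=energetic, calm=calm, nea=nea))

        # Random-intercept multilevel model: within-person mood predicting subsequent NEA
        fit = smf.mixedlm("nea ~ valence + energetic + calm", df, groups=df["subject"]).fit()
        print(fit.summary())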

  15. A Combined Approach to Measure Micropollutant Behaviour during Riverbank Filtration

    Science.gov (United States)

    van Driezum, Inge; Saracevic, Ernis; Derx, Julia; Kirschner, Alexander; Sommer, Regina; Farnleitner, Andreas; Blaschke, Alfred Paul

    2016-04-01

    Riverbank filtration (RBF) systems are widely used as a natural treatment process. The advantages of RBF over surface water abstraction include the elimination of, for example, suspended solids, biodegradable compounds (such as specific micropollutants), bacteria and viruses (Hiscock and Grischek, 2002). However, in contrast to its importance, remarkably little is known about the respective external (e.g. industrial or municipal sewage) and internal (e.g. wildlife and agricultural influence) sources of contaminants, the environmental availability and fate of the various hazardous substances, and their potential transport during soil and aquifer passage. The goal of this study is to gain insight into the behaviour of various micropollutants and microbial indicators during riverbank filtration. Field measurements were combined with numerical modelling approaches. The study area comprises an alluvial backwater and floodplain area downstream of Vienna. The river is highly dynamic, with discharges ranging from 900 m3/s during low flow to 11000 m3/s during flood events. Samples were taken in several monitoring wells along a transect extending from the river towards a backwater river in the floodplain. Three of the piezometers were situated within the first 20 meters of the river in order to obtain information about micropollutant behaviour close to the river. A total of 9 different micropollutants were analysed in grab samples taken under different river flow conditions (n=33). Following enrichment using SPE, analysis was performed using high-performance liquid chromatography-tandem mass spectrometry. Faecal indicators (E. coli and enterococci) and bacterial spores were enumerated in sample volumes of 1 L each using cultivation-based methods (ISO 16649-1, ISO 7899-2:2000 and ISO 6222). The analysis showed that some compounds, e.g. ibuprofen and diclofenac, were found only in the river; these compounds were already degraded within the first ten meters of the river. Analysis of

  16. Estimating summary measures of health: a structured workbook approach

    Directory of Open Access Journals (Sweden)

    Le Petit Christel

    2005-05-01

    Conclusion The structured workbook approach offers researchers an efficient, easy-to-use, and easy-to-understand set of tools for estimating HALY and PAF summary measures for their country or region of interest.

  17. A New Approach in Measuring Local Migration and Population

    Directory of Open Access Journals (Sweden)

    Tang, Zongli

    2008-01-01

    Full Text Available This paper presents a new model for measuring local migration and population and reports the results of a promising pilot application to Massachusetts. The model operationalizes Ravenstein's classic "push-pull" paradigm, which posits that local migration is determined by an area's relative attractiveness, or a compound function of distinct factors that push migrants out of the area or pull them in. The attraction factors and their changes are measured using varied data sources, including decennial census migration flow data and data on group quarters and school enrollments. The model yields timely population estimates with accuracy superior to the corresponding estimates based on the Census Bureau's methodology. These results warrant further applications to test and refine this promising approach.

  18. Evaluation of the running-based anaerobic sprint test as a measure of repeated sprint ability in collegiate-level soccer players.

    Science.gov (United States)

    Keir, Daniel A; Thériault, Francis; Serresse, Olivier

    2013-06-01

    Repeated sprint ability (RSA) refers to an individual's ability to perform maximal sprints of short duration in succession with little recovery between sprints. The running-based anaerobic sprint test (RAST) has been adapted from the Wingate anaerobic test (WAnT) protocol as a tool to assess RSA and anaerobic power. The purpose of this study was to evaluate the relationship between performance variables and physiological responses obtained during the RAST and the WAnT using 8 collegiate-level soccer players. Participants performed a single trial of both the WAnT and the RAST. Breath-by-breath gas exchange was monitored throughout each trial, and blood lactate (BL) measures were recorded postexercise. The oxygen uptake (VO2) profile suggested that the RAST required greater contributions from aerobic metabolism although there was no difference in VO2peak (p < 0.05). Peak BL values were also similar between the RAST and the WAnT (p < 0.05). Neither peak physiological values nor performance variables (peak and mean power) were significantly correlated between protocols. The weak association in physiological responses indicates that different combinations of metabolic contributions exist between protocols, suggesting that individual performances on each test are not related in collegiate soccer players. Further studies on these relationships with players of other competitive levels and team sport athletes are warranted.

  19. Using the American alligator and a repeated-measures design to place constraints on in vivo shoulder joint range of motion in dinosaurs and other fossil archosaurs.

    Science.gov (United States)

    Hutson, Joel D; Hutson, Kelda N

    2013-01-15

    Using the extant phylogenetic bracket of dinosaurs (crocodylians and birds), recent work has reported that elbow joint range of motion (ROM) studies of fossil dinosaur forearms may be providing conservative underestimates of fully fleshed in vivo ROM. As humeral ROM occupies a more central role in forelimb movements, the placement of quantitative constraints on shoulder joint ROM could improve fossil reconstructions. Here, we investigated whether soft tissues affect the more mobile shoulder joint in the same manner in which they affect elbow joint ROM in an extant archosaur. This test involved separately and repeatedly measuring humeral ROM in Alligator mississippiensis as soft tissues were dissected away in stages to bare bone. Our data show that the ROMs of humeral flexion and extension, as well as abduction and adduction, both show a statistically significant increase as flesh is removed, but then decrease when the bones must be physically articulated and moved until they separate from one another and/or visible joint surfaces. A similar ROM pattern is inferred for humeral pronation and supination. All final skeletonized ROMs were less than initial fully fleshed ROMs. These results are consistent with previously reported elbow joint ROM patterns from the extant phylogenetic bracket of dinosaurs. Thus, studies that avoid separation of complementary articular surfaces may be providing fossil shoulder joint ROMs that underestimate in vivo ROM in dinosaurs, as well as other fossil archosaurs.

  20. Knowledge and skill retention of in-service versus preservice nursing professionals following an informal training program in pediatric cardiopulmonary resuscitation: a repeated-measures quasiexperimental study.

    Science.gov (United States)

    Sankar, Jhuma; Vijayakanthi, Nandini; Sankar, M Jeeva; Dubey, Nandkishore

    2013-01-01

    Our objective was to compare the impact of a training program in pediatric cardiopulmonary resuscitation (CPR) on the knowledge and skills of in-service and preservice nurses at prespecified time points. This repeated-measures quasiexperimental study was conducted in the pediatric emergency and ICU of a tertiary care teaching hospital between January and March 2011. We assessed the baseline knowledge and skills of nursing staff (in-service nurses) and final year undergraduate nursing students (preservice nurses) using a validated questionnaire and a skill checklist, respectively. The participants were then trained on pediatric CPR using standard guidelines. The knowledge and skills were reassessed immediately after training and at 6 weeks after training. A total of 74 participants-28 in-service and 46 preservice professionals-were enrolled. At initial assessment, in-service nurses were found to have insignificant higher mean knowledge scores (6.6 versus 5.8, P = 0.08) while the preservice nurses had significantly higher skill scores (6.5 versus 3.2, P nurses performing better in knowledge test (10.5 versus 9.1, P = 0.01) and the preservice nurses performing better in skill test (9.8 versus 7.4, P skills of in-service and preservice nurses in pediatric CPR improved with training. In comparison to preservice nurses, the in-service nurses seemed to retain knowledge better with time than skills.

  1. Direct and accurate measurement of CAG repeat configuration in the ataxin-1 (ATXN-1) gene by "dual-fluorescence labeled PCR-restriction fragment length analysis".

    Science.gov (United States)

    Lin, Jiang X; Ishikawa, Kinya; Sakamoto, Masaki; Tsunemi, Taiji; Ishiguro, Taro; Amino, Takeshi; Toru, Shuta; Kondo, Ikuko; Mizusawa, Hidehiro

    2008-01-01

    Spinocerebellar ataxia type 1 (SCA1; OMIM: #164400) is an autosomal dominant cerebellar ataxia caused by an expansion of the CAG repeat, which encodes polyglutamine, in the ataxin-1 (ATXN1) gene. The length of the polyglutamine tract in the ATXN1 protein is the critical determinant of the pathogenesis of this disease. Molecular diagnosis of SCA1 is usually undertaken by assessing the length of the CAG repeat configuration using primers spanning this configuration. However, this conventional method may potentially lead to misdiagnosis in assessing polyglutamine-encoding CAG repeat length, since CAT interruptions may be present within the CAG repeat configuration, not only in normal controls but also in neurologically symptomatic subjects. We developed a new method for assessing actual CAG repeat numbers not interrupted by CAT sequences. By polymerase chain reaction with a primer pair labeled with two different fluorophores, followed by restriction enzyme digestion with SfaNI, which recognizes the sequence "GCATC(N)5", the lengths of the actual CAG repeats that encode polyglutamine were detected directly. We named this method "dual-fluorescence labeled PCR-restriction fragment length analysis". We found that the numbers of actual CAG repeats encoding polyglutamine do not overlap between our cohorts of normal chromosomes (n=385) and SCA1 chromosomes (n=5). We conclude that the present method is useful for the molecular diagnosis of SCA1.
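
    The distinction the assay draws — total repeat length versus the longest uninterrupted, polyglutamine-encoding CAG tract — can be mimicked in silico with a simple pattern search; the sequences below are hypothetical, not patient data.

        import re

        def longest_pure_cag(seq: str) -> int:
            """Length, in repeat units, of the longest CAG run not interrupted by CAT."""
            runs = re.findall(r"(?:CAG)+", seq.upper())
            return max((len(r) // 3 for r in runs), default=0)

        interrupted = "CAG" * 4 + "CAT" + "CAG" * 3      # conventional sizing would report 8 units
        uninterrupted = "CAG" * 44

        print(longest_pure_cag(interrupted))    # 4
        print(longest_pure_cag(uninterrupted))  # 44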

  2. Simulated Conversations With Virtual Humans to Improve Patient-Provider Communication and Reduce Unnecessary Prescriptions for Antibiotics: A Repeated Measure Pilot Study

    Science.gov (United States)

    2017-01-01

    Background Despite clear evidence that antibiotics do not cure viral infections, the problem of unnecessary prescribing of antibiotics in ambulatory care persists, and in some cases, prescribing patterns have increased. The overuse of antibiotics for treating viral infections has created numerous economic and clinical consequences including increased medical costs due to unnecessary hospitalizations, antibiotic resistance, disruption of gut bacteria, and obesity. Recent research has underscored the importance of collaborative patient-provider communication as a means to reduce the high rates of unnecessary prescriptions for antibiotics. However, most patients and providers do not feel prepared to engage in such challenging conversations. Objectives The aim of this pilot study was to assess the ability of a brief 15-min simulated role-play conversation with virtual humans to serve as a preliminary step to help health care providers and patients practice, and learn how to engage in effective conversations about antibiotics overuse. Methods A total of 69 participants (35 providers and 34 patients) completed the simulation once in one sitting. A pre-post repeated measures design was used to assess changes in patients’ and providers’ self-reported communication behaviors, activation, and preparedness, intention, and confidence to effectively communicate in the patient-provider encounter. Changes in patients’ knowledge and beliefs regarding antibiotic use were also evaluated. Results Patients experienced a short-term positive improvement in beliefs about appropriate antibiotic use for infection (F1,30=14.10, P=.001). Knowledge scores regarding the correct uses of antibiotics improved immediately postsimulation, but decreased at the 1-month follow-up (F1,30=31.16, P.10) Patients with lower levels of activation exhibited positive, short-term benefits in increased intent and confidence to discuss their needs and ask questions in the clinic visit, positive attitudes

  3. Telomere shortening unrelated to smoking, body weight, physical activity, and alcohol intake: 4,576 general population individuals with repeat measurements 10 years apart.

    Directory of Open Access Journals (Sweden)

    Maren Weischer

    2014-03-01

    Full Text Available Cross-sectional studies have associated short telomere length with smoking, body weight, physical activity, and possibly alcohol intake; however, whether these associations are due to confounding is unknown. We tested these hypotheses in 4,576 individuals from the general population cross-sectionally, and with repeat measurement of relative telomere length 10 years apart. We also tested whether change in telomere length is associated with mortality and morbidity in the general population. Relative telomere length was measured with quantitative polymerase chain reaction. Cross-sectionally at the first examination, short telomere length was associated with increased age (P for trend across quartiles = 3 × 10^-77), current smoking (P = 8 × 10^-3), increased body mass index (P = 7 × 10^-14), physical inactivity (P = 4 × 10^-17), but not with increased alcohol intake (P = 0.10). At the second examination 10 years later, 56% of participants had lost and 44% gained telomere length with a mean loss of 193 basepairs. Change in leukocyte telomere length during 10 years was associated inversely with baseline telomere length (P < 1 × 10^-300) and age at baseline (P = 1 × 10^-27), but not with baseline or 10-year inter-observational tobacco consumption, body weight, physical activity, or alcohol intake. Prospectively during a further 10 years follow-up after the second examination, quartiles of telomere length change did not associate with risk of all-cause mortality, cancer, chronic obstructive pulmonary disease, diabetes mellitus, ischemic cerebrovascular disease, or ischemic heart disease. In conclusion, smoking, increased body weight, and physical inactivity were associated with short telomere length cross-sectionally, but not with telomere length change during 10 years observation, and alcohol intake was associated with neither. Also, change in telomere length did not associate prospectively with mortality or morbidity in the general population.

  4. A Multiple-plane Approach to Measure the Structural Properties of Functionally Active Regions in the Human Cortex

    Science.gov (United States)

    Wang, Xin; Garfinkel, Sarah N.; King, Anthony P.; Angstadt, Mike; Dennis, Michael J.; Xie, Hong; Welsh, Robert C.; Tamburrino, Marijo B.; Liberzon, Israel

    2009-01-01

    Advanced magnetic resonance imaging (MRI) techniques provide the means of studying both the structural and the functional properties of various brain regions, allowing us to address the relationship between the structural changes in human brain regions and the activity of these regions. However, analytical approaches combining functional (fMRI) and structural (sMRI) information are still far from optimal. In order to improve the accuracy of measurement of structural properties in active regions, the current study tested a new analytical approach that repeated a surface-based analysis at multiple planes crossing different depths of cortex. Twelve subjects underwent a fear conditioning study. During these tasks, fMRI and sMRI scans were acquired. The fMRI images were carefully registered to the sMRI images with an additional correction for cortical borders. The fMRI images were then analyzed with the new multiple-plane surface-based approach as compared to the volume-based approach, and the cortical thickness and volume of an active region were measured. The results suggested (1) using an additional correction for cortical borders and an intermediate template image produced an acceptable registration of fMRI and sMRI images; (2) surface-based analysis at multiple depths of cortex revealed more activity than the same analysis at any single depth; (3) projection of active surface vertices in a ribbon fashion improved active volume estimates; and (4) correction with gray matter segmentation removed non-cortical regions from the volumetric measurement of active regions. In conclusion, the new multiple-plane surface-based analysis approaches produce improved measurement of cortical thickness and volume of active brain regions. These results support the use of novel approaches for combined analysis of functional and structural neuroimaging. PMID:19922802

  5. Are "classical" tests of repeated-sprint ability in football externally valid? A new approach to determine in-game sprinting behaviour in elite football players.

    Science.gov (United States)

    Schimpchen, Jan; Skorski, Sabrina; Nopp, Stephan; Meyer, Tim

    2016-01-01

    The aim of this study was to investigate the occurrence of repeated sprinting bouts in elite football. Furthermore, the construct validity of current tests assessing repeated-sprint ability (RSA) was analysed using information of sprinting sequences as they actually occurred during match-play. Sprinting behaviour in official competition was analysed for 19 games of the German national team between August 2012 and June 2014. A sprinting threshold was individually calculated based on the peak velocity reached during in-game sprinting. Players performed 17.2 ± 3.9 sprints per game and during the entire 19 games a total of 35 bouts of repeated sprinting (a minimum of three consecutive sprints with a recovery duration repeated sprinting per player every 463 min. No general decrement in maximal sprinting speed was observed during bouts with up to five consecutive sprints. Results of the present study question the importance of RSA as it is classically defined. They indicate that shorter accelerations are more important in game-specific situations which do not reach speeds necessary to qualify them as sprints. The construct validity of classic tests of RSA in football is not supported by these observations.
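
    The bout definition used above (consecutive sprints separated by short recoveries) is easy to operationalize once individual sprints have been flagged against an individualized velocity threshold; the sketch below uses a 30 s recovery cut-off and 85 % of peak velocity purely as placeholders, since the study's exact values are truncated in this record.

        def detect_bouts(sprint_starts_s, recovery_max_s=30.0, min_sprints=3):
            """Group already-flagged sprint start times (s) into repeated-sprint bouts."""
            bouts, current = [], [sprint_starts_s[0]]
            for t_prev, t in zip(sprint_starts_s, sprint_starts_s[1:]):
                if t - t_prev <= recovery_max_s:
                    current.append(t)
                else:
                    if len(current) >= min_sprints:
                        bouts.append(current)
                    current = [t]
            if len(current) >= min_sprints:
                bouts.append(current)
            return bouts

        peak_velocity = 8.9                          # m/s, hypothetical in-game peak
        sprint_threshold = 0.85 * peak_velocity      # threshold a sprint detector would apply upstream
        sprints = [120.0, 140.0, 162.0, 900.0, 2400.0, 2425.0, 2449.0, 2470.0]
        print(f"threshold = {sprint_threshold:.2f} m/s, bouts: {detect_bouts(sprints)}")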

  6. Novel Approach to Repeated Arterial Blood Sampling in Small Animal PET : Application in a Test-Retest Study with the Adenosine A1 Receptor Ligand [C-11]MPDX

    NARCIS (Netherlands)

    Sijbesma, Jürgen W A; Zhou, Xiaoyun; Vállez García, David; Houwertjes, Martin C; Doorduin, Janine; Kwizera, Chantal; Maas, Bram; Meerlo, Peter; Dierckx, Rudi A; Slart, Riemer H J A; Elsinga, Philip H; van Waarde, Aren

    2016-01-01

    Small animal positron emission tomography (PET) can be used to detect small changes in neuroreceptor availability. This often requires rapid arterial blood sampling. However, current catheterization procedures do not allow repeated blood sampling. We have developed a procedure which allows arterial

  7. Quantum repeaters with entangled coherent states

    CERN Document Server

    Sangouard, Nicolas; Gisin, Nicolas; Laurat, Julien; Tualle-Brouri, Rosa; Grangier, Philippe

    2009-01-01

    Entangled coherent states can be prepared remotely by subtracting non-locally a single photon from two quantum superpositions of coherent states, the so-called "Schroedinger's cat" state. Such entanglement can further be distributed over longer distances by successive entanglement swapping operations using linear optics and photon-number resolving detectors. The aim of this paper is to evaluate the performance of this approach to quantum repeaters for long distance quantum communications. Despite many attractive features at first sight, we show that, when using state-of-the-art photon counters and quantum memories, they do not achieve higher entanglement generation rates than repeaters based on single-photon entanglement. We discuss potential developments which may take better advantage of the richness of entanglement based on continuous variables, including in particular efficient parity measurements.

  8. The measurement of human capital: a multivariate macro-approach

    NARCIS (Netherlands)

    Klomp, J.G.

    2013-01-01

    We examine the human capital status of 123 countries by employing factor analysis on various national human capital indicators for the period 2000–2008 to construct two new measures. The first measure is based on advanced human capital, while the second is based on basic human capital. Our measures

  9. A Multi-Dimensional Approach to Measuring News Media Literacy

    Science.gov (United States)

    Vraga, Emily; Tully, Melissa; Kotcher, John E.; Smithson, Anne-Bennett; Broeckelman-Post, Melissa

    2015-01-01

    Measuring news media literacy is important in order for it to thrive in a variety of educational and civic contexts. This research builds on existing measures of news media literacy and two new scales are presented that measure self-perceived media literacy (SPML) and perceptions of the value of media literacy (VML). Research with a larger sample…

  10. Personal selling constructs and measures: Emic versus etic approaches to cross-national research

    NARCIS (Netherlands)

    J. Herché (Joel); M.J. Swenson (Michael); W.J.M.I. Verbeke (Willem)

    1996-01-01

    Evaluates the transportability of personal selling measures across cultural boundaries. Topics include the concept of measurement development; emic and etic approaches to developing measures for cross-cultural applications; and the cross-national dimensionality, reliability, and construct validity of adaptive selling.

  11. An evaluation of a structured learning program as a component of the clinical practicum in undergraduate nurse education: A repeated measures analysis.

    Science.gov (United States)

    Watt, Elizabeth; Murphy, Maria; MacDonald, Lee; Pascoe, Elizabeth; Storen, Heather; Scanlon, Andrew

    2016-01-01

    There is evidence that nursing students experience stress and anxiety and a reduction in self-efficacy when undertaking clinical placements. Previous reports have identified that a structured three-day program within the Bachelor of Nursing (BN) clinical practicum reduces the students self-report of anxiety and increases self-efficacy. However, it is unreported whether these improved outcomes are sustained for the duration of the clinical placement. The aim of this study was to evaluate the duration of the effect of a three-day structured learning program within the clinical placement on final year Bachelor of Nursing student's report of anxiety and self-efficacy pre- and post-program participation in this intervention and following completion of the clinical practicum. A repeated measures design. University-based Clinical School of Nursing, acute care clinical practicum. Final year Bachelor of Nursing students. The intervention comprised the three-day program on starting the clinical practicum. A questionnaire included the anxiety subscale of The Hospital Anxiety & Depression Scale (The HAD) and the General Self-Efficacy Scale (GSES-12). The questionnaire was completed on day one (time one), upon completion of the three-day program (time two) and upon completion of placement on day 18 (time three). The questionnaire response rate varied over time. There was a statistically significant effect in reducing anxiety over time: F(1.73,74.46)=25.20, plearning program and the benefit of the intervention is sustained for the clinical placement duration. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Which are the most useful scales for predicting repeat self-harm? A systematic review evaluating risk scales using measures of diagnostic accuracy

    Science.gov (United States)

    Quinlivan, L; Cooper, J; Davies, L; Hawton, K; Gunnell, D; Kapur, N

    2016-01-01

    Objectives The aims of this review were to calculate the diagnostic accuracy statistics of risk scales following self-harm and consider which might be the most useful scales in clinical practice. Design Systematic review. Methods We based our search terms on those used in the systematic reviews carried out for the National Institute for Health and Care Excellence self-harm guidelines (2012) and evidence update (2013), and updated the searches through to February 2015 (CINAHL, EMBASE, MEDLINE, and PsychINFO). Methodological quality was assessed and three reviewers extracted data independently. We limited our analysis to cohort studies in adults using the outcome of repeat self-harm or attempted suicide. We calculated diagnostic accuracy statistics including measures of global accuracy. Statistical pooling was not possible due to heterogeneity. Results The eight papers included in the final analysis varied widely according to methodological quality and the content of scales employed. Overall, sensitivity of scales ranged from 6% (95% CI 5% to 6%) to 97% (CI 95% 94% to 98%). The positive predictive value (PPV) ranged from 5% (95% CI 3% to 9%) to 84% (95% CI 80% to 87%). The diagnostic OR ranged from 1.01 (95% CI 0.434 to 2.5) to 16.3 (95%CI 12.5 to 21.4). Scales with high sensitivity tended to have low PPVs. Conclusions It is difficult to be certain which, if any, are the most useful scales for self-harm risk assessment. No scales perform sufficiently well so as to be recommended for routine clinical use. Further robust prospective studies are warranted to evaluate risk scales following an episode of self-harm. Diagnostic accuracy statistics should be considered in relation to the specific service needs, and scales should only be used as an adjunct to assessment. PMID:26873046

  13. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Campos, E [Argonne National Laboratory; Sisterson, DL [Argonne National Laboratory

    2015-10-01

    The Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily-accessible, well-articulated estimate of ARM measurement uncertainty is needed.

  14. Exploring the joint measurability using an information-theoretic approach

    Science.gov (United States)

    Hsu, Li-Yi

    2016-10-01

    We explore the allowed purity parameters for joint measurements. Instead of directly unsharpening the measurements, we perform quantum cloning before the sharp measurements; the fuzziness needed for unsharp measurements is equivalently introduced by the imperfect cloning process. Based on information causality and the consequent noisy nonlocal computation, one can derive information-theoretic quadratic inequalities that must be satisfied by any physical theory. To guarantee classicality, on the other hand, the linear Bell-type inequalities deduced from these quadratic ones must be obeyed. For joint measurability, the purity parameters must be chosen to obey both types of inequalities. Finally, the quadratic inequalities for the purity parameters in the joint-measurability region are derived.

  15. Solid-state dosimeters: A new approach for mammography measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brateman, Libby F., E-mail: bratel@radiology.ufl.edu [Department of Radiology, University of Florida College of Medicine Box 100374, Gainesville, Florida 32610-0374 (United States); Heintz, Philip H. [Department of Radiology, University of New Mexico, MSC10 5530, Albuquerque, New Mexico 87131 (United States)

    2015-02-15

    Purpose: To compare responses of modern commercially available solid-state dosimeters (SStDs) used in mammography medical physics surveys for two major vendors of current digital mammography units. To compare differences in dose estimates among SStD responses with ionization chamber (IC) measurements for several target/filter (TF) combinations and report their characteristics. To review scientific bases for measurements of quantities required for mammography for traditional measurement procedures and SStDs. Methods: SStDs designed for use with modern digital mammography units were acquired for evaluation from four manufacturers. Each instrument was evaluated under similar conditions with the available mammography beams provided by two modern full-field digital mammography units in clinical use: a GE Healthcare Senographe Essential (Essential) and a Hologic Selenia Dimensions 5000 (Dimensions), with TFs of Mo/Mo, Mo/Rh; and Rh/Rh and W/Rh, W/Ag, and W/Al, respectively. Measurements were compared among the instruments for the TFs over their respective clinical ranges of peak tube potentials for kVp and half-value layer (HVL) measurements. Comparisons for air kerma (AK) and their associated relative calculated average glandular doses (AGDs), i.e., using fixed mAs, were evaluated over the limited range of 28–30 kVp. Measurements were compared with reference IC measurements for AK, reference HVLs and calculated AGD, for two compression paddle heights for AK, to evaluate scatter effects from compression paddles. SStDs may require different positioning from current mammography measurement protocols. Results: Measurements of kVp were accurate in general for the SStDs (within −1.2 and +1.1 kVp) for all instruments over a wide range of set kVp’s and TFs and most accurate for Mo/Mo and W/Rh. Discrepancies between measurements and reference values were greater for HVL and AK. Measured HVL values differed from reference values by −6.5% to +3.5% depending on the SStD and

  16. Defining and Measuring Entrepreneurship for Regional Research: A New Approach

    Science.gov (United States)

    Low, Sarah A.

    2009-01-01

    In this dissertation, I develop a definition and regional measure of entrepreneurship that will aid entrepreneurship research and economic development policy. My new indicators represent an improvement over current measures of entrepreneurship. The chief contribution of these new indicators is that they incorporate innovation, which others ignore.…

  17. An axiomatic approach to the measurement of envy

    NARCIS (Netherlands)

    Bosmans, K.G.M.; Öztürk, Z.E.

    2013-01-01

    We characterize a class of envy measures. There are three key axioms. Decomposability requires that overall envy is the sum of the envy within and between subgroups. The other two axioms deal with the two-individual setting and specify how the envy measure should react to simple changes in the indiv

  18. Measurement of dynamic efficiency: a directional distance function parametric approach

    NARCIS (Netherlands)

    Serra, T.; Oude Lansink, A.G.J.M.; Stefanou, S.E.

    2011-01-01

    This research proposes a parametric estimation of the structural dynamic efficiency measures proposed by Silva and Oude Lansink (2009). Overall, technical and allocative efficiency measurements are derived based on a directional distance function and the duality between this function and the optimal

  19. The Relative Importance of Job Factors: A New Measurement Approach.

    Science.gov (United States)

    Nealey, Stanley M.

    This paper reports on a new two-phase measurement technique that permits a direct comparison of the perceived relative importance of economic vs. non-economic factors in a job situation in accounting for personnel retention, the willingness to produce, and job satisfaction. The paired comparison method was used to measure the preferences of 91…

  1. Measuring Service Quality in the Networked Environment: Approaches and Considerations.

    Science.gov (United States)

    Bertot, John Carlo

    2001-01-01

    This article offers a number of statistics and performance measures that libraries may find useful in determining the overall quality of their network-based services; identifies a number of service quality criteria; and provides a framework to assist librarians in selecting statistics and performance measures based on service quality criteria.…

  2. An approach to measure ciliate grazing on living heterotrophic nanoflagellates

    DEFF Research Database (Denmark)

    Christoffersen, K.; González, J. M.

    2003-01-01

    Here we present an approach for the assessment of ciliate grazing on living heterotrophic nanoflagellates. Stationary-phase cultures of a heterotrophic nanoflagellate (Cafeteria sp.) were live-stained by allowing them to take up fluorescently labelled macromolecules. Controls revealed that this label

  3. Productivity growth and efficiency measurement : a dual approach

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.

    2000-01-01

    This paper derives output technical efficiency from a dual system of input demand and output supply equations using the concept of virtual prices. The underlying production function is firm-specific through intercept terms and slope parameters. The approach is used to decompose total factor

  5. THE SYSTEM APPROACH TO MEASURING CHANNEL MODELLING AS THE MECHANISM OF MAINTENANCE OF TRUST TO RESULTS OF MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    P. S. Serenkov

    2012-01-01

    Full Text Available The need to develop a system approach to measurement modelling, for the purpose of maintaining a set level of trust in measurement results, is substantiated. The solution of a measurement problem subject to a stated aim is treated as the creation of a sequence of models: a model of the measurement process and a model of the complex measuring channel. As a demonstrative basis for trust in measurement results, a set of criteria of completeness and non-redundancy is formulated.

  6. MODERN MEASUREMENT APPROACHES OF BROADBAND WIRELESS TRANSCEIVER MODULES PARAMETERS

    Directory of Open Access Journals (Sweden)

    A. A. Murauyou

    2015-01-01

    Full Text Available The article presents the results of developing and applying non-standard methods for measuring the parameters of broadband radio transceiver modules to assess compliance with the technical regulations of the Customs Union.

  7. International Human Trafficking: Measuring Clandestinity by the Structural Equation Approach

    Directory of Open Access Journals (Sweden)

    Alexandra Rudolph

    2017-06-01

    Full Text Available Worldwide, human trafficking is the third most often registered international criminal activity, ranked only after drug and weapons trafficking. This article focuses on three questions: (1) How can human trafficking be measured? (2) What are the causes and indicators of this criminal activity, which exploits individuals? (3) Which countries observe a high (or low) level of human trafficking inflow? We apply the Multiple Indicators Multiple Causes (MIMIC) structural equation model to measure human trafficking inflows in a way that includes all potential causes and indicators in one estimation model. The measurement focuses on international human trafficking. We use freely available existing data and thus generate an objective measure of the extent of trafficking. Countries are ranked according to their potential to be a destination country, based on various characteristics of the trafficking process.
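
    For orientation, the generic MIMIC specification (standard notation; none of the symbols are taken from the article) links a country's observed cause variables x_i and observed indicator variables y_i through a single latent trafficking-inflow variable eta_i:

        \eta_i = \boldsymbol{\gamma}^{\top}\mathbf{x}_i + \zeta_i,
        \qquad
        \mathbf{y}_i = \boldsymbol{\lambda}\,\eta_i + \boldsymbol{\varepsilon}_i

    Estimated scores of eta_i then provide the country ranking of trafficking inflows referred to in the abstract.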

  8. New approach to intracardiac hemodynamic measurements in small animals

    DEFF Research Database (Denmark)

    Eskesen, Kristian; Olsen, Niels T; Dimaano, Veronica L

    2012-01-01

    Invasive measurements of intracardiac hemodynamics in animal models have allowed important advances in the understanding of cardiac disease. Currently they are performed either through a carotid arteriotomy or via a thoracotomy and apical insertion. Both of these techniques have disadvantages...

  9. Design of psychosocial factors questionnaires: a systematic measurement approach.

    Science.gov (United States)

    Villalobos, Gloria H; Vargas, Angélica M; Rondón, Martin A; Felknor, Sarah A

    2013-01-01

    Evaluation of psychosocial factors requires instruments that measure dynamic complexities. This study explains the design of a set of questionnaires to evaluate work and non-work psychosocial risk factors for stress-related illnesses. The measurement model was based on a review of literature. Content validity was performed by experts and cognitive interviews. Pilot testing was carried out with a convenience sample of 132 workers. Cronbach's alpha evaluated internal consistency and concurrent validity was estimated by Spearman correlation coefficients. Three questionnaires were constructed to evaluate exposure to work and non-work risk factors. Content validity improved the questionnaires coherence with the measurement model. Internal consistency was adequate (α = 0.85-0.95). Concurrent validity resulted in moderate correlations of psychosocial factors with stress symptoms. Questionnaires' content reflected a wide spectrum of psychosocial factors sources. Cognitive interviews improved understanding of questions and dimensions. The structure of the measurement model was confirmed. Copyright © 2012 Wiley Periodicals, Inc.
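
    The two reliability and validity checks named above are straightforward to compute; here is a minimal sketch on simulated questionnaire data (item counts and effect sizes are illustrative; only the pilot sample size of 132 follows the abstract).

        import numpy as np
        from scipy.stats import spearmanr

        def cronbach_alpha(items):
            """items: respondents x items matrix of scores."""
            k = items.shape[1]
            return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                                  / items.sum(axis=1).var(ddof=1))

        rng = np.random.default_rng(2)
        latent = rng.normal(size=132)                                      # pilot sample size, as in the study
        scale = latent[:, None] + rng.normal(scale=0.8, size=(132, 10))    # 10 hypothetical items
        stress = 0.4 * latent + rng.normal(size=132)                       # hypothetical stress-symptom score

        print(f"Cronbach's alpha: {cronbach_alpha(scale):.2f}")
        rho, p = spearmanr(scale.sum(axis=1), stress)
        print(f"Spearman rho with stress symptoms: {rho:.2f} (p = {p:.3f})")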

  10. An approach to the low-resistance measurement

    National Research Council Canada - National Science Library

    Radetić Radojle; Pavlov-Kagadejev Marijana; Brodić Darko; Milivojević Nikola

    2013-01-01

    .... It presents the design process of the instrument made for resistance measuring. In order to achieve desired objectives, a great number of experiments have been carried out during the development...

  11. Improved calendar time approach for measuring long-run anomalies

    Directory of Open Access Journals (Sweden)

    Anupam Dutta

    2015-12-01

    Full Text Available Although a large number of recent studies employ the buy-and-hold abnormal return (BHAR) methodology and the calendar time portfolio approach to investigate long-run anomalies, each of these methods is subject to criticism. In this paper, we show that a recently introduced calendar time methodology, known as the Standardized Calendar Time Approach (SCTA), controls well for the heteroscedasticity problem that occurs in calendar time methodology due to varying portfolio compositions. In addition, we document that SCTA has higher power than the BHAR methodology and the Fama–French three-factor model in detecting long-run abnormal stock returns. Moreover, when investigating the long-term performance of Canadian initial public offerings, we report that the market period (i.e., hot versus cold market periods) does not have any significant impact on calendar time abnormal returns based on SCTA.
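
    For readers unfamiliar with the baseline being improved upon, a minimal sketch of the BHAR computation and its conventional cross-sectional t-test (returns are simulated; SCTA itself is not implemented here):

        import numpy as np

        rng = np.random.default_rng(5)

        # Simulated monthly returns: 50 event firms and matched benchmarks, 36-month horizon
        n_firms, horizon = 50, 36
        firm_ret  = rng.normal(0.008, 0.08, size=(n_firms, horizon))
        bench_ret = rng.normal(0.010, 0.05, size=(n_firms, horizon))

        # Buy-and-hold abnormal return: compounded firm return minus compounded benchmark return
        bhar = np.prod(1 + firm_ret, axis=1) - np.prod(1 + bench_ret, axis=1)

        # Conventional t-test on the cross-section of BHARs (the statistic SCTA aims to improve on)
        t_stat = bhar.mean() / (bhar.std(ddof=1) / np.sqrt(n_firms))
        print(f"mean BHAR = {bhar.mean():.3f}, t = {t_stat:.2f}")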

  12. Measuring Neighborhood Walkable Environments: A Comparison of Three Approaches

    Directory of Open Access Journals (Sweden)

    Yen-Cheng Chiang

    2017-06-01

    Full Text Available Multiple studies have revealed the impact of walkable environments on physical activity. Scholars attach considerable importance to leisure and health-related walking. Recent studies have used Google Street View as an instrument to assess city streets and walkable environments; however, no study has compared the validity of Google Street View assessments of walkable environment attributes to assessments made by local residents and compiled from field visits. In this study, we involved nearby residents and compared the extent to which Google Street View assessments of the walkable environment correlated with assessments from local residents and with field visits. We determined the assessment approaches (local resident or field visit assessments that exhibited the highest agreement with Google Street View. One city with relatively high-quality walkable environments and one city with relatively low-quality walkable environments were examined, and three neighborhoods from each city were surveyed. Participants in each neighborhood used one of three approaches to assess the walkability of the environment: 15 local residents assessed the environment using a map, 15 participants made a field visit to assess the environment, and 15 participants used Google Street View to assess the environment, yielding a total of 90 valid samples for the two cities. Findings revealed that the three approaches to assessing neighborhood walkability were highly correlated for traffic safety, aesthetics, sidewalk quality, and physical barriers. Compared with assessments from participants making field visits, assessments by local residents were more highly correlated with Google Street View assessments. Google Street View provides a more convenient, low-cost, efficient, and safe approach to assess neighborhood walkability. The results of this study may facilitate future large-scale walkable environment surveys, effectively reduce expenses, and improve survey efficiency.

  13. Measuring Neighborhood Walkable Environments: A Comparison of Three Approaches.

    Science.gov (United States)

    Chiang, Yen-Cheng; Sullivan, William; Larsen, Linda

    2017-06-03

    Multiple studies have revealed the impact of walkable environments on physical activity. Scholars attach considerable importance to leisure and health-related walking. Recent studies have used Google Street View as an instrument to assess city streets and walkable environments; however, no study has compared the validity of Google Street View assessments of walkable environment attributes to assessments made by local residents and compiled from field visits. In this study, we involved nearby residents and compared the extent to which Google Street View assessments of the walkable environment correlated with assessments from local residents and with field visits. We determined the assessment approaches (local resident or field visit assessments) that exhibited the highest agreement with Google Street View. One city with relatively high-quality walkable environments and one city with relatively low-quality walkable environments were examined, and three neighborhoods from each city were surveyed. Participants in each neighborhood used one of three approaches to assess the walkability of the environment: 15 local residents assessed the environment using a map, 15 participants made a field visit to assess the environment, and 15 participants used Google Street View to assess the environment, yielding a total of 90 valid samples for the two cities. Findings revealed that the three approaches to assessing neighborhood walkability were highly correlated for traffic safety, aesthetics, sidewalk quality, and physical barriers. Compared with assessments from participants making field visits, assessments by local residents were more highly correlated with Google Street View assessments. Google Street View provides a more convenient, low-cost, efficient, and safe approach to assess neighborhood walkability. The results of this study may facilitate future large-scale walkable environment surveys, effectively reduce expenses, and improve survey efficiency.

  14. A General Approach to Welfare Measurement through National Income Accounting

    OpenAIRE

    Geir B. Asheim; Buchholz, Wolfgang

    2003-01-01

    We develop a framework for analyzing national income accounting using a revealed welfare approach that is sufficiently general to cover, e.g., both the standard discounted utilitarian and maximin criteria as special cases. We show that the basic welfare properties of comprehensive national income accounting, which were previously ascribed only to the discounted utilitarian case, in fact extend to this more general framework. In particular, it holds under a wide range of circumstances that rea...

  15. Multi-scale risk measurement in electricity market: a wavelet-based value at risk approach

    Institute of Scientific and Technical Information of China (English)

    Guu, Sy-Ming; Lai, Kin Keung

    2008-01-01

    Value at risk (VaR) is adopted to measure the risk level in the electricity market. To estimate VaR with higher accuracy and reliability, a wavelet-variance-decomposed approach to value at risk estimation (WVDVaR) is proposed. Empirical studies are conducted in five Australian electricity markets, evaluating the performance of both the proposed approach and the traditional ARMA-GARCH approach using the Kupiec backtesting procedure. Experimental results suggest that the proposed approach measures electricity ...
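
    A loose illustration of combining a wavelet variance decomposition with a Gaussian VaR on simulated returns follows; it mirrors the general idea only and is not the WVDVaR estimator from the paper, and the wavelet, decomposition level, and confidence level are arbitrary choices.

        import numpy as np
        import pywt
        from scipy.stats import norm

        # Simulated half-hourly electricity log-returns with occasional spikes
        rng = np.random.default_rng(3)
        returns = 0.02 * rng.standard_normal(2048)
        returns += 0.08 * rng.standard_normal(2048) * (rng.random(2048) < 0.05)

        wavelet, level, alpha = "db4", 4, 0.95
        coeffs = pywt.wavedec(returns, wavelet, level=level)

        # Reconstruct each scale separately; their variances approximately partition total risk
        scale_var = []
        for i in range(len(coeffs)):
            part = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            comp = pywt.waverec(part, wavelet)[:len(returns)]
            scale_var.append(comp.var())

        sigma = np.sqrt(sum(scale_var))                        # recombined volatility estimate
        var_95 = norm.ppf(alpha) * sigma - returns.mean()      # one-step Gaussian VaR (loss quantile)
        print("variance by scale:", np.round(scale_var, 6))
        print(f"95% VaR estimate: {var_95:.4f}")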

  16. Measurement error adjustment in essential fatty acid intake from a food frequency questionnaire: alternative approaches and methods

    Directory of Open Access Journals (Sweden)

    Satia Jessie A

    2007-09-01

    Full Text Available Abstract Background We aimed to assess the degree of measurement error in essential fatty acid intakes from a food frequency questionnaire and the impact of correcting for such error on the precision and bias of odds ratios in logistic models. To assess these impacts, and for illustrative purposes, alternative approaches and methods were used with the binary outcome of cognitive decline in verbal fluency. Methods Using the Atherosclerosis Risk in Communities (ARIC) study, we conducted a sensitivity analysis. The error-prone exposure – visit 1 fatty acid intake (1987–89) – was available for 7,814 subjects 50 years or older at baseline with complete data on cognitive decline between visits 2 (1990–92) and 4 (1996–98). Our binary outcome of interest was clinically significant decline in verbal fluency. Point estimates and 95% confidence intervals were compared between naïve and measurement-error-adjusted odds ratios of decline with every SD increase in fatty acid intake as % of energy. Two approaches were explored for adjustment: (A) external validation against biomarkers (plasma fatty acids in cholesteryl esters and phospholipids) and (B) internal repeat measurements at visits 2 and 3. The main difference between the two is that Approach B makes a stronger assumption regarding lack of error correlations in the structural model. Additionally, we compared results from regression calibration (RCAL) to those from simulation extrapolation (SIMEX). Finally, using structural equations modeling, we estimated attenuation factors associated with each dietary exposure to assess the degree of measurement error in a bivariate scenario for regression calibration of the logistic regression model. Results and conclusion Attenuation factors for Approach A were smaller than for Approach B, suggesting a larger amount of measurement error in the dietary exposure. Replicate measures (Approach B), unlike concentration biomarkers (Approach A), may lead to imprecise odds ratios due to larger
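
    The core of regression calibration in the external-validation case (Approach A) is an attenuation factor estimated in the validation sample and applied to the naive coefficient. A toy sketch with simulated data follows; the true effect, error variances, and sample sizes are invented and do not correspond to the ARIC analysis.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)

        # --- external validation sample: "true" (biomarker-calibrated) intake vs error-prone FFQ ---
        n_val = 500
        x_true = rng.normal(size=n_val)
        x_ffq  = 0.6 * x_true + rng.normal(scale=0.8, size=n_val)
        lam = np.cov(x_true, x_ffq)[0, 1] / x_ffq.var(ddof=1)      # attenuation factor

        # --- main study: outcome modelled on the error-prone exposure only ---
        n_main = 5000
        z = rng.normal(size=n_main)                                  # unobserved true intake
        x_obs = 0.6 * z + rng.normal(scale=0.8, size=n_main)
        y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.4 * z))))     # true log-OR = 0.4 per SD

        beta_naive = sm.Logit(y, sm.add_constant(x_obs)).fit(disp=0).params[1]
        beta_rc = beta_naive / lam                                   # regression-calibration correction
        print(f"lambda = {lam:.2f}, naive log-OR = {beta_naive:.2f}, corrected = {beta_rc:.2f}")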

  17. Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Calyam, Prasad

    2014-09-15

    The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.

  18. Association of Repeatedly Measured High-Sensitivity-Assayed Troponin I with Cardiovascular Disease Events in a General Population from the MORGAM/BiomarCaRE Study

    DEFF Research Database (Denmark)

    Hughes, Maria F; Ojeda, Francisco; Saarela, Olli;

    2017-01-01

    BACKGROUND: High-sensitivity troponin I (hs-cTnI) concentrations reflect myocardial stress. The role of hs-cTnI in predicting long-term changes in the risk of cardiovascular disease (CVD) in general populations is not clearly defined. METHODS: We investigated whether the change in 3 repeated meas...

  19. LOCAL PUBLIC EXPENDITURE AUTONOMY – MEASURING APPROACH

    Directory of Open Access Journals (Sweden)

    Irina BILAN

    2013-06-01

    Full Text Available The decentralization process has been continuous in Romania since 1990, involving local authorities in local public finance as a result of exclusive, shared and delegated competences, and hence creating the necessity of ensuring good management of resources and expenditures. The decentralization of competences and responsibilities from the State to local governments has therefore been a major Romanian political theme and a first-rank component of the management of local public finance, as a main driving instrument for local development. The specific legal framework of local responsibilities is established both at the European and the national level. Researchers, drawing on regulation and practice, have tried to quantify these responsibilities, developing different models to measure local revenue and expenditure autonomy. The paper aims to identify models for measuring local expenditure autonomy and to apply them to Romania. The study measures local expenditure autonomy in Romania using Bell, Ebel, Kaiser and Rojchaichainthorn's model.

  20. Contextual Values Approach to the Generalized Measurement of Observables

    CERN Document Server

    Dressel, J

    2011-01-01

    We present a detailed motivation for and definition of the contextual values of an observable, which were introduced in Dressel et al., Phys. Rev. Lett. 102, 040402 (2010). The theory extends the well-established theory of generalized state measurements by bridging the gap between partial state collapse and the observables that represent physically relevant information about the system. To emphasize the general utility of the concept, we first construct the full theory of contextual values within an operational formulation of classical probability theory, paying special attention to observable construction, detector coupling, generalized measurement, and measurement disturbance. We then extend the results to quantum probability theory built as a superstructure on the classical theory, pointing out both the classical correspondences to and the full quantum generalizations of both Lueder's rule and the Aharonov-Bergmann-Lebowitz rule in the process. In both cases the contextual values of a system observable for...

  1. Velocity of detonation (VOD) measurement techniques - practical approach

    Directory of Open Access Journals (Sweden)

    Aruna Dhanraj Tete

    2013-06-01

    Full Text Available Velocity of Detonation (VOD) is an important characteristic parameter of explosive materials. The performance of an explosive invariably depends on its velocity of detonation. The power or strength of the explosive to cause fragmentation of a solid structure determines the efficiency of the blast performed. It is an established fact that measuring the velocity of detonation gives a good indication of the strength, and hence the performance, of the explosive. In this survey various VOD measurement techniques, such as electric, non-electric and fibre optic, are discussed. To aid the discussion, comparisons of some commercially available VOD meters are also presented. After a review of the existing commercially available units and a study of their respective merits and demerits, the features of an ideal system are proposed.

  2. New Approach to Measuring Traffic Queue at Intersections

    Institute of Scientific and Technical Information of China (English)

    SONG Lei; SHI Zhong-ke

    2008-01-01

    To measure the length of a traffic queue, a vehicle motion model at intersections was built, and based on it the effective traffic queue was defined. Color image segmentation and a frame differencing technique were used to detect the foreground and the moving vehicles by analyzing regions of the images, and then to measure the length of the effective traffic queue. Processing of an image sequence acquired at an intersection shows that the traffic queue can be determined effectively using the two techniques.
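
    A rough sketch of the queue-detection idea, with a stock OpenCV background subtractor standing in for the paper's color-segmentation step; the thresholds, function names and lane mask are illustrative assumptions:

        import cv2
        import numpy as np

        bg = cv2.createBackgroundSubtractorMOG2()   # learns the empty-road background over time

        def queued_vehicle_mask(prev_gray, gray):
            """Pixels that belong to vehicles (foreground) but did not move between frames."""
            fg = bg.apply(gray)
            _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)        # drop shadow labels
            moving = cv2.absdiff(gray, prev_gray)
            _, moving = cv2.threshold(moving, 25, 255, cv2.THRESH_BINARY)
            return cv2.bitwise_and(fg, cv2.bitwise_not(moving))           # stopped vehicles

        def queue_length_rows(queue_mask, lane_mask):
            """Pixel extent of the stopped-vehicle region along the lane's image rows."""
            rows = np.where(cv2.bitwise_and(queue_mask, lane_mask).any(axis=1))[0]
            return 0 if rows.size == 0 else int(rows.max() - rows.min())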

  3. A computer simulation approach to measurement of human control strategy

    Science.gov (United States)

    Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III

    1982-01-01

    Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.

  4. A new approach to evaluate railways efficiency considering safety measures

    Directory of Open Access Journals (Sweden)

    Ali Noroozzadeh

    2013-04-01

    Full Text Available Safety is one of the main reasons for choosing railway over other transportation modes, and the improvement of transportation safety has attracted many researchers in recent years. In this paper, we empirically investigate the influence of safety measures on railway performance evaluation. The proposed model uses data envelopment analysis (DEA) to estimate railway efficiency scores in the presence of safety measures. Based on three proposed factors, the most appropriate model is selected and its results are compared with the output-oriented DEA model. The results of the survey are surprising, since inefficient railroads become efficient when undesirable outputs are added to the evaluation model.
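
    A minimal sketch of an output-oriented CCR DEA model solved as a linear program; the treatment of undesirable outputs (e.g. accidents) used in the paper is not reproduced here, and the railway data are toy numbers:

        import numpy as np
        from scipy.optimize import linprog

        def dea_output_efficiency(X, Y, j0):
            """Output-oriented CCR score of unit j0 (phi >= 1; phi == 1 means efficient).
            X: inputs, shape (m, n_units); Y: desirable outputs, shape (s, n_units)."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.zeros(n + 1)
            c[0] = -1.0                                  # maximise phi (linprog minimises)
            A_ub = np.zeros((m + s, n + 1))
            b_ub = np.zeros(m + s)
            A_ub[:m, 1:] = X                             # sum_j lambda_j * x_ij <= x_i,j0
            b_ub[:m] = X[:, j0]
            A_ub[m:, 0] = Y[:, j0]                       # phi * y_r,j0 <= sum_j lambda_j * y_rj
            A_ub[m:, 1:] = -Y
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
            return res.x[0]

        # Toy example: 4 railways, 2 inputs (staff, track km), 1 output (passenger km)
        X = np.array([[4.0, 6.0, 8.0, 5.0], [3.0, 2.0, 5.0, 4.0]])
        Y = np.array([[5.0, 7.0, 6.0, 4.0]])
        print([round(dea_output_efficiency(X, Y, j), 3) for j in range(4)])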

  5. Genetic Structure of SSR1 & SSR2 loci from Iranian Mycobacterium avium Subspecies Paratuberculosis Isolates by a Short Sequence Repeat Analysis Approach

    Directory of Open Access Journals (Sweden)

    Aida Chalesh (MSc

    2016-01-01

    Full Text Available Background and Objective: Paratuberculosis has been repeatedly reported from Iranian ruminant herds. The extremely fastidious nature of Mycobacterium avium subspecies paratuberculosis (MAP) hinders genomic diversity studies of the pathogen. Short sequence repeat (SSR) analysis is one of the genome-based approaches recently developed to overcome this difficulty. In this study we describe the application of SSR genotyping to three Iranian MAP type strains plus the III & V vaccinal strain. Methods: All the bacteria were examined by PCR-F57 and PCR-IS900 experiments in order to authenticate their identity as MAP. SSR genotyping using the SSR1 & SSR2 loci was conducted according to the Amonsin method. PCR amplicons were sequenced to guarantee the accuracy of the findings. Results: At the SSR1 locus two alleles were identified, a larger allele of 770 bp and a smaller allele of 763 bp. At SSR2 only a single allele, 800 bp long, was detected. Two Iranian bovine and ovine MAP isolates, along with the vaccinal III & V strain, shared a single SSR1/SSR2 pattern, while a different SSR1/SSR2 pattern was represented by the third (caprine) Iranian MAP isolate. Conclusion: The shared SSR type between the two Iranian MAP isolates and the III & V strain might represent a common ancestral background, but this has to be assessed through further studies. Detection of two SSR genotypes among three Iranian type strains likely reflects the presence of more MAP clones in Iran.

  6. Repeated electromagnetic induction measurements for mapping soil moisture at the field scale: comparison with data from a wireless soil moisture monitoring network

    Science.gov (United States)

    Martini, Edoardo; Werban, Ulrike; Zacharias, Steffen; Pohle, Marco; Dietrich, Peter; Wollschläger, Ute

    2016-04-01

    Electromagnetic induction (EMI) methods are widely used for soil mapping, as they allow fast and relatively low-cost surveys of soil apparent electrical conductivity (ECa) at various scales. Soil ECa is well known to be influenced by both the volumetric content and the electrical conductivity (EC) of soil water, as well as by soil temperature and by the volume of the solid particles and their EC. Among other applications, EMI has become widely used to determine soil water content or to study hydrological processes within the field of hydrogeophysics. Although the use of non-invasive EMI for imaging soil spatial properties is very attractive, the dependence of ECa on several properties and states challenges any interpretation with respect to individual soil properties or states such as θ. The major aim of this study was to further investigate the potential of repeated EMI measurements to map soil moisture at the hillslope scale, with particular focus on the temporal variability of the spatial patterns of ECa and soil moisture, respectively, and on the stability of the ECa-soil moisture relationship over time. To this end, we compared time series of EMI measurements with high-resolution soil moisture data for a non-intensively managed hillslope area in the Schäfertal catchment (Central Germany) for which the spatial distribution of soil properties and soil water dynamics were known in detail. Soil water and temperature dynamics were observed in 40 soil profiles at hourly resolution during 14 months using a wireless monitoring network. During this period of time, ECa was mapped on seven occasions using an EM38-DD device. For the investigated site, ECa showed small temporal variations (ranging between 0 and 24 mS/m) whereas the temporal range of soil moisture was very large (from very dry to soil saturation). Furthermore, temporal changes of the spatial pattern of ECa differed from temporal changes of the spatial pattern of soil moisture. The ECa-soil moisture

  7. Preference-based approaches to measuring the benefits of perinatal care.

    Science.gov (United States)

    Petrou, Stavros; Henderson, Jane

    2003-12-01

    Studies that measure benefits of health care interventions in natural or physical units cannot incorporate the several health changes that might occur within a single measure, and they overlook individuals' preferences for those health changes. This paper discusses and critically appraises the application of preference-based approaches to the measurement of the benefits of perinatal care that have developed out of economic theory. These include quality adjusted life year (QALY)-based approaches, monetary-based approaches, and discrete choice experiments. QALY-based approaches use scaling techniques, such as the rating scale, standard gamble approach, and time trade-off approach, or multi-attribute utility measures, to measure the health-related quality of life weights of health states. Monetary-based approaches include the revealed preference approach, which involves observing decisions that individuals actually make concerning health risks, and the willingness-to-pay approach, which provides a framework for investigating individuals' willingness to pay for benefits of health care interventions. Discrete choice experiments describe health care interventions in terms of their attributes, and elicit preferences for scenarios that combine different levels of those attributes. Empirical examples are used to illustrate each preference-based approach to benefit measurement, and several methodological issues raised by the application of these approaches to the perinatal context are discussed. Particular attention is given to identifying the relevant attributes to incorporate into the measurement instrument, appropriate respondents for the measurement exercise, potential sources of bias in description and valuation processes, and the practicality, reliability, and validity of alternative measurement approaches. The paper's conclusion is that researchers should be explicit and rigorous in their application of preference-based approaches to benefit measurement in the context

  8. Measuring Afterschool Program Quality Using Setting-Level Observational Approaches

    Science.gov (United States)

    Oh, Yoonkyung; Osgood, D. Wayne; Smith, Emilie P.

    2015-01-01

    The importance of afterschool hours for youth development is widely acknowledged, and afterschool settings have recently received increasing attention as an important venue for youth interventions, bringing a growing need for reliable and valid measures of afterschool quality. This study examined the extent to which the two observational tools,…

  9. National policy measures. Right approach to foreign direct investment flows

    Directory of Open Access Journals (Sweden)

    Cătălin-Emilian HUIDUMAC-PETRESCU

    2013-02-01

    Full Text Available 2011 was a difficult year for all countries, developed and emerging alike. To overcome the negative effects of the financial crisis, many economies set out to adopt new economic policies regarding foreign direct investment (FDI) flows, either to stimulate the flows or to restrict them (protectionist measures). Two categories of national policies can therefore be identified: measures aimed at stimulating FDI flows, and measures aimed at restraining FDI development through restriction and regulation. The first category includes liberalization measures and promotional and facilitation policies. This study shows that the foundation of the second category of policies is the belief that outward FDI leads to job exports, a rise in unemployment and a weakening of the industrial base. Many reports on FDI flows, such as those produced by UNCTAD, show that regulation and restriction policies can be seen as a form of protectionism, especially in the agricultural and extractive industries, where nationalization processes and divestments have been required. Moreover, the economies that adopted such policies have been less interested in investing abroad, so their outward FDI was affected and the total global outflow decreased.

  10. Nurses' motivation to wash their hands: a standardized measurement approach.

    Science.gov (United States)

    O'boyle, C A; Henly, S J; Duckett, L J

    2001-08-01

    Handwashing is a simple procedure that is critical to prevention and control of infections, yet many health care workers (HCWs) do not practice hand hygiene according to recommended guidelines. The Handwashing Assessment Inventory (HAI) is a self-report instrument that is designed to measure the motivational schema of HCWs for handwashing.

  11. Corruption in Higher Education: Conceptual Approaches and Measurement Techniques

    Science.gov (United States)

    Osipian, Ararat L.

    2007-01-01

    Corruption is a complex and multifaceted phenomenon. Forms of corruption are multiple. Measuring corruption is necessary not only for getting ideas about the scale and scope of the problem, but for making simple comparisons between the countries and conducting comparative analysis of corruption. While the total impact of corruption is indeed…

  12. Quantitative approach to measuring the cerebrospinal fluid space with CT

    Energy Technology Data Exchange (ETDEWEB)

    Zeumer, H.; Hacke, W.; Hartwich, P.

    1982-01-01

    A method for measuring the subarachnoid space using an independent CT evaluation unit is described. Normal values have been calculated for patients according to age, and three examples are presented demonstrating a reversible decrease of brain volume in patients suffering from anorexia nervosa and chronic alcoholism.

  13. Design of psychosocial factors questionnaires: a systematic measurement approach

    Science.gov (United States)

    Vargas, Angélica; Felknor, Sarah A

    2012-01-01

    Background Evaluation of psychosocial factors requires instruments that measure dynamic complexities. This study explains the design of a set of questionnaires to evaluate work and non-work psychosocial risk factors for stress-related illnesses. Methods The measurement model was based on a review of literature. Content validity was performed by experts and cognitive interviews. Pilot testing was carried out with a convenience sample of 132 workers. Cronbach's alpha evaluated internal consistency and concurrent validity was estimated by Spearman correlation coefficients. Results Three questionnaires were constructed to evaluate exposure to work and non-work risk factors. Content validity improved the questionnaires' coherence with the measurement model. Internal consistency was adequate (α=0.85–0.95). Concurrent validity resulted in moderate correlations of psychosocial factors with stress symptoms. Conclusions The questionnaires' content reflected a wide spectrum of psychosocial factor sources. Cognitive interviews improved understanding of questions and dimensions. The structure of the measurement model was confirmed. PMID:22628068
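
    A small sketch of the reliability and concurrent-validity computations named in this record (Cronbach's alpha and a Spearman correlation); the simulated item responses and stress scores are illustrative only:

        import numpy as np
        from scipy.stats import spearmanr

        def cronbach_alpha(items):
            """items: (n_respondents, k_items) scale scores. Internal-consistency coefficient."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

        rng = np.random.default_rng(2)
        latent = rng.normal(size=132)                                     # pilot-sized sample
        scale = latent[:, None] + rng.normal(0, 0.6, size=(132, 8))       # 8 correlated items
        stress = 0.4 * latent + rng.normal(0, 1, size=132)                # stress-symptom score
        rho, p_value = spearmanr(scale.sum(axis=1), stress)
        print(round(cronbach_alpha(scale), 2), round(rho, 2))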

  14. Various approaches to standardization and the importance of measurement accuracy

    NARCIS (Netherlands)

    Gram, J.; Jespersen, J.; Kluft, C.; Declerck, P.

    1996-01-01

    Biochemical measurements of quantities, i.e. analytes, of the haemostatic system are the basis of evaluating patients with potentially serious or life-threatening disorders. Therefore, there is a need for a high level of certainty of the results. Experience based on the comprehensive international

  15. Improving Attribute-Importance Measurement : a Reference-Point Approach

    NARCIS (Netherlands)

    Ittersum, van K.; Pennings, J.M.E.; Wansink, B.; Trijp, van J.C.M.

    2004-01-01

    Despite the importance of identifying the hierarchy of product attributes that drive judgment and choice, the many available methods remain limited regarding their convergent validity and test-retest reliability. To increase the validity and reliability of attribute-importance measurement, we focus

  16. Visual Analyses and Discriminations: One Approach to Measuring Students' Metacognition

    Science.gov (United States)

    Al-Hilawani, Yasser A.

    2006-01-01

    The metacognitive performance of four groups of students was examined. The students' processes of visual analysis and discrimination of real-life pictures were used to measure metacognition. There were 61 participants: 18 hearing students, 18 deaf and hard of hearing students, 16 students with mild mental disabilities, and 9 students with physical…

  18. Measuring Habituation in Infants: An Approach Using Regression Analysis.

    Science.gov (United States)

    Ashmead, Daniel H.; Davis, DeFord L.

    1996-01-01

    Used computer simulations to examine effectiveness of different criteria for measuring infant visual habituation. Found that a criterion based on fitting a second-order polynomial regression function to looking-time data produced more accurate estimation of looking times and higher power for detecting novelty effects than did the traditional…
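
    A minimal sketch of the regression-based criterion described here, fitting a second-order polynomial to looking times across trials; the 50% drop threshold and the data are illustrative assumptions, not the simulation settings of the study:

        import numpy as np

        def habituation_by_regression(looking_times, drop=0.5):
            """Fit a second-order polynomial to looking time over trials and declare
            habituation at the first trial where the fitted curve falls below
            `drop` times the fitted initial level (an illustrative criterion)."""
            trials = np.arange(1, len(looking_times) + 1)
            coeffs = np.polyfit(trials, looking_times, deg=2)
            fitted = np.polyval(coeffs, trials)
            below = np.where(fitted <= drop * fitted[0])[0]
            return (below[0] + 1) if below.size else None   # trial number, or not habituated

        print(habituation_by_regression([12.0, 10.5, 9.0, 7.2, 6.1, 5.0, 4.4, 4.1]))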

  19. New approach to intracardiac hemodynamic measurements in small animals

    DEFF Research Database (Denmark)

    Eskesen, Kristian; Olsen, Niels T; Dimaano, Veronica L;

    2012-01-01

    Invasive measurements of intracardiac hemodynamics in animal models have allowed important advances in the understanding of cardiac disease. Currently they are performed either through a carotid arteriotomy or via a thoracotomy and apical insertion. Both of these techniques have disadvantages and...

  1. Flux Measurements in Trees: Methodological Approach and Application to Vineyards

    Directory of Open Access Journals (Sweden)

    Francesca De Lorenzi

    2008-03-01

    Full Text Available In this paper a review of two sap flow methods for measuring transpiration in vineyards is presented. The objective of this work is to examine the potential of detecting transpiration in trees in response to environmental stresses, particularly the high concentration of ozone (O3) in the troposphere. The methods described are the stem heat balance and the thermal dissipation probe; advantages and disadvantages of each method are detailed. Applications of both techniques are shown, in two large commercial vineyards in Southern Italy (Apulia and Sicily, submitted to semi-arid climate. Sap flow techniques allow transpiration to be measured at the plant scale, and an upscaling procedure is necessary to calculate transpiration at the whole-stand level. Here a general technique to link the value of transpiration at plant level to the canopy value is presented, based on experimental relationships between transpiration and biometric characteristics of the trees. In both vineyards transpiration measured by sap flow methods compares well with evapotranspiration measured by micrometeorological techniques at canopy scale. Moreover, the soil evaporation component has been quantified. In conclusion, comments about the suitability of the sap flow methods for studying the interactions between trees and ozone are given.

  2. Suggested Approaches to the Measurement of Computer Anxiety.

    Science.gov (United States)

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  3. Measuring real exchange rate misalignment in Croatia: cointegration approach

    Directory of Open Access Journals (Sweden)

    Irena Palić

    2014-12-01

    Full Text Available The purpose of the paper is to analyze misalignment of the real exchange rate in Croatia. The misalignment analysis is conducted using the permanent equilibrium exchange rate approach. The equilibrium real exchange rate is computed using the cointegration approach, whereby the real exchange rate and its fundamentals, namely terms of trade, net foreign assets and the ratio of prices of tradables to non-tradables, are included in the cointegration analysis. The Hodrick and Prescott filter is used to obtain permanent values of the equilibrium real exchange rate. The real exchange rate misalignment is computed as the deviation of the RER from its permanent equilibrium level. Four overvaluation periods and three undervaluation periods are recorded in Croatia in the observed period. Overvaluation periods are more frequent and of longer duration than undervaluation periods. However, the real exchange rate does not deviate largely from its estimated equilibrium value in the observed period, and it is neither overvalued nor undervalued constantly, but the periods alternate. Considering the results of the analysis, together with the empirical characteristics of the Croatian economy, namely the high foreign currency indebtedness, the highly euroized economy and the underdeveloped export-oriented sector, depreciation of the real exchange rate is not recommended to economic policy makers, and the current Croatian exchange rate policy is appropriate.
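
    A compact sketch of the workflow described in this record (long-run regression of the RER on fundamentals, Hodrick-Prescott smoothing of the fitted equilibrium, misalignment as the deviation), assuming the statsmodels package; the series below are simulated, not Croatian data:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.tsa.filters.hp_filter import hpfilter

        def misalignment(rer, fundamentals, lamb=1600):
            """Engle-Granger style sketch: fit the long-run relation between the real
            exchange rate and its fundamentals, smooth the fitted equilibrium with the
            Hodrick-Prescott filter, and return the deviation of the actual RER from it."""
            X = sm.add_constant(np.column_stack(fundamentals))
            fitted = sm.OLS(rer, X).fit().fittedvalues          # behavioural equilibrium RER
            _, permanent = hpfilter(fitted, lamb=lamb)          # permanent equilibrium component
            return rer - permanent                              # >0 overvaluation, <0 undervaluation

        rng = np.random.default_rng(3)
        tot = np.cumsum(rng.normal(0, 0.01, 120))               # terms of trade (quarterly)
        nfa = np.cumsum(rng.normal(0, 0.01, 120))               # net foreign assets
        rer = 0.6 * tot - 0.3 * nfa + rng.normal(0, 0.02, 120)
        print(np.round(misalignment(rer, [tot, nfa])[-4:], 4))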

  4. Decoherent histories approach to the cosmological measure problem

    CERN Document Server

    Lloyd, Seth

    2016-01-01

    The method of decoherent histories allows probabilities to be assigned to sequences of quantum events in systems, such as the universe as a whole, where there is no external observer to make measurements. This paper applies the method of decoherent histories to address cosmological questions. Using a series of simple examples, beginning with the harmonic oscillator, we show that systems in a stationary state such as an energy eigenstate or thermal state can exhibit decoherent histories with non-trivial dynamics. We then examine decoherent histories in a universe that undergoes eternal inflation. Decoherent histories that assign probabilities to sequences of events in the vicinity of a timelike geodesic supply a natural cosmological measure. Under reasonable conditions, such sequences of events do not suffer from the presence of unlikely statistical fluctuations that mimic reality.

  5. A new approach for measuring human resource accounting

    Directory of Open Access Journals (Sweden)

    Esmat Bavali

    2014-06-01

    Full Text Available The significance of identifying human resource competencies in organizations and the necessity of valuing human resources in accounting have persuaded many researchers to design a conceptual model for measuring human resource accounting. This study first examines the dimensions of various human resource valuation models and compares them with Goleman's individual and social competency indicators. Next, individual, organizational and social competency indicators are designed by extending the Goleman model. Finally, the Analytic Hierarchy Process (AHP) and the opinions of human resource accounting experts at leading universities worldwide are used to rank the indicators, and the conceptual model for measuring human resource accounting is designed based on the guidelines of the management and human capital development vice-presidency and the inspiring effort rate of return method.
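
    A small sketch of the AHP step mentioned in the record: priority weights from a pairwise-comparison matrix via its principal eigenvector, together with Saaty's consistency ratio; the comparison matrix below is an illustrative assumption:

        import numpy as np

        def ahp_weights(pairwise):
            """Priority weights from a reciprocal pairwise-comparison matrix
            (principal eigenvector, normalised to sum to 1), plus the consistency ratio."""
            A = np.asarray(pairwise, dtype=float)
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, k].real)
            w = w / w.sum()
            n = A.shape[0]
            ci = (eigvals[k].real - n) / (n - 1)                # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)        # Saaty's random index
            return w, ci / ri

        # Illustrative comparison of three competency indicators
        pairwise = [[1, 3, 5], [1 / 3, 1, 2], [1 / 5, 1 / 2, 1]]
        weights, cr = ahp_weights(pairwise)
        print(np.round(weights, 3), round(cr, 3))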

  6. Measuring causality by taking the directional symbolic mutual information approach

    Institute of Scientific and Technical Information of China (English)

    Chen Gui; Xie Lei; Chu Jian

    2013-01-01

    We propose a novel measure to assess causality through the comparison of symbolic mutual information between the future of one random quantity and the past of the other. This provides a new perspective that is different from the conventional conceptions. Based on this point of view, a new causality index is derived that uses the definition of directional symbolic mutual information. This measure presents properties that are different from the time-delayed mutual information, since the symbolization captures the dynamic features of the analyzed time series. In addition to characterizing the direction and the amplitude of the information flow, it can also detect coupling delays. This method has the properties of robustness, conceptual simplicity, and fast computational speed.
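
    A rough sketch of a directional symbolic mutual information index in the spirit of this record, using ordinal-pattern symbolization and a plug-in mutual information estimate; the pattern order, lag and test signals are assumptions:

        import numpy as np
        from collections import Counter

        def ordinal_symbols(x, order=3):
            """Map each length-`order` window to the ordinal pattern of its values."""
            windows = np.lib.stride_tricks.sliding_window_view(x, order)
            return [tuple(np.argsort(w)) for w in windows]

        def mutual_information(a, b):
            """Plug-in mutual information between two aligned symbol sequences."""
            n = len(a)
            pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
            return sum(c / n * np.log((c / n) / (pa[x] / n * pb[y] / n))
                       for (x, y), c in pab.items())

        def directionality(x, y, order=3, lag=1):
            """I(past of X ; future of Y) minus I(past of Y ; future of X):
            positive values suggest information flow from X to Y."""
            sx, sy = ordinal_symbols(x, order), ordinal_symbols(y, order)
            m = min(len(sx), len(sy)) - lag
            x_to_y = mutual_information(sx[:m], sy[lag:lag + m])
            y_to_x = mutual_information(sy[:m], sx[lag:lag + m])
            return x_to_y - y_to_x

        rng = np.random.default_rng(4)
        x = rng.normal(size=2000)
        y = np.roll(x, 2) + 0.5 * rng.normal(size=2000)   # y is a noisy, delayed copy of x
        print(directionality(x, y))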

  7. Cervical length measurement: comparison of transabdominal and transvaginal approach

    DEFF Research Database (Denmark)

    Westerway, Sue C; Pedersen, Lars Henning; Hyett, Jon

    2015-01-01

    accurately, particularly if the cervix is short. At 24–34 weeks, a policy of proceeding to TV scan if TA measurement is ... plots showed an inverse trend with shorter cervixes. In women with a cervix ...: 92–96%) respectively; the negative LR was 0.96 (95% CI: 0.84-1.08). The maximum area under the ROC curve would be obtained at a TA cut-off = 32 mm (to detect a cervix

  8. Experimental approaches to the measurement of dielectronic recombination

    Energy Technology Data Exchange (ETDEWEB)

    Datz, S.

    1984-01-01

    In dielectronic recombination, the first step involves a continuum electron which excites a previously bound electron and, in so doing, loses just enough energy to be captured in a bound state (nl). This results in a doubly excited ion of a lower charge state which may either autoionize or emit a photon resulting in a stabilized recombination. The complete signature of the event is an ion of reduced charge and an emitted photon. Methods of measuring this event are discussed.

  9. A new approach for measuring human resource accounting

    OpenAIRE

    2014-01-01

    Significance of identifying human resource competency in organizations and the necessity for valuating human resource in accounting persuade many researchers to design a conceptual model for measuring human resource accounting. This study, first, examines dimensions of various valuation models of human resource and then they are compared with Goleman individual and social competency indicators. Next, individual, organizational and social competency indicators are designed through developing G...

  10. Measurement of Creativity: The tripartite approach for creative thinking

    OpenAIRE

    Takahashi, Kiyoshi; Hirikami, Akira

    2014-01-01

    The purpose of this paper is to propose a new measurement method for creativity. Based on the tripartite thinking model (TTM), this paper develops the tripartite creativity test (TCT). The TCT was generated by considering the creative process in problem solving. The TCT defines creativity as the interaction of three modes of thinking: logical thinking, critical thinking, and lateral thinking. This model departs from the traditional definition of creativity, which prescribes it as the skill for produci...

  11. Modelling Crowd Dynamics: a Multiscale, Measure-theoretical Approach

    CERN Document Server

    Evers, Joep

    2011-01-01

    We present a strategy capable of describing basic features of the dynamics of crowds. The behaviour of the crowd is considered from a twofold perspective. We examine both the large scale behaviour of the crowd, and phenomena happening at the individual pedestrian's level. We unify micro and macro in a single model, by working with general mass measures and their transport. We improve existing modelling by coupling a measure-theoretical framework with basic ideas of mixture theory formulated in terms of measures. This strategy allows us to define several constituents of the crowd, each having its own partial velocity. We can thus examine the interaction between subpopulations that have distinct characteristics. We give special features to those pedestrians that are represented by the microscopic (discrete) part. In real life they would play the role of leaders, predators etc. Since we are interested in the global behaviour of the rest of the crowd, we model this part as a continuum. By identifying a suitable c...

  12. Dynamic Effects of Embedded Macro-Fiber Composite Actuators on Ultra-Light Flexible Structures of Repeated Pattern- a Homogenization Approach

    Directory of Open Access Journals (Sweden)

    A. Salehian

    2012-01-01

    Full Text Available Motivated by deployable satellite technology, this article presents a homogenization model of an inflatable, rigidized lattice structure with distributed macro-fiber composite (MFC actuation. The model is based upon a general expression for the strain and kinetic energy of a fundamental repeated element of the structure. These expressions are reduced in order and expressed in terms of the strain and displacement components of an equivalent one-dimensional vibration model. The resulting model is used to analyze changes in the structural natural frequencies introduced by the local effects of the added macro-fiber composite actuators for several configurations. A finite element solution is used as a comparison for the homogenization model, and the two are shown to be in good agreement, although the latter requires significantly less computational effort.

  13. An Integrated Approach for Site Selection of Snow Measurement Stations

    Directory of Open Access Journals (Sweden)

    Bahram Saghafian

    2016-11-01

    Full Text Available Snowmelt provides a reliable water resource for meeting domestic, agricultural, industrial and hydropower demands. Consequently, estimating the available snow water equivalent is essential for water resource management of snowy regions. Due to the spatiotemporal variability of the snowfall pattern in mountainous areas and difficult access to high-altitude areas, snow measurement is one of the most challenging hydro-meteorological data collection efforts. Development of an optimum snow measurement network is a complex task that requires integration of meteorological, hydrological, physiographical and economic studies. In this study, site selection of snow measurement stations is carried out through an integrated process using observed snow course data and analysis of historical snow cover images from the National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer (NOAA-AVHRR) at both regional and local scales. Several important meteorological and hydrological factors, such as monthly and annual rainfall distribution, spatial distribution of average frequency of snow observation (FSO) for the two periods of snow falling and melting season, as well as the priority contribution of sub-basins to annual snowmelt runoff, are considered for selecting the optimum station network. The FSO maps representing accumulation of snowfall during falling months and snowpack persistence during melting months are prepared in the GIS based on NOAA-AVHRR historical snow cover images. Basins are partitioned into 250 m elevation intervals such that within each interval, establishment of new stations or relocation/removal of the existing stations is proposed. The decision is made on the basis of the combination of meteorological, hydrological and satellite information. Economic aspects and road access constraints are also considered in determining the station type. Eventually, for the study area encompassing a number of large basins in the southwest of Iran

  14. Hydrophobicity measurements by HPLC: A new approach to π constants

    Energy Technology Data Exchange (ETDEWEB)

    Gago, F.; Alvarez-Builla, J.; Elguero, J.

    1987-01-01

    A classical HPLC method of measuring log P_o/w has been reevaluated and applied to 107 different mono- and disubstituted benzenes. The contribution of several functional groups has been estimated through multiple regression analysis, obtaining statistically significant mean π*_m values. Deviations of experimentally determined log P* values for a set of ortho-, meta-, and para-disubstituted isomers from the "simple additive" model have been evaluated, and interpretations are suggested.

  15. Exploring Approaches How to Measure a Lean Process

    Directory of Open Access Journals (Sweden)

    Österman Christer

    2014-08-01

    Full Text Available Purpose: The purpose of the research is to explore a practical method of measuring the implementation of lean in a process. The method is based on examining the abilities of a group. At this scale, the ability to work in a standardized way and to solve problems is important. These two abilities are dependent on each other and are fundamental for the group's ability to create a stable result. In this context, standardized work (SW) is defined as the methods used in a process to generate stable results. Problem solving (PS) is defined as the methods used to return a process to a condition where SW is possible.

  16. Measuring healthcare preparedness: an all-hazards approach

    Directory of Open Access Journals (Sweden)

    Marcozzi David E

    2012-10-01

    Full Text Available Abstract In a paper appearing in this issue, Adini et al. describe a struggle familiar to many emergency planners: the challenge of planning for all scenarios. The authors contend that all-hazards, or capabilities-based planning, in which a set of core capabilities applicable to numerous types of events is developed, is a more efficient way to achieve general health care system emergency preparedness than scenario-based planning. Essentially, the core of what is necessary to plan for and respond to one kind of disaster (e.g. a biologic event) is also necessary for planning and responding to other types of disasters, allowing for improvements in planning and maximizing efficiencies. While Adini et al. have advanced the science of health care emergency preparedness through their consideration of 490 measures to assess preparedness, a shorter set of validated preparedness measures would support the dual goals of accountability and improved outcomes and could provide the basis for determining which actions in the name of preparedness really matter.

  17. A novel approach to tribological measurements at harsh conditions

    Science.gov (United States)

    Weltevreden, Esther R.; van der Heide, Emile

    2011-10-01

    When dealing with high-tech equipment, accurate positioning is of the utmost importance to ensure durability and a productive lifetime. Unexpected high friction or wear of positioning mechanisms can lead to unnecessary down-time or products that are not up to specification. To ensure a sufficient lifetime, it is necessary to know beforehand how the sliding and rolling contacts will behave over time. This demand becomes more stringent when the machine operates at extreme conditions, e.g. vacuum or extremely low temperatures. Traditional greases and mineral oil based lubricants do not perform adequately in such extreme environments, as they either contaminate the vacuum or do not provide sufficient film thickness. TNO recently developed a unique measuring application, the TNO cryotribometer, in order to measure friction and wear of position mechanisms at harsh conditions. Preliminary results show that the contact pressure and the sliding velocity influenced the friction level greatly. This set-up is currently used to find and analyze different material combinations, which demonstrate a constant friction level under cryogenic vacuum conditions.

  18. Automated septum thickness measurement--a Kalman filter approach.

    Science.gov (United States)

    Snare, Sten Roar; Mjølstad, Ole Christian; Orderud, Fredrik; Dalen, Håvard; Torp, Hans

    2012-11-01

    Interventricular septum thickness in end-diastole (IVSd) is one of the key parameters in cardiology. This paper presents a fast algorithm, suitable for pocket-sized ultrasound devices, for measurement of IVSd using 2D B-mode parasternal long axis images. The algorithm is based on a deformable model of the septum and the mitral valve. The model shape is estimated using an extended Kalman filter. A feasibility study using 32 unselected recordings is presented. The recordings originate from a database consisting of subjects from a normal healthy population. Five patients with suspected hypertrophy were included in the study. Reference B-mode measurements were made by two cardiologists. A paired t-test revealed a non-significant mean difference, compared to the B-mode reference, of (mean±SD) 0.14±1.36 mm (p=0.532). Pearson's correlation coefficient was 0.79 (p<0.001). The results are comparable to the variability between the two cardiologists, which was found to be 1.29±1.23 mm (p<0.001). The results indicate that the method has potential as a tool for rapid assessment of IVSd. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  19. How to measure soundscapes. A theoretical and practical approach

    Science.gov (United States)

    Schulte-Fortkamp, Brigitte

    2002-11-01

    Noise sources interact with the specific acoustic and environmental makeup, topography, meteorology, land use pattern, and lifestyle. The evaluation of soundscapes needs subject-related methodological procedures. Suitable measurements must allow us to rely on different dimensions of the reaction to noise. Improving the soundscape of an urban environment requires accounting for the qualitative appreciation as a cognitive judgment given by listeners and, particularly, for the interaction between acoustic dimensions and other sensory modalities in qualitative judgments of an urban environment (Maffiolo). The structure of the residential area, that is, the combination of noise sources, is important for the judgment of a soundscape, and subjective parameters that are relevant from people's point of view are important as well. Moreover, the relationship between the two defines the background for assessments. Studies are needed on the subject and its capability in perception and interpretation: studies on the subject inside the society, studies on the social and cultural context, and field studies including physical measurements. Soundscapes may be defined in terms of their effects on man and vice versa, and acoustical ecology will probably serve to understand the function of soundscapes.

  20. Pictorial approaches for measuring time use in rural Ethiopia.

    Science.gov (United States)

    Masuda, Yuta J; Fortmann, Lea; Gugerty, Mary Kay; Smith-Nilson, Marla; Cook, Joseph

    2014-01-01

    Time use researchers working in least developed countries (LDCs) face difficulties collecting data from illiterate populations who may conceptualize time differently than those in industrialized countries. We identify existing gaps in time use data collection methods and discuss two novel, pictorial methods to collect time use data from these populations. The first method is a modified recall interview modeled on participatory rural appraisal (PRA) methods that asks respondents to place macaroni on pictures of activity categories in proportion to the amount of time spent on that activity during the previous day. The second is a simplified pictorial time diary that uses a timer and sequentially-numbered stickers to re-create the temporal order of activities in 30-minute increments. The latter method also avoids recall bias problems. We present time use data collected in 2009 using these methods in a study examining the impacts of water infrastructure on women and children's time use in rural Ethiopia. In total, we collected information using the first method from 263 household members over age 10, including 167 water collectors, and pilot-tested the pictorial diary approach with 10 adult respondents.

  1. Measuring the dimensions of adaptive capacity: a psychometric approach

    Directory of Open Access Journals (Sweden)

    Michael Lockwood

    2015-03-01

    Full Text Available Although previous studies have examined adaptive capacity using a range of self-assessment procedures, no objective self-report approaches have been used to identify the dimensions of adaptive capacity and their relative importance. We examine the content, structure, and relative importance of dimensions of adaptive capacity as perceived by rural landholders in an agricultural landscape in South-Eastern Australia. Our findings indicate that the most important dimensions influencing perceived landholder adaptive capacity are related to their management style, particularly their change orientation. Other important dimensions are individual financial capacity, labor availability, and the capacity of communities and local networks to support landholders' management practices. Trust and confidence in government with respect to native vegetation management was not found to be a significant dimension of perceived adaptive capacity. The scale items presented, particularly those with high factor loadings, provide a solid foundation for assessment of adaptive capacity in other study areas, as well as exploration of relationships between the individual dimensions of adaptive capacity and dependent variables such as perceived resilience. Further work is needed to refine the scale items and compare the findings from this case study with those from other contexts and population samples.

  2. Measuring market share of petrol stations using conditional probability approach

    Science.gov (United States)

    Sharif, Shamshuritawati; Lwee, Xue Yin

    2017-05-01

    Oil and gas production has been a pillar of Malaysia's growth over past decades and is one of the most strategic economic branches in the world. Although the oil industry is essential for the economic growth of a country, only a few undertakings manage to become established; it is a very risky business. Therefore, a dealer must have some information in hand before setting up a new business plan, and understanding the current business situation is an important strategy to avoid risky ventures. In this study, the aim is to deliver a very simple but essential way to identify market share based on customers' choice factors. This approach is presented to encourage non-statisticians to use it easily in helping their business performance. The study shows that the most important factors differ from one station to another: the factors of customer choice for BHPetrol, Caltex, PETRON, PETRONAS and SHELL are site location, service quality, service quality, size of the petrol station, and brand image, respectively.
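
    A minimal sketch of the conditional-probability reading of market share described here, i.e. P(brand | choice factor) computed from survey counts; the tallies below are invented for illustration, not the study's data:

        import numpy as np

        # Rows: customer's dominant choice factor; columns: station brand chosen.
        factors = ["site location", "service quality", "station size", "brand image"]
        brands = ["BHPetrol", "Caltex", "PETRON", "PETRONAS", "SHELL"]
        counts = np.array([[40, 22, 18, 35, 30],
                           [15, 30, 33, 28, 25],
                           [10, 12, 14, 30, 12],
                           [ 8, 10, 11, 25, 33]], dtype=float)

        share_given_factor = counts / counts.sum(axis=1, keepdims=True)   # P(brand | factor)
        overall_share = counts.sum(axis=0) / counts.sum()                 # P(brand)

        for f, row in zip(factors, share_given_factor):
            best = brands[int(np.argmax(row))]
            print(f"{f:16s} -> leading brand: {best} ({row.max():.2f})")
        print("overall market share:", dict(zip(brands, np.round(overall_share, 2))))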

  3. Using Rasch Measurement Theory to Examine Two Instructional Approaches for Teaching and Learning of French Grammar

    Science.gov (United States)

    Vogel, Severine P.; Engelhard, George, Jr.

    2011-01-01

    The authors describe a quantitative approach based on Rasch measurement theory for evaluating classroom assessments within the context of foreign language classes. A secondary purpose was to examine the effects of two instructional approaches to teach grammar, a guided inductive and a deductive approach, through the lens of Rasch measurement…

  4. A multidimensional approach to measure poverty in rural Bangladesh.

    Science.gov (United States)

    Bhuiya, Abbas; Mahmood, Shehrin Shaila; Rana, A K M Masud; Wahed, Tania; Ahmed, Syed Masud; Chowdhury, A Mushtaque R

    2007-06-01

    Poverty is increasingly being understood as a multidimensional phenomenon. Other than income-consumption, which has been extensively studied in the past, health, education, shelter, and social involvement are among the most important dimensions of poverty. The present study attempts to develop a simple tool to measure poverty in its multidimensionality where it views poverty as an inadequate fulfillment of basic needs, such as food, clothing, shelter, health, education, and social involvement. The scale score ranges between 72 and 24 and is constructed in such a way that the score increases with increasing level of poverty. Using various techniques, the study evaluates the poverty-measurement tool and provides evidence for its reliability and validity by administering it in various areas of rural Bangladesh. The reliability coefficients, such as test-retest coefficient (0.85) and Cronbach's alpha (0.80) of the tool, were satisfactorily high. Based on the socioeconomic status defined by the participatory rural appraisal (PRA) exercise, the level of poverty identified by the scale was 33% in Chakaria, 26% in Matlab, and 32% in other rural areas of the country. The validity of these results was tested against some traditional methods of identifying the poor, and the association of the scores with that of the traditional indicators, such as ownership of land and occupation, asset index (r=0.72), and the wealth ranking obtained from the PRA exercise, was consistent. A statistically significant inverse relationship of the poverty scores with the socioeconomic status was observed in all cases. The scale also allowed the absolute level of poverty to be measured, and in the present study, the highest percentage of absolute poor was found in terms of health (44.2% in Chakaria, 36.4% in Matlab, and 39.1% in other rural areas), followed by social exclusion (35.7% in Chakaria, 28.5% in Matlab, and 22.3% in other rural areas), clothing (6.2% in Chakaria, 8.3% in Matlab, and 20

  5. Aberration measurement from specific photolithographic images: a different approach.

    Science.gov (United States)

    Nomura, H; Tawarayama, K; Kohno, T

    2000-03-01

    Techniques for measurement of higher-order aberrations of a projection optical system in photolithographic exposure tools have been established. Even-type and odd-type aberrations are independently obtained from printed grating patterns on a wafer by three-beam interference under highly coherent illumination. Even-type aberrations, i.e., spherical aberration and astigmatism, are derived from the best focus positions of vertical, horizontal, and oblique grating patterns by an optical microscope. Odd-type aberrations, i.e., coma and three-foil, are obtained by detection of relative shifts of a fine grating pattern to a large pattern by an overlay inspection tool. Quantitative diagnosis of lens aberrations with a krypton fluoride (KrF) excimer laser scanner is demonstrated.

  6. THEORETICAL APPROACHES REGARDING THE ECONOMIC AND FINANCIAL PERFORMANCE MEASUREMENT

    Directory of Open Access Journals (Sweden)

    CARUNTU ANDREEA LAURA

    2012-12-01

    Full Text Available The need for performance evaluation arises from the importance of assessing the use of available resources, identifying the potential of economic resources for future use, and the capacity to generate cash flows. Practice has produced a number of tools that provide the information needed to assess the state of performance, obviously differentiated from one country to another according to the particularities of each economic system, and useful to a large number of people involved in the life of a trader. This article presents some characteristics of performance and discusses options for measuring global performance using the balanced scorecard, Triple Bottom Line reporting and the GRI G3 guidelines.

  7. A measure theoretical approach to quantum stochastic processes

    Energy Technology Data Exchange (ETDEWEB)

    Waldenfels, Wilhelm von

    2014-04-01

    Authored by a leading researcher in the field. Self-contained presentation of the subject matter. Examines a number of worked examples in detail. This monograph takes as starting point that abstract quantum stochastic processes can be understood as a quantum field theory in one space and in one time coordinate. As a result it is appropriate to represent operators as power series of creation and annihilation operators in normal-ordered form, which can be achieved using classical measure theory. Considering in detail four basic examples (e.g. a two-level atom coupled to a heat bath of oscillators), in each case the Hamiltonian of the associated one-parameter strongly continuous group is determined and the spectral decomposition is explicitly calculated in the form of generalized eigen-vectors. Advanced topics include the theory of the Hudson-Parthasarathy equation and the amplified oscillator problem. To that end, a chapter on white noise calculus has also been included.

  8. Measurement of the Specific Heat Using a Gravity Cancellation Approach

    Science.gov (United States)

    Zhong, Fang

    2003-01-01

    The specific heat at constant volume C(sub V) of a simple fluid diverges near its liquid-vapor critical point. However, gravity-induced density stratification due to the divergence of isothermal susceptibility hinders the direct comparison of the experimental data with the predictions of renormalization group theory. In the past, a microgravity environment has been considered essential to eliminate the density stratification. We propose to perform specific heat measurements of He-3 on the ground using a method to cancel the density stratification. A He-3 fluid layer will be heated from below, using the thermal expansion of the fluid to cancel the hydrostatic compression. A 6% density stratification at a reduced temperature of 10(exp -5) can be cancelled to better than 0.1% with a steady 1.7 microK temperature difference across a 0.05 cm thick fluid layer. A conventional AC calorimetry technique will be used to determine the heat capacity. The minimized bulk density stratification with a relaxation time 6500 sec at a reduced temperature of 10(exp -5) will stay unchanged during 1 Hz AC heating. The smear of the specific heat divergence due to the temperature difference across the cell is about 0.1% at a reduced temperature of 10(exp -6). The combination of using High Resolution Thermometry with a 0.5 nK temperature resolution in the AC technique and the cancellation of the density stratification will enable C(sub V) to be measured down to a reduced temperature of 10(exp -6) with less than a 1% systematic error.

  9. Measuring efficiency of international crude oil markets: A multifractality approach

    Science.gov (United States)

    Niere, H. M.

    2015-01-01

    The three major international crude oil markets are treated as complex systems and their multifractal properties are explored. The study covers daily prices of Brent crude, OPEC reference basket and West Texas Intermediate (WTI) crude from January 2, 2003 to January 2, 2014. A multifractal detrended fluctuation analysis (MFDFA) is employed to extract the generalized Hurst exponents in each of the time series. The generalized Hurst exponent is used to measure the degree of multifractality which in turn is used to quantify the efficiency of the three international crude oil markets. To identify whether the source of multifractality is long-range correlations or broad fat-tail distributions, shuffled data and surrogated data corresponding to each of the time series are generated. Shuffled data are obtained by randomizing the order of the price returns data. This will destroy any long-range correlation of the time series. Surrogated data is produced using the Fourier-Detrended Fluctuation Analysis (F-DFA). This is done by randomizing the phases of the price returns data in Fourier space. This will normalize the distribution of the time series. The study found that for the three crude oil markets, there is a strong dependence of the generalized Hurst exponents with respect to the order of fluctuations. This shows that the daily price time series of the markets under study have signs of multifractality. Using the degree of multifractality as a measure of efficiency, the results show that WTI is the most efficient while OPEC is the least efficient market. This implies that OPEC has the highest likelihood to be manipulated among the three markets. This reflects the fact that Brent and WTI is a very competitive market hence, it has a higher level of complexity compared against OPEC, which has a large monopoly power. Comparing with shuffled data and surrogated data, the findings suggest that for all the three crude oil markets, the multifractality is mainly due to long
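
    A compact sketch of the MFDFA computation of generalized Hurst exponents h(q) used to gauge efficiency in this record; the q values, scales and simulated returns are illustrative assumptions:

        import numpy as np

        def mfdfa_hurst(returns, q_values=(-4, -2, 2, 4), scales=(16, 32, 64, 128), order=1):
            """Generalized Hurst exponents h(q) by multifractal detrended fluctuation analysis.
            A wide spread of h(q) across q indicates stronger multifractality (lower efficiency)."""
            profile = np.cumsum(returns - np.mean(returns))
            hurst = {}
            for q in q_values:
                log_f, log_s = [], []
                for s in scales:
                    n_seg = len(profile) // s
                    fluct = []
                    for v in range(n_seg):
                        seg = profile[v * s:(v + 1) * s]
                        t = np.arange(s)
                        trend = np.polyval(np.polyfit(t, seg, order), t)
                        fluct.append(np.mean((seg - trend) ** 2))
                    fq = np.mean(np.asarray(fluct) ** (q / 2.0)) ** (1.0 / q)
                    log_f.append(np.log(fq))
                    log_s.append(np.log(s))
                hurst[q] = np.polyfit(log_s, log_f, 1)[0]
            return hurst

        rng = np.random.default_rng(5)
        h = mfdfa_hurst(rng.normal(size=4096))          # white noise: h(q) close to 0.5 for all q
        print({q: round(v, 3) for q, v in h.items()})   # degree of multifractality ~ max h - min h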

  10. An isotopic approach to measuring nitrogen balance in caribou

    Science.gov (United States)

    Gustine, D.D.; Barboza, P.S.; Adams, L.G.; Farnell, R.G.; Parker, K.L.

    2011-01-01

    Nutritional restrictions in winter may reduce the availability of protein for reproduction and survival in northern ungulates. We refined a technique that uses recently voided excreta on snow to assess protein status in wild caribou (Rangifer tarandus) in late winter. Our study was the first application of this non-invasive, isotopic approach to assess protein status of wild caribou by determining dietary and endogenous contributions of nitrogen (N) to urinary urea. We used isotopic ratios of N (δ15N) in urine and fecal samples to estimate the proportion of urea N derived from body N (p-UN) in pregnant, adult females of the Chisana Herd, a small population that ranged across the Alaska-Yukon border. We took advantage of a predator-exclosure project to examine N status of penned caribou in April 2006. Lichens were the primary forage (>40%) consumed by caribou in the pen and δ15N of fiber tracked the major forages in their diets. The δ15N of urinary urea for females in the pen was depleted relative (-1.3 ± 1.0 parts per thousand [‰], x̄ ± SD) to the δ15N of body N (2.7 ± 0.7‰). A similar proportion of animals in the exclosure lost core body mass (excluding estimates of fetal and uterine tissues; 55%) and body protein (estimated by isotope ratios; 54%). This non-invasive technique could be applied at various spatial and temporal scales to assess trends in protein status of free-ranging populations of northern ungulates. Intra- and inter-annual estimates of protein status could help managers monitor effects of foraging conditions on nutritional constraints in ungulates, increase the efficiency and efficacy of management actions, and help prepare stakeholders for potential changes in population trends. © 2010 The Wildlife Society.

  11. Ozone Measurements Monitoring Using Data-Based Approach

    KAUST Repository

    Harrou, Fouzi

    2016-02-01

    The complexity of ozone (O3) formation mechanisms in the troposphere makes the fast and accurate modeling of ozone very challenging. In the absence of a process model, principal component analysis (PCA) has been extensively used as a data-based monitoring technique for highly correlated process variables; however, conventional PCA-based detection indices often fail to detect small or moderate anomalies. In this work, we propose an innovative method for detecting small anomalies in highly correlated multivariate data. The developed method combines the multivariate exponentially weighted moving average (MEWMA) monitoring scheme with PCA modeling in order to enhance anomaly detection performance. Such a choice is mainly motivated by the greater ability of the MEWMA monitoring scheme to detect small changes in the process mean. The proposed PCA-based MEWMA monitoring scheme is successfully applied to ozone measurement data collected from the Upper Normandy region of France via the network of air quality monitoring stations. The detection results of the proposed method are compared to those reported by the Air Normand air monitoring association.
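
    As a hedged illustration (not the authors' implementation), the Python sketch below combines a PCA model fitted on anomaly-free training data with a MEWMA recursion on the PCA residuals; the smoothing parameter, the number of retained components, and the simulated data are assumptions.

        import numpy as np

        def pca_mewma_t2(train, test, n_components=2, lam=0.2):
            # Standardise with statistics of the anomaly-free training data.
            mu, sd = train.mean(0), train.std(0)
            zt = (train - mu) / sd
            # Retained principal directions of the training covariance.
            _, vecs = np.linalg.eigh(np.cov(zt, rowvar=False))
            P = vecs[:, ::-1][:, :n_components]
            resid = lambda z: z - z @ P @ P.T                 # residual (anomaly) subspace
            sig_e = np.cov(resid(zt), rowvar=False) + 1e-8 * np.eye(train.shape[1])
            sig_inv = np.linalg.inv(lam / (2.0 - lam) * sig_e)
            t2, z_prev = [], np.zeros(train.shape[1])
            for x in (test - mu) / sd:
                # MEWMA recursion: a small lam accumulates evidence of small mean shifts.
                z_prev = lam * resid(x) + (1.0 - lam) * z_prev
                t2.append(float(z_prev @ sig_inv @ z_prev))
            return np.array(t2)

        rng = np.random.default_rng(1)
        train = rng.normal(size=(500, 4))
        test = rng.normal(size=(100, 4))
        test[50:] += 0.5                                      # small simulated shift
        print(pca_mewma_t2(train, test)[45:55].round(1))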

  12. A preliminary approach to identify irradiated foods by thermoluminescence measurements

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Choonshik, E-mail: maggic7@korea.kr [Gyeong-In Regional Korea Food and Drug Administration, Juan-Dong 120, Nam-Gu, Incheon 402-835 (Korea, Republic of); Department of Bioscience and Biotechnology, Konkuk University, Hwayang-Dong 1, Gwangjin-Gu, Seoul 143-701 (Korea, Republic of); Kim, Hyoung-Ook [Gyeong-In Regional Korea Food and Drug Administration, Juan-Dong 120, Nam-Gu, Incheon 402-835 (Korea, Republic of); Lim, Yoongho [Department of Bioscience and Biotechnology, Konkuk University, Hwayang-Dong 1, Gwangjin-Gu, Seoul 143-701 (Korea, Republic of)

    2012-07-15

    Thermoluminescence (TL) is one of the physical methods for the identification of irradiated foods and is currently the most widely used for this purpose. However, in order to use this method, silicate minerals must be isolated from the food samples, and this isolation process is time consuming and laborious. In this work, we have investigated the applicability of the TL method using iron-containing minerals instead of silicate minerals. In the TL analyses of dried spices, TL glow curves of iron-containing minerals showed maximum temperatures between 150 and 250 °C, the same as those of silicate minerals. The mineral separation process of the proposed method is simple, fast, easy, and reliable. Moreover, the analysis results, including the TL ratio, did not show significant differences compared with the silicate minerals method. As a result, TL measurements using iron-containing minerals could be an excellent method for the identification of irradiated foods, including dried spices. - Highlights: A thermoluminescence method using iron-containing minerals is proposed. The current method using silicate minerals is time consuming and laborious. The proposed method is simple, fast, easy, and reliable. Analysis results are similar to those of the silicate minerals method.

  13. Non-Invasive Ocular Rigidity Measurement: A Differential Tonometry Approach

    Directory of Open Access Journals (Sweden)

    Efstathios T. Detorakis

    2015-12-01

    Full Text Available Purpose: Taking into account the fact that Goldmann applanation tonometry (GAT) geometrically deforms the corneal apex and displaces volume from the anterior segment whereas Dynamic Contour Tonometry (DCT) does not, we aimed at developing an algorithm for the calculation of ocular rigidity (OR) based on the differences in pressure and volume between the deformed and non-deformed states, according to the general Friedenwald principle of differential tonometry. Methods: To avoid deviations of GAT IOP from true IOP in eyes with corneas different from the “calibration cornea”, we applied the previously described Orssengo-Pye algorithm to calculate an error coefficient “C/B”. To test the feasibility of the proposed model, we calculated the OR coefficient (r) in 17 cataract surgery candidates (9 males and 8 females). Results: The calculated r according to our model (mean ± SD, range) was 0.0174 ± 0.010 (0.0123–0.022) mmHg/μL. A negative, statistically significant correlation between axial length and r was detected, whereas correlations between r and the other biometric parameters examined were not statistically significant. Conclusions: The proposed method may prove a valid non-invasive tool for the measurement of OR, which could help introduce OR into decision-making in routine clinical practice.
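
    For orientation only, the classical Friedenwald relation that the differential-tonometry idea builds on can be written as a one-line function; the sketch below omits the Orssengo-Pye corneal correction used in the paper, and the pressure and volume figures are hypothetical.

        import math

        def friedenwald_rigidity(p_undeformed, p_deformed, delta_volume_uL):
            # Classical Friedenwald form: r = (log10 P2 - log10 P1) / dV, with P1 the IOP of
            # the non-deformed eye (e.g. DCT), P2 the IOP under applanation (e.g. GAT),
            # and dV the volume displaced by the tonometer.
            return (math.log10(p_deformed) - math.log10(p_undeformed)) / delta_volume_uL

        # hypothetical readings: DCT 16 mmHg, GAT 17 mmHg, about 3 uL displaced by applanation
        print(round(friedenwald_rigidity(16.0, 17.0, 3.0), 4))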

  14. A conditional likelihood approach for regression analysis using biomarkers measured with batch-specific error.

    Science.gov (United States)

    Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi

    2012-12-20

    Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when the batch effect is additive and the predominant source of error; the approach requires no assumptions on the distribution of the measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method using data from a colorectal adenoma study.
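
    For the linear-regression case, the abstract notes that including batch as a categorical covariate reproduces the conditional-likelihood estimates; the numpy sketch below illustrates that batch-adjusted fit (an illustration under assumed data, not the paper's code).

        import numpy as np

        def slope_adjusted_for_batch(y, x, batch):
            # One indicator column per batch (no global intercept, avoiding collinearity)
            # plus the error-prone biomarker x; ordinary least squares.
            levels = np.unique(batch)
            dummies = (np.asarray(batch)[:, None] == levels[None, :]).astype(float)
            design = np.column_stack([np.asarray(x, float), dummies])
            coefs, *_ = np.linalg.lstsq(design, np.asarray(y, float), rcond=None)
            return coefs[0]                      # coefficient of x, adjusted for batch effects

        rng = np.random.default_rng(2)
        batch = np.repeat(np.arange(10), 20)                 # 10 batches of 20 samples
        true_x = rng.normal(size=200)
        x_obs = true_x + rng.normal(size=10)[batch]          # additive batch-specific error
        y = 1.5 * true_x + rng.normal(scale=0.5, size=200)
        print(round(float(slope_adjusted_for_batch(y, x_obs, batch)), 2))   # close to 1.5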

  15. Measuring the quality of MDT working: an observational approach

    Directory of Open Access Journals (Sweden)

    Taylor Cath

    2012-05-01

    Full Text Available Abstract Background Cancer multidisciplinary teams (MDTs) are established in many countries but little is known about how well they function. A core activity is regular MDT meetings (MDMs) where treatment recommendations are agreed. A mixed methods descriptive study was conducted to develop and test quality criteria for observational assessment of MDM performance, calibrated against consensus from over 2000 MDT members about the “characteristics of an effective MDT”. Methods Eighteen of the 86 ‘Characteristics of Effective MDTs’ were considered relevant and feasible to observe. These were collated into 15 aspects of MDT working covering four domains: the team (e.g. attendance, chairing, teamworking); infrastructure for meetings (venue, equipment); meeting organisation and logistics; and patient-centred clinical decision-making (patient-centredness, clarity of recommendations). Criteria for rating each characteristic from ‘very poor’ to ‘very good’ were derived from literature review, observing MDMs and expert input. Criteria were applied to 10 bowel cancer MDTs to assess acceptability and measure variation between and within teams. Feasibility and inter-rater reliability were assessed by comparing three observers. Results Observational assessment was acceptable to teams and feasible to implement. Total scores from 29 to 50 (out of 58) highlighted wide diversity in quality between teams. Eight teams were rated either ‘very good/good’ or ‘very poor/poor’ for at least three domains, demonstrating some internal consistency. ‘Very good’ ratings were most likely for attendance and administrative preparation, and least likely for patient-centredness of decision-making and prioritisation of complex cases. All except two characteristics had intra-class correlations of ≥0.50. Conclusions This observational tool (MDT-OARS) may contribute to the assessment of MDT performance. Further testing to confirm validity and reliability is required.

  16. Measuring progressive independence with the resident supervision index: empirical approach.

    Science.gov (United States)

    Kashner, T Michael; Byrne, John M; Chang, Barbara K; Henley, Steven S; Golden, Richard M; Aron, David C; Cannon, Grant W; Gilman, Stuart C; Holland, Gloria J; Kaminetzky, Catherine P; Keitz, Sheri A; Muchmore, Elaine A; Kashner, Tetyana K; Wicker, Annie B

    2010-03-01

    A Resident Supervision Index (RSI) developed by our research team quantifies the intensity of resident supervision in graduate medical education, with the goal of testing for progressive independence. The 4-part RSI method includes a survey instrument for staff and residents (RSI Inventory), a strategy to score survey responses, a theoretical framework (patient centered optimal supervision), and a statistical model that accounts for the presence or absence of supervision and the intensity of patient care. The RSI Inventory data came from 140 outpatient encounters involving 57 residents and 37 attending physicians during a 3-month period at a Department of Veterans Affairs outpatient clinic. Responses are scored to quantitatively measure the intensity of resident supervision across 10 levels of patient services (staff is absent, is present, participated, or provided care with or without a resident), case discussion (resident-staff interaction), and oversight (staff reviewed case, reviewed medical chart, consulted with staff, or assessed patient). Scores are analyzed by level and for patient care using a 2-part model (supervision initiated [yes or no] versus intensity once supervision was initiated). All resident encounters had patient care supervision, resident oversight, or both. Consistent with the progressive independence hypothesis, residents were 1.72 (P  =  .019) times more likely to be fully responsible for patient care with each additional postgraduate year. Decreasing case complexity, increasing clinic workload, and advanced nonmedical degrees among attending staff were negatively associated with supervision intensity, although associations varied by supervision level. These data are consistent with the progressive independence hypothesis in graduate medical education and offer empirical support for the 4-part RSI method to quantify the intensity of resident supervision for research, program evaluation, and resident assessment purposes. Before
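
    The first part of the two-part model (whether the resident was fully responsible for the encounter) can be illustrated with a logistic regression on postgraduate year, reporting the odds ratio per additional year; the sketch below uses simulated encounters, not the RSI Inventory data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        pgy = rng.integers(1, 4, size=140).astype(float)      # postgraduate year 1-3
        logit = -1.0 + 0.54 * pgy                             # 0.54 is roughly log(1.72) per year
        fully_responsible = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        X = sm.add_constant(pgy)
        fit = sm.Logit(fully_responsible, X).fit(disp=0)
        print("odds ratio per postgraduate year:", round(float(np.exp(fit.params[1])), 2))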

  17. Prospective evaluation of potential toxicity of repeated doses of Thymus vulgaris L. extracts in rats by means of clinical chemistry, histopathology and NMR-based metabonomic approach.

    Science.gov (United States)

    Benourad, Fouzia; Kahvecioglu, Zehra; Youcef-Benkada, Mokhtar; Colet, Jean-Marie

    2014-10-01

    In the field of natural extracts, research generally focuses on the study of their biological activities for food, cosmetic, or pharmacological purposes. The evaluation of their adverse effects is often overlooked. In this study, the extracts of Thymus vulgaris L. were obtained by two different extraction methods. Intraperitoneal injections of both extracts were given daily for four days to male Wistar Han rats, at two different doses for each extract. The evaluation of the potential toxic effects included histopathological examination of liver, kidney, and lung tissues, as well as serum biochemistry of liver and kidney parameters, and (1)H-NMR-based metabonomic profiles of urine. The results showed that no histopathological changes were observed in the liver and kidney in rats treated with both extracts of thyme. Serum biochemical investigations revealed significant increases in blood urea nitrogen, creatinine, and uric acid in animals treated with polyphenolic extract at both doses. In these latter groups, metabonomic analysis revealed alterations in a number of urine metabolites involved in the energy metabolism in liver mitochondria. Indeed, the results showed alterations of glycolysis, Krebs cycle, and β-oxidative pathways as evidenced by increases in lactate and ketone bodies, and decreases in citrate, α-ketoglutarate, creatinine, hippurate, dimethylglycine, and dimethyalanine. In conclusion, this work showed that i.p. injection of repeated doses of thyme extracts causes some disturbances of intermediary metabolism in rats. The metabonomic study revealed interesting data which could be further used to determine the cellular pathways affected by such treatments.

  18. Discrepancies in reporting the CAG repeat lengths for Huntington's disease

    DEFF Research Database (Denmark)

    Quarrell, Oliver W; Handley, Olivia; O'Donovan, Kirsty

    2011-01-01

    Huntington's disease results from a CAG repeat expansion within the Huntingtin gene; this is measured routinely in diagnostic laboratories. The European Huntington's Disease Network REGISTRY project centrally measures CAG repeat lengths on fresh samples; these were compared with the original...

  19. Validation of 'variable number of tandem repeat'-based approach for examination of 'Candidatus Liberibacter asiaticus' diversity and its applications for the analysis of the pathogen populations in the areas of recent introduction.

    Science.gov (United States)

    Matos, Luis A; Hilf, Mark E; Chen, Jianchi; Folimonova, Svetlana Y

    2013-01-01

    Citrus greening (Huanglongbing, HLB) is one of the most destructive diseases of citrus worldwide. In South Asia HLB has been known for more than a century, while in the Americas the disease was found relatively recently. HLB is associated with three species of 'Candidatus Liberibacter', among which 'Ca. Liberibacter asiaticus' (CLas) has the widest distribution. Recently, a number of studies identified different regions in the CLas genome with variable numbers of tandem repeats (VNTRs) that could be used for examination of CLas diversity. One of the objectives of the work presented here was to further validate the VNTR analysis-based approach by assessing the stability of these repeats upon multiplication of the pathogen in a host over an extended period of time and upon its passaging from host to host, using CLas populations from Florida. Our results showed that the numbers of tandem repeats in the four loci tested display very distinguishable "signature profiles" for the two Florida-type CLas haplotype groups. Remarkably, the profiles do not change upon passage of the pathogen in citrus and psyllid hosts, or after its presence within a host over a period of five years, suggesting that the VNTR analysis-based approach represents a valid methodology for examination of the pathogen populations in various geographical regions. Interestingly, an extended analysis of CLas populations in different locations throughout Florida, in several countries in the Caribbean and Central America regions, and in Mexico, where the pathogen has been introduced recently, demonstrated the dispersion of the same haplotypes of CLas. On the other hand, these CLas populations appeared to differ significantly from those obtained from locations where the disease has been present for a much longer time.
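
    A VNTR "signature profile" is simply the vector of repeat copy numbers across the typed loci; the toy Python sketch below counts the longest uninterrupted run of a repeat unit in a sequence. The locus names, units, and sequences are hypothetical, not the actual CLas loci.

        import re

        def tandem_repeat_count(sequence, unit):
            # Longest uninterrupted run of `unit`, expressed as a copy number.
            runs = re.finditer('(?:%s)+' % re.escape(unit), sequence.upper())
            return max((len(m.group(0)) // len(unit) for m in runs), default=0)

        # hypothetical loci, repeat units and sequences
        loci = {'locusA': 'AT', 'locusB': 'GTT'}
        sample = {'locusA': 'GGATATATATATCC', 'locusB': 'AAGTTGTTGTTGTTCC'}
        profile = {name: tandem_repeat_count(sample[name], unit) for name, unit in loci.items()}
        print(profile)                     # {'locusA': 5, 'locusB': 4}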

  20. Validation of 'variable number of tandem repeat'-based approach for examination of 'Candidatus Liberibacter asiaticus' diversity and its applications for the analysis of the pathogen populations in the areas of recent introduction.

    Directory of Open Access Journals (Sweden)

    Luis A Matos

    Full Text Available Citrus greening (Huanglongbing, HLB) is one of the most destructive diseases of citrus worldwide. In South Asia HLB has been known for more than a century, while in the Americas the disease was found relatively recently. HLB is associated with three species of 'Candidatus Liberibacter', among which 'Ca. Liberibacter asiaticus' (CLas) has the widest distribution. Recently, a number of studies identified different regions in the CLas genome with variable numbers of tandem repeats (VNTRs) that could be used for examination of CLas diversity. One of the objectives of the work presented here was to further validate the VNTR analysis-based approach by assessing the stability of these repeats upon multiplication of the pathogen in a host over an extended period of time and upon its passaging from host to host, using CLas populations from Florida. Our results showed that the numbers of tandem repeats in the four loci tested display very distinguishable "signature profiles" for the two Florida-type CLas haplotype groups. Remarkably, the profiles do not change upon passage of the pathogen in citrus and psyllid hosts, or after its presence within a host over a period of five years, suggesting that the VNTR analysis-based approach represents a valid methodology for examination of the pathogen populations in various geographical regions. Interestingly, an extended analysis of CLas populations in different locations throughout Florida, in several countries in the Caribbean and Central America regions, and in Mexico, where the pathogen has been introduced recently, demonstrated the dispersion of the same haplotypes of CLas. On the other hand, these CLas populations appeared to differ significantly from those obtained from locations where the disease has been present for a much longer time.

  1. How will we know patients are safer? An organization-wide approach to measuring and improving safety.

    Science.gov (United States)

    Pronovost, Peter; Holzmueller, Christine G; Needham, Dale M; Sexton, J Bryan; Miller, Marlene; Berenholtz, Sean; Wu, Albert W; Perl, Trish M; Davis, Richard; Baker, David; Winner, Laura; Morlock, Laura

    2006-07-01

    Our institution, like many, is struggling to develop measures that answer the question, How do we know we are safer? Our objectives are to present a framework to evaluate performance in patient safety and describe how we applied this model in intensive care units. We focus on measures of safety rather than broader measures of quality. The measures will allow health care organizations to evaluate whether they are safer now than in the past by answering the following questions: How often do we harm patients? How often do patients receive the appropriate interventions? How do we know we learned from defects? How well have we created a culture of safety? The first two measures are rate based, whereas the latter two are qualitative. To improve care within institutions, caregivers must be engaged, must participate in the selection and development of measures, and must receive feedback regarding their performance. The following attributes should be considered when evaluating potential safety measures: Measures must be important to the organization, must be valid (represent what they intend to measure), must be reliable (produce similar results when used repeatedly), must be feasible (affordable to collect data), must be usable for the people expected to employ the data to improve safety, and must have universal applicability within the entire institution. Health care institutions. Health care currently lacks a robust safety score card. We developed four aggregate measures of patient safety and present how we applied them to intensive care units in an academic medical center. The same measures are being applied to nearly 200 intensive care units as part of ongoing collaborative projects. The measures include how often do we harm patients, how often do we do what we should (i.e., use evidence-based medicine), how do we know we learned from mistakes, and how well do we improve culture. Measures collected by different departments can then be aggregated to provide a hospital

  2. Within-Subject Associations between Mood Dimensions and Non-exercise Activity: An Ambulatory Assessment Approach Using Repeated Real-Time and Objective Data

    OpenAIRE

    2016-01-01

    A physically active lifestyle has been related to positive health outcomes and high life expectancy, but the underlying psychological mechanisms maintaining physical activity are rarely investigated. Tremendous technological progress yielding sophisticated methodological approaches, i.e., ambulatory assessment, has recently enabled the study of these mechanisms in everyday life. In practice, accelerometers allow physical activity to be monitored continuously and objectively. The combination with ...

  3. Mature clustered, regularly interspaced, short palindromic repeats RNA (crRNA) length is measured by a ruler mechanism anchored at the precursor processing site.

    Science.gov (United States)

    Hatoum-Aslan, Asma; Maniv, Inbal; Marraffini, Luciano A

    2011-12-27

    Precise RNA processing is fundamental to all small RNA-mediated interference pathways. In prokaryotes, clustered, regularly interspaced, short palindromic repeats (CRISPR) loci encode small CRISPR RNAs (crRNAs) that protect against invasive genetic elements by antisense targeting. CRISPR loci are transcribed as a long precursor that is cleaved within repeat sequences by CRISPR-associated (Cas) proteins. In many organisms, this primary processing generates crRNA intermediates that are subject to additional nucleolytic trimming to render mature crRNAs of specific lengths. The molecular mechanisms underlying this maturation event remain poorly understood. Here, we defined the genetic requirements for crRNA primary processing and maturation in Staphylococcus epidermidis. We show that changes in the position of the primary processing site result in extended or diminished maturation to generate mature crRNAs of constant length. These results indicate that crRNA maturation occurs by a ruler mechanism anchored at the primary processing site. We also show that maturation is mediated by specific cas genes distinct from those genes involved in primary processing, showing that this event is directed by CRISPR/Cas loci.

  4. Coordinated hybrid automatic repeat request

    KAUST Repository

    Makki, Behrooz

    2014-11-01

    We develop a coordinated hybrid automatic repeat request (HARQ) approach. With the proposed scheme, if a user message is correctly decoded in the first HARQ rounds, its spectrum is allocated to other users, to improve the network outage probability and the users' fairness. The results, which are obtained for single- and multiple-antenna setups, demonstrate the efficiency of the proposed approach in different conditions. For instance, with a maximum of M retransmissions and single transmit/receive antennas, the diversity gain of a user increases from M to (J+1)(M-1)+1 where J is the number of users helping that user.
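
    The quoted diversity-gain result is easy to tabulate; the short sketch below simply evaluates it for a few illustrative values of M (maximum retransmissions) and J (helping users) in the single transmit/receive antenna case.

        def coordinated_harq_diversity(M, J):
            # Abstract's result for single antennas: the diversity gain grows from M
            # (plain HARQ) to (J+1)(M-1)+1 when J other users help the user of interest.
            return M, (J + 1) * (M - 1) + 1

        for M in (2, 3, 4):
            for J in (1, 2):
                plain, coordinated = coordinated_harq_diversity(M, J)
                print("M=%d, J=%d: diversity gain %d -> %d" % (M, J, plain, coordinated))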

  5. Perfusion CT measurements in healthy cervical spinal cord: feasibility and repeatability of the study as well as interchangeability of the perfusion estimates using two commercially available software packages

    Energy Technology Data Exchange (ETDEWEB)

    Bisdas, Sotirios [Johann Wolfgang University Hospital, Department of Radiology, Frankfurt (Germany); Medical University of South Carolina, Department of Radiology, Charleston, SC (United States); Johann Wolfgang Goethe University Hospital, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Rumboldt, Zoran; Deveikis, John; Spampinato, Maria Vittoria [Medical University of South Carolina, Department of Radiology, Charleston, SC (United States); Surlan, Katarina [Clinical Centre Ljubljana, Department of Clinical Radiology, Ljubljana (Slovenia); Koh, Tong San [Nanyang Technological University, School of Electrical and Electronic Engineering, Singapore (Singapore)

    2008-10-15

    Our purpose was to examine the feasibility and reproducibility of perfusion CT studies in the cervical spinal cord and the interchangeability of the values obtained by two post-processing methods. The perfusion CT studies of 40 patients with neck tumours were post-processed using two software packages (Software-1: deconvolution-based analysis with adiabatic tissue homogeneity approach and Software-2: maximum-slope-model with Patlak analysis). Eight patients were examined twice for assessing the reproducibility of the technique. Two neuroradiologists separately post-processed the images with two arterial input functions (AIFs): (1) the internal carotid artery (ICA) and (2) the vertebral artery (VA). Maps of blood flow (F) in ml/min/100 g, blood volume (V) in ml/100 g, mean transit time (MTT) in seconds (s) and permeability (PS) in ml/min/100 g were generated. The mean F, V, MTT and PS (Software-1) with VA-AIF and ICA-AIF were 8.93, 1.12, 16.3, 1.88 and 8.57, 1.19, 16.85 and 1.94, respectively. The reproducibility of the techniques was satisfactory, while the V and MTT values (in Software-1) and the F and V values (in Software-2) were dependent on the site of the AIF (p ≥ 0.03 and p = 0.02, respectively). The interobserver agreement was very good. The significant differences in measurements for a single patient (%) using Software-1/Software-2 were ±120%/110%, 90%/80%, 180% and 250%/130% for F, V, MTT and PS, respectively. Only F and PS values in the healthy tissue seemed to be interchangeable. Our results were in essential agreement with those derived by invasive measurements in animals. The cervical spine perfusion CT studies are feasible and reproducible. The present knowledge has to be validated with studies in spinal cord tumours in order to decide the usefulness of the perfusion CT in this field. (orig.)

  6. What is a microsatellite: a computational and experimental definition based upon repeat mutational behavior at A/T and GT/AC repeats.

    Science.gov (United States)

    Kelkar, Yogeshwar D; Strubczewski, Noelle; Hile, Suzanne E; Chiaromonte, Francesca; Eckert, Kristin A; Makova, Kateryna D

    2010-01-01

    Microsatellites are abundant in eukaryotic genomes and have high rates of strand slippage-induced repeat number alterations. They are popular genetic markers, and their mutations are associated with numerous neurological diseases. However, the minimal number of repeats required to constitute a microsatellite has been debated, and a definition of a microsatellite that considers its mutational behavior has been lacking. To define a microsatellite, we investigated slippage dynamics for a range of repeat sizes, utilizing two approaches. Computationally, we assessed length polymorphism at repeat loci in ten ENCODE regions resequenced in four human populations, assuming that the occurrence of polymorphism reflects strand slippage rates. Experimentally, we determined the in vitro DNA polymerase-mediated strand slippage error rates as a function of repeat number. In both approaches, we compared strand slippage rates at tandem repeats with the background slippage rates. We observed two distinct modes of mutational behavior. At small repeat numbers, slippage rates were low and indistinguishable from background measurements. A marked transition in mutability was observed as the repeat array lengthened, such that slippage rates at large repeat numbers were significantly higher than the background rates. For both mononucleotide and dinucleotide microsatellites studied, the transition length corresponded to a similar number of nucleotides (approximately 10). Thus, microsatellite threshold is determined not by the presence/absence of strand slippage at repeats but by an abrupt alteration in slippage rates relative to background. These findings have implications for understanding microsatellite mutagenesis, standardization of genome-wide microsatellite analyses, and predicting polymorphism levels of individual microsatellite loci.
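
    To make the threshold idea concrete, the sketch below scans a sequence for uninterrupted mono- and dinucleotide runs and reports their repeat numbers; following the abstract, runs spanning roughly ten or more nucleotides fall in the elevated-slippage (microsatellite) regime. The example sequence is made up.

        import re

        def repeat_numbers(sequence, unit):
            # Repeat numbers of every uninterrupted run of `unit` (at least two copies).
            pattern = '(?:%s){2,}' % re.escape(unit)
            return [len(m.group(0)) // len(unit) for m in re.finditer(pattern, sequence.upper())]

        seq = 'GGCAAAAAAAAAAAACCGTGTGTGTTTAA'               # hypothetical sequence
        for unit in ('A', 'GT'):
            for copies in repeat_numbers(seq, unit):
                span = copies * len(unit)
                regime = 'microsatellite-like' if span >= 10 else 'background-like'
                print('%s x %d (%d nt): %s' % (unit, copies, span, regime))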

  7. UK 2009-2010 repeat station report

    Directory of Open Access Journals (Sweden)

    Thomas J.G. Shanahan

    2013-03-01

    Full Text Available The British Geological Survey is responsible for conducting the UK geomagnetic repeat station programme. Measurements made at the UK repeat station sites are used in conjunction with the three UK magnetic observatories: Hartland, Eskdalemuir and Lerwick, to produce a regional model of the local field each year. The UK network of repeat stations comprises 41 stations which are occupied at approximately 3-4 year intervals. Practices for conducting repeat station measurements continue to evolve as advances are made in survey instrumentation and as the usage of the data continues to change. Here, a summary of the 2009 and 2010 UK repeat station surveys is presented, highlighting the measurement process and techniques, density of network, reduction process and recent results.

  8. A New Approach of Measuring Hospital Performance for Low- and Middle-income Countries.

    Science.gov (United States)

    Adhikari, Shiva Raj; Sapkota, Vishnu Prasad; Supakankunti, Siripen

    2015-11-01

    Efficiency of hospitals affects the price of health services, and health care payments have equity implications. Evidence on hospital performance can support policy design; however, the recent literature on hospital efficiency has produced conflicting results, and policy decisions are consequently uncertain. Conflicting results arise in particular from differences in the methods used to measure performance, and most of the evidence comes from high-income countries. Recently, a management approach has been developed to measure hospital performance. This approach is very useful from a policy perspective for improving health systems cost-effectively in low- and middle-income countries. Measuring hospital performance through the management approach has some basic characteristics, such as scoring management practices through a double-blind survey, measuring hospital outputs using various indicators, and estimating the relationship between management practices and hospital outputs. The approach has been successfully applied in developed countries; however, some revisions, which do not violate its fundamental principles, are required to replicate it in low- and middle-income countries. The process has been clearly defined and applied to Nepal and, as a result, the approach produced the expected results. The paper contributes to improving the approach to measuring hospital performance.

  9. Hysteresis of magnetostructural transitions: Repeatable and non-repeatable processes

    Energy Technology Data Exchange (ETDEWEB)

    Provenzano, Virgil [National Institute of Standards and Technology, Gaithersburg, MD 20899 (United States); Della Torre, Edward; Bennett, Lawrence H. [Department of Electrical and Computer Engineering, The George Washington University, Washington, DC 20052 (United States); ElBidweihy, Hatem, E-mail: Hatem@gwmail.gwu.edu [Department of Electrical and Computer Engineering, The George Washington University, Washington, DC 20052 (United States)

    2014-02-15

    The Gd5Ge2Si2 alloy and the off-stoichiometric Ni50Mn35In15 Heusler alloy belong to a special class of metallic materials that exhibit first-order magnetostructural transitions near room temperature. The magnetic properties of this class of materials have been extensively studied due to their interesting magnetic behavior and their potential for a number of technological applications such as refrigerants for near-room-temperature magnetic refrigeration. The thermally driven first-order transitions in these materials can be field-induced in the reverse order by applying a strong enough field. The field-induced transitions are typically accompanied by the presence of large magnetic hysteresis, the characteristics of which are a complicated function of temperature, field, and magneto-thermal history. In this study we show that the virgin curve, the major loop, and sequentially measured MH loops are the results of both repeatable and non-repeatable processes, in which the starting magnetostructural state, prior to the cycling of field, plays a major role. Using the Gd5Ge2Si2 and Ni50Mn35In15 alloys as model materials, we show that a starting single phase state results in fully repeatable processes and large magnetic hysteresis, whereas a mixed phase starting state results in non-repeatable processes and smaller hysteresis.

  10. Reconfigurable multiport EPON repeater

    Science.gov (United States)

    Oishi, Masayuki; Inohara, Ryo; Agata, Akira; Horiuchi, Yukio

    2009-11-01

    An extended reach EPON repeater is one of the solutions for effectively expanding FTTH service areas. In this paper, we propose a reconfigurable multi-port EPON repeater for the effective accommodation of multiple ODNs with a single OLT line card. The proposed repeater, which has multiple ports on both the OLT and ODN sides and consists of TRs, BTRs with the CDR function, and a reconfigurable electrical matrix switch, can accommodate multiple ODNs on a single OLT line card by controlling the connections of the matrix switch. Although conventional EPON repeaters require full OLT line cards to accommodate subscribers from the initial installation stage, the proposed repeater can dramatically reduce the number of required line cards, especially when the number of subscribers is less than half of the maximum registerable users per OLT. Numerical calculation results show that the extended reach EPON system with the proposed EPON repeater can save 17.5% of the initial installation cost compared with a conventional repeater, and can remain less expensive than conventional systems up to the maximum number of subscribers, especially when the percentage of ODNs in lightly populated areas is higher.

  11. Revisiting the TALE repeat.

    Science.gov (United States)

    Deng, Dong; Yan, Chuangye; Wu, Jianping; Pan, Xiaojing; Yan, Nieng

    2014-04-01

    Transcription activator-like (TAL) effectors specifically bind to double stranded (ds) DNA through a central domain of tandem repeats. Each TAL effector (TALE) repeat comprises 33-35 amino acids and recognizes one specific DNA base through a highly variable residue at a fixed position in the repeat. Structural studies have revealed the molecular basis of DNA recognition by TALE repeats. Examination of the overall structure reveals that the basic building block of TALE protein, namely a helical hairpin, is one-helix shifted from the previously defined TALE motif. Here we wish to suggest a structure-based re-demarcation of the TALE repeat which starts with the residues that bind to the DNA backbone phosphate and concludes with the base-recognition hyper-variable residue. This new numbering system is consistent with the α-solenoid superfamily to which TALE belongs, and reflects the structural integrity of TAL effectors. In addition, it confers integral number of TALE repeats that matches the number of bound DNA bases. We then present fifteen crystal structures of engineered dHax3 variants in complex with target DNA molecules, which elucidate the structural basis for the recognition of bases adenine (A) and guanine (G) by reported or uncharacterized TALE codes. Finally, we analyzed the sequence-structure correlation of the amino acid residues within a TALE repeat. The structural analyses reported here may advance the mechanistic understanding of TALE proteins and facilitate the design of TALEN with improved affinity and specificity.

  12. Gradient descent approach for minimizing dissimilarity measure in log-polar imagery

    Institute of Scientific and Technical Information of China (English)

    JIN Yong-jun; JIANG You-yi

    2006-01-01

    Log-polar mapping has been proposed as a very appropriate space-variant imaging model for active vision applications. There is no doubt about the importance of translation estimation in active visual tracking. In this paper an approach is presented, and its performance is evaluated. The approach uses gradient descent to minimize a dissimilarity measure. The experimental results reveal that this method is efficient for estimating image translations in active vision.
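
    A bare-bones version of the idea, iterative descent on a sum-of-squared-differences dissimilarity to recover an image translation, is sketched below in plain Cartesian coordinates; the paper applies the same principle to log-polar mapped images, and the per-axis gradient normalisation, step count, and synthetic images here are assumptions.

        import numpy as np
        from scipy import ndimage

        def estimate_translation(ref, cur, steps=30):
            # Minimise the SSD dissimilarity mean((cur(x + p) - ref(x))^2) over the
            # translation p = (dy, dx); each step follows the SSD gradient, normalised
            # per axis by the mean squared image gradient.
            ref, cur = ref.astype(float), cur.astype(float)
            gy, gx = np.gradient(cur)
            p = np.zeros(2)
            for _ in range(steps):
                warp = lambda img: ndimage.shift(img, -p, order=1)   # bilinear sampling at x + p
                err = warp(cur) - ref
                wy, wx = warp(gy), warp(gx)
                num = np.array([(err * wy).mean(), (err * wx).mean()])
                den = np.array([(wy ** 2).mean(), (wx ** 2).mean()]) + 1e-12
                p -= num / den
            return p

        # smooth synthetic image and a copy shifted by (2, -3) pixels
        y, x = np.mgrid[0:64, 0:64]
        ref = np.exp(-((y - 30) ** 2 + (x - 30) ** 2) / 80.0)
        cur = np.exp(-((y - 32) ** 2 + (x - 27) ** 2) / 80.0)
        print(estimate_translation(ref, cur).round(1))        # close to [ 2. -3.]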

  13. A statistical approach to latitude measurements: Ptolemy's and Riccioli's geographical works as case studies

    Science.gov (United States)

    Santoro, Luca

    2017-08-01

    The aim of this work is to analyze latitude measurements typically used in historical geographical works through a statistical approach. We use two datasets of different ages as case studies: Ptolemy's Geography and Riccioli's work on geography. A statistical approach to historical latitude and longitude databases can reveal systematic errors in geographical georeferencing processes. On the other hand, with the right statistical analysis, this approach can also lead to new information about ancient city locations.

  14. A Systems Approach to Measuring Return on Investment for HRD Interventions.

    Science.gov (United States)

    Wang, Greg G.; Dou, Zhengxia; Li, Ning

    2002-01-01

    Review of economics, industrial psychology, financial, and human resource development (HRD) literature was used to develop a systems approach to measuring return on investment in which HRD is a subsystem within the overall organizational system. The approach isolates non-HRD impacts and demonstrates applicability of statistical and mathematical…

  15. Flux-measuring approach of high temperature metal liquid based on BP neural networks

    Institute of Scientific and Technical Information of China (English)

    胡燕瑜; 桂卫华; 李勇刚

    2003-01-01

    A soft-measuring approach is presented to measure the flux of liquid zinc at high temperature and causticity. By constructing a mathematical model based on neural networks and weighing the mass of liquid zinc, the flux of liquid zinc is acquired indirectly, and on-line measurement and flux control are realized. Simulation results and industrial practice demonstrate that the relative error between the estimated flux value and the practically measured flux value is lower than 1.5%, meeting the needs of the industrial process.
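
    A hedged sketch of such a soft sensor follows: a small back-propagation (multilayer perceptron) network mapping easily measured process variables to the liquid-zinc flux. The input variables, the synthetic target relation, and the network size are assumptions, not the plant model described in the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        # hypothetical inputs: weighed mass (kg) and melt temperature (deg C)
        X = np.column_stack([rng.uniform(100, 500, 300), rng.uniform(420, 480, 300)])
        flux = 0.8 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 2, 300)   # synthetic target, not plant data

        # back-propagation network (one small hidden layer) as the soft-measuring model
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
        model.fit(X[:200], flux[:200])
        rel_err = np.abs(model.predict(X[200:]) - flux[200:]) / np.abs(flux[200:])
        print("mean relative error: %.2f%%" % (100 * rel_err.mean()))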

  16. Novel Micro/Nano Approaches for Glucose Measurement Using pH-Sensitive Hydrogels

    Science.gov (United States)

    2005-06-01

    Award Number: W81XWH-04-1-0596. Report on novel micro/nano approaches for glucose measurement using pH-sensitive hydrogels. Initially, the NaCl solution was circulated through the cell using a syringe pump. A schematic diagram of the apparatus used in this study was

  17. A Repeating Fast Radio Burst

    CERN Document Server

    Spitler, L G; Hessels, J W T; Bogdanov, S; Brazier, A; Camilo, F; Chatterjee, S; Cordes, J M; Crawford, F; Deneva, J; Ferdman, R D; Freire, P C C; Kaspi, V M; Lazarus, P; Lynch, R; Madsen, E C; McLaughlin, M A; Patel, C; Ransom, S M; Seymour, A; Stairs, I H; Stappers, B W; van Leeuwen, J; Zhu, W W

    2016-01-01

    Fast Radio Bursts are millisecond-duration astronomical radio pulses of unknown physical origin that appear to come from extragalactic distances. Previous follow-up observations have failed to find additional bursts at the same dispersion measures (i.e. integrated column density of free electrons between source and telescope) and sky position as the original detections. The apparent non-repeating nature of the fast radio bursts has led several authors to hypothesise that they originate in cataclysmic astrophysical events. Here we report the detection of ten additional bursts from the direction of FRB121102, using the 305-m Arecibo telescope. These new bursts have dispersion measures and sky positions consistent with the original burst. This unambiguously identifies FRB121102 as repeating and demonstrates that its source survives the energetic events that cause the bursts. Additionally, the bursts from FRB121102 show a wide range of spectral shapes that appear to be predominantly intrinsic to the source and wh...

  18. Comparative use of different emission measurement approaches to determine methane emissions from a biogas plant.

    Science.gov (United States)

    Reinelt, Torsten; Delre, Antonio; Westerkamp, Tanja; Holmgren, Magnus A; Liebetrau, Jan; Scheutz, Charlotte

    2017-06-16

    A sustainable anaerobic biowaste treatment has to mitigate methane emissions from the entire biogas production chain, but the exact quantification of these emissions remains a challenge. This study presents a comparative measurement campaign carried out with on-site and ground-based remote sensing measurement approaches conducted by six measuring teams at a Swedish biowaste treatment plant. The measured emissions showed high variations, caused among other factors by different measurement periods coinciding with varying operational states of the plant. The overall methane emissions measured by ground-based remote sensing varied from 5 to 25 kg h(-1) (corresponding to a methane loss of 0.6-3.0% of the upgraded methane produced), depending on operating conditions and the measurement method applied. Overall methane emissions measured by the on-site approaches varied between 5 and 17 kg h(-1) (corresponding to a methane loss of 0.6-2.1%) from team to team, depending on the number of measured emission points, the operational state during the measurements and the measurement method applied. Taking the operational conditions into account, the deviations between approaches and teams could be explained, in that the two largest methane-emitting sources, contributing about 90% of the entire site's emissions, were found to be the open digestate storage tank and a pressure release valve on the compressor station. Copyright © 2017. Published by Elsevier Ltd.

  19. Recursive quantum repeater networks

    CERN Document Server

    Van Meter, Rodney; Horsman, Clare

    2011-01-01

    Internet-scale quantum repeater networks will be heterogeneous in physical technology, repeater functionality, and management. The classical control necessary to use the network will therefore face similar issues as Internet data transmission. Many scalability and management problems that arose during the development of the Internet might have been solved in a more uniform fashion, improving flexibility and reducing redundant engineering effort. Quantum repeater network development is currently at the stage where we risk similar duplication when separate systems are combined. We propose a unifying framework that can be used with all existing repeater designs. We introduce the notion of a Quantum Recursive Network Architecture, developed from the emerging classical concept of 'recursive networks', extending recursive mechanisms from a focus on data forwarding to a more general distributed computing request framework. Recursion abstracts independent transit networks as single relay nodes, unifies software layer...

  20. In vitro systems toxicology approach to investigate the effects of repeated cigarette smoke exposure on human buccal and gingival organotypic epithelial tissue cultures

    Science.gov (United States)

    Schlage, Walter K.; Kostadinova, Radina; Xiang, Yang; Sewer, Alain; Majeed, Shoaib; Kuehn, Diana; Frentzel, Stefan; Talikka, Marja; Geertz, Marcel; Mathis, Carole; Ivanov, Nikolai; Hoeng, Julia; Peitsch, Manuel C.

    2014-01-01

    Smoking has been associated with diseases of the lung, pulmonary airways and oral cavity. Cytologic, genomic and transcriptomic changes in oral mucosa correlate with oral pre-neoplasia, cancer and inflammation (e.g. periodontitis). Alteration of smoking-related gene expression changes in oral epithelial cells is similar to that in bronchial and nasal epithelial cells. Using a systems toxicology approach, we have previously assessed the impact of cigarette smoke (CS) seen as perturbations of biological processes in human nasal and bronchial organotypic epithelial culture models. Here, we report our further assessment using in vitro human oral organotypic epithelium models. We exposed the buccal and gingival organotypic epithelial tissue cultures to CS at the air–liquid interface. CS exposure was associated with increased secretion of inflammatory mediators, induction of cytochrome P450s activity and overall weak toxicity in both tissues. Using microarray technology, gene-set analysis and a novel computational modeling approach leveraging causal biological network models, we identified CS impact on xenobiotic metabolism-related pathways accompanied by a more subtle alteration in inflammatory processes. Gene-set analysis further indicated that the CS-induced pathways in the in vitro buccal tissue models resembled those in the in vivo buccal biopsies of smokers from a published dataset. These findings support the translatability of systems responses from in vitro to in vivo and demonstrate the applicability of oral organotypical tissue models for an impact assessment of CS on various tissues exposed during smoking, as well as for impact assessment of reduced-risk products. PMID:25046638