WorldWideScience

Sample records for repeated measures approach

  1. Alcohol intake and colorectal cancer: a comparison of approaches for including repeated measures of alcohol consumption

    DEFF Research Database (Denmark)

    Thygesen, Lau Caspar; Wu, Kana; Grønbaek, Morten

    2008-01-01

BACKGROUND: In numerous studies, alcohol intake has been found to be positively associated with colorectal cancer risk. However, the majority of studies included only one exposure measurement, which may bias the results if long-term intake is relevant. METHODS: We compared different approaches for including repeated measures of alcohol intake among 47,432 US men enrolled in the Health Professionals Follow-up Study. Questionnaires including questions on alcohol intake had been completed in 1986, 1990, 1994, and 1998. The outcome was incident colorectal cancer during follow-up from 1986 to 2002. RESULTS: During follow-up, 868 members of the cohort experienced colorectal cancer. Baseline, updated, and cumulative average alcohol intakes were positively associated with colorectal cancer, with only minor differences among the approaches. These results support moderately increased risk for intake >30 g...

  2. Analysis of repeated measures data

    CERN Document Server

    Islam, M Ataharul

    2017-01-01

This book presents a broad range of statistical techniques to address emerging needs in the field of repeated measures. It also provides a comprehensive overview of extensions of generalized linear models for the bivariate exponential family of distributions, which represent a new development in analysing repeated measures data. The demand for statistical models for correlated outcomes has grown rapidly in recent years, mainly due to the presence of two types of underlying associations: associations between outcomes, and associations between explanatory variables and outcomes. The book systematically addresses key problems arising in the modelling of repeated measures data, bearing in mind the factors that play a major role in estimating the underlying relationships between covariates and outcome variables for correlated outcome data. In addition, it presents new approaches to addressing current challenges in the field of repeated measures, along with models based on conditional and joint probabilities. Markov models of first...

  3. Power analysis for multivariate and repeated measures designs: a flexible approach using the SPSS MANOVA procedure.

    Science.gov (United States)

    D'Amico, E J; Neilands, T B; Zambarano, R

    2001-11-01

    Although power analysis is an important component in the planning and implementation of research designs, it is often ignored. Computer programs for performing power analysis are available, but most have limitations, particularly for complex multivariate designs. An SPSS procedure is presented that can be used for calculating power for univariate, multivariate, and repeated measures models with and without time-varying and time-constant covariates. Three examples provide a framework for calculating power via this method: an ANCOVA, a MANOVA, and a repeated measures ANOVA with two or more groups. The benefits and limitations of this procedure are discussed.
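The power calculation the abstract describes can be approximated without SPSS. The sketch below (plain Python, with hypothetical group means, common SD, and sample sizes) estimates power for a one-way ANOVA by Monte Carlo simulation; it illustrates the general idea rather than the SPSS MANOVA procedure itself.

```python
import random
import statistics

def anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    means = [statistics.mean(g) for g in groups]
    all_obs = [x for g in groups for x in g]
    grand = statistics.mean(all_obs)
    k = len(groups)
    n_total = len(all_obs)
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

def simulated_power(means, sd, n_per_group, alpha=0.05, nsim=1000, seed=1):
    """Estimate power: the null simulation supplies an empirical critical
    value; the alternative simulation supplies the rejection rate."""
    rng = random.Random(seed)
    k = len(means)
    null_f = sorted(
        anova_f([[rng.gauss(0.0, sd) for _ in range(n_per_group)] for _ in range(k)])
        for _ in range(nsim)
    )
    crit = null_f[int((1 - alpha) * nsim)]
    hits = sum(
        anova_f([[rng.gauss(m, sd) for _ in range(n_per_group)] for m in means]) > crit
        for _ in range(nsim)
    )
    return hits / nsim

# Hypothetical design: three groups, a fairly large effect.
power = simulated_power(means=[0.0, 0.5, 1.0], sd=1.0, n_per_group=20)
print(round(power, 2))
```

The same simulation scheme extends to repeated measures designs by generating correlated observations per subject instead of independent ones.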

  4. Repeatability of visual acuity measurement.

    Science.gov (United States)

    Raasch, T W; Bailey, I L; Bullimore, M A

    1998-05-01

This study investigates features of visual acuity chart design and acuity scoring methods that affect the validity and repeatability of visual acuity measurements. Visual acuity was measured using the Sloan and British Standard letter series, and Landolt rings. Identifiability of the different letters as a function of size was estimated and expressed in the form of frequency-of-seeing curves. These functions were then used to simulate acuity measurements with a variety of chart designs and scoring criteria. Systematic relationships exist between chart design parameters and both the acuity score and its repeatability. In particular, an important chart feature that largely determines the repeatability of visual acuity measurement is the amount of size change attributed to each letter. The methods used to score visual acuity performance also affect repeatability. It is possible to evaluate acuity score validity and repeatability using the statistical principles discussed here.

  5. Toward a simple, repeatable, non-destructive approach to measuring stable-isotope ratios of water within tree stems

    Science.gov (United States)

    Raulerson, S.; Volkmann, T.; Pangle, L. A.

    2017-12-01

    Traditional methodologies for measuring ratios of stable isotopes within the xylem water of trees involve destructive coring of the stem. A recent approach involves permanently installed probes within the stem, and an on-site assembly of pumps, switching valves, gas lines, and climate-controlled structure for field deployment of a laser spectrometer. The former method limits the possible temporal resolution of sampling, and sample size, while the latter may not be feasible for many research groups. We present results from initial laboratory efforts towards developing a non-destructive, temporally-resolved technique for measuring stable isotope ratios within the xylem flow of trees. Researchers have used direct liquid-vapor equilibration as a method to measure isotope ratios of the water in soil pores. Typically, this is done by placing soil samples in a fixed container, and allowing the liquid water within the soil to come into isotopic equilibrium with the headspace of the container. Water can also be removed via cryogenic distillation or azeotropic distillation, with the resulting liquid tested for isotope ratios. Alternatively, the isotope ratios of the water vapor can be directly measured using a laser-based water vapor isotope analyzer. Well-established fractionation factors and the isotope ratios in the vapor phase are then used to calculate the isotope ratios in the liquid phase. We propose a setup which would install a single, removable chamber onto a tree, where vapor samples could non-destructively and repeatedly be taken. These vapor samples will be injected into a laser-based isotope analyzer by a recirculating gas conveyance system. A major part of what is presented here is in the procedure of taking vapor samples at 100% relative humidity, appropriately diluting them with completely dry N2 calibration gas, and injecting them into the gas conveyance system without inducing fractionation in the process. This methodology will be helpful in making
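The liquid-vapor equilibration step described above rests on a standard relation between liquid- and vapor-phase delta values through the equilibrium fractionation factor α = R_liquid/R_vapor. A minimal sketch follows; the α value used is an illustrative assumption (roughly that of ¹⁸O/¹⁶O near 25 °C), not a calibrated constant from this work.

```python
def delta_liquid_from_vapor(delta_vapor_permil, alpha):
    """Convert a vapor-phase delta value (per mil) to the liquid-phase
    value at isotopic equilibrium, where alpha = R_liquid / R_vapor.
    Follows from delta = (R / R_standard - 1) * 1000."""
    return alpha * (delta_vapor_permil + 1000.0) - 1000.0

# Illustrative value only: alpha for 18O/16O near 25 degC is about 1.0094.
alpha_18O = 1.0094
d_liquid = delta_liquid_from_vapor(-20.0, alpha_18O)
print(round(d_liquid, 3))
```

In practice the fractionation factor is temperature dependent, so the chamber temperature must be recorded alongside each vapor sample.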

  6. Ambient temperature and cardiovascular biomarkers in a repeated-measure study in healthy adults: A novel biomarker index approach.

    Science.gov (United States)

    Wu, Shaowei; Yang, Di; Pan, Lu; Shan, Jiao; Li, Hongyu; Wei, Hongying; Wang, Bin; Huang, Jing; Baccarelli, Andrea A; Shima, Masayuki; Deng, Furong; Guo, Xinbiao

    2017-07-01

Associations of ambient temperature with cardiovascular morbidity and mortality have been well documented in numerous epidemiological studies, but the underlying pathways remain unclear. We investigated whether systemic inflammation, coagulation, systemic oxidative stress, antioxidant activity and endothelial function may be the mechanistic pathways associated with ambient temperature. Forty study participants underwent blood collection 12 times in Beijing, China in 2010-2011. Ambient temperature and air pollution data were measured at central monitors close to the students' residences. We created five indices as the sum of weighted biomarker percentiles to represent the overall levels of 15 cardiovascular biomarkers in five pathways (systemic inflammation: hs-CRP, TNF-α and fibrinogen; coagulation: fibrinogen, PAI-1, tPA, vWF and sP-selectin; systemic oxidative stress: Ox-LDL and sCD36; antioxidant activity: EC-SOD and GPX1; and endothelial function: ET-1, E-selectin, ICAM-1 and VCAM-1). We used generalized mixed-effects models to estimate temperature effects controlling for air pollution and other covariates. There were significant decreasing trends in the adjusted means of biomarker indices from the lowest to the highest quartiles of daily temperatures before blood collection. A 10°C decrease in the 2-day average daily temperature was associated with increases of 2.5% [95% confidence interval (CI): 0.7, 4.2], 1.6% (95% CI: 0.1, 3.1), 2.7% (95% CI: 0.5, 4.8), 5.5% (95% CI: 3.8, 7.3) and 2.0% (95% CI: 0.3, 3.8) in the indices for systemic inflammation, coagulation, systemic oxidative stress, antioxidant activity and endothelial function, respectively. In contrast, the associations between ambient temperature and individual biomarkers varied substantially in magnitude and strength. The altered cardiovascular biomarker profiles in healthy adults associated with ambient temperature changes may help explain the temperature-related cardiovascular morbidity
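A "sum of weighted biomarker percentiles" index can be sketched as below. The data are hypothetical and equal weights are assumed, since the abstract does not give the paper's actual weighting scheme.

```python
def percentile_rank(value, sample):
    """Percentile rank (0-100) of value within sample, with ties counted
    at half weight."""
    below = sum(1 for x in sample if x < value)
    equal = sum(1 for x in sample if x == value)
    return 100.0 * (below + 0.5 * equal) / len(sample)

def biomarker_index(subject_values, samples, weights=None):
    """Weighted average of per-biomarker percentile ranks, so the index
    itself lies on a 0-100 scale."""
    if weights is None:
        weights = [1.0] * len(subject_values)  # equal weights: an assumption
    total_w = sum(weights)
    return sum(
        w * percentile_rank(v, s)
        for w, v, s in zip(weights, subject_values, samples)
    ) / total_w

# Hypothetical data: three inflammation markers across five subjects.
crp = [1.0, 2.0, 3.0, 4.0, 5.0]
tnf = [10, 12, 9, 15, 11]
fib = [300, 280, 350, 320, 310]
idx = biomarker_index([3.0, 11, 320], [crp, tnf, fib])
print(round(idx, 1))
```

Averaging percentile ranks puts biomarkers with very different units on a common scale before combining them into one pathway index.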

  7. Nonparametric additive regression for repeatedly measured data

    KAUST Repository

    Carroll, R. J.; Maity, A.; Mammen, E.; Yu, K.

    2009-01-01

    We develop an easily computed smooth backfitting algorithm for additive model fitting in repeated measures problems. Our methodology easily copes with various settings, such as when some covariates are the same over repeated response measurements

  8. Teaching renewable energy using online PBL in investigating its effect on behaviour towards energy conservation among Malaysian students: ANOVA repeated measures approach

    Science.gov (United States)

    Nordin, Norfarah; Samsudin, Mohd Ali; Hadi Harun, Abdul

    2017-01-01

This research aimed to investigate whether an online problem based learning (PBL) approach to teaching the renewable energy topic improves students' behaviour towards energy conservation. A renewable energy online problem based learning (REePBaL) instruction package was developed based on the theory of constructivism and an adaptation of the online learning model. This study employed a single-group quasi-experimental design to ascertain the change in students' behaviour towards energy conservation after the intervention. The study involved 48 secondary school students in a Malaysian public school. The ANOVA repeated measures technique was employed to compare scores of students' behaviour towards energy conservation before and after the intervention. Based on the findings, students' behaviour towards energy conservation improved after the intervention.

  9. On balanced minimal repeated measurements designs

    Directory of Open Access Journals (Sweden)

    Shakeel Ahmad Mir

    2014-10-01

Repeated measurements designs are concerned with scientific experiments in which each experimental unit is assigned more than once to a treatment, either different or identical. This class of designs has the property that unbiased estimators for elementary contrasts among direct and residual effects are obtainable. Afsarinejad (1983) provided a method of constructing balanced minimal repeated measurements designs for p < t, when t is an odd or prime power; one or more treatments may occur more than once in some sequences, and the designs so constructed no longer remain uniform in periods. In this paper an attempt has been made to provide a new method to overcome this drawback. Specifically, two cases have been considered: RM[t, n = t(t-1)/(p-1), p], λ2 = 1 for balanced minimal repeated measurements designs, and RM[t, n = 2t(t-1)/(p-1), p], λ2 = 2 for balanced repeated measurements designs. In addition, a method has been provided for constructing extra-balanced minimal designs for the special case RM[t, n = t²/(p-1), p], λ2 = 1.

  10. Nonparametric additive regression for repeatedly measured data

    KAUST Repository

    Carroll, R. J.

    2009-05-20

    We develop an easily computed smooth backfitting algorithm for additive model fitting in repeated measures problems. Our methodology easily copes with various settings, such as when some covariates are the same over repeated response measurements. We allow for a working covariance matrix for the regression errors, showing that our method is most efficient when the correct covariance matrix is used. The component functions achieve the known asymptotic variance lower bound for the scalar argument case. Smooth backfitting also leads directly to design-independent biases in the local linear case. Simulations show our estimator has smaller variance than the usual kernel estimator. This is also illustrated by an example from nutritional epidemiology. © 2009 Biometrika Trust.

  11. Multivariate linear models and repeated measurements revisited

    DEFF Research Database (Denmark)

    Dalgaard, Peter

    2009-01-01

Methods for generalized analysis of variance based on multivariate normal theory have been known for many years. In a repeated measurements context, it is most often of interest to consider transformed responses, typically within-subject contrasts or averages. Efficiency considerations lead to sphericity assumptions, use of F tests, and the Greenhouse-Geisser and Huynh-Feldt adjustments to compensate for deviations from sphericity. During a recent implementation of such methods in the R language, the general structure of such transformations was reconsidered, leading to a flexible specification...

  12. Methods for analysing cardiovascular studies with repeated measures

    NARCIS (Netherlands)

    Cleophas, T. J.; Zwinderman, A. H.; van Ouwerkerk, B. M.

    2009-01-01

    Background. Repeated measurements in a single subject are generally more similar than unrepeated measurements in different subjects. Unrepeated analyses of repeated data cause underestimation of the treatment effects. Objective. To review methods adequate for the analysis of cardiovascular studies

  13. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    Science.gov (United States)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group's Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages were discussed. One difference between the methods is the calculation of variation components. The AIAG method calculates the variation components based on standard deviation (so the sum of the variation components does not give 100 %), whereas the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part-to-part variation, EV & AV) gives the total variation of 100 %. Acceptance of both methods in the professional community, their future use, and their acceptance by the manufacturing industry were also discussed. Nowadays, the AIAG method is the leading method in industry.
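The difference the abstract highlights is easy to demonstrate numerically: variance-based contributions sum to 100 %, while standard-deviation-based shares do not. The variance components below are hypothetical.

```python
import math

# Hypothetical variance components from a gauge study:
# EV = equipment variation, AV = appraiser variation, PV = part-to-part.
var_components = {"EV": 0.04, "AV": 0.02, "PV": 0.30}

total_var = sum(var_components.values())
total_sd = math.sqrt(total_var)

# Variance-based contributions (honest GRR style): these sum to 100 %.
pct_var = {k: 100.0 * v / total_var for k, v in var_components.items()}

# SD-based shares (AIAG average-and-range style): each component's
# standard deviation as a percentage of the total standard deviation.
pct_sd = {k: 100.0 * math.sqrt(v) / total_sd for k, v in var_components.items()}

print(round(sum(pct_var.values()), 1))  # 100.0
print(round(sum(pct_sd.values()), 1))   # exceeds 100: the shares do not add up
```

The SD-based shares exceed 100 % because standard deviations are not additive; only the variances are.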

  14. Repeatability study of replicate crash tests: A signal analysis approach.

    Science.gov (United States)

    Seppi, Jeremy; Toczyski, Jacek; Crandall, Jeff R; Kerrigan, Jason

    2017-10-03

To provide an objective basis on which to evaluate the repeatability of vehicle crash test methods, a recently developed signal analysis method was used to evaluate correlation of sensor time history data between replicate vehicle crash tests. The goal of this study was to evaluate the repeatability of rollover crash tests performed with the Dynamic Rollover Test System (DRoTS) relative to other vehicle crash test methods. Test data from DRoTS tests, deceleration rollover sled (DRS) tests, frontal crash tests, frontal offset crash tests, small overlap crash tests, small overlap impact (SOI) crash tests, and oblique crash tests were obtained from the literature and publicly available databases (the NHTSA vehicle database and the Insurance Institute for Highway Safety TechData) to examine crash test repeatability. Signal analysis of the DRoTS tests showed that force and deformation time histories had good to excellent repeatability, whereas vehicle kinematics showed only fair repeatability due to the vehicle mounting method for one pair of tests and slightly dissimilar mass properties (2.2%) in a second pair of tests. Relative to the DRS, the DRoTS tests showed very similar or higher levels of repeatability in nearly all vehicle kinematic data signals, with the exception of global X' (road direction of travel) velocity and displacement due to the functionality of the DRoTS fixture. Based on the average overall scoring metric of the dominant acceleration, DRoTS was found to be as repeatable as all other crash tests analyzed. Vertical force measures showed good repeatability and were on par with frontal crash barrier forces. Dynamic deformation measures showed good to excellent repeatability, as opposed to the poor repeatability seen in SOI and oblique deformation measures. Using the signal analysis method as outlined in this article, the DRoTS was shown to have the same or better repeatability as the crash test methods used in government regulatory and consumer evaluation testing.
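The paper's scoring metric is more elaborate, but the core idea of correlating sensor time histories between replicate tests can be sketched with a plain Pearson correlation; the two traces below are hypothetical.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equally sampled time histories."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(
        sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)
    )

# Two hypothetical replicate acceleration traces (arbitrary units).
test1 = [0.0, 1.0, 3.5, 6.0, 4.2, 2.0, 0.5]
test2 = [0.1, 1.2, 3.3, 5.8, 4.5, 1.8, 0.4]
r = pearson(test1, test2)
print(round(r, 3))
```

Correlation captures shape agreement but not magnitude or phase offsets, which is why dedicated crash-signal rating methods score those components separately.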

  15. Unstable systems and repeated measurements. II

    International Nuclear Information System (INIS)

    Exner, P.

    1977-01-01

    Two examples are treated. In the first, the chamber structure is only assumed to be periodic and P(t) is an exponential. In the second example no specific assumption is made about the primary decay law and the measuring device is structured as an idealized spark chamber. This example contains the results by Beskow and Nilsson as a special case. (author)

  16. Teaching Renewable Energy Using Online PBL in Investigating Its Effect on Behaviour towards Energy Conservation among Malaysian Students: ANOVA Repeated Measures Approach

    Science.gov (United States)

    Nordin, Norfarah; Samsudin, Mohd Ali; Harun, Abdul Hadi

    2017-01-01

    This research aimed to investigate whether online problem based learning (PBL) approach to teach renewable energy topic improves students' behaviour towards energy conservation. A renewable energy online problem based learning (REePBaL) instruction package was developed based on the theory of constructivism and adaptation of the online learning…

  17. Unitarity, Feedback, Interactions - Dynamics Emergent from Repeated Measurements

    Science.gov (United States)

    Corona Ugalde, Paulina; Altamirano, Natacha; Mann, Robert; Zych, Magdalena

Modern measurement theory dispenses with the description of a measurement as a projection. Rather, the measurement is understood as an operation, whereby the system's final state is determined by the action of a completely positive trace non-increasing map and the outcomes are described by linear operators on the system, distributed according to a positive-operator valued measure (POVM). The POVM approach unifies the theory of measurements with a general description of dynamics, the theory of open quantum systems. Engineering a particular measurement and engineering a particular dynamics for the system are thus two complementary aspects of the same conceptual framework. This correspondence is directly applied in quantum simulations and quantum control theory. With this motivation, we study what types of dynamics can emerge from a model of repeated short interactions of a system with a set of ancillae. We show that, contingent on the model parameters, the resulting dynamics ranges from exact unitarity to arbitrarily fast decoherence. For a series of measurements the effective dynamics includes feedback control, which for a composite system yields effective interactions between the subsystems. We quantify the amount of decoherence accompanying such induced interactions. The simple framework used in the present study can find applications in devising novel quantum control protocols or quantum simulations.
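As a toy illustration (not the authors' model), repeated short interactions can be mimicked by applying a phase-damping Kraus map to a qubit once per interaction; a strength parameter lam interpolates between exact unitarity (lam = 0, the state is unchanged) and fast decoherence (lam close to 1).

```python
import math

def phase_damp(rho, lam):
    """Apply one phase-damping interaction (a Kraus map) to a 2x2 density
    matrix given as a nested list. Populations are untouched; coherences
    shrink by sqrt(1 - lam)."""
    s = math.sqrt(1.0 - lam)
    return [
        [rho[0][0], rho[0][1] * s],
        [rho[1][0] * s, rho[1][1]],
    ]

# Equal-superposition state |+><+|.
rho = [[0.5, 0.5], [0.5, 0.5]]

lam = 0.1
coherences = []
for _ in range(20):  # twenty repeated short interactions
    rho = phase_damp(rho, lam)
    coherences.append(abs(rho[0][1]))

print(round(coherences[-1], 4))  # equals 0.5 * (1 - lam) ** 10
```

Twenty applications each multiply the off-diagonal element by sqrt(1 - lam), so the coherence decays geometrically while the populations stay fixed.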

  18. Stability of parameters in repeated TVA measures

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik

Several recent studies have explored the limitations of human visual short-term memory or VSTM (e.g. Luck & Vogel, 1997; Wheeler & Treisman, 2002; Alvarez & Cavanagh, 2004). Usually researchers agree that VSTM is limited to a capacity of about 3 to 4 objects at any given moment (Cowan, 2001). Capacity of short-term memory is measured in a range of studies, often using the change detection (CD) paradigm. However, the whole report (WR) paradigm may be a more reliable paradigm (Cusack, Lehmann, Veldsman, & Mitchell, 2009). Moreover, each individual WR trial yields more information compared to a CD...

  19. Intra-examiner repeatability and agreement in accommodative response measurements.

    Science.gov (United States)

    Antona, B; Sanchez, I; Barrio, A; Barra, F; Gonzalez, E

    2009-11-01

    Clinical measurement of the accommodative response (AR) identifies the focusing plane of a subject with respect to the accommodative target. To establish whether a significant change in AR has occurred, it is important to determine the repeatability of this measurement. This study had two aims: First, to determine the intraexaminer repeatability of AR measurements using four clinical methods: Nott retinoscopy, monocular estimate method (MEM) retinoscopy, binocular crossed cylinder test (BCC) and near autorefractometry. Second, to study the level of agreement between AR measurements obtained with the different methods. The AR of the right eye at one accommodative demand of 2.50 D (40 cm) was measured on two separate occasions in 61 visually normal subjects of mean age 19.7 years (range 18-32 years). The intraexaminer repeatability of the tests, and agreement between them, were estimated by the Bland-Altman method. We determined mean differences (MD) and the 95% limits of agreement [coefficient of repeatability (COR) and coefficient of agreement (COA)]. Nott retinoscopy and BCC offered the best repeatability, showing the lowest MD and narrowest 95% interval of agreement (Nott: -0.10 +/- 0.66 D, BCC: -0.05 +/- 0.75 D). The 95% limits of agreement for the four techniques were similar (COA = +/- 0.92 to +/-1.00 D) yet clinically significant, according to the expected values of the AR. The two dynamic retinoscopy techniques (Nott and MEM) had a better agreement (COA = +/-0.64 D) although this COA must be interpreted in the context of the low MEM repeatability (COR = +/-0.98 D). The best method of assessing AR was Nott retinoscopy. The BCC technique was also repeatable, and both are recommended as suitable methods for clinical use. Despite better agreement between MEM and Nott, agreement among the remaining methods was poor such that their interchangeable use in clinical practice is not recommended.
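The Bland-Altman quantities used in this abstract, the mean difference (MD) and the coefficient of repeatability (COR, 1.96 times the SD of the within-subject differences), can be computed as sketched below on hypothetical paired data.

```python
import statistics

def bland_altman(first, second):
    """Mean difference and coefficient of repeatability (1.96 * sample SD
    of the within-subject differences) for paired measurements."""
    diffs = [a - b for a, b in zip(first, second)]
    md = statistics.mean(diffs)
    cor = 1.96 * statistics.stdev(diffs)
    return md, cor

# Hypothetical accommodative responses (dioptres) on two visits.
visit1 = [2.10, 1.95, 2.30, 2.05, 2.20, 1.90]
visit2 = [2.00, 2.05, 2.25, 2.10, 2.10, 2.00]
md, cor = bland_altman(visit1, visit2)
print(round(md, 3), round(cor, 3))
```

The 95 % limits of agreement are then MD ± COR, which is how intervals such as -0.10 +/- 0.66 D in the abstract are formed.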

  20. On summary measure analysis of linear trend repeated measures data: performance comparison with two competing methods.

    Science.gov (United States)

    Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh

    2012-03-22

The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare performances of SMA, linear mixed model (LMM), and unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and mean of response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of SMA vs. LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was not often robust and led to non-sensible results when the covariance structure for errors was misspecified. The results argue for discarding the UMA, which often yielded extremely conservative inferences for such data. It was shown that summary measure is a simple, safe and powerful approach in which the loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice to reliably analyze the linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
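The guideline described, reducing each subject's series to a least-squares slope and then comparing groups on that summary measure, can be sketched as follows with hypothetical data.

```python
import math
import statistics

def ls_slope(times, values):
    """Ordinary least-squares slope of values against times for one subject."""
    tbar = statistics.mean(times)
    vbar = statistics.mean(values)
    num = sum((t - tbar) * (v - vbar) for t, v in zip(times, values))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

times = [0, 1, 2, 3]  # measurement occasions

# Hypothetical repeated measurements, one row per subject.
group_a = [[10, 11, 13, 14], [9, 10, 12, 13], [11, 13, 14, 16]]
group_b = [[10, 10, 11, 11], [9, 9, 10, 10], [11, 12, 12, 13]]

slopes_a = [ls_slope(times, y) for y in group_a]
slopes_b = [ls_slope(times, y) for y in group_b]

# Two-sample t statistic on the summary measures (equal-variance form).
na, nb = len(slopes_a), len(slopes_b)
sp2 = ((na - 1) * statistics.variance(slopes_a)
       + (nb - 1) * statistics.variance(slopes_b)) / (na + nb - 2)
t_stat = (statistics.mean(slopes_a) - statistics.mean(slopes_b)) / math.sqrt(
    sp2 * (1 / na + 1 / nb)
)
print(round(t_stat, 2))
```

Collapsing each series to one number sidesteps the within-subject correlation entirely, which is what makes the summary measure approach safe when the covariance structure is unknown.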

  1. Repeatability of Objective Measurements of Linear Udder and Body ...

    African Journals Online (AJOL)

The objective of this study was to estimate the repeatability of objective measurements of linear udder and body conformation traits and to evaluate the objectivity of the measurements in Friesian x Bunaji cows. Data from 50 (F1) Friesian x Bunaji cows collected between 2007 and 2008 at the Dairy Research Farm of the ...

  2. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
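A compound-symmetry fact underlying such savings (not the paper's exact ANCOVA formulas, which also involve the baseline covariate) is that the variance of the mean of k equicorrelated follow-up measures, relative to a single measure, is (1 + (k-1)ρ)/k.

```python
def relative_variance_of_mean(k, rho):
    """Var(mean of k measures) / Var(single measure) when all pairwise
    correlations equal rho (compound symmetry)."""
    return (1.0 + (k - 1) * rho) / k

# Illustrative correlation of 0.5 between repeated measures.
for k in (1, 2, 3, 4):
    print(k, round(relative_variance_of_mean(k, 0.5), 3))
```

Because the required sample size scales with this variance factor, averaging even a few moderately correlated follow-up measures shrinks the trial, while ρ = 1 (perfectly redundant repeats) yields no saving at all.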

  3. Does Dry Eye Affect Repeatability of Corneal Topography Measurements?

    Science.gov (United States)

    Doğan, Aysun Şanal; Gürdal, Canan; Köylü, Mehmet Talay

    2018-04-01

The purpose of this study was to assess the repeatability of corneal topography measurements in dry eye patients and healthy controls. Participants underwent consecutive corneal topography measurements (Sirius; Costruzione Strumenti Oftalmici, Florence, Italy). Two images with acquisition quality higher than 90% were accepted. The following parameters were evaluated: minimum and central corneal thickness, aqueous depth, apex curvature, anterior chamber volume, horizontal anterior chamber diameter, iridocorneal angle, cornea volume, and average simulated keratometry. Repeatability was assessed by calculating the intra-class correlation coefficient. Thirty-three patients with dry eye syndrome and 40 healthy controls were enrolled in the study. The groups were similar in terms of age (39 [18-65] vs. 30.5 [18-65] years, p=0.198) and gender (M/F: 4/29 vs. 8/32, p=0.366). Intra-class correlation coefficients among all topography parameters within both groups showed excellent repeatability (>0.90). The anterior segment measurements provided by the Sirius corneal topography system were highly repeatable for dry eye patients and are sufficiently reliable for clinical practice and research.
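An intra-class correlation coefficient of this kind can be computed from a one-way random-effects ANOVA. The sketch below uses the standard ICC(1,1) form with hypothetical pachymetry readings; the abstract does not state which ICC variant was used, so the choice of variant is an assumption.

```python
def icc_oneway(measurements):
    """ICC(1,1) from a one-way random-effects ANOVA.
    measurements: list of per-subject lists, each with k repeats."""
    n = len(measurements)
    k = len(measurements[0])
    grand = sum(sum(m) for m in measurements) / (n * k)
    subj_means = [sum(m) / k for m in measurements]
    bms = k * sum((s - grand) ** 2 for s in subj_means) / (n - 1)  # between-subject MS
    wms = sum(
        (x - s) ** 2 for m, s in zip(measurements, subj_means) for x in m
    ) / (n * (k - 1))  # within-subject MS
    return (bms - wms) / (bms + (k - 1) * wms)

# Hypothetical central corneal thickness (um), two acquisitions per subject.
data = [[540, 542], [555, 553], [530, 531], [560, 558], [545, 546]]
icc = icc_oneway(data)
print(round(icc, 3))
```

Values above 0.90, as reported in the abstract, indicate that between-subject differences dwarf the acquisition-to-acquisition noise.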

  4. Does Dry Eye Affect Repeatability of Corneal Topography Measurements?

    Directory of Open Access Journals (Sweden)

    Aysun Şanal Doğan

    2018-04-01

Objectives: The purpose of this study was to assess the repeatability of corneal topography measurements in dry eye patients and healthy controls. Materials and Methods: Participants underwent consecutive corneal topography measurements (Sirius; Costruzione Strumenti Oftalmici, Florence, Italy). Two images with acquisition quality higher than 90% were accepted. The following parameters were evaluated: minimum and central corneal thickness, aqueous depth, apex curvature, anterior chamber volume, horizontal anterior chamber diameter, iridocorneal angle, cornea volume, and average simulated keratometry. Repeatability was assessed by calculating the intra-class correlation coefficient. Results: Thirty-three patients with dry eye syndrome and 40 healthy controls were enrolled in the study. The groups were similar in terms of age (39 [18-65] vs. 30.5 [18-65] years, p=0.198) and gender (M/F: 4/29 vs. 8/32, p=0.366). Intra-class correlation coefficients among all topography parameters within both groups showed excellent repeatability (>0.90). Conclusion: The anterior segment measurements provided by the Sirius corneal topography system were highly repeatable for dry eye patients and are sufficiently reliable for clinical practice and research.

  5. Measuring Repeatability of the Focus-variable Lenses

    Directory of Open Access Journals (Sweden)

    Jan Řezníček

    2014-12-01

In the field of photogrammetry, the optical system, usually represented by a glass lens, is used for metric purposes. Therefore, the aberration characteristics of such a lens, which induce deviations from projective imaging, have to be well known. However, the most important property of a metric lens is the stability of its glass and mechanical elements, which ensures long-term reliability of the measured parameters. In the case of a focus-variable lens, the repeatability of the lens setup is important as well. Lenses with a fixed focal length are usually considered "fixed" though, in fact, most of them contain one or more movable glass elements providing the focusing function. In cases where the lens is not equipped with fixing screws, the repeatability of the calibration parameters should be known. This paper derives simple mathematical formulas that can be used for measuring the repeatability of focus-variable lenses, and gives a demonstrative example of such a measurement. The given procedure has the advantage that only the demanded parameters are estimated; hence, no unwanted correlations with additional parameters exist. The test arrangement enables us to measure each demanded magnification of the optical system, which is important in close-range photogrammetry.

  6. The effect of repeated applanation on subsequent IOP measurements.

    Science.gov (United States)

    AlMubrad, Turki M; Ogbuehi, Kelechi C

    2008-11-01

In studies aimed at assessing the accuracy and repeatability of non-contact tonometers, the order in which these tonometers and the Goldmann tonometer are used is usually randomised, despite studies in the literature that demonstrate an ocular massage effect that occurs post-applanation but not after non-contact tonometry. The purpose of this study was to investigate the effect of repeated corneal applanation on subsequent assessments of IOP. Data were obtained from 65 left eyes of 65 young, oculovisually normal subjects. Three sets of IOP measurements were obtained, one set with the Goldmann applanation tonometer and two with the Topcon CT80 non-contact tonometer (one set each before and after applanation with the Goldmann tonometer), in each of two separate measurement sessions, one week apart. The average (and SD) IOP measured with the Goldmann tonometer in the first session (14.8+/-2.9 mmHg) did not vary significantly from the IOP measured with the non-contact tonometer (pre-applanation) in both sessions or from the average Goldmann IOP in the second session. The bias (mean difference +/- SD) between methods was 0.3+/-1.4 mmHg and 0.4+/-1.4 mmHg, respectively, for the first and second sessions, with the CT80 (pre-applanation) recording the higher IOP in both sessions. The within-session repeatability coefficients were +/-2.3 mmHg, +/-2.6 mmHg, +/-2.1 mmHg and +/-2.0 mmHg for the CT80 (pre-applanation) in the first and second sessions, and the Goldmann tonometer in the first and second sessions, respectively. Test-retest repeatability coefficients were +/-2.8 mmHg and +/-2.5 mmHg for the CT80 (pre-applanation) and the Goldmann tonometer, respectively. Post-applanation with the Goldmann tonometer, there was a statistically significant reduction in the IOP measured with the non-contact tonometer in both sessions. These results suggest that repeated corneal applanation leads to a statistically significant reduction in IOP on subsequent measurements.

  7. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical example …

  8. An approach to measurement

    International Nuclear Information System (INIS)

    Gudder, S.P.

    1984-01-01

    A new approach to measurement theory is presented. The definition of measurement is motivated by direct laboratory procedures as they are carried out in practice. The theory is developed within the quantum logic framework. The work clarifies an important problem in the quantum logic approach; namely, where the Hilbert space comes from. The relationship between measurements and observables is considered, and a Hilbert space embedding theorem is presented. Charge systems are also discussed. (author)

  9. Analysis of repeated measurement data in the clinical trials

    Science.gov (United States)

    Singh, Vineeta; Rana, Rakesh Kumar; Singhal, Richa

    2013-01-01

    Statistics is an integral part of Clinical Trials. Elements of statistics span Clinical Trial design, data monitoring, analyses and reporting. A solid understanding of statistical concepts by clinicians improves the comprehension and the resulting quality of Clinical Trials. In biomedical research it has been seen that researchers frequently use the t-test and ANOVA to compare means between groups of interest, irrespective of the nature of the data. In Clinical Trials, data are recorded on the same patients more than twice. In such a situation the standard ANOVA procedures are not appropriate, as they do not consider dependencies between observations within subjects. To deal with such study data, Repeated Measures ANOVA should be used. In this article the application of one-way Repeated Measures ANOVA is demonstrated using SPSS (Statistical Package for Social Sciences) Version 15.0 on data collected at four time points (day 0, day 15, day 30, and day 45) of a multicentre clinical trial conducted on Pandu Roga (~Iron Deficiency Anemia) with an Ayurvedic formulation, Dhatrilauha. PMID:23930038
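
    The one-way repeated measures ANOVA applied in this trial partitions between-subject variability out of the error term before forming the F ratio. A minimal pure-Python sketch of that computation (the scores below are made up for illustration, not the Dhatrilauha trial data):

```python
def rm_anova(data):
    """One-way repeated measures ANOVA.

    data: list of subjects, each a list of scores, one per time point.
    Returns (F, df_conditions, df_error).
    """
    n = len(data)          # subjects
    k = len(data[0])       # repeated time points
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]

    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_cond - ss_subj   # subject-by-condition residual

    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_error / df_error)
    return F, df_cond, df_error

# Four subjects measured at three time points (hypothetical scores):
F, df1, df2 = rm_anova([[1, 2, 4], [2, 3, 5], [3, 4, 7], [4, 5, 8]])
print(F, df1, df2)  # F = 117.0 on (2, 6) degrees of freedom
```

    The p-value follows from the F distribution with (df_cond, df_error) degrees of freedom (e.g. via scipy.stats.f.sf); SPSS reports the same statistic, together with the sphericity corrections discussed elsewhere in these records.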

  10. Dispersion Measure Variation of Repeating Fast Radio Burst Sources

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yuan-Pei; Zhang, Bing, E-mail: yypspore@gmail.com, E-mail: zhang@physics.unlv.edu [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China)

    2017-09-20

    The repeating fast radio burst (FRB) 121102 was recently localized in a dwarf galaxy at a cosmological distance. The dispersion measure (DM) derived for each burst from FRB 121102 so far has not shown significant evolution, even though an apparent increase was recently seen with newly detected VLA bursts. It is expected that more repeating FRB sources may be detected in the future. In this work, we investigate a list of possible astrophysical processes that might cause DM variation of a particular FRB source. The processes include (1) cosmological scale effects such as Hubble expansion and large-scale structure fluctuations; (2) FRB local effects such as gas density fluctuation, expansion of a supernova remnant (SNR), a pulsar wind nebula, and an H II region; and (3) the propagation effect due to plasma lensing. We find that the DM variations contributed by the large-scale structure are extremely small, and any observable DM variation is likely caused by the plasma local to the FRB source. In addition to mechanisms that decrease DM over time, we suggest that an FRB source in an expanding SNR around a nearly neutral ambient medium during the deceleration (Sedov–Taylor and snowplow) phases or in a growing H II region can increase DM. Some effects (e.g., an FRB source moving in an H II region or plasma lensing) can produce either positive or negative DM variations. Future observations of DM variations of FRB 121102 and other repeating FRB sources can provide important clues regarding the physical origin of these sources.

  11. Dispersion Measure Variation of Repeating Fast Radio Burst Sources

    International Nuclear Information System (INIS)

    Yang, Yuan-Pei; Zhang, Bing

    2017-01-01

    The repeating fast radio burst (FRB) 121102 was recently localized in a dwarf galaxy at a cosmological distance. The dispersion measure (DM) derived for each burst from FRB 121102 so far has not shown significant evolution, even though an apparent increase was recently seen with newly detected VLA bursts. It is expected that more repeating FRB sources may be detected in the future. In this work, we investigate a list of possible astrophysical processes that might cause DM variation of a particular FRB source. The processes include (1) cosmological scale effects such as Hubble expansion and large-scale structure fluctuations; (2) FRB local effects such as gas density fluctuation, expansion of a supernova remnant (SNR), a pulsar wind nebula, and an H II region; and (3) the propagation effect due to plasma lensing. We find that the DM variations contributed by the large-scale structure are extremely small, and any observable DM variation is likely caused by the plasma local to the FRB source. In addition to mechanisms that decrease DM over time, we suggest that an FRB source in an expanding SNR around a nearly neutral ambient medium during the deceleration (Sedov–Taylor and snowplow) phases or in a growing H II region can increase DM. Some effects (e.g., an FRB source moving in an H II region or plasma lensing) can produce either positive or negative DM variations. Future observations of DM variations of FRB 121102 and other repeating FRB sources can provide important clues regarding the physical origin of these sources.
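
    The DM discussed in these records is measured from the quadratic frequency dependence of a burst's arrival time. A small sketch of the standard cold-plasma delay formula (the DM value and band edges below are illustrative, roughly matching FRB 121102):

```python
def dispersion_delay(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time delay (s) between two observing frequencies for a given DM.

    dm: dispersion measure in pc cm^-3
    f_lo_ghz, f_hi_ghz: observing frequencies in GHz (f_lo < f_hi)
    Uses the cold-plasma dispersion constant, ~4.1488 ms GHz^2 pc^-1 cm^3.
    """
    k_dm = 4.148808e-3  # s GHz^2 pc^-1 cm^3
    return k_dm * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# FRB 121102 has DM near 560 pc cm^-3; across a 1.2-1.5 GHz band:
dt = dispersion_delay(560.0, 1.2, 1.5)
print(round(dt, 4))  # ~0.58 s of dispersive sweep across the band
```

    A DM change of a few pc cm^-3, as debated for FRB 121102, shifts this delay by milliseconds, which is why repeated bursts allow such sensitive DM-variation measurements.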

  12. Maintaining confidentiality in prospective studies: anonymous repeated measurements via email (ARME) procedure.

    Science.gov (United States)

    Carli, Vladimir; Hadlaczky, Gergö; Wasserman, Camilla; Stingelin-Giles, Nicola; Reiter-Theil, Stella; Wasserman, Danuta

    2012-02-01

    Respecting and protecting the confidentiality of data and the privacy of individuals regarding the information that they have given as participants in a research project is a cornerstone of complying with accepted research standards. However, in longitudinal studies, establishing and maintaining privacy is often challenging because of the necessity of repeated contact with participants. A novel internet-based solution is introduced here, which maintains privacy while ensuring linkage of data to individual participants in a repeated measures design. With the use of the anonymous repeated measurements via email (ARME) procedure, two separate one-way communication systems are established through ad hoc email accounts and a secure study website. Strengths and limitations of the approach are discussed.

  13. Validity and repeatability of inertial measurement units for measuring gait parameters.

    Science.gov (United States)

    Washabaugh, Edward P; Kalyanaraman, Tarun; Adamczyk, Peter G; Claflin, Edward S; Krishnan, Chandramouli

    2017-06-01

    Inertial measurement units (IMUs) are small wearable sensors that have tremendous potential to be applied to clinical gait analysis. They allow objective evaluation of gait and movement disorders outside the clinic and research laboratory, and permit evaluation on large numbers of steps. However, repeatability and validity data of these systems are sparse for gait metrics. The purpose of this study was to determine the validity and between-day repeatability of spatiotemporal metrics (gait speed, stance percent, swing percent, gait cycle time, stride length, cadence, and step duration) as measured with the APDM Opal IMUs and Mobility Lab system. We collected data on 39 healthy subjects. Subjects were tested over two days while walking on a standard treadmill, split-belt treadmill, or overground, with IMUs placed in two locations: both feet and both ankles. The spatiotemporal measurements taken with the IMU system were validated against data from an instrumented treadmill, or using standard clinical procedures. Repeatability and minimally detectable change (MDC) of the system was calculated between days. IMUs displayed high to moderate validity when measuring most of the gait metrics tested. Additionally, these measurements appear to be repeatable when used on the treadmill and overground. The foot configuration of the IMUs appeared to better measure gait parameters; however, both the foot and ankle configurations demonstrated good repeatability. In conclusion, the IMU system in this study appears to be both accurate and repeatable for measuring spatiotemporal gait parameters in healthy young adults. Copyright © 2017 Elsevier B.V. All rights reserved.
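
    The minimally detectable change (MDC) reported in this study is conventionally derived from the between-day reliability (ICC) and the standard deviation of the measurements. A hedged sketch of that standard formula (the gait-speed numbers below are invented, not the study's values):

```python
import math

def minimal_detectable_change(sd, icc, z=1.96):
    """MDC at 95% confidence from test-retest reliability.

    sem = sd * sqrt(1 - icc)   # standard error of measurement
    mdc = z * sqrt(2) * sem    # sqrt(2): difference of two measurements
    """
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# e.g. gait speed with SD 0.12 m/s and between-day ICC 0.90 (illustrative):
mdc = minimal_detectable_change(0.12, 0.90)
print(round(mdc, 3))  # smallest change distinguishable from measurement noise
```

    Any observed change smaller than the MDC is indistinguishable from the device's day-to-day measurement noise.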

  14. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.

  15. [Analysis of binary classification repeated measurement data with GEE and GLMMs using SPSS software].

    Science.gov (United States)

    An, Shengli; Zhang, Yanhong; Chen, Zheng

    2012-12-01

    To analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were fitted to a sample of binary repeated measurement data in SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.

  16. Aberrant approach-avoidance conflict resolution following repeated cocaine pre-exposure.

    Science.gov (United States)

    Nguyen, David; Schumacher, Anett; Erb, Suzanne; Ito, Rutsuko

    2015-10-01

    Addiction is characterized by persistence to seek drug reinforcement despite negative consequences. Drug-induced aberrations in approach and avoidance processing likely facilitate the sustenance of addiction pathology. Currently, the effects of repeated drug exposure on the resolution of conflicting approach and avoidance motivational signals have yet to be thoroughly investigated. The present study sought to investigate the effects of cocaine pre-exposure on conflict resolution using novel approach-avoidance paradigms. We used a novel mixed-valence conditioning paradigm to condition cocaine-pre-exposed rats to associate visuo-tactile cues with either the delivery of sucrose reward or shock punishment in the arms in which the cues were presented. Following training, exploration of an arm containing a superimposition of the cues was assessed as a measure of conflict resolution behavior. We also used a mixed-valence runway paradigm wherein cocaine-pre-exposed rats traversed an alleyway toward a goal compartment to receive a pairing of sucrose reward and shock punishment. Latency to enter the goal compartment across trials was taken as a measure of motivational conflict. Our results reveal that cocaine pre-exposure attenuated learning for the aversive cue association in our conditioning paradigm and enhanced preference for mixed-valence stimuli in both paradigms. Repeated cocaine pre-exposure allows appetitive approach motivations to gain greater influence over behavioral output in the context of motivational conflict, due to aberrant positive and negative incentive motivational processing.

  17. Repeated swim stress alters brain benzodiazepine receptors measured in vivo

    International Nuclear Information System (INIS)

    Weizman, R.; Weizman, A.; Kook, K.A.; Vocci, F.; Deutsch, S.I.; Paul, S.M.

    1989-01-01

    The effects of repeated swim stress on brain benzodiazepine receptors were examined in the mouse using both an in vivo and in vitro binding method. Specific in vivo binding of [3H]Ro15-1788 to benzodiazepine receptors was decreased in the hippocampus, cerebral cortex, hypothalamus, midbrain and striatum after repeated swim stress (7 consecutive days of daily swim stress) when compared to nonstressed mice. In vivo benzodiazepine receptor binding was unaltered after repeated swim stress in the cerebellum and pons medulla. The stress-induced reduction in in vivo benzodiazepine receptor binding did not appear to be due to altered cerebral blood flow or to an alteration in benzodiazepine metabolism or biodistribution because there was no difference in [14C]iodoantipyrine distribution or whole brain concentrations of clonazepam after repeated swim stress. Saturation binding experiments revealed a change in both apparent maximal binding capacity and affinity after repeated swim stress. Moreover, a reduction in clonazepam's anticonvulsant potency was also observed after repeated swim stress (an increase in the ED50 dose for protection against pentylenetetrazol-induced seizures), although there was no difference in pentylenetetrazol-induced seizure threshold between the two groups. In contrast to the results obtained in vivo, no change in benzodiazepine receptor binding kinetics was observed using the in vitro binding method. These data suggest that environmental stress can alter the binding parameters of the benzodiazepine receptor and that the in vivo and in vitro binding methods can yield substantially different results.

  18. Repeated swim stress alters brain benzodiazepine receptors measured in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Weizman, R.; Weizman, A.; Kook, K.A.; Vocci, F.; Deutsch, S.I.; Paul, S.M.

    1989-06-01

    The effects of repeated swim stress on brain benzodiazepine receptors were examined in the mouse using both an in vivo and in vitro binding method. Specific in vivo binding of [3H]Ro15-1788 to benzodiazepine receptors was decreased in the hippocampus, cerebral cortex, hypothalamus, midbrain and striatum after repeated swim stress (7 consecutive days of daily swim stress) when compared to nonstressed mice. In vivo benzodiazepine receptor binding was unaltered after repeated swim stress in the cerebellum and pons medulla. The stress-induced reduction in in vivo benzodiazepine receptor binding did not appear to be due to altered cerebral blood flow or to an alteration in benzodiazepine metabolism or biodistribution because there was no difference in [14C]iodoantipyrine distribution or whole brain concentrations of clonazepam after repeated swim stress. Saturation binding experiments revealed a change in both apparent maximal binding capacity and affinity after repeated swim stress. Moreover, a reduction in clonazepam's anticonvulsant potency was also observed after repeated swim stress (an increase in the ED50 dose for protection against pentylenetetrazol-induced seizures), although there was no difference in pentylenetetrazol-induced seizure threshold between the two groups. In contrast to the results obtained in vivo, no change in benzodiazepine receptor binding kinetics was observed using the in vitro binding method. These data suggest that environmental stress can alter the binding parameters of the benzodiazepine receptor and that the in vivo and in vitro binding methods can yield substantially different results.

  19. Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.

    Science.gov (United States)

    Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J

    2015-02-01

    The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association

  20. [Analysis of variance of repeated data measured by water maze with SPSS].

    Science.gov (United States)

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated measures designs. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among groups and among measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether the repeatedly measured data are correlated; if sphericity is violated, corrected results should be reported. The SPSS statistical package is available to fulfil this process.

  1. Measuring environmental change in forest ecosystems by repeated soil sampling: a North American perspective

    Science.gov (United States)

    Lawrence, Gregory B.; Fernandez, Ivan J.; Richter, Daniel D.; Ross, Donald S.; Hazlett, Paul W.; Bailey, Scott W.; Ouimet, Rock; Warby, Richard A.F.; Johnson, Arthur H.; Lin, Henry; Kaste, James M.; Lapenis, Andrew G.; Sullivan, Timothy J.

    2013-01-01

    Environmental change is monitored in North America through repeated measurements of weather, stream and river flow, air and water quality, and most recently, soil properties. Some skepticism remains, however, about whether repeated soil sampling can effectively distinguish between temporal and spatial variability, and efforts to document soil change in forest ecosystems through repeated measurements are largely nascent and uncoordinated. In eastern North America, repeated soil sampling has begun to provide valuable information on environmental problems such as air pollution. This review synthesizes the current state of the science to further the development and use of soil resampling as an integral method for recording and understanding environmental change in forested settings. The origins of soil resampling reach back to the 19th century in England and Russia. The concepts and methodologies involved in forest soil resampling are reviewed and evaluated through a discussion of how temporal and spatial variability can be addressed with a variety of sampling approaches. Key resampling studies demonstrate the type of results that can be obtained through differing approaches. Ongoing, large-scale issues such as recovery from acidification, long-term N deposition, C sequestration, effects of climate change, impacts from invasive species, and the increasing intensification of soil management all warrant the use of soil resampling as an essential tool for environmental monitoring and assessment. Furthermore, with better awareness of the value of soil resampling, studies can be designed with a long-term perspective so that information can be efficiently obtained well into the future to address problems that have not yet surfaced.

  2. Acute caffeine effect on repeatedly measured P300

    OpenAIRE

    Pan, Jingbo; Takeshita, Tatsuya; Morimoto, Kanehisa

    2000-01-01

    The acute effect of a single dose of caffeine on the P300 event-related brain potential (ERP) was assessed in a study using a repeatedly presented auditory oddball button-press task. A dose (5 mg/kg body weight) of either caffeine or placebo lactose, dissolved in a cup of decaffeinated coffee, was administered double-blindly to coffee drinkers who had abstained from coffee for 24 hours, with the presentation order of the sessions counterbalanced and separated by 2–4 weeks. The caffeine-treatment ...

  3. Intelligence Is in the Eye of the Beholder: Investigating Repeated IQ Measurements in Forensic Psychiatry

    Science.gov (United States)

    Habets, Petra; Jeandarme, Inge; Uzieblo, Kasia; Oei, Karel; Bogaerts, Stefan

    2015-01-01

    Background: A stable assessment of cognition is of paramount importance for forensic psychiatric patients (FPP). The purpose of this study was to compare repeated measures of IQ scores in FPPs with and without intellectual disability. Methods: Repeated measurements of IQ scores in FPPs (n = 176) were collected. Differences between tests were…

  4. Statistical Validation for Clinical Measures: Repeatability and Agreement of Kinect™-Based Software.

    Science.gov (United States)

    Lopez, Natalia; Perez, Elisa; Tello, Emanuel; Rodrigo, Alejandro; Valentinuzzi, Max E

    2018-01-01

    The rehabilitation process is a fundamental stage for recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors, mostly based on their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a tracking platform of the movement made by an individual's upper limb using Kinect sensor(s), to be applied to the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and reporting of the patient's data. For clinical purposes, the software information is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. The agreement of joint angles measured with the proposed software and goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, meaning interchangeability of both techniques. Additionally, the results of Bland-Altman analysis of repeatability show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for the clinical use. The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantification of the rehabilitation success. The simplicity, low cost, and visualization possibilities enhance the use of the Kinect-based software for rehabilitation and other applications, and the expert's opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement technique with established
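
    The Bland-Altman agreement analysis used above reduces to a bias (mean difference) and 1.96-SD limits of agreement. A minimal sketch (the paired angle readings below are hypothetical, not the study's data):

```python
import math

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# software vs goniometer joint angles (degrees), hypothetical readings:
software   = [92.1, 45.3, 130.2, 88.7, 61.0, 110.5]
goniometer = [91.0, 46.0, 129.5, 90.1, 60.2, 111.3]
bias, lo, hi = bland_altman_limits(software, goniometer)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

    If nearly all paired differences fall inside (lo, hi), and those limits are clinically acceptable, the two instruments can be treated as interchangeable, which is the study's conclusion.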

  5. Repeatability of Lucas chamber measurements; Powtarzalnosc pomiarow za pomoca komory Lucasa

    Energy Technology Data Exchange (ETDEWEB)

    Machaj, B.

    1997-12-31

    Results of investigations concerning the repeatability of measurements carried out with a Lucas chamber are presented in the report. The Lucas chamber is used for determination of radon concentration in air; the chamber was measured on a laboratory test stand. The repeatability of the measurements is ±5.4% relative. The error due to instability of the measuring channel gain is estimated at 2-3% relative. (author). 5 refs, 17 figs, 5 tabs.

  6. High precision measurement of the micro-imaging system to check repeatability of precision

    International Nuclear Information System (INIS)

    Cheng Lin; Song Li; Ma Chuntao; Luo Hongxin; Wang Jie

    2010-01-01

    The beamline slits of Shanghai Synchrotron Radiation Facility (SSRF) are required to have a repeatability of better than 1 μm. Before installation of the slits, off-line and/or on-line repeatability measurements must be conducted. A machine-vision measuring system based on a high-resolution CCD and an adjustable high-magnification lens was used for this purpose. A multi-level filtering method was used to treat the imaging data. After image binarization, the imaging noise was suppressed effectively by algebraic mean filtering, statistical median filtering, and least-squares filtering. Using the difference between the images before and after slit movement, the average displacement of the slit blades could be obtained and the repeatability of the slit measured, with a resolution of 0.3 μm for the measurement system. The experimental results show that this measurement system meets the requirements for non-contact measurement of slit repeatability. (authors)
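
    The displacement-from-image-difference idea in this record can be illustrated on a toy 1-D edge profile; a hedged sketch, not the SSRF processing pipeline (the profiles and the pixel calibration below are invented):

```python
def edge_position(profile, threshold=0.5):
    """Sub-pixel position where a rising intensity profile crosses threshold."""
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if a <= threshold < b:
            return i + (threshold - a) / (b - a)  # linear interpolation
    raise ValueError("no crossing found")

# slit-blade edge imaged before and after a commanded move (toy data):
before = [0.0, 0.0, 0.1, 0.4, 0.8, 1.0, 1.0, 1.0]
after  = [0.0, 0.0, 0.0, 0.1, 0.4, 0.8, 1.0, 1.0]
shift_px = edge_position(after) - edge_position(before)
pixel_size_um = 0.3   # illustrative calibration, matching the quoted resolution
print(shift_px * pixel_size_um)  # blade displacement in micrometres
```

    Averaging such sub-pixel edge positions along the blade is what lets a 0.3 μm-resolution optical system verify a 1 μm repeatability specification.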

  7. Measuring environmental change in forest ecosystems by repeated soil sampling: A North American perspective

    Science.gov (United States)

    Gregory B. Lawrence; Ivan J. Fernandez; Daniel D. Richter; Donald S. Ross; Paul W. Hazlett; Scott W. Bailey; Rock Ouimet; Richard A. F. Warby; Arthur H. Johnson; Henry Lin; James M. Kaste; Andrew G. Lapenis; Timothy J. Sullivan

    2013-01-01

    Environmental change is monitored in North America through repeated measurements of weather, stream and river flow, air and water quality, and most recently, soil properties. Some skepticism remains, however, about whether repeated soil sampling can effectively distinguish between temporal and spatial variability, and efforts to document soil change in forest...

  8. Preventive maintenance measures and repeat tests on actuators

    International Nuclear Information System (INIS)

    Hueren, H.

    1990-01-01

    At Biblis Nuclear Power Station, about 1500 electrical actuators and variable speed drives of various model ranges and with various driving end speeds and actuating times are installed and about 600 of these are located in important safety engineering systems. In order to optimize the preventive maintenance measures on the drives, a data bank has been established into which are stored, in addition to the fixed type data of each drive, inter alia, statements about location of application, valve type, inspection cycle, calendar year of next maintenance, findings during inspection measures and causes of faults. Before each unit inspection, in addition to the inspection lists, the maintenance and installation records and also the associated job instructions are produced from this data processing equipment. (orig.) [de

  9. Repeat Purchase Intention of Starbucks Consumers in Indonesia: A Green Brand Approach

    Directory of Open Access Journals (Sweden)

    Naili Farida

    2015-12-01

    This study develops and tests the repeat purchase intention model (with a green brand approach). The model considers four determinants: perceived image, satisfaction, trust, and attitude. The model is tested using data from a survey of 203 Starbucks customers in Indonesia. The analysis was carried out by employing Structural Equation Modeling. The data was processed with AMOS 21. The results confirm that the company’s green brand image is positively and significantly related to consumer satisfaction, trust, and attitude. On the other hand, consumer satisfaction and trust are shown to have insignificant influence on repeat purchase intention.

  10. Evaluation of the Repeatability and the Reproducibility of AL-Scan Measurements Obtained by Residents

    Directory of Open Access Journals (Sweden)

    Mehmet Kola

    2014-01-01

    Purpose. To assess the repeatability and reproducibility of ocular biometry and intraocular lens (IOL) power measurements obtained by ophthalmology residents using an AL-Scan device, a novel optical biometer. Methods. Two ophthalmology residents were instructed regarding the AL-Scan device. Both performed ocular biometry and IOL power measurements using AL-Scan, three times on each of 128 eyes, independently of one another. Corneal keratometry readings, horizontal iris width, central corneal thickness, anterior chamber depth, pupil size, and axial length values measured by both residents were recorded, together with IOL power values calculated on the basis of different IOL calculation formulas (SRK/T, Holladay, and Hoffer Q). Repeatability and reproducibility of the measurements obtained were analyzed using the intraclass correlation coefficient (ICC). Results. Repeatability (ICC, 0.872-0.999 for resident 1 versus 0.905-0.999 for resident 2) and reproducibility (ICC, 0.916-0.999) were high for all biometric measurements. Repeatability (ICC, 0.981-0.983 for resident 1 versus 0.995-0.996 for resident 2) and reproducibility (ICC, 0.996 for all) were also high for all IOL power measurements. Conclusions. The AL-Scan device exhibits good repeatability and reproducibility in all biometric measurements and IOL power calculations, independent of the operator concerned.
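
    The ICC used throughout this study can be computed directly from the usual two-way ANOVA mean squares; a minimal sketch of the absolute-agreement ICC(2,1) (the axial-length readings below are hypothetical, not the study's data):

```python
def icc_2_1(data):
    """Two-way random-effects, absolute-agreement ICC(2,1).

    data: list of subjects, each a list of k repeated measurements.
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(r) for r in data) / (n * k)
    row_m = [sum(r) / k for r in data]
    col_m = [sum(r[j] for r in data) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_m)
    ss_cols = n * sum((m - grand) ** 2 for m in col_m)
    ss_tot = sum((x - grand) ** 2 for r in data for x in r)
    ss_err = ss_tot - ss_rows - ss_cols
    msr = ss_rows / (n - 1)               # between-subject mean square
    msc = ss_cols / (k - 1)               # between-measurement mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# three axial-length readings (mm) on five eyes, hypothetical:
readings = [[23.10, 23.12, 23.11],
            [24.05, 24.02, 24.04],
            [22.80, 22.83, 22.81],
            [25.40, 25.38, 25.41],
            [23.90, 23.91, 23.89]]
print(round(icc_2_1(readings), 4))  # close to 1: highly repeatable
```

    Values approaching 1 indicate that between-eye differences dominate measurement noise, which is how the ICC ranges reported above should be read.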

  11. Information technology-based approaches to reducing repeat drug exposure in patients with known drug allergies.

    Science.gov (United States)

    Cresswell, Kathrin M; Sheikh, Aziz

    2008-05-01

    There is increasing interest internationally in ways of reducing the high disease burden resulting from errors in medicine management. Repeat exposure to drugs to which patients have a known allergy has been a repeatedly identified error, often with disastrous consequences. Drug allergies are immunologically mediated reactions that are characterized by specificity and recurrence on reexposure. These repeat reactions should therefore be preventable. We argue that there is insufficient attention being paid to studying and implementing system-based approaches to reducing the risk of such accidental reexposure. Drawing on recent and ongoing research, we discuss a number of information technology-based interventions that can be used to reduce the risk of recurrent exposure. Proven to be effective in this respect are interventions that provide real-time clinical decision support; also promising are interventions aiming to enhance patient recognition, such as bar coding, radiofrequency identification, and biometric technologies.

  12. Measurement repeatability of tibial tuberosity-trochlear groove offset distance in red fox (Vulpes vulpes) cadavers

    NARCIS (Netherlands)

    Miles, J.E.; Jensen, B.R.; Kirpensteijn, J.; Svalastoga, E.L.; Eriksen, T.

    2013-01-01

    Abstract OBJECTIVE: To describe CT image reconstruction criteria for measurement of the tibial tuberosity-trochlear groove (TT-TG) offset distance, evaluate intra- and inter-reconstruction repeatability, and identify key sources of error in the measurement technique, as determined in vulpine hind

  13. Repeat Purchase Intention of Starbucks Consumers in Indonesia: A Green Brand Approach

    OpenAIRE

    Naili Farida; Elia Ardyan

    2015-01-01

    This study develops and tests the repeat purchase intention model (with a green brand approach). The model considers four determinants: perceived image, satisfaction, trust, and attitude. The model is tested using survey data from 203 Starbucks customers in Indonesia. The analysis was carried out using Structural Equation Modeling. The data were processed with AMOS 21. The results confirm that the company’s green brand image is positively and significantly related to consumer satisfaction, trust, and attitude.

  14. Formatting data files for repeated-measures analyses in SPSS: Using the Aggregate and Restructure procedures

    Directory of Open Access Journals (Sweden)

    Gyslain Giguère

    2006-03-01

    Full Text Available In this tutorial, we demonstrate how to use the Aggregate and Restructure procedures available in SPSS (versions 11 and up) to prepare data files for repeated-measures analyses. In the first two sections of the tutorial, we briefly describe the Aggregate and Restructure procedures. In the final section, we present an example in which the data from a fictional lexical decision task are prepared for analysis using a mixed-design ANOVA. The tutorial demonstrates that the presented method is the most efficient way to prepare data for repeated-measures analyses in SPSS.
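
    The two SPSS steps the tutorial describes (averaging trial-level data per subject-by-condition cell, then pivoting to one "wide" row per subject) can be mimicked outside SPSS. A minimal Python sketch with invented lexical-decision-style data, not the tutorial's own files:

```python
from collections import defaultdict
from statistics import mean

# Trial-level ("long") records: subject, condition, reaction time in ms.
trials = [
    ("s1", "word", 512), ("s1", "word", 498), ("s1", "nonword", 655),
    ("s1", "nonword", 601), ("s2", "word", 530), ("s2", "word", 526),
    ("s2", "nonword", 622), ("s2", "nonword", 630),
]

# Step 1 -- Aggregate: mean RT per subject x condition cell.
cells = defaultdict(list)
for subj, cond, rt in trials:
    cells[(subj, cond)].append(rt)
agg = {key: mean(rts) for key, rts in cells.items()}

# Step 2 -- Restructure: one wide row per subject, one column per
# condition, the layout repeated-measures ANOVA procedures expect.
subjects = sorted({s for s, _ in agg})
conditions = sorted({c for _, c in agg})
wide = {s: {c: agg[(s, c)] for c in conditions} for s in subjects}

print(wide["s1"]["word"])  # → 505 (mean of 512 and 498)
```

    In SPSS terms, step 1 corresponds to Aggregate with subject and condition as break variables, and step 2 to Restructure (cases to variables).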

  15. Convergence of repeated quantum nondemolition measurements and wave-function collapse

    International Nuclear Information System (INIS)

    Bauer, Michel; Bernard, Denis

    2011-01-01

    Motivated by recent experiments on quantum trapped fields, we give a rigorous proof that repeated indirect quantum nondemolition (QND) measurements converge to the collapse of the wave function as predicted by the postulates of quantum mechanics for direct measurements. We also relate the rate of convergence toward the collapsed wave function to the relative entropy of each indirect measurement, a result which makes contact with information theory.

  16. Repeatability of central corneal thickness measurement with the Pentacam HR system

    Directory of Open Access Journals (Sweden)

    Ruiz Simonato Alonso

    2012-02-01

    Full Text Available PURPOSE: To assess the repeatability of central corneal thickness measurement at the geometrical center (Central Corneal Thickness - CCT given by the Pentacam High Resolution (HR Comprehensive Eye Scanner (Oculus, Wetzlar, Germany over time. METHODS: Prospective, single center, observational study. Two separate CCT measurements were taken by the Pentacam corneal tomography exam (CTm 3 to 12 months apart, and compared. RESULTS: One hundred and sixteen eyes (n=116 of 62 health patients were included in this study. Average CCT in first and last visits was 541.6±37 µm and 543.6±36.9 µm respectively. Mean difference between both measurements was 9.2±6.4 µm, and there was no statistically significant difference in CCT measurement between visits, with good correlation between them (P = 0.057, r² = 0,9209. CONCLUSION: Pentacam (HR CTm gives repeatable CCT measurements over time.

  17. Use of the Counseling Center Assessment of Psychological Symptoms 62 (CCAPS-62) as a Repeated Measure

    Science.gov (United States)

    Ghosh, Arpita; Rieder Bennett, Sara; Martin, Juanita K.

    2018-01-01

    The purpose of this initial, exploratory study was to examine the utility of the Counseling Center Assessment of Psychological Symptoms-62 (CCAPS-62) as a repeated measure tool at one university counseling center. This study investigated whether clients engaged in individual counseling changed in symptomology while in treatment and when (e.g.,…

  18. Use of Repeated Blood Pressure and Cholesterol Measurements to Improve Cardiovascular Disease Risk Prediction

    DEFF Research Database (Denmark)

    Paige, Ellie; Barrett, Jessica; Pennells, Lisa

    2017-01-01

    The added value of incorporating information from repeated blood pressure and cholesterol measurements to predict cardiovascular disease (CVD) risk has not been rigorously assessed. We used data on 191,445 adults from the Emerging Risk Factors Collaboration (38 cohorts from 17 countries with data...

  19. Counterbalancing and Other Uses of Repeated-Measures Latin-Square Designs: Analyses and Interpretations.

    Science.gov (United States)

    Reese, Hayne W.

    1997-01-01

    Recommends that when repeated-measures Latin-square designs are used to counterbalance treatments across a procedural variable or to reduce the number of treatment combinations given to each participant, effects be analyzed statistically, and that in all uses, researchers consider alternative interpretations of the variance associated with the…
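
    One standard way to build such a counterbalancing Latin square is a Williams-type design, in which each condition appears once per position and, for an even number of conditions, each condition follows every other exactly once. A small sketch, independent of the article's own analyses:

```python
def williams_square(n):
    """Williams Latin square for an even number of conditions n.

    Row i gives the treatment order for participant (group) i; with n
    even, every condition follows every other exactly once across rows,
    balancing first-order carryover effects.
    """
    first = []
    lo, hi = 0, n - 1
    for j in range(n):                 # interleave low and high labels
        first.append(lo if j % 2 == 0 else hi)
        if j % 2 == 0:
            lo += 1
        else:
            hi -= 1
    # Remaining rows are cyclic shifts of the first row.
    return [[(c + i) % n for c in first] for i in range(n)]

square = williams_square(4)
for row in square:
    print(row)
# Each condition appears once per row and once per column, and every
# ordered pair of adjacent conditions occurs exactly once.
```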

  20. Parsimonious Structural Equation Models for Repeated Measures Data, with Application to the Study of Consumer Preferences

    Science.gov (United States)

    Elrod, Terry; Haubl, Gerald; Tipps, Steven W.

    2012-01-01

    Recent research reflects a growing awareness of the value of using structural equation models to analyze repeated measures data. However, such data, particularly in the presence of covariates, often lead to models that either fit the data poorly, are exceedingly general and hard to interpret, or are specified in a manner that is highly data…

  1. Cross-trimester repeated measures testing for Down's syndrome screening: an assessment.

    LENUS (Irish Health Repository)

    Wright, D

    2010-07-01

    To provide estimates and confidence intervals for the performance (detection and false-positive rates) of screening for Down's syndrome using repeated measures of biochemical markers from first and second trimester maternal serum samples taken from the same woman.

  2. Accuracy and repeatability of anthropometric facial measurements using cone beam computed tomography

    NARCIS (Netherlands)

    Fourie, Zacharias; Damstra, Janalt; Gerrits, Peter O.; Ren, Yijin

    Objective: The purpose of this study was to determine the accuracy and repeatability of linear anthropometric measurements on the soft tissue surface model generated from cone beam computed tomography scans. Materials and Methods: The study sample consisted of seven cadaver heads. The accuracy and

  3. Near-Peer Teaching in Paramedic Education: A Repeated Measures Design

    Science.gov (United States)

    Williams, Brett; Nguyen, David

    2017-01-01

    The transition of the Australian paramedic discipline from vocational education and training to the higher education sector has seen a sharp rise in interest in near-peer teaching (NPT). The objective of this study was to examine satisfaction levels with NPT over one academic semester among undergraduate paramedic students. A repeated measures design…

  4. Testing Mean Differences among Groups: Multivariate and Repeated Measures Analysis with Minimal Assumptions.

    Science.gov (United States)

    Bathke, Arne C; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne

    2018-03-22

    To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer's disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regards to some of the factors involved.
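
    The parametric bootstrap logic described above (simulate datasets from a model fitted under the null hypothesis, then compare the observed statistic with the simulated ones) can be illustrated with a deliberately simplified univariate toy example; the authors' actual procedure is multivariate and considerably more involved, and all numbers below are invented.

```python
import random
from statistics import mean, stdev

def bootstrap_p(x, y, n_boot=2000, seed=1):
    """Toy parametric bootstrap for a two-sample mean difference.

    Fit a single normal under H0 (no group difference), simulate many
    datasets from it, and count how often the simulated |mean difference|
    reaches the observed one.
    """
    rng = random.Random(seed)
    observed = abs(mean(x) - mean(y))
    pooled = x + y
    mu, sigma = mean(pooled), stdev(pooled)   # H0 fit
    hits = 0
    for _ in range(n_boot):
        sim_x = [rng.gauss(mu, sigma) for _ in x]
        sim_y = [rng.gauss(mu, sigma) for _ in y]
        if abs(mean(sim_x) - mean(sim_y)) >= observed:
            hits += 1
    return hits / n_boot

p = bootstrap_p([4.1, 3.9, 4.3, 4.0], [5.2, 5.0, 5.4, 5.1])
print(0.0 <= p <= 1.0)  # → True
```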

  5. Repeatability of junctional zone measurements using three-dimensional transvaginal ultrasound in healthy, fertile women

    DEFF Research Database (Denmark)

    Rasmussen, Christina Kjærgaard; Glavind, Julie; Madsen, Lene Duch

    2016-01-01

    -observer repeatability was evaluated according to the Bland-Altman method and expressed as the coefficient of repeatability (CoR). Results: Using 3D-TVS we visualised a thin and regular JZ in most women. The posterior uterine wall had the largest median (interquartile range; iqr) value of JZmax (5.2 (iqr 3.8-6.5) mm). Ten out.... Correlations between measurements were poor in the narrow range of JZ thickness. Conclusions: The JZ has an indistinct outline on 3D-TVS, resulting in an error of JZ measurement within a broad range of ±2-4 mm, which is reduced by averaging measurements. The thickness of the JZ varied within a narrow range in this healthy, fertile population, and the reliability of JZ thickness measurements has to be evaluated in women with a wider range of JZ thickness....
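
    The Bland-Altman coefficient of repeatability used above can be sketched as follows. One common convention is CoR = 1.96 × SD of the paired test-retest differences (2.77 × within-subject SD is also seen); the numbers below are invented, not the study's junctional-zone data.

```python
from statistics import stdev

def coefficient_of_repeatability(first, second):
    """Bland-Altman coefficient of repeatability, CoR = 1.96 x SD(diff).

    `first` and `second` are paired measurements on the same subjects;
    roughly 95% of test-retest differences are expected within +/-CoR.
    """
    diffs = [a - b for a, b in zip(first, second)]
    return 1.96 * stdev(diffs)

# Hypothetical paired junctional-zone thickness readings (mm):
scan1 = [4.0, 5.2, 3.8, 6.1, 5.0]
scan2 = [4.4, 4.9, 4.1, 5.6, 5.3]
print(round(coefficient_of_repeatability(scan1, scan2), 2))
```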

  6. Optimally Repeatable Kinetic Model Variant for Myocardial Blood Flow Measurements with 82Rb PET

    Directory of Open Access Journals (Sweden)

    Adrian F. Ocneanu

    2017-01-01

    Full Text Available Purpose. Myocardial blood flow (MBF) quantification with 82Rb positron emission tomography (PET) is gaining clinical adoption, but improvements in precision are desired. This study aims to identify the analysis variants producing the most repeatable MBF measures. Methods. 12 volunteers underwent same-day test-retest rest and dipyridamole stress imaging with dynamic 82Rb PET, from which MBF was quantified using 1-tissue-compartment kinetic model variants: (1) blood-pool versus uptake region sampled input function (Blood/Uptake-ROI), (2) dual spillover correction (SOC-On/Off), (3) right blood correction (RBC-On/Off), (4) arterial blood transit delay (Delay-On/Off), and (5) distribution volume (DV) constraint (Global/Regional-DV). Repeatability of MBF, stress/rest myocardial flow reserve (MFR), and stress/rest MBF difference (ΔMBF) was assessed using nonparametric reproducibility coefficients (RPCnp = 1.45 × interquartile range). Results. MBF using SOC-On, RBC-Off, Blood-ROI, Global-DV, and Delay-Off was most repeatable for combined rest and stress: RPCnp = 0.21 mL/min/g (15.8%). The corresponding MFR and ΔMBF RPCnp were 0.42 (20.2%) and 0.24 mL/min/g (23.5%). MBF repeatability improved with SOC-On at stress (p<0.001) and tended to improve with RBC-Off at both rest and stress (p<0.08). DV and ROI did not significantly influence repeatability. The Delay-On model was overdetermined and did not reliably converge. Conclusion. MBF and MFR test-retest repeatability were best with dual spillover correction, left atrium blood input function, and global DV.
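
    The abstract defines its repeatability metric explicitly (RPCnp = 1.45 × interquartile range of test-retest differences), which is easy to sketch in code; the quartile convention and the sample values below are assumptions, not the study's data.

```python
from statistics import quantiles

def rpc_np(test, retest):
    """Nonparametric reproducibility coefficient, RPCnp = 1.45 x IQR,
    applied to paired test-retest differences. Quartiles here use the
    'inclusive' linear-interpolation convention; the original analysis
    may have used a different one.
    """
    diffs = [a - b for a, b in zip(test, retest)]
    q1, _, q3 = quantiles(diffs, n=4, method="inclusive")
    return 1.45 * (q3 - q1)

# Hypothetical rest-MBF test-retest values (mL/min/g):
test = [0.82, 0.95, 1.10, 0.88, 1.02, 0.91]
retest = [0.85, 0.90, 1.05, 0.95, 1.00, 0.93]
print(round(rpc_np(test, retest), 3))
```

    The 1.45 × IQR scaling matches the SD of a normal distribution the way 1.35 × IQR estimates it, so RPCnp plays the role of a robust repeatability SD.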

  7. Antarctic Ice Sheet Slope and Aspect Based on ICESat's Repeat Orbit Measurement

    Science.gov (United States)

    Yuan, L.; Li, F.; Zhang, S.; Xie, S.; Xiao, F.; Zhu, T.; Zhang, Y.

    2017-09-01

    Accurate information on ice sheet surface slope is essential for estimating elevation change from satellite altimetry measurements. A study is carried out to recover the surface slope of the Antarctic ice sheet from Ice, Cloud and land Elevation Satellite (ICESat) elevation measurements based on repeat orbits. ICESat provides repeat ground tracks within 200 meters in the cross-track direction and 170 meters in the along-track direction for most areas of the Antarctic ice sheet. Both cross-track and along-track surface slopes can be obtained from adjacent repeat ground tracks. Combining those measurements yields a surface slope model with a resolution of approximately 200 meters. An algorithm accounting for elevation change is developed to estimate the surface slope of the Antarctic ice sheet. Three Antarctic Digital Elevation Models (DEMs) were also used to calculate surface slopes. The surface slopes from the DEMs are compared with our estimates using in situ GPS data at Dome A, the summit of the Antarctic ice sheet. Our results reveal an average surface slope difference of 0.02 degrees at Dome A. High-resolution remote sensing images are also used to compare the results derived from the DEMs with those of this paper. The comparison implies that our results agree slightly better with the GPS observations than the DEM-derived results do, and that our results provide more detail and higher accuracy in coastal areas because of the higher resolution of the ICESat measurements. Ice divides are estimated based on the aspect, and are only weakly consistent with ice divides from other methods in coastal regions.

  8. Comparability and repeatability of three commonly used methods for measuring endurance capacity.

    Science.gov (United States)

    Baxter-Gilbert, James; Mühlenhaupt, Max; Whiting, Martin J

    2017-12-01

    Measures of endurance (time to exhaustion) have been used to address a wide range of questions in ecomorphological and physiological research, as well as being used as a proxy for survival and fitness. Swimming, stationary (circular) track running, and treadmill running are all commonly used methods for measuring endurance. Despite the use of these methods across a broad range of taxa, how comparable these methods are to one another, and whether they are biologically relevant, is rarely examined. We used Australian water dragons (Intellagama lesueurii), a species that is morphologically adept at climbing, swimming, and running, to compare these three methods of endurance and examined if there is repeatability within and between trial methods. We found that time to exhaustion was not highly repeatable within a method, suggesting that single measures or a mean time to exhaustion across trials are not appropriate. Furthermore, we compared mean maximal endurance times among the three methods, and found that the two running methods (i.e., stationary track and treadmill) were similar, but swimming was distinctly different, resulting in lower mean maximal endurance times. Finally, an individual's endurance rank was not repeatable across methods, suggesting that the three endurance trial methods are not providing similar information about an individual's performance capacity. Overall, these results highlight the need to carefully match a measure of performance capacity with the study species and the research questions being asked so that the methods being used are behaviorally, ecologically, and physiologically relevant. © 2018 Wiley Periodicals, Inc.

  9. REPEATED MEASURES ANALYSIS OF CHANGES IN PHOTOSYNTHETIC EFFICIENCY IN SOUR CHERRY DURING WATER DEFICIT

    Directory of Open Access Journals (Sweden)

    Marija Viljevac

    2012-06-01

    Full Text Available The objective of this study was to investigate changes in photosynthetic efficiency, applying repeated measures ANOVA to the photosynthetic performance index (PIABS) of the JIP-test as a vitality parameter, in seven genotypes of sour cherry (Prunus cerasus L.) during 10 days of continuous water deficit. Both univariate and multivariate repeated measures ANOVA revealed a highly significant time effect (Days) and its subsequent interactions with genotype and water deficit. However, the multivariate Pillai's trace test detected the interaction Time × Genotype × Water deficit as not significant. According to Tukey's Studentized Range (HSD) test, differences between the control and genotypes exposed to water stress became significant on the fourth day of the experiment, indicating that the plants, on average, began to lose their photosynthetic efficiency four days after being exposed to water shortage. This corroborates previous findings in other species that PIABS is a very sensitive tool for detecting drought stress.

  10. Intelligence is in the eye of the beholder: investigating repeated IQ measurements in forensic psychiatry.

    Science.gov (United States)

    Habets, Petra; Jeandarme, Inge; Uzieblo, Kasia; Oei, Karel; Bogaerts, Stefan

    2015-05-01

    A stable assessment of cognition is of paramount importance for forensic psychiatric patients (FPP). The purpose of this study was to compare repeated measures of IQ scores in FPPs with and without intellectual disability. Repeated measurements of IQ scores in FPPs (n = 176) were collected. Differences between tests were computed, and each IQ score was categorized. Additionally, t-tests and regression analyses were performed. Differences of 10 points or more were found in 66% of the cases comparing WAIS-III with RAVEN scores. Fisher's exact test revealed differences between two WAIS-III scores and the WAIS categories. The WAIS-III did not predict other IQs (WAIS or RAVEN) in participants with intellectual disability. This study showed that stability or interchangeability of scores is lacking, especially in individuals with intellectual disability. Caution in interpreting IQ scores is therefore recommended, and the use of the unitary concept of IQ should be discouraged. © 2014 John Wiley & Sons Ltd.

  11. Analysis of oligonucleotide array experiments with repeated measures using mixed models

    Directory of Open Access Journals (Sweden)

    Getchell Thomas V

    2004-12-01

    Full Text Available Abstract Background Two or more factor mixed factorial experiments are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (patients with Alzheimer's disease) or absence (control) of the disease, and brain region, including olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures, and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. Results In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure of controlling the false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with a significant generalized F test, we then categorize them based on whether the interaction terms were significant or not at the α-level (αnew = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with a significant interaction effect, we adopt the protected Fisher's least significant difference (LSD) test procedure at the level of αnew to control the family-wise error rate (FWER) for each gene examined. Conclusions A linear mixed model is appropriate for the analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests was designed to control the gene-based FWER.
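
    The Benjamini and Hochberg step-up procedure mentioned above is simple to state: sort the m P values, find the largest k such that p(k) ≤ (k/m)q, and reject the k smallest. A minimal sketch with invented P values standing in for per-gene tests:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling FDR at level q.

    Returns the indices (into `pvalues`) of the rejected hypotheses:
    find the largest rank k with p_(k) <= (k/m) * q, reject the k
    smallest P values.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= (rank / m) * q:
            k = rank
    return sorted(order[:k])

# Toy p-values standing in for per-gene generalized F tests:
pvals = [0.01, 0.30, 0.02, 0.04, 0.03]
print(benjamini_hochberg(pvals, q=0.05))  # → [0, 2, 3, 4]
```

    Note the step-up character: a P value may be rejected even if it exceeds its own threshold, provided some larger P value meets its threshold.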

  12. Repeatability in Color Measurements of a Spectrophotometer using Different Positioning Devices.

    Science.gov (United States)

    Hemming, Michael; Kwon, So Ran; Qian, Fang

    2015-12-01

    This study aimed to evaluate the repeatability of color measurements of an intraoral spectrophotometer with three different methods used by two operators. A total of 60 teeth were obtained, comprising 30 human maxillary teeth [central incisors (n = 10); canines (n = 10); molars (n = 10)] and 30 artificial teeth [lateral incisors (n = 10); premolars (n = 20)]. Multiple repeated color measurements were obtained from each tooth using three measuring methods by each of the two operators. Five typodonts with alternating artificial and human teeth were made. Measurements were taken by two operators with the Vita EasyShade spectrophotometer using the custom tray (CT), custom jig (CJ), and free hand (FH) methods, twice, at an interval of 2 to 7 days. The Friedman test was used to detect differences among the three color measuring methods. A post hoc Wilcoxon signed-rank test with Bonferroni correction was used for pair-wise comparison of color measurements among the three methods. Additionally, a paired-sample t-test was used to assess significant differences between the two duplicated measurements made on the same tooth by the same operator for each color parameter and measuring method. For operator A, the mean (SD) overall color change ΔE* perceived for FH, CT, and CJ was 2.21 (2.00), 2.39 (1.58), and 2.86 (1.92), respectively. There was a statistically significant difference in perceived ΔE* for FH vs CJ (p = 0.0107). However, there were no significant differences between FH and CT (p = 0.2829) or between CT and CJ (p = 0.1159). For operator B, the mean (SD) ΔE* for FH, CT, and CJ was 3.24 (3.46), 1.95 (1.19), and 2.45 (1.56), respectively. There was a significant difference between FH and CT (p = 0.0031). However, there were no statistically significant differences in ΔE* for FH vs CJ (p = 0.3696) or CT vs CJ (p = 0.0809). The repeatability of color measurements differed among the three measuring methods and operators. Overall, the CT method worked well for both
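
    Color differences such as the ΔE* values above are commonly computed with the CIE76 formula, ΔE*ab = sqrt(ΔL*² + Δa*² + Δb*²). The abstract does not state which ΔE formula the EasyShade reports, so CIE76 is assumed here, with invented shade readings:

```python
from math import sqrt

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*).

    Straight Euclidean distance in CIELAB space; newer formulas such as
    CIEDE2000 add perceptual weighting and would give different values.
    """
    return sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Two hypothetical tooth-shade readings:
print(delta_e_ab((72.0, 1.5, 18.0), (69.0, 1.5, 14.0)))  # → 5.0
```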

  13. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    Science.gov (United States)

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures designs are introduced for the case where the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with that of a popular method known to work well in low-dimensional situations, but the new methods have shown an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  14. A Network-Based Algorithm for Clustering Multivariate Repeated Measures Data

    Science.gov (United States)

    Koslovsky, Matthew; Arellano, John; Schaefer, Caroline; Feiveson, Alan; Young, Millennia; Lee, Stuart

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Astronaut Corps is a unique occupational cohort for which vast amounts of measures data have been collected repeatedly in research or operational studies pre-, in-, and post-flight, as well as during multiple clinical care visits. In exploratory analyses aimed at generating hypotheses regarding physiological changes associated with spaceflight exposure, such as impaired vision, it is of interest to identify anomalies and trends across these expansive datasets. Multivariate clustering algorithms for repeated measures data may help parse the data to identify homogeneous groups of astronauts that have higher risks for a particular physiological change. However, available clustering methods may not be able to accommodate the complex data structures found in NASA data, since the methods often rely on strict model assumptions, require equally-spaced and balanced assessment times, cannot accommodate missing data or differing time scales across variables, and cannot process continuous and discrete data simultaneously. To fill this gap, we propose a network-based, multivariate clustering algorithm for repeated measures data that can be tailored to fit various research settings. Using simulated data, we demonstrate how our method can be used to identify patterns in complex data structures found in practice.

  15. Repeatability and Comparison of Keratometry Values Measured with Potec PRK-6000 Autorefractometer, IOLMaster, and Pentacam

    Directory of Open Access Journals (Sweden)

    Adem Türk

    2014-05-01

    Full Text Available Objectives: To investigate the repeatability and intercompatibility of keratometry values measured with the Potec PRK-6000 autorefractometer, IOLMaster, and Pentacam. Materials and Methods: In this prospective study, consecutive measurements were performed in two different sessions with the three devices on 110 eyes of 55 subjects who had no ocular pathology other than refractive error. The consistency of flat keratometry, steep keratometry, average keratometry, and corneal astigmatism values obtained in both sessions was compared using the intraclass correlation coefficient (ICC). The measurement differences between the devices were statistically compared as well. Results: The mean age of the study subjects was 23.05±3.01 (range, 18-30) years. ICC values for the average keratometry measurements obtained in the sessions were 0.996 for the Potec PRK-6000 autorefractometer, 0.997 for the IOLMaster, and 0.999 for the Pentacam. There was high compatibility between the three devices in terms of average keratometry values in Bland-Altman analysis. However, there were statistically significant differences between the devices in parameters other than corneal astigmatism. Conclusion: The repeatability of the three devices was considerably high for keratometry measurements. However, it is not appropriate for these devices to be substituted for one another in keratometry measurements. (Turk J Ophthalmol 2014; 44: 179-83)

  16. Repeatability and Reproducibility of Compression Strength Measurements Conducted According to ASTM E9

    Science.gov (United States)

    Luecke, William E.; Ma, Li; Graham, Stephen M.; Adler, Matthew A.

    2010-01-01

    Ten commercial laboratories participated in an interlaboratory study to establish the repeatability and reproducibility of compression strength tests conducted according to ASTM International Standard Test Method E9. The test employed a cylindrical aluminum AA2024-T351 test specimen. Participants measured the elastic modulus and the 0.2 % offset yield strength, YS(0.2 % offset), using an extensometer attached to the specimen. The repeatability and reproducibility of the yield strength measurement, expressed as coefficients of variation, were cv_r = 0.011 and cv_R = 0.020. The reproducibility of the test across the laboratories was among the best that has been reported for uniaxial tests. The reported data indicated that using diametrically opposed extensometers, instead of a single extensometer, doubled the precision of the test method. Laboratories that did not lubricate the ends of the specimen measured yield stresses and elastic moduli that were smaller than those measured in laboratories that lubricated the specimen ends. A finite element analysis of the test specimen deformation for frictionless and perfect-friction conditions could not explain the discrepancy, however. The moduli measured from stress-strain data were reanalyzed using a technique that finds the optimal fit range and applies several quality checks to the data. The error in modulus measurements from stress-strain curves generally increased as the fit range decreased to less than 40 % of the stress range.
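
    Repeatability and reproducibility coefficients of variation like those above come from the usual interlaboratory decomposition: a within-laboratory (repeatability) variance plus a between-laboratory component that together give the reproducibility variance. A sketch of that calculation in the style of ASTM E691, with invented data and assuming the same number of replicates per laboratory:

```python
from statistics import mean, variance

def precision_statistics(labs):
    """Repeatability and reproducibility CVs from an interlaboratory
    study, using a one-way (E691-style) variance decomposition.

    `labs` is a list of replicate lists, one per laboratory, all of
    equal length n.
    """
    n = len(labs[0])                           # replicates per lab
    lab_means = [mean(lab) for lab in labs]
    s_r2 = mean(variance(lab) for lab in labs)  # within-lab variance
    s_means2 = variance(lab_means)              # variance of lab means
    s_L2 = max(0.0, s_means2 - s_r2 / n)        # between-lab component
    s_R2 = s_L2 + s_r2                          # reproducibility variance
    grand = mean(lab_means)
    return (s_r2 ** 0.5 / grand, s_R2 ** 0.5 / grand)  # cv_r, cv_R

# Three hypothetical labs, three yield-strength replicates each (MPa):
labs = [[352, 354, 353], [351, 353, 352], [355, 353, 354]]
cv_r, cv_R = precision_statistics(labs)
print(cv_r <= cv_R)  # → True
```

    By construction cv_R can never fall below cv_r: reproducibility includes the within-lab scatter plus any genuine lab-to-lab offset.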

  17. Repeated scenario simulation to improve competency in critical care: a new approach for nursing education.

    Science.gov (United States)

    Abe, Yukie; Kawahara, Chikako; Yamashina, Akira; Tsuboi, Ryoji

    2013-01-01

    In Japan, nursing education is being reformed to improve nurses' competency, and interest in the use of simulation-based education to increase nurses' competency is growing. To examine the effectiveness of simulation-based education in improving the competency of cardiovascular critical care nurses, a training program that consisted of lectures, training in cardiovascular procedures, and scenario simulations was conducted with 24 Japanese nurses working at a university hospital. Participants were allocated to 4 groups, each of which visited 4 zones and underwent scenario simulations that included debriefings during and after the simulations. In each zone, the scenario simulation was repeated and participants assessed their own technical skills by scoring their performance on a rubric. Before and after the simulations, participants also completed a survey that used the Teamwork Activity Inventory in Nursing Scale (TAINS) to assess their nontechnical skills. All the groups showed increased rubric scores after the second simulation compared with the first, despite differences in the order in which the scenarios were presented. Furthermore, the survey revealed significant increases in scores on the teamwork scale for the following subscale items: "Attitudes of the superior", "Job satisfaction" (P = .01), and "Confidence as a team member" (P = .004). Our new educational approach of using repeated scenario simulations and TAINS seemed not only to enhance individual nurses' technical skills in critical care nursing but also to improve their nontechnical skills somewhat.

  18. Spatial cluster detection for repeatedly measured outcomes while accounting for residential history.

    Science.gov (United States)

    Cook, Andrea J; Gold, Diane R; Li, Yi

    2009-10-01

    Spatial cluster detection has become an important methodology in quantifying the effect of hazardous exposures. Previous methods have focused on cross-sectional outcomes that are binary or continuous; virtually no spatial cluster detection methods have been proposed for longitudinal outcomes. This paper proposes a new spatial cluster detection method for repeated outcomes using cumulative geographic residuals. A major advantage of this method is its ability to readily incorporate information on study participants' relocation, which most cluster detection statistics cannot. Application of these methods is illustrated with the Home Allergens and Asthma prospective cohort study, analyzing the relationship between environmental exposures and a repeatedly measured outcome, occurrence of wheeze in the last 6 months, while taking mobile locations into account.

  19. Effect of repeated contact on adhesion measurements involving polydimethylsiloxane structural material

    International Nuclear Information System (INIS)

    Kroner, E; Arzt, E; Maboudian, R

    2009-01-01

    During the last few years, several research groups have focused on the fabrication of artificial gecko-inspired adhesives. For mimicking these structures, different polymers are used as structural materials, such as polydimethylsiloxane (PDMS), polyurethane (PU), and polypropylene (PP). While these polymers can be structured easily and used for artificial adhesion systems, the effects of repeated adhesion testing have never been investigated closely. In this paper we report on the effect of repeated adhesion measurements on the commercially available poly(dimethylsiloxane) polymer kit Sylgard 184 (Dow Corning). We show that the adhesion force decreases as a function of contact cycles. The rate of change and the final value of adhesion are found to depend on the details of the PDMS synthesis and structuring.

  20. The effect of repeated measurements and working memory on the most comfortable level in the ANL test

    DEFF Research Database (Denmark)

    Brännström, K Jonas; Olsen, Steen Østergaard; Holm, Lucas

    2014-01-01

    interleaved methodology during one session using a non-semantic version. Phonological (PWM) and visuospatial working memory (VSWM) were measured. STUDY SAMPLE: Thirty-two normal-hearing adults. RESULTS: Repeated measures ANOVA, intraclass correlations, and the coefficient of repeatability (CR) were used...

  1. Intra- and Intersession Repeatability of an Optical Quality and Intraocular Scattering Measurement System in Children.

    Directory of Open Access Journals (Sweden)

    Mi Tian

    To evaluate intra- and intersession repeatability of objective optical quality and intraocular scattering measurements with a double-pass system in children. Forty-two eyes of 42 children were included in the study. An optical quality analysis system (OQAS) was used to measure optical quality parameters, including the modulation transfer function cutoff frequency (MTFcutoff), Strehl ratio (SR), OQAS values (OV) at 3 different contrasts, and the objective scatter index (OSI). Three measurement sessions at 10-min intervals were performed by the same technician, and in each session four consecutive measurements were obtained. Mean values for MTFcutoff, SR, and OSI were 46.85 ± 7.45 cpd, 0.27 ± 0.06, and 0.34 ± 0.22, respectively. (1) The intraclass correlation coefficients ranged from 0.89 to 0.97 and coefficients of variation from 0.06 to 0.16 for all the parameters in the first session; the relative repeatability was 11.1% (MTFcutoff), 22.5% (SR), 10.9% (OV100%), 16.6% (OV20%), 22.4% (OV9%), and 56.3% (OSI). Similar results were found in the second and third sessions. (2) Bland-Altman analysis showed narrow 95% confidence intervals (compared between the first and second sessions), ranging from -5.42 to 5.28 (MTFcutoff), -0.05 to 0.07 (SR), -0.18 to 0.18 (OV100%), -0.26 to 0.29 (OV20%), -0.33 to 0.39 (OV9%), and -0.11 to 0.09 (OSI); comparisons between any two of the three sessions showed similar results. Measurements of optical quality and intraocular scattering in children by the double-pass system showed good intra- and intersession repeatability. Retinal image quality is high and intraocular scattering is low in children.
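
    The repeatability metrics used in this record (Bland-Altman limits of agreement and the coefficient of repeatability, also cited in record 20 above) can be computed directly from paired session data. Below is a minimal NumPy sketch, assuming two sessions of the same parameter (e.g., MTFcutoff) measured on the same eyes; the function names are illustrative, not from the study.

```python
import numpy as np

def limits_of_agreement(session1, session2):
    # Bland-Altman 95% limits of agreement for paired sessions:
    # bias (mean difference) +/- 1.96 * SD of the differences.
    d = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias - 1.96 * sd, bias + 1.96 * sd

def coefficient_of_repeatability(session1, session2):
    # One common convention: CR = 1.96 * SD of the paired differences.
    d = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    return 1.96 * d.std(ddof=1)
```

    For example, `limits_of_agreement(mtf_session1, mtf_session2)` would return the interval that the abstract reports as "-5.42 to 5.28 (MTFcutoff)"-style bounds.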

  2. REPEATABILITY AND ACCURACY OF EXOPLANET ECLIPSE DEPTHS MEASURED WITH POST-CRYOGENIC SPITZER

    Energy Technology Data Exchange (ETDEWEB)

    Ingalls, James G.; Krick, J. E.; Carey, S. J.; Stauffer, John R.; Lowrance, Patrick J.; Grillmair, Carl J.; Capak, Peter; Glaccum, William; Laine, Seppo; Surace, Jason; Storrie-Lombardi, Lisa [Spitzer Science Center, California Institute of Technology, 1200 E California Boulevard, Mail Code 314-6, Pasadena, CA 91125 (United States); Buzasi, Derek [Department of Chemistry and Physics, Florida Gulf Coast University, Fort Myers, FL 33965 (United States); Deming, Drake [Department of Astronomy, University of Maryland, College Park, MD 20742-2421 (United States); Diamond-Lowe, Hannah; Stevenson, Kevin B. [Department of Astronomy and Astrophysics, University of Chicago, 5640 S Ellis Avenue, Chicago, IL 60637 (United States); Evans, Thomas M. [School of Physics, University of Exeter, EX4 4QL Exeter (United Kingdom); Morello, G. [Department of Physics and Astronomy, University College London, Gower Street, WC1 E6BT (United Kingdom); Wong, Ian, E-mail: ingalls@ipac.caltech.edu [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States)

    2016-08-01

    We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. We have re-analyzed an existing 4.5 μm data set, consisting of 10 observations of the XO-3b system during secondary eclipse, using seven different techniques for removing correlated noise. We find that, on average, for a given technique, the eclipse depth estimate is repeatable from epoch to epoch to within 156 parts per million (ppm). Most techniques derive eclipse depths that do not vary by more than a factor of 3 above the photon noise limit. All methods but one accurately assess their own errors: for these methods, the individual measurement uncertainties are comparable to the scatter in eclipse depths over the 10-epoch sample. To assess the accuracy of the techniques, as well as to clarify the difference between instrumental and other sources of measurement error, we have also analyzed a simulated data set of 10 visits to XO-3b, for which the eclipse depth is known. We find that three of the methods (BLISS mapping, Pixel Level Decorrelation, and Independent Component Analysis) obtain results that are within three times the photon limit of the true eclipse depth. When averaged over the 10-epoch ensemble, 5 out of 7 techniques come within 60 ppm of the true value. Spitzer exoplanet data, if obtained following current best practices and reduced using methods such as those described here, can yield repeatable and accurate single-eclipse depths, with close to photon-limited results.

  3. Validation of Repeated Endothelial Function Measurements Using EndoPAT in Stroke

    DEFF Research Database (Denmark)

    Hansen, Aina S; Butt, Jawad H; Holm-Yildiz, Sonja

    2017-01-01

    BACKGROUND: Decreased endothelial function (EF) may be a prognostic marker for stroke. Measuring pharmacological effects on EF may be of interest in the development of personalized medicine for stroke prevention. In this study, we assessed the reliability of repeated EF measurements using pulse amplitude tonometry (EndoPAT) in ... (...%, mean age 35.85 ± 3.47 years) and 21 stroke patients (men 52%, mean age 66.38 ± 2.85 years, and mean NIHSS 4.09 ± 0.53) under standardized conditions. EF was measured as the reactive hyperemia index (RHI), logarithm of RHI (lnRHI), and Framingham RHI (fRHI). Measurements were separated by 1.5 and 24 h...

  4. Education for patients with chronic kidney disease in Taiwan: a prospective repeated measures study.

    Science.gov (United States)

    Yen, Miaofen; Huang, Jeng-Jong; Teng, Hsiu-Lan

    2008-11-01

    To investigate the physical, knowledge, and quality-of-life outcomes of an educational intervention for patients with early-stage chronic kidney disease. A comprehensive predialysis education care team can be effective in slowing the progression of chronic kidney disease. A single-group repeated measures design was used to evaluate the effects of the intervention. Participants were recruited through health department community health screening data banks. A predialysis, team-delivered educational intervention covering renal function health care, dietary management of renal function, and the effects of Chinese herbal medication on renal function was designed and implemented. Data were collected at baseline, six, and 12 months. Study outcomes included physical indicators, knowledge (renal function protection, use of Chinese herbs, and renal function and diet), and quality of life. Data were analysed using repeated measures ANOVA to test for change over time in outcome variables. Sixty-six persons participated in this study. The predialysis educational intervention showed significant differences across the three time points in overall knowledge scores, waist-hip ratio, body mass index, and global health status. Knowledge measures increased at month 6 and decreased at month 12. The primary indicator of renal function, glomerular filtration rate, remained stable throughout the 12 months of follow-up, despite the relatively older mean age of study participants. A predialysis education care team can provide effective disease-specific knowledge and may help retard deterioration of renal function in persons with early-stage chronic kidney disease. The intervention dose may need to be repeated every six months to maintain knowledge effects. A predialysis educational program with disease-specific knowledge and information is feasible and may provide positive outcomes for patients. Topics on the uses of Chinese herbs should be included for people who are likely to use alternative therapies.

  5. Characterisation and measurement of signals generated by DVB-H 'gap-filler' repeaters

    International Nuclear Information System (INIS)

    Baldini, M.; Barellini, A.; Bogi, L.; Licitra, G.; Silvi, A. M.; Zari, A.

    2009-01-01

    DVB-H (Digital Video Broadcasting - Handheld) is the standard developed by the DVB Project and approved by ETSI with the aim of providing reception of DVB signals even in mobility, as well as data transfer and multimedia services. The introduction and development of the DVB-H system is still ongoing. In this context, this work focuses on the temporal trend of the electromagnetic impact of an urban DVB-H repeater (called a 'gap-filler') for exposure assessment purposes; it also describes a method for its measurement by means of narrowband instrument chains. (authors)

  6. Measuring Starlight Deflection during the 2017 Eclipse: Repeating the Experiment that made Einstein Famous

    Science.gov (United States)

    Bruns, Donald

    2016-05-01

    In 1919, astronomers performed an experiment during a solar eclipse, attempting to measure the deflection of stars near the sun, in order to verify Einstein's theory of general relativity. The experiment was very difficult and the results were marginal, but the success made Albert Einstein famous around the world. Astronomers last repeated the experiment in 1973, achieving an error of 11%. In 2017, using amateur equipment and modern technology, I plan to repeat the experiment and achieve a 1% error. The best available star catalog will be used for star positions. Corrections for optical distortion and atmospheric refraction are better than 0.01 arcsec. During totality, I expect 7 or 8 measurable stars down to magnitude 9.5, based on analysis of previous eclipse measurements taken by amateurs. Reference images, taken near the sun during totality, will be used for precise calibration. Preliminary test runs performed during twilight in April 2016 and April 2017 can accurately simulate the sky conditions during totality, providing an accurate estimate of the final uncertainty.

  7. Repeated absolute gravity measurements for monitoring slow intraplate vertical deformation in Western Europe

    Science.gov (United States)

    Van Camp, M. J.; de Viron, O.; Scherneck, H.; Hinzen, K. G.; Williams, S. D.; Lecocq, T.; Quinif, Y.; Camelbeeck, T.

    2011-12-01

    In continental plate interiors, ground surface movements are at the limit of the noise level and close to or below the accuracy of current geodetic techniques. Absolute gravity measurements are valuable to quantify slow vertical movements, as this instrument is drift free and, unlike GPS, independent of the terrestrial reference frame. Repeated absolute gravity (AG) measurements have been performed in Oostende (Belgian coastline) and at 8 stations along a southwest-northeast profile across the Belgian Ardennes and the Roer Valley Graben (Germany), in order to estimate the tectonic deformation in the area. The AG measurements, repeated once or twice a year, can resolve elusive gravity changes with a precision better than 3.7 nm/s2/yr (95% confidence interval) after 11 years, even in difficult conditions. After 8-15 years (depending on the station), we find that the gravity rates of change lie in the [-3.1, 8.1] nm/s2/yr interval and result from a combination of anthropogenic, climatic, tectonic, and Glacial Isostatic Adjustment (GIA) effects. After correcting for the GIA, the inferred gravity rates and consequently, the vertical land movements, reduce to zero within the uncertainty level at all stations except Jülich (due to man-induced subsidence) and Sohier (possibly, an artefact due to the shortness of the time series at that station).

  8. Predicting seed yield in perennial ryegrass using repeated canopy reflectance measurements and PLSR

    DEFF Research Database (Denmark)

    Gislum, René; Deleuran, Lise Christina; Boelt, Birte

    2009-01-01

    Repeated canopy reflectance measurements together with partial least-squares regression (PLSR) were used to predict seed yield in perennial ryegrass (Lolium perenne L.). The measurements were performed during the spring and summer growing seasons of 2001 to 2003 in three field experiments with first-year seed crops using three sowing rates and three spring nitrogen (N) application rates. PLSR models were developed for each year and showed correlation coefficients of 0.71, 0.76, and 0.92, respectively. Regression coefficients showed in these experiments that the optimum time for canopy reflectance measurements was from approximately 600 cumulative growing degree-days (CGDD) to approximately 900 CGDD, the period just before and at heading of the seed crop. Furthermore, regression coefficients showed that information about N and water is important. The results support the development...

  9. Characterization of the peripheral blood transcriptome in a repeated measures design using a panel of healthy individuals

    DEFF Research Database (Denmark)

    De Boever, Patrick; Wens, Britt; Forcheh, Anyiawung Chiara

    2014-01-01

    A repeated measures microarray design with 22 healthy, non-smoking volunteers (aged 32 ± 5 years) was set up to study transcriptome profiles in whole blood samples. The results indicate that repeatable data can be obtained with high within-subject correlation. Probes that could discriminate b...
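
    The "high within-subject correlation" reported in repeated measures designs like this one is typically quantified with an intraclass correlation coefficient. A minimal one-way random-effects ICC(1,1) sketch in NumPy, computed from ANOVA mean squares on a subjects × repeats matrix (an illustrative implementation, not the authors' pipeline):

```python
import numpy as np

def icc_oneway(data):
    # ICC(1,1) from a subjects x repeats matrix via one-way ANOVA:
    # between-subject and within-subject mean squares.
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

    An ICC near 1 means measurements on the same subject agree far more closely than measurements across subjects, which is the repeatability claim made in the abstract.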

  10. Analyzing repeated measures data on individuals nested within groups: accounting for dynamic group effects.

    Science.gov (United States)

    Bauer, Daniel J; Gottfredson, Nisha C; Dean, Danielle; Zucker, Robert A

    2013-03-01

    Researchers commonly collect repeated measures on individuals nested within groups, such as students within schools, patients within treatment groups, or siblings within families. Often, it is most appropriate to conceptualize such groups as dynamic entities, potentially undergoing stochastic structural and/or functional changes over time. For instance, as a student progresses through school, more senior students matriculate while more junior students enroll, administrators and teachers may turn over, and curricular changes may be introduced. What it means to be a student within that school may thus differ from one year to the next. This article demonstrates how to use multilevel linear models to recover time-varying group effects when analyzing repeated measures data on individuals nested within groups that evolve over time. Two examples are provided. The first examines school effects on the science achievement trajectories of students, allowing for changes in school effects over time. The second concerns dynamic family effects on individual trajectories of externalizing behavior and depression.

  11. Analyzing Repeated Measures Marginal Models on Sample Surveys with Resampling Methods

    Directory of Open Access Journals (Sweden)

    James D. Knoke

    2005-12-01

    Packaged statistical software for analyzing categorical, repeated measures marginal models on sample survey data with binary covariates does not appear to be available. Consequently, this report describes a customized SAS program which accomplishes such an analysis on survey data with jackknifed replicate weights for which the primary sampling unit information has been suppressed for respondent confidentiality. First, the program employs the Macro Language and the Output Delivery System (ODS) to estimate the means and covariances of indicator variables for the response variables, taking the design into account. Then, it uses PROC CATMOD and ODS, ignoring the survey design, to obtain the design matrix and hypothesis test specifications. Finally, it enters these results into another run of CATMOD, which performs automated direct input of the survey design specifications and accomplishes the appropriate analysis. This customized SAS program can be employed, with minor editing, to analyze general categorical, repeated measures marginal models on sample surveys with replicate weights. Finally, the results of our analysis accounting for the survey design are compared with the results of two alternative analyses of the same data. This comparison confirms that such alternative analyses, which do not properly account for the design, do not produce useful results.

  12. Tectonic, Climatic and Anthropogenic Vertical Land Movements in Western Europe by Repeated Absolute Gravity Measurements

    Science.gov (United States)

    van Camp, M. J.; de Viron, O.; Lecocq, T.; Hinzen, K. G.; Quinif, Y.; Williams, S. D.; Camelbeeck, T.

    2010-12-01

    In continental plate interiors, tectonic deformations are small and the associated ground surface movements remain close to or below the accuracy of current geodetic techniques, and at the limit of the noise level. An absolute gravimeter is an appropriate tool to quantify slow vertical movements, as this instrument, based on length and time standards, is drift free and does not depend on any terrestrial reference frame. Repeated absolute gravity (AG) measurements have been performed in Oostende (Belgian coastline) and at 8 stations along a southwest-northeast profile across the Belgian Ardennes and the Roer Valley Graben (Germany), in order to estimate the tectonic deformations in the area. After 7-13 years (depending on the station), we find evidence that the movements are no larger than a few millimeter per year and result from a combination of anthropogenic, climatic, tectonic, and Glacial Isostatic Adjustment (GIA) effects. This demonstrates the importance of precisely modeling the GIA effects in order to investigate intraplate tectonic deformations at the sub-millimeter level. This study also shows that AG measurements, repeated once or twice a year, can resolve vertical velocities at the 1.0 mm/yr level after 10 years, even in difficult conditions, provided that the gravimeter is carefully maintained.

  13. ±25ppm repeatable measurement of trapezoidal pulses with 5MHz bandwidth

    CERN Document Server

    AUTHOR|(SzGeCERN)712364; Arpaia, Pasquale; Cerqueira Bastos, Miguel; Martino, Michele

    2015-01-01

    High-quality measurements of pulses are nowadays widely used in fields such as radar, pulsed lasers, electromagnetic pulse generators, and particle accelerators. While the literature is mainly focused on fast systems for the nanosecond regime with relaxed metrological requirements, this paper addresses the high-performance measurement of slower pulses in the microsecond regime. In particular, an experimental proof of concept is given for a 15 MS/s, ±25 ppm repeatable acquisition system to characterize the flat-top of 3 ms rise-time trapezoidal pulses. The system exploits a 5 MHz bandwidth circuit for analogue signal processing based on the concept of flat-top removal. The requirements, as well as the conceptual and physical designs, are illustrated. Simulation results aimed at assessing the circuit performance are also presented. Finally, an experimental case study on the characterization of a pulsed power supply for the klystron modulators of the Compact Linear Collider (CLIC) under study at CERN is reported. In ...

  14. Designed ankyrin repeat proteins: a new approach to mimic complex antigens for diagnostic purposes?

    Directory of Open Access Journals (Sweden)

    Stefanie Hausammann

    Inhibitory antibodies directed against coagulation factor VIII (FVIII) can be found in patients with acquired and congenital hemophilia A. Such FVIII-inhibiting antibodies are routinely detected by the functional Bethesda assay. However, this assay has a low sensitivity and shows a high inter-laboratory variability. Another method to detect antibodies recognizing FVIII is ELISA, but this test does not allow the distinction between inhibitory and non-inhibitory antibodies. Therefore, we aimed at replacing the intricate antigen FVIII by designed ankyrin repeat proteins (DARPins) mimicking the epitopes of FVIII inhibitors. As a model we used the well-described inhibitory human monoclonal anti-FVIII antibody Bo2C11 for selection on DARPin libraries. Two DARPins were selected that bind to the antigen-binding site of Bo2C11 and thus mimic a functional epitope on FVIII. These DARPins inhibited the binding of the antibody to its antigen and restored FVIII activity as determined in the Bethesda assay. Furthermore, the specific DARPins were able to recognize the target antibody in human plasma and could therefore be used to test for the presence of Bo2C11-like antibodies in a large set of hemophilia A patients. These data suggest that our approach might be used to isolate epitopes from different sets of anti-FVIII antibodies in order to develop an ELISA-based screening assay allowing the distinction of inhibitory and non-inhibitory anti-FVIII antibodies according to their antibody signatures.

  15. Theoretical repeatability assessment without repetitive measurements in gradient high-performance liquid chromatography.

    Science.gov (United States)

    Kotani, Akira; Tsutsumi, Risa; Shoji, Asaki; Hayashi, Yuzuru; Kusu, Fumiyo; Yamamoto, Kazuhiro; Hakamata, Hideki

    2016-07-08

    This paper puts forward a time- and material-saving method for evaluating the repeatability of area measurements in gradient HPLC with UV detection (HPLC-UV), based on the function of mutual information (FUMI) theory, which can theoretically provide the measurement standard deviation (SD) and detection limits from the stochastic properties of baseline noise with no recourse to repetitive measurements of real samples. The chromatographic determination of terbinafine hydrochloride and enalapril maleate is taken as an example. The best choice of the number of noise data points, inevitable for the theoretical evaluation, is shown to be 512 data points (10.24 s at the 50 point/s sampling rate of an A/D converter). Coupled with the relative SD (RSD) of sample injection variability in the instrument used, the theoretical evaluation is proved to give values of area measurement RSDs identical to those estimated by the usual repetitive method (n=6) over a wide concentration range of the analytes, within the 95% confidence intervals of the latter RSD. The FUMI theory is not a statistical one, but the "statistical" reliability of its SD estimates (n=1) is observed to be as high as that attained by thirty-one measurements of the same samples (n=31).

  16. A Modified Jonckheere Test Statistic for Ordered Alternatives in Repeated Measures Design

    Directory of Open Access Journals (Sweden)

    Hatice Tül Kübra AKDUR

    2016-09-01

    In this article, a new test based on the Jonckheere test [1] is presented for randomized blocks with dependent observations within blocks. A weighted sum of the per-block statistics, rather than the unweighted sum proposed by Jonckheere, is used. For Jonckheere-type statistics, the main assumption is independence of observations within a block; in a repeated measures design this assumption is violated. The weighted Jonckheere-type statistic is studied under dependence for different variance-covariance structures and under an ordered alternative hypothesis across the blocks of the design. The proposed statistic is also compared to the existing Jonckheere-based test in terms of type I error rates by Monte Carlo simulation. For strong correlations, a circular bootstrap version of the proposed Jonckheere test provides lower type I error rates.
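
    For reference, the classical (unweighted) Jonckheere-Terpstra statistic that the modified test builds on counts concordant pairs across groups taken in their hypothesized order, and the blocked variant sums these per-block statistics. A sketch in plain Python; the uniform default weights are a placeholder (the paper derives its weights from the within-block covariance structure):

```python
def jonckheere_stat(groups):
    # Classical Jonckheere-Terpstra statistic for one block: count pairs
    # (x, y) with x from an earlier group and y from a later group such
    # that x < y; ties contribute 0.5.
    j = 0.0
    for a in range(len(groups)):
        for b in range(a + 1, len(groups)):
            for x in groups[a]:
                for y in groups[b]:
                    if x < y:
                        j += 1.0
                    elif x == y:
                        j += 0.5
    return j

def weighted_block_jonckheere(blocks, weights=None):
    # Weighted sum of per-block statistics; uniform weights recover the
    # ordinary blocked Jonckheere sum.
    if weights is None:
        weights = [1.0] * len(blocks)
    return sum(w * jonckheere_stat(b) for w, b in zip(weights, blocks))
```

    Large values of the statistic favor the ordered alternative (group medians increasing in the hypothesized order) over the null of identical distributions.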

  17. Freight performance measures : approach analysis.

    Science.gov (United States)

    2010-05-01

    This report reviews the existing state of the art and also the state of the practice of freight performance measurement. Most performance measures at the state level have aimed at evaluating highway or transit infrastructure performance with an empha...

  18. Repeated assessment of orthotopic glioma pO2 by multi-site EPR oximetry: A technique with the potential to guide therapeutic optimization by repeated measurements of oxygen

    Science.gov (United States)

    Khan, Nadeem; Mupparaju, Sriram; Hou, Huagang; Williams, Benjamin B.; Swartz, Harold

    2011-01-01

    Tumor hypoxia plays a vital role in therapeutic resistance. Consequently, measurements of tumor pO2 could be used to optimize the outcome of oxygen-dependent therapies such as chemoradiation. However, the potential optimizations are restricted by the lack of methods to repeatedly and quantitatively assess tumor pO2 during therapy, particularly in gliomas. We describe the procedures for repeated measurements of orthotopic glioma pO2 by multi-site electron paramagnetic resonance (EPR) oximetry. This oximetry approach provides simultaneous measurements of pO2 at more than one site in the glioma and contralateral cerebral tissue. The pO2 of intracerebral 9L, C6, F98, and U251 tumors, as well as contralateral brain, was measured repeatedly for five consecutive days. The 9L glioma was well oxygenated with a pO2 of 27-36 mm Hg, while the C6, F98, and U251 gliomas were hypoxic with a pO2 of 7-12 mm Hg. The potential of multi-site EPR oximetry to assess temporal changes in tissue pO2 was investigated in rats breathing 100% O2. A significant increase in F98 tumor and contralateral brain pO2 was observed on day 1 and day 2; however, glioma oxygenation declined on subsequent days. In conclusion, EPR oximetry provides the capability to repeatedly assess temporal changes in orthotopic glioma pO2. This information could be used to test and optimize the methods being developed to modulate tumor hypoxia. Furthermore, EPR oximetry could potentially be used to enhance the outcome of chemoradiation by scheduling treatments at times of increased glioma pO2. PMID:22079559

  19. Microcomputer-based tests for repeated-measures: Metric properties and predictive validities

    Science.gov (United States)

    Kennedy, Robert S.; Baltzley, Dennis R.; Dunlap, William P.; Wilkes, Robert L.; Kuntz, Lois-Ann

    1989-01-01

    A menu of psychomotor and mental acuity tests was refined. Field applications of such a battery include, for example, studying the effects of toxic agents or exotic environments on performance readiness, or determining fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, and so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, all the tests had never been administered together as a battery, and they had never been self-administered. In order to provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.

  20. Repeated Blood Pressure Measurements in Childhood in Prediction of Hypertension in Adulthood.

    Science.gov (United States)

    Oikonen, Mervi; Nuotio, Joel; Magnussen, Costan G; Viikari, Jorma S A; Taittonen, Leena; Laitinen, Tomi; Hutri-Kähönen, Nina; Jokinen, Eero; Jula, Antti; Cheung, Michael; Sabin, Matthew A; Daniels, Stephen R; Raitakari, Olli T; Juonala, Markus

    2016-01-01

    Hypertension may be predicted from childhood risk factors. Repeated observations of abnormal blood pressure in childhood may enhance prediction of hypertension and subclinical atherosclerosis in adulthood compared with a single observation. Participants (n=1927, 54% women) from the Cardiovascular Risk in Young Finns Study had systolic and diastolic blood pressure measurements performed when aged 3 to 24 years. Childhood/youth abnormal blood pressure was defined as above the 90th or 95th percentile. After a 21- to 31-year follow-up, at the age of 30 to 45 years, hypertension (>140/90 mm Hg or antihypertensive medication) prevalence was found to be 19%. Carotid intima-media thickness was examined, and high-risk intima-media was defined as intima-media thickness >90th percentile or carotid plaques. Prediction of adulthood hypertension and high-risk intima-media was compared between one observation of abnormal blood pressure in childhood/youth and multiple observations, using Pearson correlation coefficients and the area under the receiver operating curve. When compared with a single measurement, 2 childhood/youth observations improved the correlation for adult systolic blood pressure (r=0.44 versus 0.35) and the area under the curve for predicting hypertension in adulthood (0.63 for 2 versus 0.60 for 1 observation, P=0.003). When compared with 2 measurements, a third observation did not provide any significant improvement in correlation or prediction (P always >0.05). A higher number of childhood/youth observations of abnormal blood pressure did not enhance prediction of adult high-risk intima-media thickness. Compared with a single measurement, the prediction of adult hypertension was enhanced by 2 observations of abnormal blood pressure in childhood/youth.
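
    The discrimination measure compared in this record, the area under the receiver operating curve, equals (for a score-based predictor) the Mann-Whitney probability that a randomly chosen positive case scores above a randomly chosen negative one. A small NumPy sketch of that rank-based AUC, illustrative only and not the authors' code:

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    # AUC = P(score of a random positive > score of a random negative),
    # with ties counted as 0.5.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.5 means the predictor is no better than chance; the abstract's 0.60 vs 0.63 comparison quantifies the modest gain from a second childhood measurement.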

  1. Measurement of repeat effects in Chicago’s criminal social network

    Directory of Open Access Journals (Sweden)

    Paul Kump

    2016-07-01

    The "near-repeat" effect is a well-known criminological phenomenon in which the occurrence of a crime incident gives rise to a temporary elevation of crime risk within close physical proximity to the initial incident. Adopting a social network perspective, we instead define a near repeat in terms of geodesic distance within a criminal social network, rather than spatial distance. Specifically, we report a statistical analysis of repeat effects in arrest data for Chicago during the years 2003-2012. We divide the arrest data into two sets (violent crimes and other crimes) and, for each set, we compare the distributions of time intervals between repeat incidents to theoretical distributions in which repeat incidents occur only by chance. We first consider the case of the same arrestee participating in repeat incidents ("exact repeats") and then extend the analysis to evaluate repeat risks of arrestees near one another in the social network. We observe repeat effects that diminish as a function of geodesic distance and time interval, and we estimate typical time scales for repeat crimes in Chicago.
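
    The core comparison described above, observed inter-incident intervals versus a chance-only model, can be sketched simply: under a homogeneous Poisson null, an individual's incident spacings are approximately exponential, so an excess of short observed gaps signals a repeat effect. A minimal illustrative sketch in NumPy; the function names and the exponential approximation are assumptions, not the paper's estimator:

```python
import numpy as np

def repeat_intervals(event_times):
    # Time gaps between consecutive incidents for one arrestee.
    t = np.sort(np.asarray(event_times, dtype=float))
    return np.diff(t)

def excess_short_intervals(event_times, window, threshold):
    # Fraction of observed gaps shorter than `threshold`, minus the
    # fraction expected if the same number of events were scattered
    # uniformly at random over `window` (Poisson / exponential-gap null).
    gaps = repeat_intervals(event_times)
    if gaps.size == 0:
        return 0.0
    rate = len(event_times) / window
    expected = 1.0 - np.exp(-rate * threshold)  # P(gap < threshold) under null
    observed = float((gaps < threshold).mean())
    return observed - expected
```

    A positive return value indicates more short-gap repeats than chance alone would produce, i.e., a repeat effect at that time scale.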

  2. Developing the Pieta House Suicide Intervention Model: a quasi-experimental, repeated measures design.

    Science.gov (United States)

    Surgenor, Paul Wg; Freeman, Joan; O'Connor, Cindy

    2015-01-01

    While most crisis intervention models adhere to a generalised theoretical framework, the lack of clarity around how these should be enacted has resulted in a proliferation of models, most of which have little to no empirical support. The primary aim of this research was to propose a suicide intervention model that would resolve the client's suicidal crisis by decreasing their suicidal ideation and improving their outlook through enhancing a range of protective factors. The secondary aim was to assess the impact of this model on negative and positive outlook. A quasi-experimental, pre-test post-test repeated measures design was employed. A questionnaire assessing self-esteem, depression, and positive and negative suicidal ideation was administered to the same participants pre- and post-therapy, facilitating paired responses. Multivariate analysis of variance and paired-samples t-tests were conducted to establish whether therapy using the Pieta House Suicide Intervention Model (PH-SIM) had a significant effect on the clients' negative and positive outlook. Analyses revealed a statistically significant effect of therapy for depression, negative suicidal ideation, self-esteem, and positive suicidal ideation. Negative outlook was significantly lower after therapy and positive outlook significantly higher. The decreased negative outlook and increased positive outlook following therapy provide some support for the proposed model fulfilling its role, though additional research is required to establish the precise role of the intervention model in achieving this.

  3. Psychosocial outcomes of Hong Kong Chinese diagnosed with acute coronary syndromes: a prospective repeated measures study.

    Science.gov (United States)

    Chan, Dominic S K; Chau, Janita P C; Chang, Anne M

    2007-08-01

    Western studies have suggested that emotional stress and distress affect morbidity and mortality in people following acute coronary events. Symptoms of anxiety and depression have been associated with re-infarction and death, prolonged recovery and disability, and depression may precipitate low self-esteem. This study examined perceived anxiety, depression and self-esteem of Hong Kong Chinese clients diagnosed with acute coronary syndrome (ACS) over a 6-month period following hospital admission. A prospective, repeated measures design was used, with measures taken on two occasions over a 6-month period: (1) within the first week of hospital admission following the onset of ACS and (2) at 6-month follow-up. Participants were a convenience sample of 182 consenting clients admitted with ACS to a major public hospital in Hong Kong, who could communicate in Chinese and complete questionnaires, were cognitively intact, and were haemodynamically stable and free from acute chest pain at the time of interview. Baseline data were obtained within 1 week after hospital admission; follow-up data were collected 6 months after hospital discharge. The Chinese versions of the Hospital Anxiety and Depression Scale (HADS), State Self-esteem Scale (SSES), and Rosenberg's Self-Esteem Scale (RSES) were used to assess anxiety and depression, state self-esteem, and trait self-esteem, respectively. Findings suggested gender differences in clients' perceived anxiety, depression and self-esteem. Improvements in these variables were evident over the 6-month period following the acute coronary event. The study confirmed the Western finding that psychosocial problems are common among coronary clients, and that this also applies to Hong Kong Chinese diagnosed with ACS. Further studies exploring effective interventions to address these psychosocial issues are recommended.

  4. Effect of hip braces on brake response time: Repeated measures designed study.

    Science.gov (United States)

    Dammerer, Dietmar; Waidmann, Cornelia; Huber, Dennis G; Krismer, Martin; Haid, Christian; Liebensteiner, Michael C

    2017-08-01

    Whether a patient wearing a hip brace should drive a car is an important question: the advice given to patients about resuming driving is often anecdotal, as few scientific data are available on this specific subject. The aim was to assess driving ability (brake response time) with commonly used hip braces, using a repeated measures design. Brake response time was assessed under six conditions: (1) without a brace (control), (2) with a typical postoperative hip brace with adjustable range of motion set to unrestricted, (3) flexion limited to 70°, (4) extension blocked at 20° hip flexion, (5) both flexion and extension limited (20°/70°) and (6) an elastic hip bandage. Brake response time was assessed using a custom-made driving simulator as in previous studies. Participants were a convenience sample of able-bodied adults: a total of 70 (35 women and 35 men) took part. Mean age was 31.1 (standard deviation: 10.6; range: 21.7-66.4) years. A significant within-subject effect for brake response time was found (p = 0.009), but subsequent post hoc analyses revealed no significant differences between the control condition and the other settings. Based on our findings, it does not seem mandatory to recommend driving abstinence for patients wearing a hip orthosis. We suggest that our results be interpreted with caution, because (1) an underlying pathological hip condition needs to be considered, (2) the ability to drive a car safely is multifactorial and brake response time is only one component thereof, and (3) brake response time measurements were performed only with healthy participants. Clinical relevance: Hip braces are used in the context of joint-preserving and prosthetic surgery of the hip, so clinicians are confronted with the question of whether to allow driving a car with the respective hip brace. Our data suggest that hip braces do not impair brake response time.

  5. A repeated measures experiment of green exercise to improve self-esteem in UK school children.

    Directory of Open Access Journals (Sweden)

    Katharine Reed

    Full Text Available Exercising in natural, green environments creates greater improvements in adults' self-esteem than exercise undertaken in urban or indoor settings. No comparable data are available for children. The aim of this study was to determine whether so-called 'green exercise' affected changes in self-esteem, enjoyment and perceived exertion in children differently to urban exercise. We assessed cardiorespiratory fitness (20 m shuttle-run) and self-reported physical activity (PAQ-A) in 11 and 12 year olds (n = 75). Each pupil completed two 1.5 mile timed runs, one in an urban and another in a rural environment. Trials were completed one week apart during scheduled physical education lessons allocated using a repeated measures design. Self-esteem was measured before and after each trial; ratings of perceived exertion (RPE) and enjoyment were assessed after completing each trial. We found a significant main effect (F(1,74) = 12.2, p<0.001) for the increase in self-esteem following exercise but there was no condition by exercise interaction (F(1,74) = 0.13, p = 0.72). There were no significant differences in perceived exertion or enjoyment between conditions. There was a negative correlation (r = -0.26, p = 0.04) between habitual physical activity and RPE during the control condition, which was not evident in the green exercise condition (r = -0.07, p = 0.55). Contrary to previous studies in adults, green exercise did not produce significantly greater increases in self-esteem than the urban exercise condition. Green exercise was enjoyed equally by children with differing levels of habitual physical activity and has the potential to engage less active children in exercise.

  6. A repeated measures experiment of green exercise to improve self-esteem in UK school children.

    Science.gov (United States)

    Reed, Katharine; Wood, Carly; Barton, Jo; Pretty, Jules N; Cohen, Daniel; Sandercock, Gavin R H

    2013-01-01

    Exercising in natural, green environments creates greater improvements in adults' self-esteem than exercise undertaken in urban or indoor settings. No comparable data are available for children. The aim of this study was to determine whether so-called 'green exercise' affected changes in self-esteem, enjoyment and perceived exertion in children differently to urban exercise. We assessed cardiorespiratory fitness (20 m shuttle-run) and self-reported physical activity (PAQ-A) in 11 and 12 year olds (n = 75). Each pupil completed two 1.5 mile timed runs, one in an urban and another in a rural environment. Trials were completed one week apart during scheduled physical education lessons allocated using a repeated measures design. Self-esteem was measured before and after each trial; ratings of perceived exertion (RPE) and enjoyment were assessed after completing each trial. We found a significant main effect (F(1,74) = 12.2, p<0.001) for the increase in self-esteem following exercise but there was no condition by exercise interaction (F(1,74) = 0.13, p = 0.72). There were no significant differences in perceived exertion or enjoyment between conditions. There was a negative correlation (r = -0.26, p = 0.04) between habitual physical activity and RPE during the control condition, which was not evident in the green exercise condition (r = -0.07, p = 0.55). Contrary to previous studies in adults, green exercise did not produce significantly greater increases in self-esteem than the urban exercise condition. Green exercise was enjoyed equally by children with differing levels of habitual physical activity and has the potential to engage less active children in exercise.

  7. Repeatability of Computerized Tomography-Based Anthropomorphic Measurements of Frailty in Patients With Pulmonary Fibrosis Undergoing Lung Transplantation.

    Science.gov (United States)

    McClellan, Taylor; Allen, Brian C; Kappus, Matthew; Bhatti, Lubna; Dafalla, Randa A; Snyder, Laurie D; Bashir, Mustafa R

    To determine interreader and intrareader repeatability and correlations among computerized tomography-based anthropomorphic measurements in patients with pulmonary fibrosis undergoing lung transplantation. This was an institutional review board-approved, Health Insurance Portability and Accountability Act-compliant retrospective study of 23 randomly selected subjects (19 male and 4 female; median age = 69 years; range: 66-77 years) with idiopathic pulmonary fibrosis undergoing pulmonary transplantation, who had also undergone preoperative thoracoabdominal computerized tomography. Five readers of varying imaging experience independently performed the following cross-sectional area measurements at the inferior endplate of the L3 vertebral body: right and left psoas muscles, right and left paraspinal muscles, total abdominal musculature, and visceral and subcutaneous fat. The following measurements were obtained at the inferior endplate of T6: right and left paraspinal muscles with and without the trapezius muscles, and subcutaneous fat. Three readers repeated all measurements to assess intrareader repeatability. Intrareader repeatability was nearly perfect (intraclass correlation coefficients = 0.99, P < 0.001). Interreader agreement was excellent across all 5 readers (intraclass correlation coefficients: 0.71-0.99, P < 0.001). Coefficients of variance between measures ranged from 3.2%-6.8% for abdominal measurements, but were higher for thoracic measurements, up to 23.9%. Correlation between total paraspinal and total psoas muscle area was strong (r² = 0.67, P < 0.001). Thoracic and abdominal musculature had a weaker correlation (r² = 0.35-0.38, P < 0.001). Measures of thoracic and abdominal muscle and fat area are highly repeatable in patients with pulmonary fibrosis undergoing lung transplantation. Measures of muscle area are strongly correlated among abdominal locations, but inversely correlated between abdominal and thoracic locations.

  8. Unidimensional and Bidimensional Approaches to Measuring Acculturation.

    Science.gov (United States)

    Shin, Cha-Nam; Todd, Michael; An, Kyungeh; Kim, Wonsun Sunny

    2017-08-01

    Researchers easily overlook the complexity of acculturation measurement in research. This study elaborates on the shortcomings of unidimensional approaches to conceptualizing acculturation and highlights the importance of using bidimensional approaches in health research. We conducted a secondary data analysis on acculturation measures and eating habits obtained from 261 Korean American adults in a Midwestern city. Bidimensional approaches better conceptualized acculturation and explained more of the variance in eating habits than did unidimensional approaches. Bidimensional acculturation measures combined with appropriate analytical methods, such as cluster analysis, are recommended in health research because they provide a more comprehensive understanding of acculturation and its association with health behaviors than do other methods.

  9. Context matters! sources of variability in weekend physical activity among families: a repeated measures study

    Directory of Open Access Journals (Sweden)

    Robert J. Noonan

    2017-04-01

    Full Text Available Abstract Background Family involvement is an essential component of effective physical activity (PA) interventions in children. However, little is known about the PA levels and characteristics of PA among families. This study used a repeated measures design and multiple data sources to explore the variability and characteristics of weekend PA among families. Methods Families (including a ‘target’ child aged 9–11 years, their primary caregiver(s), and siblings aged 6–8 years) were recruited through primary schools in Liverpool, UK. Participants completed a paper-based PA diary and wore an ActiGraph GT9X accelerometer on their left wrist for up to 16 weekend days. ActiGraph .csv files were analysed using the R package GGIR version 1.1–4. Mean minutes of moderate-to-vigorous PA (MVPA) for each weekend of measurement were calculated using linear mixed models, and variance components were estimated for participant (inter-individual), weekend of measurement, and residual error (intra-individual). Intraclass correlation coefficients (ICC) were calculated from the proportion of total variance accounted for by inter-individual sources, and used as a measure of reliability. Diary responses were summed to produce frequency counts. To offer contextual insight into weekend PA among family units, demographic, accelerometer, and diary data were combined to form two case studies representative of low- and high-active families. Results Twenty-five participants from 7 families participated, including 7 ‘target’ children (mean age 9.3 ± 1.1 years; 4 boys), 6 siblings (mean age 7.2 ± 0.7 years; 4 boys), and 12 adults (7 mothers and 5 fathers). There was a high degree of variability in target children’s (ICC = 0.55), siblings’ (ICC = 0.38), and mothers’ MVPA (ICC = 0.58), but not in fathers’ MVPA (ICC = 0.83). Children’s weekend PA was mostly unstructured in nature and undertaken with friends, whereas a greater proportion of parents’ weekend

  10. Analyzing repeated data collected by mobile phones and frequent text messages. An example of Low back pain measured weekly for 18 weeks

    Directory of Open Access Journals (Sweden)

    Axén Iben

    2012-07-01

    Full Text Available Abstract Background Repeated data collection is desirable when monitoring fluctuating conditions. Mobile phones can be used to gather such data from large groups of respondents by sending and receiving frequently repeated short questions and answers as text messages. The analysis of repeated data involves some challenges. Vital issues to consider are the within-subject correlation, the between measurement occasion correlation and the presence of missing values. The overall aim of this commentary is to describe different methods of analyzing repeated data. It is meant to give an overview for the clinical researcher so that complex outcome measures can be interpreted in a clinically meaningful way. Methods A model data set was formed using data from two clinical studies, where patients with low back pain were followed with weekly text messages for 18 weeks. Different research questions and analytic approaches were illustrated and discussed, as well as the handling of missing data. In the applications the weekly outcome “number of days with pain” was analyzed in relation to the patients’ “previous duration of pain” (categorized as more or less than 30 days in the previous year). Research questions with appropriate analytical methods 1: How many days with pain do patients experience? This question was answered with data summaries. 2: What is the proportion of participants “recovered” at a specific time point? This question was answered using logistic regression analysis. 3: What is the time to recovery? This question was answered using survival analysis, illustrated in Kaplan-Meier curves, proportional hazards regression analyses and spline regression analyses. 4: How is the repeatedly measured data associated with baseline (predictor) variables? This question was answered using Generalized Estimating Equations, Poisson regression and mixed linear model analyses. 5: Are there subgroups of patients with similar courses of pain?
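As an illustration of question 3 above (time to recovery), a minimal Kaplan-Meier estimator over weekly follow-up can be sketched as follows; the recovery times and censoring flags are hypothetical:

```python
# Minimal Kaplan-Meier estimator for "time to recovery" from weekly
# text-message follow-up. Here "survival" means still not recovered;
# censored subjects (dropouts, or never recovered within 18 weeks) only
# contribute to the risk set. Example data are hypothetical.

def kaplan_meier(times, events):
    """times: follow-up week per patient; events: True if recovered at
    that week, False if censored. Returns [(week, P(not yet recovered))]."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        recovered = sum(1 for ti, ev in zip(times, events) if ti == t and ev)
        surv *= 1 - recovered / at_risk
        curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # remove events + censored
    return curve

times = [2, 3, 3, 5, 8, 8]
events = [True, True, False, True, True, False]
print(kaplan_meier(times, events))
```

Each step multiplies the running survival probability by the fraction of the current risk set that did not recover that week, which is exactly how the published Kaplan-Meier curves are built.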

  11. Heart failure re-admission: measuring the ever shortening gap between repeat heart failure hospitalizations.

    Directory of Open Access Journals (Sweden)

    Jeffrey A Bakal

    Full Text Available Many quality-of-care and risk prediction metrics rely on time to first rehospitalization even though heart failure (HF) patients may undergo several repeat hospitalizations. The aim of this study is to compare repeat hospitalization models. Using a population-based cohort of 40,667 patients, we examined both HF and all-cause re-hospitalizations using up to five years of follow-up. Two models were examined: the gap-time model, which estimates the adjusted time between hospitalizations, and a multistate model, which considered patients to be in one of four states: community-dwelling, in hospital for HF, in hospital for any reason, or dead. The transition probabilities and times were then modeled using patient characteristics and number of repeat hospitalizations. We found that during the five years of follow-up, after each hospitalization roughly half of the patients returned for a subsequent one. Additionally, the unadjusted time between hospitalizations was reduced by ∼40% between each successive hospitalization. After adjustment, each additional hospitalization was associated with a 28-day (95% CI: 22-35) reduction in time spent out of hospital. A similar pattern was seen in the four-state model. A large proportion of patients had multiple repeat hospitalizations. Extending the gap between hospitalizations should be an important goal of treatment evaluation.
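The quantity at the heart of the gap-time model, time spent out of hospital between successive admissions, can be sketched in a few lines; the admission and discharge dates below are hypothetical:

```python
from datetime import date

# Gap times for a repeat-hospitalization history: days spent out of
# hospital between each discharge and the next admission. Dates are
# hypothetical; the gap-time model then regresses these intervals on
# patient characteristics and hospitalization count.

def gap_times(stays):
    """stays: chronological list of (admission, discharge) date pairs.
    Returns days between each discharge and the next admission."""
    return [(stays[i + 1][0] - stays[i][1]).days for i in range(len(stays) - 1)]

stays = [(date(2010, 1, 1), date(2010, 1, 10)),
         (date(2010, 6, 1), date(2010, 6, 12)),
         (date(2010, 9, 1), date(2010, 9, 15))]
print(gap_times(stays))  # -> [142, 81]
```

The shrinking gaps in this toy history (142 days, then 81) mirror the study's finding that intervals between successive hospitalizations get progressively shorter.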

  12. Repeated Geophysical Surface Measurements to Estimate the Dynamics of Underground Coalfires

    Science.gov (United States)

    Wuttke, M. W.; Kessels, W.; Han, J.; Halisch, M.; Rüter, H.; Lindner, H.

    2009-04-01

    in a range between -130 and 176 nT. The maxima are most likely caused by the conversion of pyrite and marcasite into maghemite, hematite and magnetite. Therefore, the identified patches with high magnetic anomalies should have a direct connection to the burning coal in fire zone 18. The fire zone in Wuda has now been visited five times, that in Queergou twice. Taken together, all the discussed geophysical measurements allow an integrated interpretation. Each result can be related to the combustion process with a particular likelihood for the vertical projection to the combustion centre. Probability calculations with chosen weight factors for each observation method are discussed. A so-called fire index deduced from the repeated measurements reveals the dynamics of the coal fire.

  13. A training approach for the transition of repeatable collaboration processes to practitioners

    NARCIS (Netherlands)

    Kolfschoten, G.L.; De Vreede, G.J.; Pietron, L.R.

    2011-01-01

    This paper presents a training approach to support the deployment of collaboration process support according to the Collaboration Engineering approach. In Collaboration Engineering, practitioners in an organization are trained to facilitate a specific collaborative work practice on a recurring basis.

  14. Attrition from Web-Based Cognitive Testing: A Repeated Measures Comparison of Gamification Techniques.

    Science.gov (United States)

    Lumsden, Jim; Skinner, Andy; Coyle, David; Lawrence, Natalia; Munafo, Marcus

    2017-11-22

    The prospect of assessing cognition longitudinally and remotely is attractive to researchers, health practitioners, and pharmaceutical companies alike. However, such repeated testing regimes place a considerable burden on participants, and with cognitive tasks typically being regarded as effortful and unengaging, these studies may experience high levels of participant attrition. One potential solution is to gamify these tasks to make them more engaging: increasing participant willingness to take part and reducing attrition. However, such an approach must balance task validity with the introduction of entertaining gamelike elements. This study aims to investigate the effects of gamelike features on participant attrition using a between-subjects, longitudinal Web-based testing study. We used three variants of a common cognitive task, the Stop Signal Task (SST), with a single gamelike feature in each: one variant where points were rewarded for performing optimally; another where the task was given a graphical theme; and a third variant, which was a standard SST and served as a control condition. Participants completed four compulsory test sessions over 4 consecutive days before entering a 6-day voluntary testing period where they faced a daily decision to either drop out or continue taking part. Participants were paid for each session they completed. A total of 482 participants signed up to take part in the study, with 265 completing the requisite four consecutive test sessions. No evidence of an effect of gamification on attrition was observed. A log-rank test showed no evidence of a difference in dropout rates between task variants (χ²(2) = 3.0, P=.22), and a one-way analysis of variance of the mean number of sessions completed per participant in each variant also showed no evidence of a difference (F(2,262) = 1.534, P=.21, partial η² = 0.012). Our findings raise doubts about the ability of gamification to reduce attrition from longitudinal cognitive testing studies.

  15. The effect of repeated measurements and working memory on the most comfortable level in the ANL test.

    Science.gov (United States)

    Brännström, K Jonas; Olsen, Steen Østergaard; Holm, Lucas; Kastberg, Tobias; Ibertsson, Tina

    2014-11-01

    To study the effect of a large number of repetitions on the most comfortable level (MCL) when doing the acceptable noise level (ANL) test, and to explore whether MCL variability is related to central cognitive processes. Twelve MCL repetitions were measured within the ANL test using an interleaved methodology during one session, using a non-semantic version. Phonological working memory (PWM) and visuospatial working memory (VSWM) were measured. Participants were thirty-two normal-hearing adults. Repeated measures ANOVA, intraclass correlations, and the coefficient of repeatability (CR) were used to assess repeatability. Repeated measures ANOVA and CR indicated poor agreement between the first two repetitions. After excluding the first repetition, analyses showed that the MCL in the ANL test is reliable. A negative association was found between PWM and MCL variability, indicating that subjects with higher PWM show less variability. The findings suggest that, after excluding the first repetition, the MCL in the ANL test is reliable. A single repetition of the MCL in the ANL test should be avoided. If an interleaved methodology is used, a single ANL repetition should be added prior to the actual testing. The findings also suggest that MCL variability is associated with PWM but not VSWM.
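The agreement statistics used in records like this one can be illustrated with a short sketch; it assumes one common definition of the coefficient of repeatability (1.96 × the SD of paired test-retest differences), and the paired measurements below are hypothetical:

```python
import math

# Test-retest bias and coefficient of repeatability (CR) in the
# Bland-Altman sense. CR is taken here as 1.96 x the SD of the paired
# differences (one common definition; others use 1.96 x sqrt(2) x the
# within-subject SD). The paired values are hypothetical.

def bland_altman(test, retest):
    """Return (bias, CR) for paired test-retest measurements."""
    diffs = [a - b for a, b in zip(test, retest)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, 1.96 * sd  # limits of agreement are bias +/- 1.96*sd

bias, cr = bland_altman([50, 55, 60, 52, 58], [48, 57, 59, 50, 60])
print(round(bias, 2), round(cr, 2))  # -> 0.2 4.02
```

Good agreement means a bias near zero and a CR small relative to the scale of the measurement; the poor first-versus-second-repetition agreement reported above corresponds to a large CR.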

  16. Repeatability of measures of inflammatory cell number in bronchial biopsies in atopic asthma

    NARCIS (Netherlands)

    Sont, J. K.; Willems, L. N.; Evertse, C. E.; Hooijer, R.; Sterk, P. J.; van Krieken, J. H.

    1997-01-01

    Airway pathology is increasingly considered to be a major outcome in asthma research. The aim of this study was to examine the intra-observer, within-section and between-biopsy repeatability, together with the implications for statistical power, of a computerized quantitative analysis of inflammatory cell numbers in bronchial biopsies.

  17. Reliability of near-infrared spectroscopy for measuring biceps brachii oxygenation during sustained and repeated isometric contractions.

    Science.gov (United States)

    Muthalib, Makii; Millet, Guillaume Y; Quaresima, Valentina; Nosaka, Kazunori

    2010-01-01

    We examined the test-retest reliability of biceps brachii tissue oxygenation index (TOI) parameters measured by near-infrared spectroscopy during a 10-s sustained and a 30-repetition (1-s contraction, 1-s relaxation) isometric contraction task at 30% of maximal voluntary contraction (30% MVC) and maximal (100% MVC) intensities. Eight healthy men (23 to 33 yr) were tested in three sessions separated by 3 h and 24 h, and the within-subject reliability of torque and each TOI parameter was determined by Bland-Altman ±2 SD limits of agreement plots and coefficient of variation (CV). No significant (P>0.05) differences between the three sessions were found for mean values of torque and TOI parameters during the sustained and repeated tasks at either contraction intensity. All TOI parameters were within ±2 SD limits of agreement. The CVs for torque integral were similar between the sustained and repeated tasks at both intensities (4 to 7%); however, the CVs for TOI parameters during the sustained and repeated tasks were lower for 100% MVC (7 to 11%) than for 30% MVC (22 to 36%). It is concluded that the reliability of the biceps brachii NIRS parameters during both sustained and repeated isometric contraction tasks is acceptable.

  18. The French press: a repeatable and high-throughput approach to exercising zebrafish (Danio rerio).

    Science.gov (United States)

    Usui, Takuji; Noble, Daniel W A; O'Dea, Rose E; Fangmeier, Melissa L; Lagisz, Malgorzata; Hesselson, Daniel; Nakagawa, Shinichi

    2018-01-01

    Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34-0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes.

  19. Is Teen Court effective for repeat offenders? A test of the restorative justice approach.

    Science.gov (United States)

    Forgays, Deborah Kirby; DeMilio, Lisa

    2005-02-01

    Teen Courts are an effective judicial alternative for many youth offenders. The majority of youth courts deal solely with first-time offenders. However, repeat offenders are at a greater risk for future crime. Is Teen Court effective with more experienced offenders? In this study, the authors examine the outcomes of 26 Whatcom County Teen Court offenders with at least one prior conviction. The sentence completion rate was higher and the recidivism was lower for the Teen Court offenders when compared with a sample of first-time Court Diversion offenders. This objective evidence of program success is augmented by an offender's perspective on his or her court experience. These perspectives as well as the continued voluntary involvement with Teen Court are discussed in relation to empowerment theory.

  20. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration method, and the simulation extrapolation method. For the 2S design, the problem of 'missing' data was either ignored or the 'missing' data were imputed using multiple imputation. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation of regression calibration was, however, substantially improved by the use of multiple imputation. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
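In its simplest linear form (not the EVROS IV method, and with hypothetical data), regression calibration with two replicates reduces to de-attenuating the naive slope by an estimated reliability ratio:

```python
# Simplest-case sketch of regression calibration under the classical
# additive error model W = X + U: estimate the reliability ratio lambda
# from two replicate measurements per subject, then divide the naive
# regression slope by lambda. Hypothetical data; illustrative only.

def deattenuate(beta_naive, rep1, rep2):
    n = len(rep1)
    mean1 = sum(rep1) / n
    # Total variance of a single measurement, from the first replicate:
    var_w = sum((w - mean1) ** 2 for w in rep1) / (n - 1)
    # Error variance from replicate differences: Var(W1 - W2) = 2 * Var(U)
    var_u = sum((a - b) ** 2 for a, b in zip(rep1, rep2)) / (2 * n)
    lam = (var_w - var_u) / var_w  # reliability ratio (attenuation factor)
    return beta_naive / lam

rep1 = [1.0, 2.0, 3.0, 4.0, 5.0]
rep2 = [1.2, 1.8, 3.4, 3.6, 5.0]
print(round(deattenuate(0.5, rep1, rep2), 3))  # -> 0.508
```

Because the naive slope is attenuated toward zero by a factor of lambda, dividing by the estimated lambda recovers an approximately unbiased effect estimate, the intuition behind the full regression calibration method evaluated in the record.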

  1. Spatial Cluster Detection for Repeatedly Measured Outcomes while Accounting for Residential History

    OpenAIRE

    Cook, Andrea J.; Gold, Diane R.; Li, Yi

    2009-01-01

    Spatial cluster detection has become an important methodology in quantifying the effect of hazardous exposures. Previous methods have focused on cross-sectional outcomes that are binary or continuous. There are virtually no spatial cluster detection methods proposed for longitudinal outcomes. This paper proposes a new spatial cluster detection method for repeated outcomes using cumulative geographic residuals. A major advantage of this method is its ability to readily incorporate information ...

  2. Test-retest repeatability of myocardial oxidative metabolism and efficiency using standalone dynamic 11C-acetate PET and multimodality approaches in healthy controls.

    Science.gov (United States)

    Hansson, Nils Henrik; Harms, Hendrik Johannes; Kim, Won Yong; Nielsen, Roni; Tolbod, Lars P; Frøkiær, Jørgen; Bouchelouche, Kirsten; Poulsen, Steen Hvitfeldt; Wiggers, Henrik; Parner, Erik Thorlund; Sörensen, Jens

    2018-05-31

    Myocardial efficiency measured by 11C-acetate positron emission tomography (PET) has successfully been used in clinical research to quantify mechanoenergetic coupling. The objective of this study was to establish the repeatability of myocardial external efficiency (MEE) and work metabolic index (WMI) by non-invasive concepts. Ten healthy volunteers (63 ± 4 years) were examined twice, one week apart, using 11C-acetate PET, cardiovascular magnetic resonance (CMR), and echocardiography. Myocardial oxygen consumption from PET was combined with stroke work data from CMR, echocardiography, or PET to obtain MEE and WMI for each modality. Repeatability was estimated as the coefficient of variation (CV) between test and retest. MEE(CMR), MEE(Echo), and MEE(PET) values were 21.9 ± 2.7%, 16.4 ± 3.7%, and 23.8 ± 4.9%, respectively. WMI(CMR), WMI(Echo), and WMI(PET) values were 4.42 ± 0.90, 4.07 ± 0.63, and 4.58 ± 1.13 mmHg × mL/m² × 10⁶, respectively, P = .45. Repeatability for MEE(CMR) was superior compared with MEE(Echo) but did not differ significantly from MEE(PET) (6.3% vs 12.9% and 9.4%, P = .04 and .25). CV values for WMI(CMR), WMI(Echo), and WMI(PET) were 10.0%, 14.8%, and 12.0%, respectively (P = .53). Non-invasive measurements of MEE using 11C-acetate PET are highly repeatable. A PET-only approach did not differ significantly from CMR/PET and might facilitate further clinical research due to lower costs and broader applicability.

  3. Repeated testing improves achievement in a blended learning approach for risk competence training of medical students: results of a randomized controlled trial.

    Science.gov (United States)

    Spreckelsen, C; Juenger, J

    2017-09-26

    Adequate estimation and communication of risks is a critical competence of physicians. Due to an evident lack of these competences, effective training addressing risk competence during medical education is needed. Test-enhanced learning has been shown to produce marked effects on achievement. This study aimed to investigate the effect of repeated tests implemented on top of a blended learning program for risk competence. We introduced a blended-learning curriculum for risk estimation and risk communication based on a set of operationalized learning objectives, which was integrated into a mandatory course "Evidence-based Medicine" for third-year students. A randomized controlled trial addressed the effect of repeated testing on achievement as measured by the students' pre- and post-training scores (nine multiple-choice items). Basic numeracy and statistical literacy were assessed at baseline. Analysis relied on descriptive statistics (histograms, box plots, scatter plots, and a summary of descriptive measures), bootstrapped confidence intervals, analysis of covariance (ANCOVA), and effect sizes (Cohen's d, r) based on adjusted means and standard deviations. All of the 114 students enrolled in the course consented to take part in the study and were assigned to either the intervention or control group (both: n = 57) by balanced randomization. Five participants dropped out due to non-compliance (control: 4, intervention: 1). Both groups profited considerably from the program in general (Cohen's d for overall pre vs. post scores: 2.61). Repeated testing yielded an additional positive effect: while the covariate (baseline score) exhibited no relation to the post-intervention score, F(1, 106) = 2.88, p > .05, there was a significant effect of the intervention (repeated-tests scenario) on learning achievement, F(1, 106) = 12.72, p < .05. Achievement in a blended learning approach can thus be improved significantly by implementing a test-enhanced learning design, namely repeated testing.
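The reported effect size (Cohen's d) can be reproduced from group summary statistics. A minimal sketch using the classical pooled-SD definition; the trial itself used adjusted means and SDs from the ANCOVA, which this simplification does not model, and the numbers below are hypothetical.

```python
def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / pooled_var ** 0.5

# hypothetical group summaries: a 2-point difference with common SD 2 gives d = 1
d_example = cohens_d(10.0, 8.0, 2.0, 2.0, 50, 50)
```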

  4. Power analysis for multivariate and repeated measurements designs via SPSS: correction and extension of D'Amico, Neilands, and Zambarano (2001).

    Science.gov (United States)

    Osborne, Jason W

    2006-05-01

    D'Amico, Neilands, and Zambarano (2001) published SPSS syntax to perform power analyses for three complex procedures: ANCOVA, MANOVA, and repeated measures ANOVA. Unfortunately, the published SPSS syntax for performing the repeated measures analysis needed some minor revision in order to perform the analysis correctly. This article presents the corrected syntax that will successfully perform the repeated measures analysis and provides some guidance on modifying the syntax to customize the analysis.
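Power for an F-based design such as ANCOVA can also be computed outside SPSS from the noncentral F distribution. A sketch assuming Cohen's f² as the effect-size input and one common noncentrality convention (λ = f²·(df1 + df2 + 1)); this is not the article's corrected syntax, and conventions for λ vary between tools.

```python
from scipy.stats import f, ncf

def f_test_power(effect_f2, df1, df2, alpha=0.05):
    """Power of an F test for effect size Cohen's f^2.

    Noncentrality is taken as lambda = f^2 * (df1 + df2 + 1), one common
    convention for fixed-effects designs (an assumption here).
    """
    nc = effect_f2 * (df1 + df2 + 1)
    f_crit = f.ppf(1.0 - alpha, df1, df2)       # critical value under H0
    return float(ncf.sf(f_crit, df1, df2, nc))  # P(F' > crit) under H1

p_small = f_test_power(0.02, 3, 96)  # small effect
p_large = f_test_power(0.35, 3, 96)  # large effect
```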

  5. Changing approaches of prosecutors towards juvenile repeated sex-offenders: A Bayesian evaluation.

    Science.gov (United States)

    Bandyopadhyay, Dipankar; Sinha, Debajyoti; Lipsitz, Stuart; Letourneau, Elizabeth

    2010-06-01

    Existing state-wide databases on prosecutors' decisions about juvenile offenders are important, yet often unexplored, resources for understanding changes in patterns of judicial decisions over time. We investigate the extent and nature of change in judicial behavior towards juveniles following the enactment of a new set of mandatory registration policies between 1992 and 1996 by analyzing data on prosecutors' decisions to move forward for youths repeatedly charged with sexual violence in South Carolina. We use a novel extension of the random-effects logistic regression model for longitudinal binary data that incorporates an unknown change-point year. For convenient physical interpretation, our models allow a proportional odds interpretation of the effects of the explanatory variables and the change-point year, with and without conditioning on the youth-specific random effects. As a consequence, the effects of the unknown change-point year and other factors can be interpreted as changes in both within-youth and population-averaged odds of moving forward. Using a Bayesian paradigm, we consider various prior opinions about the unknown year of change in the pattern of prosecutors' decisions. Based on the available data, we draw a posteriori conclusions about whether a change-point occurred between 1992 and 1996 (inclusive), evaluate the degree of confidence about the year of the change-point, estimate the magnitude of the effects of the change-point and other factors, and investigate other provocative questions about patterns of prosecutors' decisions over time.

  6. Performance Analysis of Measurement Inaccuracies of IMU/GPS on Airborne Repeat-pass Interferometric SAR in the Presence of Squint

    Directory of Open Access Journals (Sweden)

    Deng Yuan

    2014-08-01

    Full Text Available In the MOtion COmpensation (MOCO) approach to airborne repeat-pass interferometric Synthetic Aperture Radar (SAR) based on motion measurement data, the measurement inaccuracies of the Inertial Measurement Unit/Global Positioning System (IMU/GPS) and the positioning errors of the target, which contribute to the residual uncompensated motion errors, affect the imaging result and the interferometric measurement. Considering both types of error, this paper builds a mathematical model of residual motion errors in the presence of squint and analyzes the effects on the residual motion errors induced by the measurement inaccuracies of IMU/GPS and the positioning errors of the target. In particular, the effects of various measurement inaccuracies of IMU/GPS on interferometric SAR image quality, interferometric phase, and digital elevation model precision are discussed. Moreover, the paper quantitatively investigates the effects of residual motion errors on airborne repeat-pass interferometric SAR through theoretical and simulated analyses, providing a theoretical basis for system design and signal processing.

  7. A Multifunctional Frontloading Approach for Repeated Recycling of a Pressure-Controlled AFM Micropipette.

    Directory of Open Access Journals (Sweden)

    Phillip Roder

    Full Text Available Fluid force microscopy combines the positional accuracy and force sensitivity of an atomic force microscope (AFM) with nanofluidics via a microchanneled cantilever. However, adequate loading and cleaning procedures for such AFM micropipettes are required for various application situations. Here, a new frontloading procedure is described for an AFM micropipette functioning as a force- and pressure-controlled microscale liquid dispenser. This frontloading procedure seems especially attractive when target substances are costly or available only in small amounts. The AFM micropipette could be filled from the tip side with liquid from a previously applied droplet with a volume of only a few μL using a short low-pressure pulse. The liquid-loaded AFM micropipettes could then be applied in experiments in air or liquid environments. AFM micropipette frontloading was evaluated with the well-known organic fluorescent dye rhodamine 6G and the AlexaFluor647-labeled antibody goat anti-rat IgG as an example of a larger biological compound. After micropipette usage, specific cleaning procedures were tested. Furthermore, a storage method is described with which the AFM micropipettes can be kept for a few hours up to several days without drying out or clogging of the microchannel. In summary, the rapid, versatile, and cost-efficient frontloading and cleaning procedure for the repeated usage of a single AFM micropipette is beneficial for various applications, from specific surface modifications to local manipulation of living cells, and provides simplified and faster handling for established fluid force microscopy experiments.

  8. Measuring Aseismic Slip through Characteristically Repeating Earthquakes at the Mendocino Triple Junction, Northern California

    Science.gov (United States)

    Materna, K.; Taira, T.; Burgmann, R.

    2016-12-01

    The Mendocino Triple Junction (MTJ), at the transition point between the San Andreas fault system, the Mendocino Transform Fault, and the Cascadia Subduction Zone, undergoes rapid tectonic deformation and produces more large (M>6.0) earthquakes than any other region in California. Most of the active faults of the triple junction are located offshore, making it difficult to characterize both seismic slip and aseismic creep. In this work, we study aseismic creep rates near the MTJ using characteristically repeating earthquakes (CREs) as indicators of creep rate. CREs are generally interpreted as repeated failures of the same seismic patch within an otherwise creeping fault zone; as a consequence, the magnitude and recurrence time of the CREs can be used to determine a fault's creep rate through empirically calibrated scaling relations. Using seismic data from 2010-2016, we identify CREs as recorded by an array of eight 100-Hz PBO borehole seismometers deployed in the Cape Mendocino area. For each event pair with epicenters less than 30 km apart, we compute the cross-spectral coherence of 20 seconds of data starting one second before the P-wave arrival. We then select pairs with high coherence in an appropriate frequency band, which is determined uniquely for each event pair based on event magnitude, station distance, and signal-to-noise ratio. The most similar events (with median coherence above 0.95 at two or more stations) are selected as CREs and then grouped into CRE families, and each family is used to infer a local creep rate. On the Mendocino Transform Fault, we find relatively high creep rates of >5 cm/year that increase closer to the Gorda Ridge. Closer to shore and to the MTJ itself, we find many families of repeaters on and off the transform fault with highly variable creep rates, indicative of the complex deformation that takes place there.
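The coherence-based pairing step described in this record can be sketched as follows. The sampling rate, band limits, and window length here are illustrative placeholders, not the study's event-specific values, which were chosen per pair from magnitude, distance, and signal-to-noise ratio.

```python
import numpy as np
from scipy.signal import coherence

def median_band_coherence(x, y, fs=100.0, fmin=1.0, fmax=10.0, nperseg=256):
    """Median magnitude-squared coherence of two waveforms within a frequency band."""
    freqs, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    band = (freqs >= fmin) & (freqs <= fmax)
    return float(np.median(cxy[band]))

rng = np.random.default_rng(0)
w = rng.standard_normal(2000)
v = rng.standard_normal(2000)
med_same = median_band_coherence(w, w)   # identical waveforms: coherence of 1
med_diff = median_band_coherence(w, v)   # independent noise: low coherence
```

A repeating-earthquake candidate would be a pair whose median band coherence exceeds a threshold such as 0.95 at two or more stations.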

  9. Construction and updating of a public events questionnaire for repeated measures longitudinal studies

    Directory of Open Access Journals (Sweden)

    Martha eNoone

    2014-03-01

    Full Text Available Impairments of retrospective memory and cases of retrograde amnesia are often seen in clinical settings. A measure of the proportion of memories retained over a specified time can be useful in clinical situations and public events questionnaires may be valuable in this respect. However, consistency of retention of public events memory has rarely been studied in the same participants. In addition, when used in a research context, public events questionnaires require updating to ensure questions are of equivalent age with respect to when the test is taken. This paper describes an approach to constructing and updating a Public Events Questionnaire (PEQ for use with a sample that is recruited and followed-up over a long time-period. Internal consistency, parallel-form reliability, test-retest reliability and secondary validity analyses were examined for three versions of the PEQ that were updated every six months. Versions 2 and 3 of the questionnaire were reliable across and within versions and for recall and recognition. Change over time was comparable across each version of the PEQ. These results show that PEQs can be regularly updated in a standardised fashion to allow use throughout studies with long recruitment periods.

  10. Construction and updating of a public events questionnaire for repeated measures longitudinal studies.

    Science.gov (United States)

    Noone, Martha; Semkovska, Maria; Carton, Mary; Dunne, Ross; Horgan, John-Paul; O'Kane, Breige; McLoughlin, Declan M

    2014-01-01

    Impairments of retrospective memory and cases of retrograde amnesia are often seen in clinical settings. A measure of the proportion of memories retained over a specified time can be useful in clinical situations and public events questionnaires may be valuable in this respect. However, consistency of retention of public events memory has rarely been studied in the same participants. In addition, when used in a research context, public events questionnaires require updating to ensure questions are of equivalent age with respect to when the test is taken. This paper describes an approach to constructing and updating a Public Events Questionnaire (PEQ) for use with a sample that is recruited and followed-up over a long time-period. Internal consistency, parallel-form reliability, test-retest reliability, and secondary validity analyses were examined for three versions of the PEQ that were updated every 6 months. Versions 2 and 3 of the questionnaire were reliable across and within versions and for recall and recognition. Change over time was comparable across each version of the PEQ. These results show that PEQs can be regularly updated in a standardized fashion to allow use throughout studies with long recruitment periods.

  11. Intra-session repeatability of iridocorneal angle measurements provided by a Scheimpflug photography-based system in healthy eyes

    OpenAIRE

    Ruiz-Belda, Clara; Piñero, David P.; Ruiz Fortes, Pedro; Soto-Negro, Roberto; Moya, Myriam; Pérez Cambrodí, Rafael J.; Artola, Alberto

    2016-01-01

    Purpose: The purpose of this study was to evaluate intra-session repeatability of measurements of the iridocorneal angle at different meridians in the nasal and temporal areas in healthy eyes using the Sirius Scheimpflug photography-based system in glaucoma analysis mode. Methods: A total of 43 eyes of 43 patients ranging in age from 36 to 79 years were enrolled in the study. All eyes received a comprehensive ophthalmologic examination including a complete anterior segment analysis with the C...

  12. Comparison of Repeated Measurement Design and Mixed Models in Evaluation of the Entonox Effect on Labor Pain

    Directory of Open Access Journals (Sweden)

    Nasim Karimi

    2017-01-01

    Full Text Available Background & objectives: In many medical studies, the response variable is measured repeatedly over time to evaluate a treatment effect; this design is known as a longitudinal study. A common analysis method for such data is repeated-measures ANOVA, which accommodates only one correlation structure, and the results are not valid if that correlation structure is inappropriate. A convenient alternative that avoids this problem is the mixed model. The aim of this study was therefore to compare mixed models and repeated-measures models in examining the effect of Entonox on labor pain. Methods: This experimental study was designed to compare the effect of Entonox and oxygen inhalation on pain relief between two groups. Data were analyzed using repeated-measures and mixed models with different correlation structures. Selection and comparison of appropriate correlation structures were performed using the Akaike information criterion, the Bayesian information criterion, and the restricted log-likelihood. Data were analyzed using SPSS-22. Results: All variables, including analgesia method, labor duration of the first and second stages, and time, were significant in these tests. In the mixed model, the heterogeneous first-order autoregressive, first-order autoregressive, heterogeneous Toeplitz, and unstructured correlation structures were recognized as the best structures, and all variables were significant under these structures. The unstructured variance-covariance matrix was recognized as the worst structure, and labor duration of the first and second stages was not significant under this structure. Conclusions: This study showed that Entonox inhalation has a significant effect on pain relief in primiparous women, a finding confirmed by all of the models.
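Correlation-structure selection by information criteria, as described in this record, reduces to comparing penalized log-likelihoods across candidate structures. A sketch with hypothetical fit results; the log-likelihoods and parameter counts below are invented for illustration only.

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion; smaller is better."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion; smaller is better."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

# hypothetical fits: an AR(1) structure (2 covariance parameters) against an
# unstructured matrix (15 parameters) whose likelihood is only slightly better
fits = {"AR(1)": aic(-512.3, 2), "unstructured": aic(-510.9, 15)}
best = min(fits, key=fits.get)  # structure with the lowest AIC
```

The penalty term is what lets a parsimonious structure such as AR(1) win even when a richer structure fits the data marginally better.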

  13. Application of a repeat-measure biomarker measurement error model to 2 validation studies: examination of the effect of within-person variation in biomarker measurements.

    Science.gov (United States)

    Preis, Sarah Rosner; Spiegelman, Donna; Zhao, Barbara Bojuan; Moshfegh, Alanna; Baer, David J; Willett, Walter C

    2011-03-15

    Repeat-biomarker measurement error models accounting for systematic correlated within-person error can be used to estimate the correlation coefficient (ρ) and deattenuation factor (λ), used in measurement error correction. These models account for correlated errors in the food frequency questionnaire (FFQ) and the 24-hour diet recall and random within-person variation in the biomarkers. Failure to account for within-person variation in biomarkers can exaggerate correlated errors between FFQs and 24-hour diet recalls. For 2 validation studies, ρ and λ were calculated for total energy and protein density. In the Automated Multiple-Pass Method Validation Study (n=471), doubly labeled water (DLW) and urinary nitrogen (UN) were measured twice in 52 adults approximately 16 months apart (2002-2003), yielding intraclass correlation coefficients of 0.43 for energy (DLW) and 0.54 for protein density (UN/DLW). The deattenuated correlation coefficient for protein density was 0.51 for correlation between the FFQ and the 24-hour diet recall and 0.49 for correlation between the FFQ and the biomarker. Use of repeat-biomarker measurement error models resulted in a ρ of 0.42. These models were similarly applied to the Observing Protein and Energy Nutrition Study (1999-2000). In conclusion, within-person variation in biomarkers can be substantial, and to adequately assess the impact of correlated subject-specific error, this variation should be assessed in validation studies of FFQs.
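The deattenuation idea behind λ can be illustrated with the classical Spearman correction for attenuation, in which an observed correlation is divided by the square root of the product of the two measures' reliabilities. The paper's repeat-biomarker models additionally handle correlated person-specific errors, which this simplification omits; the numbers below are illustrative.

```python
def deattenuate(observed_r, reliability_x, reliability_y):
    """Classical Spearman correction for attenuation: divide the observed
    correlation by the square root of the product of the two reliabilities
    (e.g., intraclass correlation coefficients from repeated biomarkers)."""
    return observed_r / (reliability_x * reliability_y) ** 0.5

# e.g., an observed r of 0.3 with two reliabilities of 0.5 deattenuates to 0.6
rho_true = deattenuate(0.3, 0.5, 0.5)
```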

  14. Evaluating measurement accuracy a practical approach

    CERN Document Server

    Rabinovich, Semyon G

    2013-01-01

    The goal of Evaluating Measurement Accuracy: A Practical Approach is to present methods for estimating the accuracy of measurements performed in industry, trade, and scientific research. From developing the theory of indirect measurements to proposing new methods of reduction, transformation, and enumeration, this work encompasses the full range of measurement data processing. It includes many examples that illustrate the application of general theory to typical problems encountered in measurement practice. As a result, the book serves as an inclusive reference work for data processing of all types of measurements: single and multiple, combined and simultaneous, direct (both linear and nonlinear), and indirect (both dependent and independent). It is a working tool for experimental scientists and engineers of all disciplines who work with instrumentation. It is also a good resource for natural science and engineering students and for technicians performing measurements in industry. A key feature of the book is...

  15. Reliability and Repeatability of Cone Density Measurements in Patients With Stargardt Disease and RPGR-Associated Retinopathy.

    Science.gov (United States)

    Tanna, Preena; Kasilian, Melissa; Strauss, Rupert; Tee, James; Kalitzeos, Angelos; Tarima, Sergey; Visotcky, Alexis; Dubra, Alfredo; Carroll, Joseph; Michaelides, Michel

    2017-07-01

    To assess reliability and repeatability of cone density measurements by using confocal and (nonconfocal) split-detector adaptive optics scanning light ophthalmoscopy (AOSLO) imaging. It will be determined whether cone density values are significantly different between modalities in Stargardt disease (STGD) and retinitis pigmentosa GTPase regulator (RPGR)-associated retinopathy. Twelve patients with STGD (aged 9-52 years) and eight with RPGR-associated retinopathy (aged 11-31 years) were imaged using both confocal and split-detector AOSLO simultaneously. Four graders manually identified cone locations in each image that were used to calculate local densities. Each imaging modality was evaluated independently. The data set consisted of 1584 assessments of 99 STGD images (each image in two modalities and four graders who graded each image twice) and 928 RPGR assessments of 58 images (each image in two modalities and four graders who graded each image twice). For STGD assessments the reliability for confocal and split-detector AOSLO was 67.9% and 95.9%, respectively, and the repeatability was 71.2% and 97.3%, respectively. The differences in the measured cone density values between modalities were statistically significant for one grader. For RPGR assessments the reliability for confocal and split-detector AOSLO was 22.1% and 88.5%, respectively, and repeatability was 63.2% and 94.5%, respectively. The differences in cone density between modalities were statistically significant for all graders. Split-detector AOSLO greatly improved the reliability and repeatability of cone density measurements in both disorders and will be valuable for natural history studies and clinical trials using AOSLO. However, it appears that these indices may be disease dependent, implying the need for similar investigations in other conditions.

  16. Reliability and Repeatability of Cone Density Measurements in Patients With Stargardt Disease and RPGR-Associated Retinopathy

    Science.gov (United States)

    Tanna, Preena; Kasilian, Melissa; Strauss, Rupert; Tee, James; Kalitzeos, Angelos; Tarima, Sergey; Visotcky, Alexis; Dubra, Alfredo; Carroll, Joseph; Michaelides, Michel

    2017-01-01

    Purpose To assess reliability and repeatability of cone density measurements by using confocal and (nonconfocal) split-detector adaptive optics scanning light ophthalmoscopy (AOSLO) imaging. It will be determined whether cone density values are significantly different between modalities in Stargardt disease (STGD) and retinitis pigmentosa GTPase regulator (RPGR)–associated retinopathy. Methods Twelve patients with STGD (aged 9–52 years) and eight with RPGR-associated retinopathy (aged 11–31 years) were imaged using both confocal and split-detector AOSLO simultaneously. Four graders manually identified cone locations in each image that were used to calculate local densities. Each imaging modality was evaluated independently. The data set consisted of 1584 assessments of 99 STGD images (each image in two modalities and four graders who graded each image twice) and 928 RPGR assessments of 58 images (each image in two modalities and four graders who graded each image twice). Results For STGD assessments the reliability for confocal and split-detector AOSLO was 67.9% and 95.9%, respectively, and the repeatability was 71.2% and 97.3%, respectively. The differences in the measured cone density values between modalities were statistically significant for one grader. For RPGR assessments the reliability for confocal and split-detector AOSLO was 22.1% and 88.5%, respectively, and repeatability was 63.2% and 94.5%, respectively. The differences in cone density between modalities were statistically significant for all graders. Conclusions Split-detector AOSLO greatly improved the reliability and repeatability of cone density measurements in both disorders and will be valuable for natural history studies and clinical trials using AOSLO. However, it appears that these indices may be disease dependent, implying the need for similar investigations in other conditions. PMID:28738413

  17. One Approach for Dynamic L-lysine Modelling of Repeated Fed-batch Fermentation

    Directory of Open Access Journals (Sweden)

    Kalin Todorov

    2007-03-01

    Full Text Available This article deals with the establishment of a dynamic unstructured model of a variable-volume fed-batch fermentation process with intensive droppings for L-lysine production. The presented approach includes the following main procedures: description of the process by generalized stoichiometric equations; preliminary data processing and calculation of specific rates for the main kinetic variables; identification of the specific rates as second-order non-linear dynamic models; establishment and optimisation of a dynamic model of the process; and simulation studies. MATLAB is used as the research environment.

  18. Measurement-Device-Independent Approach to Entanglement Measures

    Science.gov (United States)

    Shahandeh, Farid; Hall, Michael J. W.; Ralph, Timothy C.

    2017-04-01

    Within the context of semiquantum nonlocal games, the trust can be removed from the measurement devices in an entanglement-detection procedure. Here, we show that a similar approach can be taken to quantify the amount of entanglement. To be specific, first, we show that in this context, a small subset of semiquantum nonlocal games is necessary and sufficient for entanglement detection in the local operations and classical communication paradigm. Second, we prove that the maximum payoff for these games is a universal measure of entanglement which is convex and continuous. Third, we show that for the quantification of negative-partial-transpose entanglement, this subset can be further reduced down to a single arbitrary element. Importantly, our measure is measurement device independent by construction and operationally accessible. Finally, our approach straightforwardly extends to quantify the entanglement within any partitioning of multipartite quantum states.

  19. The genetic analysis of repeated measures II: The Karhunen-Loeve expansion.

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Boomsma, D.I.

    1987-01-01

    Outlines the Karhunen-Loeve approach (N. Ahmed and K. R. Rao, 1975) to the genetic analysis of time series of arbitrary length and with an arbitrary covariance function. This approach is based on the simultaneous eigenvalue decomposition of the covariance matrices of the original time series.
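In the discrete case, a Karhunen-Loeve expansion of repeated measures amounts to an eigenvalue decomposition of the covariance matrix across measurement occasions. A generic single-matrix sketch; the genetic analysis above uses a simultaneous decomposition of several covariance matrices, which this does not reproduce.

```python
import numpy as np

def karhunen_loeve(series, k):
    """Discrete Karhunen-Loeve expansion of repeated measures: project the
    centered data (rows = subjects, columns = occasions) onto the top-k
    eigenvectors of the covariance matrix across occasions."""
    centered = series - series.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]   # indices of the k largest
    return centered @ eigvecs[:, order], eigvals[order]

# demo: 50 subjects, 6 occasions, keep 3 components
rng = np.random.default_rng(1)
scores, variances = karhunen_loeve(rng.standard_normal((50, 6)), k=3)
```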

  20. EFFICIENCY OF REPEATED AND UNSCHEDULED TRAINING AS THE MEASURES TO PREVENT ACCIDENTS AT SUPPLY DEPOTS AND WAREHOUSES

    Directory of Open Access Journals (Sweden)

    Bocharova Irina Nikolaevna

    2013-05-01

    Full Text Available This paper presents the results of an analysis of the state of occupational safety at supply depots and warehouses. Most accidents were found to involve employees with less than one year of service. Experience has proven that preventive activities against occupational traumatism are efficient when a complex of workplace safety measures is implemented, and experts consider repeated and unscheduled training to be very important measures. This is supported by the fact that among the employees of commercial establishments who underwent repeated and unscheduled training, the number of individuals who suffered an accident is small. The occupational safety training system cannot function efficiently without motivating employees to assimilate the knowledge and form a complete foundation for safe labor. To reduce the number of accidents, one should move from the principle of responding to accidents to a system of professional risk management.

  1. Reproducibility and repeatability of a new computerized software for sagittal spinopelvic and scoliosis curvature radiologic measurements: Keops(®).

    Science.gov (United States)

    Maillot, C; Ferrero, E; Fort, D; Heyberger, C; Le Huec, J-C

    2015-07-01

    The purpose of this study was to evaluate the inter- and intra-observer variability of computerized radiologic measurements using Keops(®) and to determine the bias between the software and the standard paper measurement. Four individuals measured all frontal and sagittal variables on 30 randomly selected X-rays on two occasions (test and retest conditions). The Bland-Altman plot was used to determine the degree of agreement between the measurement on paper X-ray and the measurement using Keops(®) for all reviewers and for the two measures; the intraclass correlation coefficient (ICC) was calculated for each pair of analyses to assess interobserver reproducibility among the four reviewers for the same patient using either paper X-ray or Keops(®) measurement; and the concordance correlation coefficient (rc) was calculated to assess intraobserver repeatability for the same reviewer and patient between the two measures using the same method (paper or Keops(®)). The mean difference calculated between the two methods was minimal, at -0.4° ± 3.41° [-7.1; 6.4] for frontal measurement and 0.1° ± 3.52° [-6.7; 6.8] for sagittal measurement. Keops(®) had better interobserver reproducibility than paper measurement for determination of the sagittal pelvic parameter (ICC = 0.9960 vs. 0.9931; p = 0.0001). It had better intraobserver repeatability than paper for determination of the Cobb angle (rc = 0.9872 vs. 0.9808) and for the sagittal parameters (rc = 0.9981 vs. 0.9953). These results indicate that Keops(®) is reliable in both the frontal and sagittal planes and that the use of this software can be recommended for clinical application. Diagnostic, level III.
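The Bland-Altman agreement analysis used in this record rests on the mean difference (bias) and its 95% limits of agreement. A minimal sketch with invented values; the normality assumption behind the 1.96 multiplier is noted in the comments.

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # 95% limits assume normally distributed differences
    return bias, bias - half_width, bias + half_width

# a constant offset of -0.5 between methods: bias -0.5, zero-width limits
bias, lower, upper = bland_altman_limits([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])
```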

  2. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    Science.gov (United States)

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered, and effects of the level of inter-correlation between measurement occasions on Type I error rates were considered for the first time. Two populations with no violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combined uncorrelated with highly correlated measurement occasions; a second combined moderately correlated and highly correlated measurement occasions. From these four populations, none of which contained a between-group or within-subject effect, 5,000 random samples were drawn. Finally, mean Type I error rates were computed for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction). To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as m = 3, 6, and 9 measurement occasions. With respect to rANOVA, the results argue for the use of rANOVA with the Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results show a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions.
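The Greenhouse-Geisser correction referenced in this record scales the ANOVA degrees of freedom by an epsilon estimated from the covariance matrix of the repeated measures. A sketch of the standard Box/Greenhouse-Geisser estimator computed from a double-centered covariance matrix:

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon from the k x k covariance matrix of the
    repeated measures: 1.0 under sphericity, approaching 1/(k-1) as the
    violation becomes maximal."""
    k = S.shape[0]
    C = np.eye(k) - np.ones((k, k)) / k  # double-centering projector
    Sc = C @ S @ C
    return float(np.trace(Sc) ** 2 / ((k - 1) * np.trace(Sc @ Sc)))

eps_spherical = gg_epsilon(np.eye(4))                      # identity: sphericity holds
eps_violated = gg_epsilon(np.diag([1.0, 1.0, 1.0, 10.0]))  # unequal variances
```

Multiplying both numerator and denominator degrees of freedom of the F test by this epsilon yields the corrected test; the Huynh-Feldt variant applies a further adjustment for small samples.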

  3. Intergenic and repeat transcription in human, chimpanzee and macaque brains measured by RNA-Seq.

    Directory of Open Access Journals (Sweden)

    Augix Guohua Xu

    Full Text Available Transcription is the first step connecting genetic information with an organism's phenotype. While expression of annotated genes in the human brain has been characterized extensively, our knowledge about the scope and the conservation of transcripts located outside of known genes' boundaries is limited. Here, we use high-throughput transcriptome sequencing (RNA-Seq) to characterize the total non-ribosomal transcriptome of the human, chimpanzee, and rhesus macaque brain. In all species, only 20-28% of non-ribosomal transcripts correspond to annotated exons and 20-23% to introns. By contrast, transcripts originating within intronic and intergenic repetitive sequences constitute 40-48% of the total brain transcriptome. Notably, some repeat families show elevated transcription. In non-repetitive intergenic regions, we identify and characterize 1,093 distinct regions highly expressed in the human brain. These regions are conserved at the RNA expression level across the primates studied and at the DNA sequence level across mammals. A large proportion of these transcripts (20%) represent 3'UTR extensions of known genes and may play roles in alternative microRNA-directed regulation. Finally, we show that while transcriptome divergence between species increases with evolutionary time, intergenic transcripts show more expression differences among species and exons show less. Our results show that many as-yet-uncharacterized, evolutionarily conserved transcripts exist in the human brain. Some of these transcripts may play roles in transcriptional regulation and contribute to the evolution of human-specific phenotypic traits.

  4. The Repeated Administration of Resveratrol Has Measurable Effects on Circulating T-Cell Subsets in Humans

    Directory of Open Access Journals (Sweden)

    J. Luis Espinoza

    2017-01-01

    Preclinical studies have shown that resveratrol exerts immunomodulatory effects with potential clinical value in the amelioration of autoimmune disorders and cancer prevention; however, little is known about the in vivo effects of this naturally occurring polyphenol on human immune cells. We assessed the effects of repeated doses of resveratrol (1000 mg/day for 28 days) on circulating immune cells in healthy Japanese individuals. Resveratrol was safe and well tolerated and was associated with significant increases in the numbers of circulating γδ T cells and regulatory T cells. It also resulted in small, yet significant, decreases in the plasma levels of the proinflammatory cytokines TNF-α and MCP-1, and a significant increase in plasma antioxidant activity compared with the corresponding baseline activity and with that in four control individuals. In in vitro studies, resveratrol significantly improved the growth of γδ T cells and regulatory T cells. These findings demonstrate that resveratrol has some clear biological effects on human circulating immune cells. Further studies are necessary to interpret the long-term immunological changes associated with resveratrol treatment.

  5. Perspectives on repeated low-level blast and the measurement of neurotrauma in humans as an occupational exposure risk

    Science.gov (United States)

    Carr, W.; Dell, K. C.; Yanagi, M. A.; Hassan, D. M.; LoPresti, M. L.

    2017-11-01

    A pressing question in military medical research is the nature and degree of effects on the human brain from repeated occupational exposure to low-level explosive blast, but reliable and effective means to objectively measure such effects remain elusive. In survey results, headache, difficulty sleeping, irritability, cognitive impairment, and a variety of other symptoms consistent with post-concussive syndrome have been reported by those exposed to blast, and there was a positive correlation between degree of blast exposure and degree of symptomatology, but an important goal is to obtain more objective evidence of an effect than self-report alone. This review reflects recent efforts to measure and evaluate such hypothesized effects and current recommendations for ongoing study. Optimal measures are likely those with sensitivity and specificity to systemic effects in mild neurotrauma, that have minimal to no volitional component, and that can be sampled relatively quickly with minimal intrusion in prospective, observational field studies during routine training with explosives. An understanding of an association between parameters of exposure to repeated low-level blast and negative neurologic effects would support the evaluation of clinical implications and the development of protective equipment and surveillance protocols where warranted. At present, low-level blast exposure surveillance measurements do not exist as a systematic record for any professional community.

  6. Repeatable aversion across threat types is linked with life-history traits but is dependent on how aversion is measured.

    Science.gov (United States)

    Davidson, Gabrielle L; Reichert, Michael S; Crane, Jodie M S; O'Shea, William; Quinn, John L

    2018-02-01

    Personality research suggests that individual differences in risk aversion may be explained by links with life-history variation. However, few empirical studies examine whether repeatable differences in risk avoidance behaviour covary with life-history traits among individuals in natural populations, or how these links vary depending on the context and the way risk aversion is measured. We measured two different risk avoidance behaviours (latency to enter the nest and inspection time) in wild great tits (Parus major) in two different contexts (response to a novel object and to a predator cue placed at the nest-box during incubation) and related these behaviours to female reproductive success and condition. Females responded equally strongly to both stimuli, and although both behaviours were repeatable, they did not correlate. Latency to enter was negatively related to body condition and the number of offspring fledged. By contrast, inspection time was explained directly by whether incubating females had been flushed from the nest before the trial began. Thus, our inferences on the relationship between risk aversion and fitness depend on how risk aversion was measured. Our results highlight the limitations of drawing conclusions about the relevance of single measures of a personality trait such as risk aversion.

  7. Short-term Changes of Apparent Optical Properties in a Shallow Water Environment: Observations from Repeated Airborne Hyperspectral Measurements

    Science.gov (United States)

    Zhang, M.; English, D. C.; Hu, C.; Carlson, P. R., Jr.; Muller-Karger, F. E.; Toro-Farmer, G.; Herwitz, S. R.

    2016-02-01

    An atmospheric correction algorithm has been developed for AISA imagery over optically shallow waters in Sugarloaf Key of the Florida Keys. The AISA data were collected repeatedly during several days in May 2012, October 2012, and May 2013. A non-zero near-infrared (NIR) remote sensing reflectance (Rrs) was accounted for through iterations, based on the relationship of field-measured Rrs between the NIR and red wavelengths. Validation showed mean ratios of 0.94 to 1.002 between AISA-derived and field-measured Rrs in the blue to red wavelengths, with generally low uncertainties. The repeated measurements revealed short-term changes in turbidity (light attenuation) and bottom contributions. Some of these changes are larger than twice the Rrs uncertainties from the AISA retrievals, therefore representing statistically significant changes that can be well observed from airborne measurements. The case study suggests that repeated airborne measurements may be used to study short-term changes in shallow water environments, and such a capacity may be enhanced with future geostationary satellite missions specifically designed to observe coastal ecosystems.

  8. Measuring the Dynamic Soil Response During Repeated Wheeling Using Seismic Methods

    DEFF Research Database (Denmark)

    Keller, Thomas; Carizzon, Marco; Berisso, Feto Esimo

    2013-01-01

    A wheeling experiment was performed with an agricultural tire (60 kN wheel load) on a gleyic Cambisol. We measured Vp using an acoustic (microseismic) device at various depths before, during (i.e., below the tire), and after wheeling. In addition, we measured bulk density and penetrometer resistance before and after wheeling...

  9. A DESCRIPTION OF QUASAR VARIABILITY MEASURED USING REPEATED SDSS AND POSS IMAGING

    International Nuclear Information System (INIS)

    MacLeod, Chelsea L.; Ivezić, Željko; Becker, Andrew C.; Anderson, Scott F.; Sesar, Branimir; De Vries, Wim; Kochanek, Christopher S.; Kelly, Brandon C.; Lupton, Robert H.; Hall, Patrick B.; Richards, Gordon T.; Schneider, Donald P.

    2012-01-01

    We provide a quantitative description and statistical interpretation of the optical continuum variability of quasars. The Sloan Digital Sky Survey (SDSS) has obtained repeated imaging in five UV-to-IR photometric bands for 33,881 spectroscopically confirmed quasars. About 10,000 quasars have an average of 60 observations in each band obtained over a decade along Stripe 82 (S82), whereas the remaining ∼25,000 have 2-3 observations due to scan overlaps. The observed time lags span the range from a day to almost 10 years, and constrain quasar variability at rest-frame time lags of up to 4 years, and at rest-frame wavelengths from 1000 Å to 6000 Å. We publicly release a user-friendly catalog of quasars from the SDSS Data Release 7 that have been observed at least twice in SDSS or once in both SDSS and the Palomar Observatory Sky Survey, and we use it to analyze the ensemble properties of quasar variability. Based on a damped random walk (DRW) model defined by a characteristic timescale and an asymptotic variability amplitude that scale with the luminosity, black hole mass, and rest wavelength for individual quasars calibrated in S82, we can fully explain the ensemble variability statistics of the non-S82 quasars such as the exponential distribution of large magnitude changes. All available data are consistent with the DRW model as a viable description of the optical continuum variability of quasars on timescales of ∼5-2000 days in the rest frame. We use these models to predict the incidence of quasar contamination in transient surveys such as those from the Palomar Transient Factory and Large Synoptic Survey Telescope.
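
    The damped random walk model referred to above has a simple exact updating formula. The sketch below uses the standard DRW recursion (with the convention that the asymptotic light-curve variance equals SF∞²/2); the parameter values and function name are illustrative and are not taken from the catalog.

```python
import numpy as np

def simulate_drw(times, tau, sf_inf, mean_mag=19.0, seed=0):
    """Simulate a damped random walk light curve at the given observation times.
    tau: characteristic timescale; sf_inf: asymptotic structure function amplitude."""
    rng = np.random.default_rng(seed)
    mags = np.empty(len(times))
    # Start from the stationary distribution (variance = SF_inf^2 / 2)
    mags[0] = mean_mag + rng.normal(0.0, sf_inf / np.sqrt(2.0))
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        decay = np.exp(-dt / tau)                     # exponential damping toward the mean
        var = (sf_inf ** 2 / 2.0) * (1.0 - decay ** 2)  # innovation variance over dt
        mags[i] = mean_mag + decay * (mags[i - 1] - mean_mag) + rng.normal(0.0, np.sqrt(var))
    return mags

lc = simulate_drw(np.arange(20000.0), tau=50.0, sf_inf=0.2, seed=3)
```

    For time lags much longer than tau the structure function saturates at sf_inf, while for short lags it grows as the square root of the lag, which is the behaviour the ensemble statistics above are fit against.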

  10. The use of a measure of acute irritation to predict the outcome of repeated usage of hand soap products.

    Science.gov (United States)

    Williams, C; Wilkinson, M; McShane, P; Pennington, D; Fernandez, C; Pierce, S

    2011-06-01

    Healthcare-associated infection is an important worldwide problem that could be reduced by better hand hygiene practice. However, an increasing number of healthcare workers are experiencing irritant contact dermatitis of the hands as a result of repeated hand washing. This may lead to a reduced level of compliance with regard to hand hygiene. The aim was to assess whether a measure of acute irritation by hand soaps could predict the effects of repeated usage over a 2-week period. In a double-blind, randomized comparison study, the comparative irritation potential of four different hand soaps was assessed over a 24-h treatment period. The effect on skin barrier function of repeated hand washing with the hand soap products over a 2-week period in healthy adult volunteers was then determined by assessment of transepidermal water loss (TEWL), epidermal hydration and a visual assessment using the Hand Eczema Severity Index (HECSI) at days 0, 7 and 14. A total of 121 of the 123 subjects recruited completed phase 1 of the study. All four products were significantly different from each other in terms of the irritant reaction observed, and all products resulted in significantly higher irritation compared with the no-treatment control. Seventy-nine of the initial 121 subjects were then enrolled into the repeated usage study. A statistically significant worsening of the clinical condition of the skin as measured by HECSI was seen from baseline to day 14 in those subjects repeatedly washing their hands with two of the four soap products (products C and D), with P-values of 0·02 and 0·01, respectively. Subclinical assessment of the skin barrier function by measuring epidermal hydration showed a significant increase from baseline to day 7 after repeated hand washing with products A, B and D, but overall no significant change was seen for any of the four products tested by day 14. A statistically significant increase in TEWL at day 14 was seen for product A (P = 0·02).

  11. Evaluation, including effects of storage and repeated freezing and thawing, of a method for measurement of urinary creatinine

    DEFF Research Database (Denmark)

    Garde, A H; Hansen, Åse Marie; Kristiansen, J

    2003-01-01

    The aims of this study were to elucidate to what extent storage and repeated freezing and thawing influenced the concentration of creatinine in urine samples, and to evaluate the method for determination of creatinine in urine. The creatinine method was based on the well-known Jaffe reaction and measured on a COBAS Mira autoanalyser from Roche. The main findings were that samples for analysis of creatinine should be kept at a temperature of -20 degrees C or lower and frozen and thawed only once. The limit of detection was determined as 3 x SD of 20 determinations of a sample at a low concentration...
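
    The detection-limit convention used above (3 times the SD of repeated determinations of a low-concentration sample) reduces to one line of arithmetic. A minimal sketch, assuming the replicate values are already available; the function name is mine:

```python
import statistics

def limit_of_detection(low_conc_replicates, k=3):
    """LOD estimated as k times the SD of repeated determinations of a
    low-concentration sample (k = 3, as in the study above)."""
    return k * statistics.stdev(low_conc_replicates)
```

    With the study's protocol, `low_conc_replicates` would hold the 20 repeated determinations of the low-concentration urine sample.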

  12. Repeated stimulation, inter-stimulus interval and inter-electrode distance alters muscle contractile properties as measured by Tensiomyography

    Science.gov (United States)

    Johnson, Mark I.; Francis, Peter

    2018-01-01

    Context The influence of methodological parameters on the measurement of muscle contractile properties using Tensiomyography (TMG) has not been published. Objective To investigate: (1) the reliability of the stimulus amplitude needed to elicit maximum muscle displacement (Dm); (2) the effect of changing inter-stimulus interval on Dm (using a fixed stimulus amplitude) and contraction time (Tc); (3) the effect of changing inter-electrode distance on Dm and Tc. Design Within subject, repeated measures. Participants 10 participants for each objective. Main outcome measures Dm and Tc of the rectus femoris, measured using TMG. Results The coefficient of variance (CV) and the intra-class correlation (ICC) of the stimulus amplitude needed to elicit maximum Dm were 5.7% and 0.92, respectively. Dm was higher when using an inter-electrode distance of 7 cm compared to 5 cm [P = 0.03] and when using an inter-stimulus interval of 10 s compared to 30 s [P = 0.017]. Further analysis of the inter-stimulus interval data found that, during 10 repeated stimuli, Tc became faster after the 5th measure when compared to the second measure [P<0.05]. The 30 s inter-stimulus interval produced the most stable Tc over 10 measures compared to 10 s and 5 s, respectively. Conclusion Our data suggest that the stimulus amplitude producing maximum Dm of the rectus femoris is reliable. Inter-electrode distance and inter-stimulus interval can significantly influence Dm and/or Tc. Our results support the use of a 30 s inter-stimulus interval over 10 s or 5 s. Future studies should determine the influence of methodological parameters on muscle contractile properties in a range of muscles. PMID:29451885

  13. Human-centred approaches in slipperiness measurement

    Science.gov (United States)

    Grönqvist, Raoul; Abeysekera, John; Gard, Gunvor; Hsiang, Simon M.; Leamon, Tom B.; Newman, Dava J.; Gielo-Perczak, Krystyna; Lockhart, Thurmon E.; Pai, Clive Y.-C.

    2010-01-01

    A number of human-centred methodologies—subjective, objective, and combined—are used for slipperiness measurement. They comprise a variety of approaches from biomechanically-oriented experiments to psychophysical tests and subjective evaluations. The objective of this paper is to review some of the research done in the field, including such topics as awareness and perception of slipperiness, postural and balance control, rating scales for balance, adaptation to slippery conditions, measurement of unexpected movements, kinematics of slipping, and protective movements during falling. The role of human factors in slips and falls will be discussed. Strengths and weaknesses of human-centred approaches in relation to mechanical slip test methodologies are considered. Current friction-based criteria and thresholds for walking without slipping are reviewed for a number of work tasks. These include activities such as walking on a level or an inclined surface, running, stopping and jumping, as well as stair ascent and descent, manual exertion (pushing and pulling, load carrying, lifting) and particular concerns of the elderly and mobility disabled persons. Some future directions for slipperiness measurement and research in the field of slips and falls are outlined. Human-centred approaches for slipperiness measurement do have many applications. First, they are utilized to develop research hypotheses and models to predict workplace risks caused by slipping. Second, they are important alternatives to apparatus-based friction measurements and are used to validate such methodologies. Third, they are used as practical tools for evaluating and monitoring slip resistance properties of foot wear, anti-skid devices and floor surfaces. PMID:11794763

  14. Direct and Repeated Clinical Measurements of pO2 for Enhancing Cancer Therapy and Other Applications.

    Science.gov (United States)

    Swartz, Harold M; Williams, Benjamin B; Hou, Huagang; Khan, Nadeem; Jarvis, Lesley A; Chen, Eunice Y; Schaner, Philip E; Ali, Arif; Gallez, Bernard; Kuppusamy, Periannan; Flood, Ann B

    2016-01-01

    The first systematic multi-center study of the clinical use of EPR oximetry has begun, with funding as a PPG from the NCI. Using particulate oxygen-sensitive EPR materials in three complementary forms (India Ink, "OxyChips", and implantable resonators), the clinical value of the technique will be evaluated. The aims include using repeated measurement of tumor pO2 to monitor the effects of treatments on tumor pO2, using the measurements to select suitable subjects for the type of treatment (including the use of hyperoxic techniques), and determining the circumstances under which existing clinical techniques that provide data relevant to tumor pO2, but cannot measure it directly, can give dependable information about tumor pO2.

  15. Looking back on a half century of repeat magnetic measurements in France

    Science.gov (United States)

    Alexandrescu, Mioara Mandea; Gilder, Stuart; Courtillot, Vincent; Le Mouël, Jean Louis; Gilbert, Daniel

    Birds do it. Bees do it. And with the discovery of lodestone over 2200 years ago, humans too could incorporate the Earth's magnetic field into their daily lives. Some of the oldest applications for tracking the magnetic field were in land and sea navigation. Magnetic field measurements quickly became an important economic factor in world trade, with documented use dating from the 11th century in China. The measurements are important in other applications as well. For example, rapid field variations are generated by solar activity and its interaction with the terrestrial environment. Large magnetic storms can disrupt satellite operation, communication systems, power transmission networks, and so forth [Campbell, 1997]. Geomagnetism also provides a unique opportunity to explore the Earth's outer core, which is mostly liquid (molten) iron, where the field is generated. Field measurements can also yield valuable insights into the location of mineral deposits and aid in applications in the petroleum industry.

  16. Repeated stimulation, inter-stimulus interval and inter-electrode distance alters muscle contractile properties as measured by Tensiomyography.

    Directory of Open Access Journals (Sweden)

    Hannah V Wilson

    The influence of methodological parameters on the measurement of muscle contractile properties using Tensiomyography (TMG) has not been published. The objectives were to investigate: (1) the reliability of the stimulus amplitude needed to elicit maximum muscle displacement (Dm); (2) the effect of changing inter-stimulus interval on Dm (using a fixed stimulus amplitude) and contraction time (Tc); (3) the effect of changing inter-electrode distance on Dm and Tc. The design was within subject, repeated measures, with 10 participants for each objective. The main outcome measures were Dm and Tc of the rectus femoris, measured using TMG. The coefficient of variance (CV) and the intra-class correlation (ICC) of the stimulus amplitude needed to elicit maximum Dm were 5.7% and 0.92, respectively. Dm was higher when using an inter-electrode distance of 7 cm compared to 5 cm [P = 0.03] and when using an inter-stimulus interval of 10 s compared to 30 s [P = 0.017]. Further analysis of the inter-stimulus interval data found that, during 10 repeated stimuli, Tc became faster after the 5th measure when compared to the second measure [P<0.05]. The 30 s inter-stimulus interval produced the most stable Tc over 10 measures compared to 10 s and 5 s, respectively. Our data suggest that the stimulus amplitude producing maximum Dm of the rectus femoris is reliable. Inter-electrode distance and inter-stimulus interval can significantly influence Dm and/or Tc. Our results support the use of a 30 s inter-stimulus interval over 10 s or 5 s. Future studies should determine the influence of methodological parameters on muscle contractile properties in a range of muscles.

  17. Approach to measurement in quantum mechanics

    International Nuclear Information System (INIS)

    Sudarshan, E.C.G.; Sherry, T.N.; Gautam, S.R.

    1977-10-01

    An unconventional approach to the measurement problem in quantum mechanics is considered: the apparatus is treated as a classical system, belonging to the macro-world. In order to have a measurement, the apparatus must interact with the quantum system. As a first step, the classical apparatus is embedded into a larger quantum mechanical structure, making use of superselection rules. Projection back to the classical system is possible. The apparatus and system are then coupled such that the apparatus remains classical (principle of integrity), and unambiguous information about the values of a quantum observable is transferred to the variables of the apparatus. Finally, projection back to the classical formulation is accomplished. Further measurement of the classical apparatus can be done, causing no problems of principle. Thus interactions causing pointers to move (which are not treated here) can be added. The restrictions placed by the principle of integrity on the form of the interaction between classical and quantum systems are examined.

  18. Measuring Electrospun Nanofibre Diameter: a Novel Approach

    International Nuclear Information System (INIS)

    Ziabari, M.; Mottaghitalab, V.; Haghi, A. K.; McGovern, S. T.

    2008-01-01

    A new method based on image analysis for electrospun nanofibre diameter measurement is presented. First, the SEM micrograph of the nanofibre web obtained by the electrospinning process is converted to a binary image using a local thresholding method. In the next step, the skeleton and distance transformed image are generated. Then, the intersection points, which bring about untrue measurements, are identified and removed from the skeleton. Finally, the resulting skeleton and distance transformed image are used to determine fibre diameter. The method is evaluated by a simulated image with known characteristics generated by a μ-randomness procedure. The results indicate that this approach is successful in making fast, accurate automated measurements of electrospun fibre diameters.
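
    The core of the method, reading fibre diameter off the distance transform along the skeleton, can be illustrated on a toy image. This is a sketch, not the authors' implementation: skeletonization is skipped and, for a straight synthetic fibre, the centre column stands in for the skeleton.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Synthetic binary image: one vertical "fibre" 7 pixels wide (columns 6..12).
img = np.zeros((20, 20), dtype=bool)
img[:, 6:13] = True

# Distance transform: each foreground pixel gets its distance to the background.
dist = distance_transform_edt(img)

# Along the medial axis (here the centre column, col 9), the distance value is
# (w + 1) / 2 for an odd fibre width w, so diameter = 2 * d - 1 in pixels.
centre = dist[:, 9]
diameter = 2 * centre.max() - 1
```

    In the real method the skeleton is computed from the binary image (and pruned at intersections) before the distance values are sampled; for this straight stripe the result is the true 7-pixel width.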

  19. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients, fitted to the available repeated measurements for each subject separately, serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size.
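
    The rule of thumb in the last sentence reduces to one line of arithmetic. A minimal sketch; the function name and the choice to round up are mine:

```python
import math

def adjusted_sample_size(n_complete, dropout_rate):
    """Rule of thumb: to the dropout-free sample size, add the number of
    subjects expected to drop from a sample of that original size."""
    return math.ceil(n_complete + n_complete * dropout_rate)
```

    For example, a design powered at n = 100 per group with an expected 20% dropout would recruit 120 per group.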

  20. Superficial Ultrasound Shear Wave Speed Measurements in Soft and Hard Elasticity Phantoms: Repeatability and Reproducibility Using Two Different Ultrasound Systems

    Science.gov (United States)

    Dillman, Jonathan R.; Chen, Shigao; Davenport, Matthew S.; Zhao, Heng; Urban, Matthew W.; Song, Pengfei; Watcharotone, Kuanwong; Carson, Paul L.

    2014-01-01

    Background There is a paucity of data available regarding the repeatability and reproducibility of superficial shear wave speed (SWS) measurements at imaging depths relevant to the pediatric population. Purpose To assess the repeatability and reproducibility of superficial shear wave speed (SWS) measurements acquired from elasticity phantoms at varying imaging depths using three different imaging methods, two different ultrasound systems, and multiple operators. Methods and Materials Soft and hard elasticity phantoms manufactured by Computerized Imaging Reference Systems, Inc. (Norfolk, VA) were utilized for our investigation. Institution #1 used an Acuson S3000 ultrasound system (Siemens Medical Solutions USA, Inc.) and three different shear wave imaging method/transducer combinations, while institution #2 used an Aixplorer ultrasound system (Supersonic Imagine) and two different transducers. Ten stiffness measurements were acquired from each phantom at three depths (1.0, 2.5, and 4.0 cm) by four operators at each institution. Student's t-test was used to compare SWS measurements between imaging techniques, while SWS measurement agreement was assessed with two-way random effects single measure intra-class correlation coefficients and coefficients of variation. Mixed model regression analysis determined the effect of predictor variables on SWS measurements. Results For the soft phantom, the average of mean SWS measurements across the various imaging methods and depths was 0.84 ± 0.04 m/s (mean ± standard deviation) for the Acuson S3000 system and 0.90 ± 0.02 m/s for the Aixplorer system (p=0.003). For the hard phantom, the average of mean SWS measurements across the various imaging methods and depths was 2.14 ± 0.08 m/s for the Acuson S3000 system and 2.07 ± 0.03 m/s for the Aixplorer system (p>0.05). The coefficients of variation were low (0.5-6.8%), and inter-operator agreement was near-perfect (ICCs ≥0.99). Shear wave imaging method and imaging depth significantly affected measured SWS.

  1. Superficial ultrasound shear wave speed measurements in soft and hard elasticity phantoms: repeatability and reproducibility using two ultrasound systems.

    Science.gov (United States)

    Dillman, Jonathan R; Chen, Shigao; Davenport, Matthew S; Zhao, Heng; Urban, Matthew W; Song, Pengfei; Watcharotone, Kuanwong; Carson, Paul L

    2015-03-01

    There is a paucity of data available regarding the repeatability and reproducibility of superficial shear wave speed (SWS) measurements at imaging depths relevant to the pediatric population. To assess the repeatability and reproducibility of superficial shear wave speed measurements acquired from elasticity phantoms at varying imaging depths using three imaging methods, two US systems and multiple operators. Soft and hard elasticity phantoms manufactured by Computerized Imaging Reference Systems Inc. (Norfolk, VA) were utilized for our investigation. Institution No. 1 used an Acuson S3000 US system (Siemens Medical Solutions USA, Malvern, PA) and three shear wave imaging method/transducer combinations, while institution No. 2 used an Aixplorer US system (SuperSonic Imagine, Bothell, WA) and two different transducers. Ten stiffness measurements were acquired from each phantom at three depths (1.0 cm, 2.5 cm and 4.0 cm) by four operators at each institution. Student's t-test was used to compare SWS measurements between imaging techniques, while SWS measurement agreement was assessed with two-way random effects single-measure intra-class correlation coefficients (ICCs) and coefficients of variation. Mixed model regression analysis determined the effect of predictor variables on SWS measurements. For the soft phantom, the average of mean SWS measurements across the various imaging methods and depths was 0.84 ± 0.04 m/s (mean ± standard deviation) for the Acuson S3000 system and 0.90 ± 0.02 m/s for the Aixplorer system (P = 0.003). For the hard phantom, the average of mean SWS measurements across the various imaging methods and depths was 2.14 ± 0.08 m/s for the Acuson S3000 system and 2.07 ± 0.03 m/s for the Aixplorer system (P > 0.05). The coefficients of variation were low (0.5-6.8%), and interoperator agreement was near-perfect (ICCs ≥ 0.99). Shear wave imaging method and imaging depth significantly affected measured SWS.
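
    The agreement statistics used in the two records above (coefficient of variation, and a two-way random effects, single-measure intra-class correlation) are straightforward to compute. A hedged sketch, not the authors' code; the data layout (targets in rows, operators in columns) and function names are my own:

```python
import numpy as np

def icc_2_1(x):
    """Two-way random effects, absolute-agreement, single-measure ICC(2,1).
    x has shape (n targets, k operators)."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between-target mean square
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between-operator mean square
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))             # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def cv_percent(measurements):
    """Coefficient of variation as a percentage (sample SD / mean)."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()
```

    For the phantom study, each row of `x` would be one phantom/depth combination and each column one operator; identical columns give an ICC of exactly 1.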

  2. New approach to intracardiac hemodynamic measurements in small animals

    DEFF Research Database (Denmark)

    Eskesen, Kristian; Olsen, Niels T; Dimaano, Veronica L

    2012-01-01

    Invasive measurements of intracardiac hemodynamics in animal models have allowed important advances in the understanding of cardiac disease. Currently they are performed either through a carotid arteriotomy or via a thoracotomy and apical insertion. Both of these techniques have disadvantages and are not conducive to repeated measurements. Therefore, the purpose of this study was to develop a new technique for measuring intracardiac hemodynamics.

  3. Repeated serum creatinine measurement in primary care: Not all patients have chronic renal failure.

    Science.gov (United States)

    Gentille Lorente, Delicia; Gentille Lorente, Jorge; Salvadó Usach, Teresa

    2015-01-01

    To assess the prevalence of kidney failure in patients from a primary care centre in a basic healthcare district with laboratory availability allowing serum creatinine measurements. An observational descriptive cross-sectional study in a basic healthcare district serving 23,807 people aged ≥ 18 years. The prevalence of kidney failure among the 17,240 patients having at least one laboratory measurement available was 8.5% (mean age 77.6 ± 12.05 years). In 33.2% of such patients an occult kidney failure was found (98.8% were women). The prevalence of chronic kidney failure among the 10,011 patients having at least 2 laboratory measurements available (≥ 3 months apart) was 5.5%, with a mean age of 80.1 ± 10.0 years (the most severely affected patients were those aged 75 to 84); 59.7% were men and 76.3% of cases were in stage 3. An occult kidney failure was found in 5.3% of patients, women being 86.2% of them (a glomerular filtration rate < 60 ml/min was estimated for plasma creatinine levels of 0.9 mg/dl or higher). Comparison of the present findings to those previously reported demonstrates the need for further studies on the prevalence of overall (chronic and acute) kidney failure in Spain in order to estimate the real scope of the disease. Primary care physicians play a critical role in disease detection, therapy, control and recording (in medical records). The MDRD equation is useful and practical for estimating glomerular filtration rate.
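
    The abstract's closing point about the MDRD equation can be made concrete. The sketch below implements the commonly used abbreviated 4-variable MDRD formula (the 175 coefficient applies to IDMS-traceable creatinine assays; 186 is used for non-standardised assays). This is general background, not code from the study, and the function name is mine:

```python
def egfr_mdrd(scr_mg_dl, age_years, female=False, black=False):
    """Estimated GFR (mL/min/1.73 m^2) via the abbreviated 4-variable MDRD
    equation, assuming IDMS-traceable serum creatinine in mg/dL."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr
```

    Values below 60 mL/min/1.73 m^2 sustained over at least 3 months correspond to the chronic kidney failure threshold used in the study.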

  4. Prognostic value of repeated serum CA 125 measurements in first trimester pregnancy.

    Science.gov (United States)

    Schmidt, T; Rein, D T; Foth, D; Eibach, H W; Kurbacher, C M; Mallmann, P; Römer, T

    2001-08-01

    To assess the diagnostic value of maternal CA 125 in patients with symptomatic first trimester pregnancy, and to evaluate the prognostic significance of CA 125 versus beta-hCG in early pregnancies with intact fetal heartbeat, complicated by vaginal bleeding. Two prospective open-label studies with longitudinal follow-up in the second trial, at the Academic Department of Obstetrics and Gynecology, University of Cologne. Study 1: 168 patients presenting between gestational weeks 6 and 12 with: extrauterine pregnancy, 29; missed abortion, 50; incomplete spontaneous abortion, 38; imminent abortion, 33; and normal pregnancy (no history of endometriosis or ovarian mass), 18. Study 2: Fifty consecutive patients with vaginal bleeding during gestational weeks 6-12, all of whom had a demonstrable fetal heartbeat. Eighteen patients finally aborted whereas the remainder had normally continuing pregnancy until term. Study 1: Single serum determinations of CA 125 and beta-hCG were correlated with the different disorders observed. Study 2: Two sequential measurements of serum CA 125 and beta-hCG performed within a 5-7 day interval were related to the outcome of pregnancy as indicated by changes in the ultrasound presentation, miscarriage, future hospitalization, or delivery. Study 1: Patients with vaginal bleeding generally had higher median CA 125 values (38 IU/ml; range 1.3-540) compared to non-bleeding patients (17.8 IU/ml; range 1.0-157). No statistically significant differences occurred in median serum CA 125 levels between symptomatic and normal pregnancies: normal pregnancy, 25.5 IU/ml (range 3.2-97); ectopic pregnancy, 26 IU/ml (range 1.3-157); missed abortion, 19.1 IU/ml (range 1-242); threatened abortion, 48 IU/ml (range 5.2-540); spontaneous abortion, 40 IU/ml (range 5.4-442). Study 2: Initial CA 125 levels did not differ significantly between the two groups of patients, with 27/32 non-aborters and 13/18 aborters showing concentrations below 65 IU/ml.

  5. Approaches towards airport economic performance measurement

    Directory of Open Access Journals (Sweden)

    Ivana STRYČEKOVÁ

    2011-01-01

    Full Text Available The paper aims to assess how economic benchmarking is used by airports as a means of performance measurement and comparison of major international airports in the world. The study focuses on current benchmarking practices and methods, taking into account the different factors according to which it is efficient to benchmark airport performance. The methods considered are mainly data envelopment analysis and stochastic frontier analysis; apart from these, other approaches used by airports for economic benchmarking are discussed. The main objective of this article is to evaluate the efficiency of airports and to answer some open questions involving economic benchmarking of airports.

  6. Characterization of fetal growth by repeated ultrasound measurements in the wild guinea pig (Cavia aperea).

    Science.gov (United States)

    Schumann, K; Guenther, A; Göritz, F; Jewgenow, K

    2014-08-01

    Fetal growth during pregnancy has previously been studied in the domesticated guinea pig (Cavia aperea f. porcellus) after dissecting pregnant females, but there are no studies describing fetal growth in their wild progenitor, the wild guinea pig (C. aperea). In this study, 50 pregnancies of wild guinea pig sows were investigated using modern ultrasound techniques. The two most common fetal growth parameters (biparietal diameter [BPD] and crown-rump length [CRL]) and uterine position were measured. Data revealed similar fetal growth patterns in the wild guinea pig and the domesticated guinea pig in the investigated gestation period, although they differ in reproductive milestones such as gestation length (average duration of pregnancy 68 days), average birth weight, and litter mass. In this study, pregnancy lasted on average 60.2 days with a variance of less than a day (0.96 days). The measured fetal growth parameters were strongly correlated with each other (R = 0.91) in the wild guinea pig. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. The effects of toluene plus noise on hearing thresholds: an evaluation based on repeated measurements in the German printing industry.

    Science.gov (United States)

    Schäper, Michael; Seeber, Andreas; van Thriel, Christoph

    2008-01-01

    The ototoxicity of occupational exposure to toluene plus noise was investigated in a longitudinal study in rotogravure printing, and existing findings in the literature were evaluated. The study comprised four repeated examinations over 5 years and started with 333 male workers. Lifetime weighted average exposures (LWAE) to toluene and noise were determined from individual work histories and historic recordings; recent individual exposures were measured 10 times during the study (toluene, active sampling; noise, stationary measurements). Auditory thresholds were measured with pure tone audiometry at 0.125, 0.25, 0.5, 0.75, 1, 2, 3, 4, 6, 8 and 12 kHz. Mean LWAE exposures were 45+/-17 ppm toluene plus 82+/-7 dB(A) for subjects with high toluene exposure and 10+/-7 ppm plus 82+/-4 dB(A) for subjects with low toluene exposure; mean current exposures were 26+/-20 ppm plus 81+/-4 dB(A) and 3+/-3 ppm plus 82+/-4 dB(A), respectively. Mean exposure duration was 21.3+/-6.5 years for long-exposed and 5.9+/-2.2 years for short-exposed subjects. Repeated measures analyses of variance did not reveal effects of toluene intensity, exposure duration, or interactions between toluene intensity and noise intensity. Noise intensity [79+/-3 dB(A) vs. 84+/-1 dB(A)] was significant for auditory thresholds. A case concept utilising the development of individual auditory thresholds did not reveal significant toluene effects. Logistic models including age, exposure duration, toluene in ambient air, current noise, and either hippuric acid or ortho-cresol (o-cresol) found only age to be significant for elevated odds ratios of high-frequency hearing loss. Given the absence of toluene effects, it was concluded that the threshold level for developing hearing loss as a result of occupational exposure to toluene plus noise might be above the current limit of 50 ppm toluene.

  8. Understanding how adherence goals promote adherence behaviours: a repeated measure observational study with HIV seropositive patients

    Directory of Open Access Journals (Sweden)

    Jones Gareth

    2012-08-01

    Full Text Available Abstract Background The extent to which patients follow treatments as prescribed is pivotal to treatment success. An exceptionally high level (>95%) of HIV medication adherence is required to suppress viral replication and protect the immune system, and a similarly high level (>80%) of adherence has also been suggested in order to benefit from prescribed exercise programmes. However, in clinical practice, adherence to both often falls below the desirable level. This project aims to investigate a wide range of psychological and personality factors that may lead to adherence/non-adherence to medical treatment and exercise programmes. Methods HIV positive patients who are referred to the physiotherapist-led 10-week exercise programme as part of standard care are continuously recruited. Data on social cognitive variables (attitude, intention, subjective norms, self-efficacy, and outcome beliefs about the goal and specific behaviours), selected personality factors, perceived quality of life, physical activity, self-reported adherence and physical assessment are collected at baseline, at the end of the exercise programme and again 3 months later. The project incorporates objective measures of both exercise (attendance log and improvement in physical measures such as improved fitness level, weight loss, improved circumferential anthropometric measures) and medication adherence (verified by non-invasive hair analysis). Discussion The novelty of this project comes from two key aspects, complemented with objective information on exercise and medication adherence. The project assesses beliefs about both the underlying goal, such as following prescribed treatment, and about the specific behaviours, such as undertaking the exercise or taking the medication, using both implicit and explicit assessments of patients’ beliefs and attitudes. We predict that (i) the way people think about the underlying goal of their treatments explains medication and exercise

  9. Repeated measurement of nasal lavage fluid chemokines in school-age children with asthma.

    Science.gov (United States)

    Noah, Terry L; Tudor, Gail E; Ivins, Sally S; Murphy, Paula C; Peden, David B; Henderson, Frederick W

    2006-02-01

    Inflammatory processes at the mucosal surface may play a role in maintenance of asthma pathophysiology. Cross-sectional studies in asthmatic patients suggest that chemokines such as interleukin 8 (IL-8) are overproduced by respiratory epithelium. To test the hypothesis that chemokine levels are persistently elevated in the respiratory secretions of asthmatic children at a stable baseline. We measured nasal lavage fluid (NLF) levels of chemokines and other mediators at 3- to 4-month intervals in a longitudinal study of asthmatic children, with nonasthmatic siblings as controls. In a linear mixed-model analysis, both family and day of visit had significant effects on nasal mediators. Thus, data for 12 asthmatic-nonasthmatic sibling pairs who had 3 or more same-day visits were analyzed separately. For sibling pairs, median eosinophil cationic protein levels derived from serial measurements in NLF were elevated in asthmatic patients compared with nonasthmatic patients, with a near-significant tendency for elevation of total protein and eotaxin levels as well. However, no significant differences were found for IL-8 or several other chemokines. Ratios of IL-13 or IL-5 to interferon-gamma released by house dust mite antigen-stimulated peripheral blood mononuclear cells, tested on a single occasion, were significantly increased for asthmatic patients. Substantial temporal and family-related variability exists in nasal inflammation in asthmatic children. Although higher levels of eosinophil cationic protein are usually present in NLF of patients with stable asthma compared with patients without asthma, chemokines other than eotaxin are not consistently increased. Eosinophil activation at the mucosal surface is a more consistent predictor of asthmatic symptoms than nonspecific elevation of epithelium-derived inflammatory chemokine levels.

  10. Magnetic Resonance Spectroscopy in Patients with Insomnia: A Repeated Measurement Study.

    Directory of Open Access Journals (Sweden)

    Kai Spiegelhalder

    Full Text Available Chronic insomnia is one of the most prevalent central nervous system disorders. It is characterized by increased arousal levels; however, the neurobiological causes and correlates of hyperarousal in insomnia remain to be further determined. In the current study, magnetic resonance spectroscopy was used in the morning and evening in a well-characterized sample of 20 primary insomnia patients (12 females; 8 males; 42.7 ± 13.4 years) and 20 healthy good sleepers (12 females; 8 males; 44.1 ± 10.6 years). The most important inhibitory and excitatory neurotransmitters of the central nervous system, γ-aminobutyric acid (GABA) and glutamate/glutamine (Glx), were assessed in the anterior cingulate cortex (ACC) and the dorsolateral prefrontal cortex (DLPFC). The primary hypothesis, a diurnal effect on GABA levels in patients with insomnia, could not be confirmed. Moreover, the current results did not support previous findings of altered GABA levels in individuals with insomnia. Exploratory analyses, however, suggested that GABA levels in the ACC may be positively associated with habitual sleep duration, and, thus, reduced GABA levels may be a trait marker of objective sleep disturbances. Moreover, there was a significant GROUP × MEASUREMENT TIME interaction effect on Glx in the DLPFC, with increasing Glx levels across the day in the patients but not in the control group. Therefore, Glx levels may reflect hyperarousal at bedtime in those with insomnia. Future confirmatory studies should include larger sample sizes to investigate brain metabolites in different subgroups of insomnia.

  11. Repeatability of swept-source optical coherence tomography retinal and choroidal thickness measurements in neovascular age-related macular degeneration

    DEFF Research Database (Denmark)

    Hanumunthadu, Daren; Ilginis, Tomas; Restori, Marie

    2017-01-01

    BACKGROUND: The aim was to determine the intrasession repeatability of swept-source optical coherence tomography (SS-OCT)-derived retinal and choroidal thickness measurements in eyes with neovascular age-related macular degeneration (nAMD). METHODS: A prospective study consisting of patients...... with active nAMD enrolled in the Distance of Choroid Study at Moorfields Eye Hospital, London. Patients underwent three 12×9 mm macular raster scans using the deep range imaging (DRI) OCT-1 SS-OCT (Topcon) device in a single imaging session. Retinal and choroidal thicknesses were calculated for the ETDRS...... macular subfields. Repeatability was calculated according to methods described by Bland and Altman. RESULTS: 39 eyes of 39 patients with nAMD were included with a mean (±SD) age of 73.9 (±7.2) years. The mean (±SD) retinal thickness of the central macular subfield was 225.7 μm (±12.4 μm...

  12. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
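
    The spline filtering described above can be sketched in pure Python: fit a natural cubic spline through equidistant sampling points of a series, treat the spline as the background, and take the remainder as the superimposed fluctuations. This is a plain single-pass spline, not the authors' repeating-spline algorithm, and the function names and test data are our own illustration:

```python
import bisect
import math

def natural_cubic_spline(t, v):
    """Natural cubic spline through the points (t[i], v[i]); returns a callable."""
    n = len(t) - 1
    h = [t[i + 1] - t[i] for i in range(n)]
    # Tridiagonal system for the second derivatives M (natural BCs: M[0] = M[n] = 0),
    # solved with the Thomas algorithm.
    a = [0.0] * (n + 1)  # sub-diagonal
    b = [1.0] * (n + 1)  # diagonal
    c = [0.0] * (n + 1)  # super-diagonal
    d = [0.0] * (n + 1)  # right-hand side
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((v[i + 1] - v[i]) / h[i] - (v[i] - v[i - 1]) / h[i - 1])
    for i in range(1, n + 1):          # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):      # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def s(x):
        i = min(n - 1, max(0, bisect.bisect_right(t, x) - 1))
        A = (t[i + 1] - x) / h[i]
        B = (x - t[i]) / h[i]
        return (A * v[i] + B * v[i + 1]
                + ((A ** 3 - A) * M[i] + (B ** 3 - B) * M[i + 1]) * h[i] ** 2 / 6.0)

    return s

# Synthetic series: slow background plus a faster superimposed fluctuation.
xs = [i * 0.02 for i in range(501)]                      # 0 .. 10
ys = [math.sin(2 * math.pi * x / 10.0)                   # background
      + 0.3 * math.sin(2 * math.pi * x / 0.7)            # fluctuation
      for x in xs]

# Equidistant spline sampling points, spaced wider than the fluctuation scale.
step = 50                                                # points 1.0 apart
spline = natural_cubic_spline(xs[::step], ys[::step])
background = [spline(x) for x in xs]
residuals = [y - bg for y, bg in zip(ys, background)]    # carries the fluctuations
```

Whether the spline or the residuals are analysed further then depends, as the abstract notes, on the research question; the knot spacing plays the role of the filter cutoff.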

  13. Breath acidification in adolescent runners exposed to atmospheric pollution: A prospective, repeated measures observational study

    Directory of Open Access Journals (Sweden)

    Van Sickle David

    2008-03-01

    Full Text Available Abstract Background Vigorous outdoor exercise during an episode of air pollution might cause airway inflammation. The purpose of this study was to examine the effects of vigorous outdoor exercise during peak smog season on breath pH, a biomarker of airway inflammation, in adolescent athletes. Methods We measured breath pH both pre- and post-exercise on ten days during peak smog season in 16 high school athletes engaged in daily long-distance running in a downwind suburb of Atlanta. The association of post-exercise breath pH with ambient ozone and particulate matter concentrations was tested with linear regression. Results We collected 144 pre-exercise and 146 post-exercise breath samples from 16 runners (mean age 14.9 years, 56% male). Median pre-exercise breath pH was 7.58 (interquartile range: 6.90 to 7.86) and did not change significantly after exercise. We observed no significant association between ambient ozone or particulate matter and post-exercise breath pH. However, both pre- and post-exercise breath pH were strikingly low in these athletes when compared to a control sample of 14 relatively sedentary healthy adults and to published values of breath pH in healthy subjects. Conclusion Although we did not observe an acute effect of air pollution exposure during exercise on breath pH, breath pH was surprisingly low in this sample of otherwise healthy long-distance runners. We speculate that repetitive vigorous exercise may induce airway acidification.

  14. Use of count-based image reconstruction to evaluate the variability and repeatability of measured standardised uptake values.

    Directory of Open Access Journals (Sweden)

    Tomohiro Kaneta

    Full Text Available Standardized uptake values (SUVs) are the most widely used quantitative imaging biomarkers in PET. It is important to evaluate the variability and repeatability of measured SUVs. Phantom studies seem to be essential for this purpose; however, repetitive phantom scanning is not recommended due to the decay of radioactivity. In this study, we performed count-based image reconstruction to avoid the influence of decay, using two different PET/CT scanners. By adjusting the ratio of 18F-fluorodeoxyglucose solution to tap water, a NEMA IEC body phantom was set for SUVs of 4.0 inside six hot spheres. The PET data were obtained using two scanners (Aquiduo and Celesteion; Toshiba Medical Systems, Tochigi, Japan). We set the start time for image reconstruction when the total radioactivity in the phantom was 2.53 kBq/cc, and employed the counts of the first 2-min acquisition as the standard. To maintain the number of counts for each image, we set the acquisition time for image reconstruction depending on the decay of radioactivity. We obtained 50 images, and calculated the SUVmax and SUVpeak of all six spheres in each image. The average values of the SUVmax were used to calculate the recovery coefficients to compare those measured by the two different scanners. Bland-Altman analyses of the SUVs measured by the two scanners were also performed. The SUVs measured using the two scanners exhibited a 10-30% difference, and the standard deviation (SD) of the measured SUVs was between 0.1 and 0.2. The Celesteion always exhibited higher values than the Aquiduo. The smaller spheres exhibited larger SDs, and the SUVpeak had a smaller SD than the SUVmax. The Bland-Altman analyses showed poor agreement between the SUVs measured by the two scanners. The recovery coefficient curves obtained from the two scanners were considerably different. The Celesteion exhibited higher recovery coefficients than the Aquiduo, especially at approximately 20 mm diameter. 
Additionally, the curves
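
    The Bland-Altman analysis used above reduces to a bias (the mean of the paired differences) and 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch; the paired SUVs below are invented illustrations, not data from the study:

```python
def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5 if n > 1 else 0.0
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical SUVmax values for the same six spheres on two scanners.
scanner_a = [3.6, 3.9, 4.1, 4.4, 4.0, 3.8]
scanner_b = [4.0, 4.3, 4.4, 4.9, 4.5, 4.1]
bias, lo, hi = bland_altman(scanner_a, scanner_b)
```

If the limits of agreement are wider than a clinically acceptable difference, the two scanners' SUVs cannot be used interchangeably, which is the "poor agreement" conclusion the abstract reports.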

  15. New approach to energy loss measurements

    CERN Document Server

    Trzaska, W H; Alanko, T; Mutterer, M; Raeisaenen, J; Tjurin, G; Wojdyr, M

    2002-01-01

    A new approach to energy loss measurements is proposed. In the same experiment, the electronic stopping force (power) in gold, nickel, carbon, polycarbonate and Havar for ⁴⁰Ar, ²⁸Si, ¹⁶O, ⁴He and ¹H ions in the energy range 0.12-11 MeV/u has been measured. In this paper we give the full results for gold, nickel, and carbon and for ⁴⁰Ar, ¹⁶O, ⁴He and ¹H ions. Good agreement of the measured stopping force values for light ions with literature data is interpreted as a positive test of the experimental technique. The same technique used with heavy ions yields agreement with the published data only for energies above 1 MeV/u. At lower energies we observe a progressively increasing discrepancy. This discrepancy is removed completely as soon as we neglect pulse height defect compensation. This observation makes us believe that the majority of the published results, as well as semi-empirical calculations based on them (like the popular SRIM), may be in error...

  16. Determining Criteria to Predict Repeatability of Performance in Older Adults: Using Coefficients of Variation for Strength and Functional Measures.

    Science.gov (United States)

    Raj, Isaac Selva; Bird, Stephen R; Westfold, Ben A; Shield, Anthony J

    2017-01-01

    Reliable measures of muscle strength and functional capacity in older adults are essential. The aim of this study was to determine whether coefficients of variation (CVs) of individuals obtained at the first session can infer repeatability of performance in a subsequent session. Forty-eight healthy older adults (mean age 68.6 ± 6.1 years; age range 60-80 years) completed two assessment sessions, and on each occasion undertook: dynamometry for isometric and isokinetic quadriceps strength, 6 meter fast walk (6MFWT), timed up and go (TUG), stair climb and descent, and vertical jump. Significant linear relationships were observed between CVs in session 1 and the percentage difference between sessions 1 and 2 for torque at 60, 120, 240 and 360°/s, 6MFWT, TUG, stair climb, and stair descent. The results of this study could be used to establish criteria for determining an acceptably reliable performance in strength and functional tests.
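
    The within-session coefficient of variation used in this study is simply the standard deviation of an individual's repeated trials expressed as a percentage of their mean. A minimal sketch; the function name and trial values are our own illustration, not study data:

```python
from statistics import mean, stdev

def coefficient_of_variation(trials):
    """CV (%) of one participant's repeated trials within a session."""
    return 100.0 * stdev(trials) / mean(trials)

# Hypothetical 6 m fast walk times (s) for one participant, three trials.
session1_trials = [4.10, 4.25, 4.05]
cv = coefficient_of_variation(session1_trials)
```

Regressing this session-1 CV against the percentage change between sessions, as the abstract describes, then yields a criterion CV below which a performance can be considered acceptably reliable.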

  17. Repeat, Low Altitude Measurements of Vegetation Status and Biomass Using Manned Aerial and UAS Imagery in a Piñon-Juniper Woodland

    Science.gov (United States)

    Krofcheck, D. J.; Lippitt, C.; Loerch, A.; Litvak, M. E.

    2015-12-01

    Measuring the above-ground biomass of vegetation is a critical component of any ecological monitoring campaign. Traditionally, vegetation biomass was measured with an allometric approach. However, this approach is time-consuming, labor-intensive, and extremely expensive to conduct over large scales, and consequently cost-prohibitive at the landscape scale. Furthermore, in semi-arid ecosystems characterized by vegetation with inconsistent growth morphologies (e.g., piñon-juniper woodlands), even ground-based conventional allometric approaches are often challenging to execute consistently across individuals and through time, increasing the difficulty of the required measurements and consequently reducing the accuracy of the resulting products. To constrain the uncertainty associated with these campaigns, and to expand the extent of our measurement capability, we made repeat measurements of vegetation biomass in a semi-arid piñon-juniper woodland using structure-from-motion (SfM) techniques. We used high-spatial-resolution overlapping aerial images and high-accuracy ground control points, collected from both manned aircraft and multi-rotor UAS platforms, to generate a digital surface model (DSM) for our experimental region. We extracted high-precision canopy volumes from the DSM and compared these to the vegetation allometric data to generate high-precision canopy volume models. We used these models to predict the drivers of allometric equations for Pinus edulis and Juniperus monosperma (canopy height, diameter at breast height, and root collar diameter). Using this approach, we successfully accounted for the carbon stocks in standing live and standing dead vegetation across a 9 ha region, which contained 12.6 Mg/ha of standing dead biomass, with good agreement with our field plots. Here we present the initial results from an object-oriented workflow which aims to automate the biomass estimation process of tree crown delineation and volume calculation, and partition

  18. Brachial artery responses to ambient pollution, temperature, and humidity in people with type 2 diabetes: a repeated-measures study.

    Science.gov (United States)

    Zanobetti, Antonella; Luttmann-Gibson, Heike; Horton, Edward S; Cohen, Allison; Coull, Brent A; Hoffmann, Barbara; Schwartz, Joel D; Mittleman, Murray A; Li, Yongsheng; Stone, Peter H; de Souza, Celine; Lamparello, Brooke; Koutrakis, Petros; Gold, Diane R

    2014-03-01

    Extreme weather and air pollution are associated with increased cardiovascular risk in people with diabetes. In a population with diabetes, we conducted a novel assessment of vascular brachial artery responses both to ambient pollution and to weather (temperature and water vapor pressure, a measure of humidity). Sixty-four 49- to 85-year-old Boston residents with type 2 diabetes completed up to five study visits (279 repeated measures). Brachial artery diameter (BAD) was measured by ultrasound before and after brachial artery occlusion [i.e., flow-mediated dilation (FMD)] and before and after nitroglycerin-mediated dilation (NMD). Ambient concentrations of fine particulate mass (PM2.5), black carbon (BC), organic carbon (OC), elemental carbon, particle number, and sulfate were measured at our monitoring site; ambient concentrations of carbon monoxide, nitrogen dioxide, and ozone were obtained from state monitors. Particle exposure in the home and during each trip to the clinic (home/trip exposure) was measured continuously and as a 5-day integrated sample. We used linear models with fixed effects for participants, adjusting for date, season, temperature, and water vapor pressure on the day of each visit, to estimate associations between our outcomes and interquartile range increases in exposure. Baseline BAD was negatively associated with particle pollution, including home/trip-integrated BC (-0.02 mm; 95% CI: -0.04, -0.003, for a 0.28 μg/m3 increase in BC), OC (-0.08 mm; 95% CI: -0.14, -0.03, for a 1.61 μg/m3 increase) as well as PM2.5, 5-day average ambient PM2.5, and BC. BAD was positively associated with ambient temperature and water vapor pressure. However, exposures were not consistently associated with FMD or NMD. Brachial artery diameter, a predictor of cardiovascular risk, decreased in association with particle pollution and increased in association with ambient temperature in our study population of adults with type 2 diabetes.

  19. Linear systems a measurement based approach

    CERN Document Server

    Bhattacharyya, S P; Mohsenizadeh, D N

    2014-01-01

    This brief presents recent results obtained on the analysis, synthesis and design of systems described by linear equations. It is well known that linear equations arise in most branches of science and engineering as well as social, biological and economic systems. The novelty of this approach is that no models of the system are assumed to be available, nor are they required. Instead, a few measurements made on the system can be processed strategically to directly extract design values that meet specifications without constructing a model of the system, implicitly or explicitly. These new concepts are illustrated by applying them to linear DC and AC circuits, mechanical, civil and hydraulic systems, signal flow block diagrams and control systems. These applications are preliminary and suggest many open problems. The results presented in this brief are the latest effort in this direction and the authors hope these will lead to attractive alternatives to model-based design of engineering and other systems.
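
    The idea of extracting design values directly from a few measurements can be illustrated for the simplest possible case: an output that depends affinely on a single design parameter. Two measurements then determine the response completely, and the parameter meeting a specification follows without ever modelling the system's internals. The book treats far more general linear systems; this toy example and its function names are our own:

```python
def fit_affine(p1, y1, p2, y2):
    """Recover y(p) = a + b*p from two measurements; no internal model needed."""
    b = (y2 - y1) / (p2 - p1)
    a = y1 - b * p1
    return a, b

def design_value(a, b, y_target):
    """Parameter value achieving the specified output y_target."""
    return (y_target - a) / b

# Hypothetical: system output measured at two settings of a design parameter.
a, b = fit_affine(1.0, 5.0, 3.0, 11.0)   # recovers y = 2 + 3p
p_star = design_value(a, b, 20.0)        # parameter achieving y = 20
```

For circuit quantities that are bilinear rather than affine in a component value, the same strategy applies with one extra measurement to fix the additional coefficient.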

  20. Inhibited quantum processes through repeated measurements: An approach to the quantum Zeno effect?

    International Nuclear Information System (INIS)

    Crespo, G.; Proto, A.N.; Cerdeira, H.A.

    1992-04-01

    The dynamics of a finite set of relevant observables, associated to a Hamiltonian of a three level system is analyzed in connection with the quantum Zeno effect. Since we use the Hamiltonian that completely describes the physical situation related to the experiment under study (W.M. Itano et al, Phys. Rev. A41, 2295 (1990)), no reduction or collapse of the wave function is required to describe the quantum Zeno effect. (author). 18 refs, 18 figs

  1. Accuracy and repeatability of quantitative fluoroscopy for the measurement of sagittal plane translation and finite centre of rotation in the lumbar spine.

    Science.gov (United States)

    Breen, Alexander; Breen, Alan

    2016-07-01

    Quantitative fluoroscopy (QF) was developed to measure intervertebral mechanics in vivo and has been found to have high repeatability and accuracy for the measurement of intervertebral rotations. However, sagittal plane translation and finite centre of rotation (FCR) are potential measures of stability but have not yet been fully validated for current QF. This study investigated the repeatability and accuracy of QF for measuring these variables. Repeatability was assessed from L2-S1 in 20 human volunteers. Accuracy was investigated using 10 consecutive measurements from each of two pairs of linked and instrumented dry human vertebrae as reference; one which tilted without translation and one which translated without tilt. The results found intra- and inter-observer repeatability for translation to be 1.1mm or less (SEM) with fair to substantial reliability (ICC 0.533-0.998). Intra-observer repeatability of FCR location for inter-vertebral rotations of 5° and above ranged from 1.5mm to 1.8mm (SEM) with moderate to substantial reliability (ICC 0.626-0.988). Inter-observer repeatability for FCR ranged from 1.2mm to 5.7mm, also with moderate to substantial reliability (ICC 0.621-0.878). Reliability was substantial (ICC>0.81) for 10/16 measures for translation and 5/8 for FCR location. Accuracy for translation was 0.1mm (fixed centre) and 2.2mm (moveable centre), with an FCR error of 0.3mm(x) and 0.4mm(y) (fixed centre). This technology was found to have a high level of accuracy and with a few exceptions, moderate to substantial repeatability for the measurement of translation and FCR from fluoroscopic motion sequences. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  2. Validation and Reliability of a Smartphone Application for the International Prostate Symptom Score Questionnaire: A Randomized Repeated Measures Crossover Study

    Science.gov (United States)

    Shim, Sung Ryul; Sun, Hwa Yeon; Ko, Young Myoung; Chun, Dong-Il; Yang, Won Jae

    2014-01-01

    Background Smartphone-based assessment may be a useful diagnostic and monitoring tool for patients. There have been many attempts to create a smartphone diagnostic tool for clinical use in various medical fields, but few have demonstrated scientific validity. Objective The purpose of this study was to develop a smartphone application of the International Prostate Symptom Score (IPSS) and to demonstrate its validity and reliability. Methods From June 2012 to May 2013, a total of 1581 male participants (≥40 years old), with or without lower urinary tract symptoms (LUTS), visited our urology clinic via the health improvement center at Soonchunhyang University Hospital (Republic of Korea) and were enrolled in this study. A randomized repeated measures crossover design was employed using a smartphone application of the IPSS and the conventional paper form of the IPSS. A paired t test under a non-inferiority hypothesis was conducted. For the reliability test, the intraclass correlation coefficient (ICC) was measured. Results The total score of the IPSS (P=.289) and each item of the IPSS (P=.157-1.000) showed no differences between the paper version and the smartphone version of the IPSS. The mild, moderate, and severe LUTS groups showed no differences between the two versions of the IPSS. A significant correlation was noted in the total group (ICC=.935); a limitation was that only patients with smartphones could participate. Conclusions The validity and reliability of the smartphone application version were comparable to the conventional paper version of the IPSS. The smartphone application of the IPSS could be an effective method for measuring lower urinary tract symptoms. PMID:24513507
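
    An intraclass correlation of the kind reported here can be computed from a one-way ANOVA decomposition. Below is a minimal sketch of the one-way random-effects form ICC(1,1) for n subjects each measured k times; the data are invented, and the abstract does not state which ICC form the authors used:

```python
def icc_1_1(ratings):
    """One-way random-effects ICC(1,1); ratings is a list of per-subject lists."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical paper vs. smartphone IPSS total scores, one pair per subject.
scores = [[8, 9], [15, 14], [22, 23], [5, 5], [30, 29]]
icc = icc_1_1(scores)
```

An ICC near 1 means between-subject variance dominates the measurement-to-measurement variance, i.e. the two administration modes rank and score patients consistently.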

  3. An open source, 3D printed preclinical MRI phantom for repeated measures of contrast agents and reference standards.

    Science.gov (United States)

    Cox, B L; Ludwig, K D; Adamson, E B; Eliceiri, K W; Fain, S B

    2018-03-01

    In medical imaging, clinicians, researchers and technicians have begun to use 3D printing to create specialized phantoms to replace commercial ones due to their customizable and iterative nature. Presented here is the design of a 3D printed open source, reusable magnetic resonance imaging (MRI) phantom, capable of flood-filling, with removable samples for measurements of contrast agent solutions and reference standards, and for use in evaluating acquisition techniques and image reconstruction performance. The phantom was designed using SolidWorks, a computer-aided design software package. The phantom consists of custom and off-the-shelf parts and incorporates an air hole and Luer Lock system to aid in flood filling, a marker for orientation of samples in the filled mode, and bolt and tube holes for assembly. The cost of construction for all materials is under $90. All design files are open-source and available for download. To demonstrate utility, B₀ field mapping was performed using a series of gadolinium concentrations in both the unfilled and flood-filled modes. An excellent linear agreement (R² > 0.998) was observed between measured relaxation rates (R₁/R₂) and gadolinium concentration. The phantom provides a reliable setup to test data acquisition and reconstruction methods and verify physical alignment in alternative nuclei MRI techniques (e.g. carbon-13 and fluorine-19 MRI). A cost-effective, open-source MRI phantom design for repeated quantitative measurement of contrast agents and reference standards in preclinical research is presented. Specifically, the work is an example of how the emerging technology of 3D printing improves flexibility and access for custom phantom design.

  4. Predictive value of repeated measurements of luteal progesterone and estradiol levels in patients with intrauterine insemination and controlled ovarian stimulation.

    Science.gov (United States)

    Bakas, Panagiotis; Simopoulou, Maria; Giner, Maria; Drakakis, Petros; Panagopoulos, Perikles; Vlahos, Nikolaos

    2017-10-01

    The objective of this study was to assess whether the change between repeated measurements of estradiol and progesterone during the luteal phase predicts the outcome of intrauterine insemination. Prospective study. Reproductive clinic. 126 patients with infertility. Patients underwent controlled ovarian stimulation with recombinant FSH (50-150 IU/d). On the day of IUI, patients were given oral natural micronized progesterone in a dose of 100 mg three times daily. The area under the receiver operating characteristic (ROC) curve in predicting clinical pregnancy for the % change of estradiol level between days 6 and 10 was 0.892 (95% CI: 0.82-0.94). A cutoff value of change > -29.5% had a sensitivity of 85.7% and a specificity of 90.2%. The corresponding ROC curve area for the % change of progesterone level was 0.839 (95% CI: 0.76-0.90). A cutoff value of change > -33% had a sensitivity of 85% and a specificity of 75%. The % change of estradiol and progesterone between days 6 and 10 predicted pregnancy after IUI with COS with an accuracy of 89.2% and 83.4%, respectively. Adding the % change of progesterone to the % change of estradiol does not improve the predictive ability of % estradiol and should not be used.
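The ROC area and cutoff statistics reported above can be computed without any curve fitting: the AUC equals the fraction of (positive, negative) pairs ranked correctly (the Mann-Whitney interpretation), and sensitivity/specificity follow from a single threshold. A sketch, using hypothetical percent-change values rather than the study's data:

```python
# ROC AUC via pairwise comparison (equivalent to the Mann-Whitney U statistic),
# plus sensitivity/specificity at one cutoff. Values are hypothetical.

def roc_auc(pos, neg):
    """Fraction of (positive, negative) pairs correctly ordered; ties count 1/2."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(pos, neg, cutoff):
    """Test is 'positive' when value > cutoff, as in the study's decision rule."""
    sens = sum(p > cutoff for p in pos) / len(pos)
    spec = sum(n <= cutoff for n in neg) / len(neg)
    return sens, spec

pregnant = [-10.0, -15.0, -20.0, -25.0]      # % change, clinical pregnancy
not_pregnant = [-35.0, -40.0, -22.0, -50.0]  # % change, no pregnancy
auc = roc_auc(pregnant, not_pregnant)
sens, spec = sens_spec(pregnant, not_pregnant, cutoff=-29.5)
```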

  5. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    Science.gov (United States)

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

    In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMMs). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalize the modified t-test to repeated measures data by using LMMs to compare the performance difference between two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested in Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate when using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample.
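The modified t-test referenced above (Crawford & Howell) treats the case as a sample of one drawn from the control population: t = (x − mean)/(sd·√(1 + 1/n)) with n − 1 degrees of freedom, which is exactly the t for a dummy-coded case indicator in a regression. A minimal sketch with made-up scores:

```python
# Crawford & Howell's modified t-test comparing a single case to a small
# control sample. The scores below are invented for illustration.
import math
import statistics

def modified_t(case_score, controls):
    n = len(controls)
    m = statistics.mean(controls)
    sd = statistics.stdev(controls)               # sample SD (n - 1 denominator)
    t = (case_score - m) / (sd * math.sqrt(1 + 1 / n))
    return t, n - 1                               # compare |t| to t critical, df = n - 1

t, df = modified_t(70, [85, 88, 90, 92, 95])      # case score vs. 5 controls
```

The √(1 + 1/n) inflation is what distinguishes this from a naive z-score against the control mean: it accounts for the uncertainty in estimating that mean from a small sample.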

  6. Can father inclusive practice reduce paternal postnatal anxiety? A repeated measures cohort study using the hospital anxiety and depression scale

    Directory of Open Access Journals (Sweden)

    Tohotoa Jenny

    2012-07-01

    Background: Perinatal research on anxiety and depression has primarily focused on mothers. We have limited knowledge of fathers' anxiety during the perinatal period, yet there is evidence that a person's parenting capacity can be compromised by anxiety and depression. The purpose of this paper is to identify the impact of a father-inclusive intervention on perinatal anxiety and depression. The prime focus of the intervention was to provide education and support to fathers of breastfeeding partners with the aim of increasing both initiation and duration of breastfeeding. Methods: A repeated measures cohort study was conducted during an RCT that was implemented across eight public maternity hospitals in Perth, Western Australia between May 2008 and June 2009. A baseline questionnaire which included the Hospital Anxiety and Depression Scale (HADS) was administered to all participants on the first night of their hospital-based antenatal education program and was repeated at six weeks postnatal. SPSS version 17 was used for reporting descriptive results. Results: The mean anxiety levels at baseline for the fathers in the intervention group (n=289) and control group (n=244) were 4.58 and 4.22, respectively. At 6 weeks postnatal (matched pairs only), the intervention and control group means were 3.93 and 3.79. More intervention group fathers self-rated less anxiety compared to the fathers in the control group from baseline to post-test (p=0.048). Depression scores for intervention fathers at baseline (mean=1.09) and at six weeks (mean=1.09) were very similar to those of fathers in the control group at baseline (mean=1.11) and at six weeks (mean=1.07), with no significant changes. Conclusions: Both intervention and control group fathers experienced some anxiety prior to the birth of their baby, but this was rapidly reduced at six weeks. Paternal anxiety is common to new fathers and providing them with information and strategies for problem-solving can increase their

  7. Deployment Repeatability

    Science.gov (United States)

    2016-04-01

    evaluating the deployment repeatability builds upon the testing or analysis of deployment kinematics (Chapter 6) and adds repetition. Introduction...material yield or failure during a test. For the purposes of this chapter, zero shift will refer to permanent changes in the structure, while reversible ...the content of other chapters in this book: Gravity Compensation (Chapter 4) and Deployment Kinematics and Dynamics (Chapter 6). Repeating the

  8. An approach for evaluating the repeatability of rapid wetland assessment methods: The effects of training and experience

    Science.gov (United States)

    We sampled 92 wetlands from four different basins in the United States to quantify observer repeatability in rapid wetland condition assessment using the Delaware Rapid Assessment Protocol (DERAP). In the Inland Bays basin of Delaware, 58 wetland sites were sampled by multiple ob...

  9. Repeated tumor pO2 measurements by multi-site EPR oximetry as a prognostic marker for enhanced therapeutic efficacy of fractionated radiotherapy

    International Nuclear Information System (INIS)

    Hou Huagang; Lariviere, Jean P.; Demidenko, Eugene; Gladstone, David; Swartz, Harold; Khan, Nadeem

    2009-01-01

    Purpose: To investigate the temporal effects of single or fractionated radiotherapy on subcutaneous RIF-1 tumor pO2 and to determine the therapeutic outcomes when the timing of fractionations is guided by tumor pO2. Methods: The time-course of the tumor pO2 changes was followed by multi-site electron paramagnetic resonance (EPR) oximetry. The tumors were treated with single 10 Gy, single 20 Gy, and 10 Gy × 2 doses, and the tumor pO2 was measured repeatedly for six consecutive days. In the 10 Gy × 2 group, the second dose of 10 Gy was delivered at a time when the tumors were either relatively oxygenated or hypoxic. The changes in tumor volumes were followed for nine days to determine the therapeutic outcomes. Results: A significant increase in tumor pO2 was observed at 24 h post 10 Gy, while 20 Gy resulted in a significant increase in tumor pO2 at 72-120 h post irradiation. The tumors irradiated with a second dose of 10 Gy at 24 h, when the tumors were oxygenated, had a significant increase in tumor doubling times (DTs), as compared to tumors treated at 48 h when they were hypoxic (p < 0.05). These results suggest measuring tumor pO2 repeatedly during fractionated schemes to optimize radiotherapeutic outcome. This technique could also be used to identify responsive and non-responsive tumors, which will facilitate the design of other therapeutic approaches for non-responsive tumors at early time points during the course of therapy.
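The tumor doubling time used above as the therapeutic endpoint is conventionally derived by assuming exponential growth between two volume measurements: DT = Δt·ln2 / ln(V2/V1). A sketch with illustrative volumes (not study data):

```python
# Tumor doubling time (DT) under an exponential growth assumption:
# DT = dt * ln(2) / ln(V2 / V1). Volumes below are illustrative only.
import math

def doubling_time(v1, v2, dt_days):
    """DT in days from volumes v1 (earlier) and v2 (later) measured dt_days apart."""
    return dt_days * math.log(2) / math.log(v2 / v1)

dt = doubling_time(100.0, 200.0, 3.0)  # volume doubled over 3 days -> DT = 3 days
```

A longer DT after treatment, as reported for the oxygenation-timed second fraction, indicates slower regrowth and hence a better therapeutic outcome.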

  10. Boolean Approach to Dichotomic Quantum Measurement Theories

    Energy Technology Data Exchange (ETDEWEB)

    Nagata, K. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Nakamura, T. [Keio University, Yokohama (Japan); Batle, J. [Universitat de les Illes Balears, Balearic Islands (Spain); Abdalla, S. [King Abdulaziz University Jeddah, Jeddah (Saudi Arabia); Farouk, A. [Al-Zahra College for Women, Muscat (Egypt)

    2017-02-15

    Recently, a new measurement theory based on truth values was proposed by Nagata and Nakamura [Int. J. Theor. Phys. 55, 3616 (2016)], that is, a theory where the results of measurements are either 0 or 1. The standard measurement theory accepts a hidden variable model for a single Pauli observable. Hence, we can introduce a classical probability space for the measurement theory in this particular case. Additionally, we discuss in the present contribution the fact that projective measurement theories (the results of which are either +1 or −1) imply the Bell, Kochen, and Specker (BKS) paradox for a single Pauli observable. To justify our assertion, we present the BKS theorem in almost all the two-dimensional states by using a projective measurement theory. As an example, we present the BKS theorem in two-dimensions with white noise. Our discussion provides new insight into the quantum measurement problem by using this measurement theory based on the truth values.

  11. Diabetic Foot Prevention: Repeatability of the Loran Platform Plantar Pressure and Load Distribution Measurements in Nondiabetic Subjects during Bipedal Standing—A Pilot Study

    Directory of Open Access Journals (Sweden)

    Martha Zequera

    2011-01-01

    This study was designed to assess the repeatability of the Loran Platform and to evaluate the variability of plantar pressure and postural balance during barefoot standing in nondiabetic subjects, for future diabetic foot clinical evaluation. Measurements were taken for eight nondiabetic subjects (4 females, 4 males), aged 47±7.2 years, who had no musculoskeletal symptoms. Five variables were measured with the platform in the barefoot standing position. Ten measurements were taken using two different techniques for feet and posture positioning, during three sessions, once a week. For most measurements, no significant effect over time was found with Student's t-test (P<.000125). The ANOVA test of statistical significance confirmed that measurement differences between subjects showed higher variations than measurements taken from the same subject (P<.001). The measurements taken by the Loran Platform system were found to be repeatable.
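The between-subject versus within-subject variance comparison described above is exactly what a one-way random-effects intraclass correlation summarizes: ICC(1,1) = (MSB − MSW)/(MSB + (k − 1)·MSW). A minimal sketch from a subjects-by-sessions table of hypothetical readings:

```python
# One-way random-effects intraclass correlation, ICC(1,1), from the
# between-subject (MSB) and within-subject (MSW) mean squares.
# The readings below are hypothetical, for illustration only.

def icc_1_1(data):
    """data: list of per-subject lists, each with the same number k of repeats."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three subjects, two repeated pressure readings each (hypothetical units).
icc = icc_1_1([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```

An ICC near 1 means subjects differ far more than repeated measurements of the same subject, i.e. the instrument is repeatable.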

  12. Evaluating measurement accuracy a practical approach

    CERN Document Server

    Rabinovich, Semyon G

    2017-01-01

    This book presents a systematic and comprehensive exposition of the theory of measurement accuracy and provides solutions that fill significant and long-standing gaps in the classical theory. It eliminates the shortcomings of the classical theory by including methods for estimating accuracy of single measurements, the most common type of measurement. The book also develops methods of reduction and enumeration for indirect measurements, which do not require Taylor series and produce a precise solution to this problem. It produces grounded methods and recommendations for summation of errors. The monograph also analyzes and critiques two foundation metrological documents, the International Vocabulary of Metrology (VIM) and the Guide to the Expression of Uncertainty in Measurement (GUM), and discusses directions for their revision. This new edition adds a step-by-step guide on how to evaluate measurement accuracy and recommendations on how to calculate systematic error of multiple measurements. There is also an e...

  13. Classical field approach to quantum weak measurements.

    Science.gov (United States)

    Dressel, Justin; Bliokh, Konstantin Y; Nori, Franco

    2014-03-21

    By generalizing the quantum weak measurement protocol to the case of quantum fields, we show that weak measurements probe an effective classical background field that describes the average field configuration in the spacetime region between pre- and postselection boundary conditions. The classical field is itself a weak value of the corresponding quantum field operator and satisfies equations of motion that extremize an effective action. Weak measurements perturb this effective action, producing measurable changes to the classical field dynamics. As such, weakly measured effects always correspond to an effective classical field. This general result explains why these effects appear to be robust for pre- and postselected ensembles, and why they can also be measured using classical field techniques that are not weak for individual excitations of the field.

  14. Long-term, repeated measurements of mouse cortical microflow at the same region of interest with high spatial resolution.

    Science.gov (United States)

    Tomita, Yutaka; Pinard, Elisabeth; Tran-Dinh, Alexy; Schiszler, Istvan; Kubis, Nathalie; Tomita, Minoru; Suzuki, Norihiro; Seylaz, Jacques

    2011-02-04

    A method for long-term, repeated, semi-quantitative measurements of cerebral microflow at the same region of interest (ROI) with high spatial resolution was developed and applied to mice subjected to focal arterial occlusion. A closed cranial window was chronically implanted over the left parieto-occipital cortex. The anesthetized mouse was placed repeatedly, e.g., weekly, under a dynamic confocal microscope, and Rhodamine B-isothiocyanate-dextran was injected intravenously as a bolus each time, while microflow images were video recorded. The left and right tail veins were sequentially catheterized in a mouse at most three times over a 1.5-month observation period. Smearing of the input function resulting from the use of intravenous injection was shown to be sufficiently small. The distal middle cerebral artery (MCA) was thermocoagulated through the cranial window in six mice, and five sham-operated mice were studied in parallel. Dye injection and video recording were conducted four times in this series, i.e., before and at 10 min, 7 days and 30 days after sham operation or MCA occlusion. Per-pixel microflow values (1/MTT) in a matrix of approximately 50×50 pixels were displayed on a two-dimensional (2-D) map, and the frequency distribution of the flow values was also calculated. No significant changes in microflow values over time were detected in sham-operated mice, while the time course of flow changes in the ischemic penumbral area in operated mice was similar to those reported in the literature. This method provides a powerful tool to investigate long-term changes in mouse cortical microflow under physiological and pathological conditions.
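The microflow index 1/MTT used above rests on estimating a mean transit time from the dye-dilution curve at each pixel. One common estimator, sketched here under that assumption with hypothetical samples (the paper's exact estimator is not specified in this record), is the first moment of the concentration-time curve:

```python
# Mean transit time (MTT) estimated as the first moment of an indicator
# concentration-time curve: MTT = sum(t_i * C_i) / sum(C_i).
# Microflow is then reported as 1/MTT. Samples below are hypothetical.

def mean_transit_time(times, conc):
    total = sum(conc)
    return sum(t * c for t, c in zip(times, conc)) / total

t = [0.0, 1.0, 2.0, 3.0, 4.0]    # seconds after bolus arrival
c = [0.0, 1.0, 2.0, 1.0, 0.0]    # dye intensity (arbitrary units)
mtt = mean_transit_time(t, c)    # symmetric curve centred at t = 2 s
microflow = 1.0 / mtt            # semi-quantitative flow index
```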

  15. Interactional justice at work is related to sickness absence: a study using repeated measures in the Swedish working population.

    Science.gov (United States)

    Leineweber, Constanze; Bernhard-Oettel, Claudia; Peristera, Paraskevi; Eib, Constanze; Nyberg, Anna; Westerlund, Hugo

    2017-12-08

    Research has shown that perceived unfairness contributes to higher rates of sickness absence. While shorter, but more frequent periods of sickness absence might be a possibility for the individual to get relief from high strain, long-term sickness absence might be a sign of more serious health problems. The Uncertainty Management Model suggests that justice is particularly important in times of uncertainty, e.g. perceived job insecurity. The present study investigated the association between interpersonal and informational justice at work with long and frequent sickness absence respectively, under conditions of job insecurity. Data were derived from the 2010, 2012, and 2014 biennial waves of the Swedish Longitudinal Occupational Survey of Health (SLOSH). The final analytic sample consisted of 19,493 individuals. We applied repeated measures regression analyses through generalized estimating equations (GEE), a method for longitudinal data that simultaneously analyses variables at different time points. We calculated risk of long and frequent sickness absence, respectively in relation to interpersonal and informational justice taking perceptions of job insecurity into account. We found informational and interpersonal justice to be associated with risk of long and frequent sickness absence independently of job insecurity and demographic variables. Results from autoregressive GEE provided some support for a causal relationship between justice perceptions and sickness absence. Contrary to expectations, we found no interaction between justice and job insecurity. Our results underline the need for fair and just treatment of employees irrespective of perceived job insecurity in order to keep the workforce healthy and to minimize lost work days due to sickness absence.

  16. Comparison of diet measures from a food-frequency questionnaire with measures from repeated 24-hour dietary recalls. The Norwegian Women and Cancer Study.

    Science.gov (United States)

    Hjartåker, Anette; Andersen, Lene Frost; Lund, Eiliv

    2007-10-01

    To compare diet measures from a food-frequency questionnaire (FFQ) with measures from 24-hour dietary recalls (24HDRs). The participants answered an FFQ after completing four, repeated 24HDRs during a year. Norway, nationwide. Of 500 women randomly selected from The Norwegian Women and Cancer Study (the Norwegian arm of the European Prospective Investigation into Cancer and Nutrition), 286 agreed to participate and 238 completed the study. On the group level, the FFQ overestimated absolute intake in seven and underestimated intake in six of 21 food groups. Intakes of energy, fat, added sugar and alcohol were lower in the FFQ than in the 24HDRs, whereas intake of fibre was higher. Spearman's rank correlation coefficient ranged from 0.13 (desserts) to 0.82 (coffee) for foods, and from 0.25 (beta-carotene) to 0.67 (alcohol) for nutrients. Three per cent of the observations on nutrient intake fell in the opposite quintile when classified according to the FFQ as compared with the 24HDR. The median calibration coefficient, calculated by regression of the 24HDR data on the FFQ data, was 0.57 for foods and 0.38 for nutrients. The FFQ's ability to rank subjects was good for foods eaten frequently and fairly good for macronutrients in terms of energy percentages. Weaker ranking abilities were seen for foods eaten infrequently and for some micronutrients. The results underline the necessity of performing measurement error corrections.
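The ranking ability reported above is quantified with Spearman's rank correlation, i.e. the Pearson correlation of the two variables' ranks (average ranks for ties). A self-contained sketch with hypothetical FFQ and 24HDR intakes:

```python
# Spearman rank correlation: Pearson correlation of average ranks.
# Intakes below are hypothetical FFQ vs. 24HDR values, not study data.
import math

def average_ranks(values):
    """Ranks starting at 1; tied values receive the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1                # average of ranks i+1 .. j+1
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

ffq = [10.0, 20.0, 30.0, 40.0, 50.0]     # grams/day from the FFQ (hypothetical)
recall = [12.0, 9.0, 35.0, 30.0, 55.0]   # grams/day from repeated 24HDRs
rho = spearman(ffq, recall)
```

Because it depends only on ranks, Spearman's rho is unaffected by the systematic over- or underestimation of absolute intake that the FFQ showed at the group level.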

  17. Impact of a person-centred dementia care training programme on hospital staff attitudes, role efficacy and perceptions of caring for people with dementia: A repeated measures study.

    Science.gov (United States)

    Surr, C A; Smith, S J; Crossland, J; Robins, J

    2016-01-01

    People with dementia occupy up to one quarter of acute hospital beds. However, the quality of care delivered to this patient group is of national concern. Staff working in acute hospitals report lack of knowledge, skills and confidence in caring for people with dementia. There is limited evidence about the most effective approaches to supporting acute hospital staff to deliver more person-centred care. This study aimed to evaluate the efficacy of a specialist training programme for acute hospital staff in improving attitudes, satisfaction and feelings of caring efficacy in the provision of care to people with dementia. A repeated measures design was used, with measures completed immediately prior to commencing training (T1), after completion of Foundation level training (T2: 4-6 weeks post-baseline), and following Intermediate level training (T3: 3-4 months post-baseline). One NHS Trust in the North of England, UK. 40 acute hospital staff working in clinical roles, the majority of whom (90%) were nurses. All participants received the 3.5-day Person-centred Care Training for Acute Hospitals (PCTAH) programme, comprising two levels, Foundation (0.5 day) and Intermediate (3 days), delivered over a 3-4 month period. Staff demographics and previous exposure to dementia training were collected via a questionnaire. Staff attitudes were measured using the Approaches to Dementia Questionnaire (ADQ), satisfaction in caring for people with dementia was captured using the Staff Experiences of Working with Demented Residents questionnaire (SEWDR) and perceived caring efficacy was measured using the Caring Efficacy Scale (CES). The training programme was effective in producing a significant positive change on all three outcome measures following Intermediate training compared to baseline. A significant positive effect was found on the ADQ between baseline and completion of Foundation level training, but not for either of the other measures. Training acute hospital staff in

  18. Quantum repeated games revisited

    International Nuclear Information System (INIS)

    Frąckiewicz, Piotr

    2012-01-01

    We present a scheme for playing quantum repeated 2 × 2 games based on Marinatto and Weber's approach to quantum games. As a potential application, we study the twice-repeated Prisoner's Dilemma game. We show that results not available in the classical game can be obtained when the game is played in the quantum way. Before we present our idea, we comment on the previous scheme of playing quantum repeated games proposed by Iqbal and Toor. We point out the drawbacks that make their results unacceptable.

  19. Does repeat Hb measurement within 2 hours after a normal initial Hb in stable trauma patients add value to trauma evaluation?

    NARCIS (Netherlands)

    Sierink, Joanne C.; Joosse, Pieter; de Castro, Steve M. M.; Schep, Niels W. L.; Goslings, J. Carel

    2014-01-01

    In our level I trauma center, it is considered common practice to repeat blood haemoglobin measurements in patients within 2 h after admission. However, the rationale behind this procedure is elusive and can be considered labour-intensive, especially in patients in whom haemorrhaging is not to be

  20. Comparison of anterior segment measurements using Sirius Topographer® and Nidek Axial Length-Scan® with assessing repeatability in patients with cataracts

    Directory of Open Access Journals (Sweden)

    Resat Duman

    2018-01-01

    Purpose: The purpose of this study was to evaluate anterior segment measurements obtained using the CSO Sirius Topographer® (CSO, Firenze, Italy) and the Nidek Axial Length Scan (AL-Scan®; Nidek Co., Gamagori, Japan). Methods: A total of 43 eyes of 43 patients were included in this prospective study. The central corneal thickness (CCT), anterior chamber depth (ACD), white-to-white distance (WTW), flat keratometry (K1), steep keratometry (K2), and mean keratometry (Km) values were randomly measured three times with each device by the same examiner. The intraclass correlation coefficient of repeatability was analyzed. The compatibility of both devices was evaluated using the 95% limits of agreement proposed by Bland and Altman. Results: The examiner achieved high repeatability for all parameters on each device except the WTW measured by the Sirius. All measurements except WTW and K1 taken with the Sirius were higher than those taken with the Nidek AL-Scan®. The difference in CCT, ACD, and WTW values was statistically significant. Conclusion: High repeatability of the measurements was achieved on both devices. Although Km, K1, and K2 measurements of the Sirius and the AL-Scan® showed good agreement, WTW, CCT, and ACD measurements differed significantly between the two devices. Thus, anterior segment measurements other than Km, K1, and K2 cannot be used interchangeably between the Sirius and Nidek AL-Scan® devices.
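The Bland-Altman agreement analysis used above reduces to the mean of the paired differences (the bias) and 95% limits of agreement at bias ± 1.96 × SD of the differences. A sketch with hypothetical paired CCT readings, not the study's data:

```python
# Bland-Altman 95% limits of agreement between two devices:
# mean difference +/- 1.96 * SD(differences). Values are hypothetical.
import statistics

def limits_of_agreement(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

sirius = [540.0, 545.0, 550.0, 555.0, 560.0]   # CCT in micrometres (device A)
al_scan = [542.0, 544.0, 552.0, 554.0, 561.0]  # CCT in micrometres (device B)
bias, lo, hi = limits_of_agreement(sirius, al_scan)
```

Devices are considered interchangeable only when the limits of agreement are narrower than the clinically acceptable difference, which is the criterion behind the study's conclusion.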

  1. Repeatability of Volume and Regional Body Composition Measurements of the Lower Limb Using Dual-energy X-ray Absorptiometry

    DEFF Research Database (Denmark)

    Gjorup, Caroline A; Zerahn, Bo; Juul, Sarah

    2017-01-01

    was calculated using the density of bone mineral content, fat, and lean mass. The repeatability of the volume of the lower limb and regional thigh and lower leg tissue composition (bone mineral content, fat, and lean mass) was good with intraclass correlation coefficient values of 0.97 to 0.99, and narrow limits...

  2. Approaches to measuring cultural diversity in recreation

    Science.gov (United States)

    Chieh-Lu Li; James D. Absher; Yi-Chung Hsu; Alan R. Graefe

    2008-01-01

    Measuring cultural diversity in recreation has become an important topic because of the increasing coverage of and interest in ethnicity and cross-cultural aspects of recreation. Introducing theories and methods from established disciplines other than leisure studies/recreation and park studies is necessary to understand this important issue. In this article, we first...

  3. Recommendations for analysis of repeated-measures designs: testing and correcting for sphericity and use of manova and mixed model analysis.

    Science.gov (United States)

    Armstrong, Richard A

    2017-09-01

    A common experimental design in ophthalmic research is the repeated-measures design in which at least one variable is a within-subject factor. This design is vulnerable to lack of 'sphericity', which assumes that the variances of the differences among all possible pairs of within-subject means are equal. Traditionally, this design has been analysed using a repeated-measures analysis of variance (RM-ANOVA), but increasingly more complex methods such as multivariate ANOVA (MANOVA) and mixed model analysis (MMA) are being used. This article surveys current practice in the analysis of designs incorporating different factors in research articles published in three optometric journals, namely Ophthalmic and Physiological Optics (OPO), Optometry and Vision Science (OVS), and Clinical and Experimental Optometry (CXO), and provides advice to authors regarding the analysis of repeated-measures designs. Of the total sample of articles, 66% used a repeated-measures design. Of those articles using a repeated-measures design, 59% and 8% analysed the data using RM-ANOVA or MANOVA respectively, and 33% used MMA. The use of MMA relative to RM-ANOVA has increased significantly since 2009/10. A further search using terms to select those papers testing and correcting for sphericity ('Mauchly's test', 'Greenhouse-Geisser', 'Huynh-Feldt') identified 66 articles, 62% of which were published from 2012 to the present. If the design is balanced without missing data, then MANOVA should be used rather than RM-ANOVA as it gives better protection against lack of sphericity. If the design is unbalanced or has missing data, then MMA is the method of choice. However, MMA is a more complex analysis and can be difficult to set up and run, and care should be taken first, to define appropriate models to be tested and second, to ensure that sample sizes are adequate.
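The Greenhouse-Geisser correction mentioned above multiplies the RM-ANOVA degrees of freedom by an epsilon estimated from the within-subject covariance matrix; epsilon is 1 when sphericity holds and falls toward 1/(k − 1) as it is violated. A sketch using Box's closed form for epsilon; the covariance matrices are made-up examples, not data from the survey:

```python
# Greenhouse-Geisser epsilon from a k x k within-subject covariance matrix
# (Box's closed form). Epsilon = 1 under sphericity; lower bound 1/(k-1).
# Both covariance matrices below are invented for illustration.

def greenhouse_geisser_epsilon(s):
    k = len(s)
    diag_mean = sum(s[i][i] for i in range(k)) / k
    grand_mean = sum(sum(row) for row in s) / (k * k)
    row_means = [sum(row) / k for row in s]
    num = (k * (diag_mean - grand_mean)) ** 2
    den = (k - 1) * (
        sum(v * v for row in s for v in row)
        - 2 * k * sum(m * m for m in row_means)
        + k * k * grand_mean * grand_mean
    )
    return num / den

spherical = [[1.0, 0.5, 0.5], [0.5, 1.0, 0.5], [0.5, 0.5, 1.0]]
nonspherical = [[4.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
eps_ok = greenhouse_geisser_epsilon(spherical)       # 1.0: sphericity holds
eps_bad = greenhouse_geisser_epsilon(nonspherical)   # 0.8: correction needed
```

In use, both the numerator and denominator degrees of freedom of the within-subject F-test are multiplied by epsilon before looking up the p-value.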

  4. Secondhand tobacco smoke exposure and heart rate variability and inflammation among non-smoking construction workers: a repeated measures study.

    Science.gov (United States)

    Zhang, Jinming; Fang, Shona C; Mittleman, Murray A; Christiani, David C; Cavallari, Jennifer M

    2013-10-02

    Although it has been well recognized that exposure to secondhand tobacco smoke (SHS) is associated with cardiovascular mortality, the mechanisms and time course by which SHS exposure may lead to cardiovascular effects are still being explored. Non-smoking workers were recruited from a local union and monitored inside a union hall while exposed to SHS over approximately 6 hours. Participants were fitted with a continuous electrocardiographic monitor upon enrollment, which was removed at the end of a 24-hr monitoring period. A repeated measures study design was used where resting ECGs and blood samples were taken from individuals before SHS exposure (baseline), immediately following SHS exposure (post) and the morning following SHS exposure (next-morning). Inflammatory markers, including high-sensitivity C-reactive protein (CRP) and white blood cell count (WBC), were analyzed. Heart rate variability (HRV) was analyzed from the ECG recordings in time (SDNN, rMSSD) and frequency (LF, HF) domain parameters over 5-minute periods. SHS exposure was quantified using a personal fine particulate matter (PM2.5) monitor. Linear mixed effects regression models were used to examine within-person changes in inflammatory and HRV parameters across the 3 time periods. Exposure-response relationships with PM2.5 were examined using mixed effects models. All models were adjusted for age, BMI and circadian variation. A total of 32 male non-smokers were monitored between June 2010 and June 2012. The mean PM2.5 from SHS exposure was 132 μg/m3. Immediately following SHS exposure, a 100 μg/m3 increase in PM2.5 was associated with declines in HRV (7.8% [standard error (SE) = 3%] SDNN, 8.0% (SE = 3.9%) rMSSD, 17.2% (SE = 6.3%) LF, 29.0% (SE = 10.1%) HF) and an increase in WBC count of 0.42 (SE = 0.14) k/μl. Eighteen hours following SHS exposure, a 100 μg/m3 increase in PM2.5 was associated with 24.2% higher CRP levels. Our study suggests that short-term SHS exposure is associated

  5. A NEW APPROACH FOR MEASURING CORPORATE REPUTATION

    Directory of Open Access Journals (Sweden)

    Percy Marquina Feldman

    2014-01-01

    This study describes the concept of corporate reputation and reviews some of the major points that arise when it comes to measuring it. It then suggests a new index for measurement, and its advantages and disadvantages are pointed out. The consistency of the seven key variables making up the composite indicator is described by the results of a factor analysis and correlations. Finally, the indicator is put to the test by gathering the perceptions of corporate reputation of 1500 individuals for 69 companies belonging to 15 different industrial sectors in Peru. The results indicate that the proposed index variables are not necessarily of greatest interest to the study sample in which companies have a better performance. Also, the most-remembered companies are not necessarily those that enjoy the greatest corporate reputation. Managerial implications for organizations in the process of managing and monitoring the dimensions of this key asset are also discussed.

  6. Repeating Marx

    DEFF Research Database (Denmark)

    Fuchs, Christian; Monticelli, Lara

    2018-01-01

    This introduction sets out the context of the special issue “Karl Marx @ 200: Debating Capitalism & Perspectives for the Future of Radical Theory”, which was published on the occasion of Marx’s bicentenary on 5 May 2018. First, we give a brief overview of contemporary capitalism’s development...... and its crises. Second, we argue that it is important to repeat Marx today. Third, we reflect on lessons learned from 200 years of struggles for alternatives to capitalism. Fourth, we give an overview of the contributions in this special issue. Taken together, the contributions in this special issue show...... that Marx’s theory and politics remain key inspirations for understanding exploitation and domination in 21st-century society and for struggles that aim to overcome these phenomena and establishing a just and fair society. We need to repeat Marx today....

  7. Neuro-mechanical determinants of repeated treadmill sprints - Usefulness of an ‘hypoxic to normoxic recovery’ approach

    Directory of Open Access Journals (Sweden)

    Olivier eGIRARD

    2015-09-01

    To improve our understanding of the limiting factors during repeated sprinting, we manipulated hypoxia severity during an initial set and examined the effects on performance and associated neuro-mechanical alterations during a subsequent set performed in normoxia. On separate days, thirteen active males performed eight 5-s sprints (recovery = 25 s) on an instrumented treadmill in either normoxia near sea-level (SL; FiO2 = 20.9%), moderate (MH; FiO2 = 16.8%) or severe normobaric hypoxia (SH; FiO2 = 13.3%) followed, 6 min later, by four 5-s sprints (recovery = 25 s) in normoxia. Throughout the first set, along with distance covered [larger sprint decrement score in SH (-8.2%) compared to SL (-5.3%) and MH (-7.2%); P<0.05], changes in contact time, step frequency and root mean square activity (surface electromyography) of the quadriceps (rectus femoris muscle) in SH exceeded those in SL and MH (P<0.05). During the first sprint of the subsequent normoxic set, the distance covered (99.6%, 96.4% and 98.3% of sprint 1 in SL, MH and SH, respectively), the main kinetic (mean vertical, horizontal and resultant forces) and kinematic (contact time and step frequency) variables as well as surface electromyograms of the quadriceps and plantar flexor muscles were fully recovered, with no significant difference between conditions. Despite differing hypoxic severity levels during sprints 1 to 8, performance and neuro-mechanical patterns did not differ during the four sprints of the second set performed in normoxia. In summary, under the circumstances of this study (participant background, exercise-to-rest ratio, hypoxia exposure), sprint mechanical performance and neural alterations were largely influenced by the hypoxia severity in an initial set of repeated sprints. However, hypoxia had no residual effect during a subsequent set performed in normoxia. Hence, the recovery of performance and associated neuro-mechanical alterations was complete after resting for 6 min near sea level, with a

  8. Neuro-mechanical determinants of repeated treadmill sprints - Usefulness of an “hypoxic to normoxic recovery” approach

    Science.gov (United States)

    Girard, Olivier; Brocherie, Franck; Morin, Jean-Benoit; Millet, Grégoire P.

    2015-01-01

    To improve our understanding of the limiting factors during repeated sprinting, we manipulated hypoxia severity during an initial set and examined the effects on performance and associated neuro-mechanical alterations during a subsequent set performed in normoxia. On separate days, 13 active males performed eight 5-s sprints (recovery = 25 s) on an instrumented treadmill in either normoxia near sea-level (SL; FiO2 = 20.9%), moderate (MH; FiO2 = 16.8%) or severe normobaric hypoxia (SH; FiO2 = 13.3%) followed, 6 min later, by four 5-s sprints (recovery = 25 s) in normoxia. Throughout the first set, along with distance covered [larger sprint decrement score in SH (−8.2%) compared to SL (−5.3%) and MH (−7.2%); P < 0.05], changes in contact time, step frequency and root mean square activity (surface electromyography) of the quadriceps (Rectus femoris muscle) in SH exceeded those in SL and MH (P < 0.05). During first sprint of the subsequent normoxic set, the distance covered (99.6, 96.4, and 98.3% of sprint 1 in SL, MH, and SH, respectively), the main kinetic (mean vertical, horizontal, and resultant forces) and kinematic (contact time and step frequency) variables as well as surface electromyogram of quadriceps and plantar flexor muscles were fully recovered, with no significant difference between conditions. Despite differing hypoxic severity levels during sprints 1–8, performance and neuro-mechanical patterns did not differ during the four sprints of the second set performed in normoxia. In summary, under the circumstances of this study (participant background, exercise-to-rest ratio, hypoxia exposure), sprint mechanical performance and neural alterations were largely influenced by the hypoxia severity in an initial set of repeated sprints. However, hypoxia had no residual effect during a subsequent set performed in normoxia. Hence, the recovery of performance and associated neuro-mechanical alterations was complete after resting for 6 min near sea level
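
    The sprint decrement score quoted above (e.g. −8.2% in SH vs. −5.3% in SL) is conventionally computed from total distance relative to the ideal of repeating the best sprint every time. A minimal Python sketch of that common formula (the paper's exact computation is an assumption; the distances below are hypothetical):

```python
def sprint_decrement_score(distances):
    """Percent sprint decrement: 100 * (total / (best * n) - 1).

    Negative values indicate performance loss across the set. This is
    the commonly used decrement formula, assumed here rather than
    taken verbatim from the paper.
    """
    if not distances:
        raise ValueError("need at least one sprint")
    ideal = max(distances) * len(distances)
    return 100 * (sum(distances) / ideal - 1)

# Hypothetical distances (m) for eight 5-s sprints declining across a set
sprints = [25.0, 24.6, 24.2, 23.9, 23.5, 23.2, 22.9, 22.6]
print(round(sprint_decrement_score(sprints), 2))  # → -5.05
```

    A larger (more negative) score, as reported for severe hypoxia, means the later sprints fell further below the best one.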

  9. Deployment Repeatability

    Science.gov (United States)

    2016-08-31

    large cohort of trials to spot unusual cases. However, deployment repeatability is inherently a nonlinear phenomenon, which makes modeling difficult...and GEMS tip position were both tracked during ground testing by a laser target tracking system. Earlier SAILMAST testing in 2005 [8] used...recalls the strategy used by SRTM, where a constellation of lights was installed at the tip of the boom and a modified star tracker was used to track tip

  10. NEW APPROACHES ON REVENUE RECOGNITION AND MEASUREMENT

    Directory of Open Access Journals (Sweden)

    Cristina-Aurora, BUNEA-BONTAȘ

    2014-11-01

    Full Text Available Revenue is an important indicator to users of financial statements in assessing an entity's financial performance and position. International Financial Reporting Standard 15 Revenue from Contracts with Customers (IFRS 15), issued in May 2014, provides a robust framework for addressing revenue issues. The standard establishes principles for reporting useful information to users of financial statements about the nature, amount, timing and uncertainty of revenue and cash flows arising from an entity's contracts with customers. This article outlines the basic principles that an entity must apply to measure and recognise revenue and the related cash flows.

  11. Realistic Approach for Phasor Measurement Unit Placement

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2015-01-01

    This paper presents a realistic cost-effective model for optimal placement of phasor measurement units (PMUs) for complete observability of a power system considering practical cost implications. The proposed model considers hidden or otherwise unaccounted practical costs involved in PMU...... installation. Consideration of these hidden but significant and integral part of total PMU installation costs was inspired from practical experience on a real-life project. The proposed model focuses on the minimization of total realistic costs instead of a widely used theoretical concept of a minimal number...... of PMUs. The proposed model has been applied to IEEE 14-bus, IEEE 24-bus, IEEE 30-bus, New England 39-bus, and large power system of 300 buses and real life Danish grid. A comparison of the presented results with those reported by traditional methods has also been shown to justify the effectiveness......

  12. Analyzing repeated data collected by mobile phones and frequent text messages. An example of Low back pain measured weekly for 18 weeks

    DEFF Research Database (Denmark)

    Axén, Iben; Bodin, Lennart; Kongsted, Alice

    2012-01-01

    to recovery? This question was answered using survival analysis, illustrated in Kaplan-Meier curves, Proportional Hazard regression analyses and spline regression analyses. 4: How is the repeatedly measured data associated with baseline (predictor) variables? This question was answered using generalized...... involves some challenges. Vital issues to consider are the within-subject correlation, the between measurement occasion correlation and the presence of missing values. The overall aim of this commentary is to describe different methods of analyzing repeated data. It is meant to give an overview...... for the clinical researcher in order for complex outcome measures to be interpreted in a clinically meaningful way. METHODS: A model data set was formed using data from two clinical studies, where patients with low back pain were followed with weekly text messages for 18 weeks. Different research questions...

  13. Innovative United Kingdom Approaches To Measuring Service Quality.

    Science.gov (United States)

    Winkworth, Ian

    2001-01-01

    Reports on approaches to measuring the service quality of academic libraries in the United Kingdom. Discusses the role of government and the national background of quality measurement; measurement frameworks; better use of statistics; benchmarking; measuring user satisfaction; and possible future development. (Author/LRW)

  14. PTSD and DNA Methylation in Select Immune Function Gene Promoter Regions: A Repeated Measures Case-control Study of U.S. Military Service Members

    Science.gov (United States)

    2013-06-24

    other relevant exposures which may influence DNA methylation, such as dietary factors (folate, vitamin B12 intake) (Fenech, 2001; Piyathilake and...ARTICLE published: 24 June 2013 doi: 10.3389/fpsyt.2013.00056 PTSD and DNA methylation in select immune function gene promoter regions: a repeated measures...largely unknown. Distinct expression signatures for PTSD have been found, in particular for immune activation transcripts. DNA methylation may be

  15. Effects of annealing on the sensitivity of LiF TLD-100 after repeated use for low dose measurements

    International Nuclear Information System (INIS)

    Ogunleye, O.T.; Richmond, R.G.

    1987-01-01

    The changes in sensitivity of LiF TLD-100 extruded ribbons subjected to repeated use up to 100 times were investigated. Three different annealing regimes were compared. The dosemeters were annealed at 400°C followed by (i) a slow or (ii) a fast cooling to room temperature, or (iii) utilising a 20 s readout process in the reader without a high temperature annealing at 400°C. Each of the three groups consisted of two sets of 20 chips each, with one set receiving 500 μGy of 90Sr beta radiation and the other unirradiated. Sensitivity evaluations were performed every five cycles through the first 50 cycles, and on each tenth cycle thereafter. On the average, the fast cooled group maintained their integrity best, while a maximum variation in sensitivity of about 15% was observed in the irradiated set of the slowly cooled group. A permanent increase in sensitivity of at least 10% was observed for the set of dosemeters receiving radiation without annealing. Glow curve analyses showed an increase in the ratio of peaks 4 and 5 with repeated use of this group. (author)

  16. Phylogenomic approaches to common problems encountered in the analysis of low copy repeats: The sulfotransferase 1A gene family example

    Directory of Open Access Journals (Sweden)

    Benner Steven A

    2005-03-01

    Full Text Available Abstract Background Blocks of duplicated genomic DNA sequence longer than 1000 base pairs are known as low copy repeats (LCRs). Identified by their sequence similarity, LCRs are abundant in the human genome, and are interesting because they may represent recent adaptive events, or potential future adaptive opportunities within the human lineage. Sequence analysis tools are needed, however, to decide whether these interpretations are likely, whether a particular set of LCRs represents nearly neutral drift creating junk DNA, or whether the appearance of LCRs reflects assembly error. Here we investigate an LCR family containing the sulfotransferase (SULT) 1A genes involved in drug metabolism, cancer, hormone regulation, and neurotransmitter biology as a first step for defining the problems that those tools must manage. Results Sequence analysis here identified a fourth sulfotransferase gene, which may be transcriptionally active, located on human chromosome 16. Four regions of genomic sequence containing the four human SULT1A paralogs defined a new LCR family. The stem hominoid SULT1A progenitor locus was identified by comparative genomics involving complete human and rodent genomes, and a draft chimpanzee genome. SULT1A expansion in hominoid genomes was followed by positive selection acting on specific protein sites. This episode of adaptive evolution appears to be responsible for the dopamine sulfonation function of some SULT enzymes. Each of the conclusions that this bioinformatic analysis generated using data that has uncertain reliability (such as that from the chimpanzee genome sequencing project) has been confirmed experimentally or by a "finished" chromosome 16 assembly, both of which were published after the submission of this manuscript. Conclusion SULT1A genes expanded from one to four copies in hominoids during intra-chromosomal LCR duplications, including (apparently) one after the divergence of chimpanzees and humans. Thus, LCRs may

  17. Repeatability and Reproducibility of Retinal Nerve Fiber Layer Parameters Measured by Scanning Laser Polarimetry with Enhanced Corneal Compensation in Normal and Glaucomatous Eyes.

    Science.gov (United States)

    Ara, Mirian; Ferreras, Antonio; Pajarin, Ana B; Calvo, Pilar; Figus, Michele; Frezzotti, Paolo

    2015-01-01

    To assess the intrasession repeatability and intersession reproducibility of peripapillary retinal nerve fiber layer (RNFL) thickness parameters measured by scanning laser polarimetry (SLP) with enhanced corneal compensation (ECC) in healthy and glaucomatous eyes. One randomly selected eye of 82 healthy individuals and 60 glaucoma subjects was evaluated. Three scans were acquired during the first visit to evaluate intravisit repeatability. A different operator obtained two additional scans within 2 months after the first session to determine intervisit reproducibility. The intraclass correlation coefficient (ICC), coefficient of variation (COV), and test-retest variability (TRT) were calculated for all SLP parameters in both groups. ICCs ranged from 0.920 to 0.982 for intravisit measurements and from 0.910 to 0.978 for intervisit measurements. The temporal-superior-nasal-inferior-temporal (TSNIT) average was the highest (0.967 and 0.946) in normal eyes, while nerve fiber indicator (NFI; 0.982) and inferior average (0.978) yielded the best ICC in glaucomatous eyes for intravisit and intervisit measurements, respectively. All COVs were under 10% in both groups, except NFI. TSNIT average had the lowest COV (2.43%) in either type of measurement. Intervisit TRT ranged from 6.48 to 12.84. The reproducibility of peripapillary RNFL measurements obtained with SLP-ECC was excellent, indicating that SLP-ECC is sufficiently accurate for monitoring glaucoma progression.
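
    The ICC and COV statistics reported above can be illustrated with a short Python sketch. The specific ICC form chosen here (two-way random effects, absolute agreement, single measure, i.e. ICC(2,1)) and the within-subject COV definition are assumptions, since the abstract does not state the exact model:

```python
from statistics import mean, stdev

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    data: one list of k repeated measurements per subject.
    """
    n, k = len(data), len(data[0])
    grand = mean(v for row in data for v in row)
    row_means = [mean(row) for row in data]
    col_means = [mean(row[j] for row in data) for j in range(k)]
    # Two-way ANOVA decomposition: subjects (rows), sessions (columns), error
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((v - grand) ** 2 for row in data for v in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def within_subject_cv(data):
    """Mean within-subject coefficient of variation, in percent."""
    return 100 * mean(stdev(row) / mean(row) for row in data)
```

    With perfectly repeatable sessions the ICC is 1.0 and the COV is 0%; session-to-session noise pulls the ICC toward 0 and inflates the COV, which is why the abstract treats COV < 10% as acceptable.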

  18. Repeatability of two-dimensional chemical shift imaging multivoxel proton magnetic resonance spectroscopy for measuring human cerebral choline-containing compounds.

    Science.gov (United States)

    Puri, Basant K; Egan, Mary; Wallis, Fintan; Jakeman, Philip

    2018-03-22

    To investigate the repeatability of proton magnetic resonance spectroscopy in the in vivo measurement of human cerebral levels of choline-containing compounds (Cho). Two consecutive scans were carried out in six healthy resting subjects at a magnetic field strength of 1.5 T. On each occasion, neurospectroscopy data were collected from 64 voxels using the same 2D chemical shift imaging (CSI) sequence. The data were analyzed in the same way, using the same software, to obtain the values for each voxel of the ratio of Cho to creatine. The Wilcoxon related-samples signed-rank test, coefficient of variation (CV), repeatability coefficient (RC), and intraclass correlation coefficient (ICC) were used to assess the repeatability. The CV ranged from 2.75% to 33.99%, while the minimum RC was 5.68%. There was excellent reproducibility, as judged by significant ICC values, in 26 voxels. Just three voxels showed significant differences according to the Wilcoxon related-samples signed-rank test. It is therefore concluded that when CSI multivoxel proton neurospectroscopy is used to measure cerebral choline-containing compounds at 1.5 T, the reproducibility is highly acceptable.

  19. Multidimensional poverty: an alternative measurement approach for the United States?

    Science.gov (United States)

    Waglé, Udaya R

    2008-06-01

    International poverty research has increasingly underscored the need to use multidimensional approaches to measure poverty. Largely embraced in Europe and elsewhere, this has not had much impact on the way poverty is measured in the United States. In this paper, I use a comprehensive multidimensional framework including economic well-being, capability, and social inclusion to examine poverty in the US. Data from the 2004 General Social Survey support the interconnectedness among these poverty dimensions, indicating that the multidimensional framework utilizing a comprehensive set of information provides a compelling value added to poverty measurement. The suggested demographic characteristics of the various categories of the poor are somewhat similar between this approach and other traditional approaches. But the more comprehensive and accurate measurement outcomes from this approach help policymakers target resources at the specific groups.

  20. Measuring University Students' Approaches to Learning Statistics: An Invariance Study

    Science.gov (United States)

    Chiesi, Francesca; Primi, Caterina; Bilgin, Ayse Aysin; Lopez, Maria Virginia; del Carmen Fabrizio, Maria; Gozlu, Sitki; Tuan, Nguyen Minh

    2016-01-01

    The aim of the current study was to provide evidence that an abbreviated version of the Approaches and Study Skills Inventory for Students (ASSIST) was invariant across different languages and educational contexts in measuring university students' learning approaches to statistics. Data were collected on samples of university students attending…

  1. Repeatability of Brain Volume Measurements Made with the Atlas-based Method from T1-weighted Images Acquired Using a 0.4 Tesla Low Field MR Scanner.

    Science.gov (United States)

    Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru

    2016-10-11

    An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of measured volumes using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image − mean volume in each subject)/(mean volume in each subject)]. The average percentage change was calculated as the percentage change in the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
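
    The percentage-change formula quoted in the abstract can be sketched directly in Python; the volumes below are hypothetical placeholders, not the study's data:

```python
from statistics import mean

def percentage_change(first_volume, volumes):
    """100 * (measured volume with first segmented image - mean volume
    in each subject) / (mean volume in each subject), as defined in
    the abstract."""
    m = mean(volumes)
    return 100 * (first_volume - m) / m

# Hypothetical gray-matter volumes (mL) from two same-day scans of one subject
scans = [650.0, 654.0]
pc = percentage_change(scans[0], scans)  # ≈ -0.31%
```

    With two scans per subject, the percentage change of the first scan is half the relative scan-to-scan difference; averaging its magnitude over subjects gives the per-ROI figures reported (e.g. 0.556% for GM).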

  2. (In)Consistencies in Responses to Sodium Bicarbonate Supplementation: A Randomised, Repeated Measures, Counterbalanced and Double-Blind Study.

    Science.gov (United States)

    Froio de Araujo Dias, Gabriela; da Eira Silva, Vinicius; de Salles Painelli, Vitor; Sale, Craig; Giannini Artioli, Guilherme; Gualano, Bruno; Saunders, Bryan

    2015-01-01

    Intervention studies do not account for high within-individual variation potentially compromising the magnitude of an effect. Repeat administration of a treatment allows quantification of individual responses and determination of the consistency of responses. We determined the consistency of metabolic and exercise responses following repeated administration of sodium bicarbonate (SB). 15 physically active males (age 25±4 y; body mass 76.0±7.3 kg; height 1.77±0.05 m) completed six cycling capacity tests at 110% of maximum power output (CCT110%) following ingestion of either 0.3 g∙kg-1BM of SB (4 trials) or placebo (PL, 2 trials). Blood pH, bicarbonate, base excess and lactate were determined at baseline, pre-exercise, post-exercise and 5-min post-exercise. Total work done (TWD) was recorded as the exercise outcome. SB supplementation increased blood pH, bicarbonate and base excess prior to every trial (all p ≤ 0.001); absolute changes in pH, bicarbonate and base excess from baseline to pre-exercise were similar in all SB trials (all p > 0.05). Blood lactate was elevated following exercise in all trials (p ≤ 0.001), and was higher in some, but not all, SB trials compared to PL. TWD was not significantly improved with SB vs. PL in any trial (SB1: +3.6%; SB2 +0.3%; SB3: +2.1%; SB4: +6.7%; all p > 0.05), although magnitude-based inferences suggested a 93% likely improvement in SB4. Individual analysis showed ten participants improved in at least one SB trial above the normal variation of the test although five improved in none. The mechanism for improved exercise with SB was consistently in place prior to exercise, although this only resulted in a likely improvement in one trial. SB does not consistently improve high intensity cycling capacity, with results suggesting that caution should be taken when interpreting the results from single trials as to the efficacy of SB supplementation. ClinicalTrials.gov NCT02474628.
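
    The individual-response analysis above counts a trial as an improvement only when its TWD gain over placebo exceeds the normal variation of the CCT110% test. A hedged Python sketch of that comparison, where the threshold and all TWD values are hypothetical placeholders rather than the paper's data or exact criterion:

```python
from statistics import mean

def responders(pl_twd, sb_twd_trials, threshold):
    """Flag each SB trial whose total work done (TWD) exceeds the mean
    placebo TWD by more than `threshold` (the test's normal variation,
    in kJ). The threshold choice is an assumption for illustration."""
    baseline = mean(pl_twd)
    return [twd - baseline > threshold for twd in sb_twd_trials]

# Hypothetical TWD values (kJ): two placebo trials, four SB trials
flags = responders([45.0, 46.0], [46.0, 48.5, 45.2, 49.0], threshold=2.0)
# → [False, True, False, True]: this subject "responds" in 2 of 4 trials
```

    Repeating the treatment, as in this study, is what makes such per-subject flags meaningful: a single trial cannot distinguish a true responder from test-to-test noise.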

  3. (In)Consistencies in Responses to Sodium Bicarbonate Supplementation: A Randomised, Repeated Measures, Counterbalanced and Double-Blind Study.

    Directory of Open Access Journals (Sweden)

    Gabriela Froio de Araujo Dias

    Full Text Available Intervention studies do not account for high within-individual variation potentially compromising the magnitude of an effect. Repeat administration of a treatment allows quantification of individual responses and determination of the consistency of responses. We determined the consistency of metabolic and exercise responses following repeated administration of sodium bicarbonate (SB). 15 physically active males (age 25±4 y; body mass 76.0±7.3 kg; height 1.77±0.05 m) completed six cycling capacity tests at 110% of maximum power output (CCT110%) following ingestion of either 0.3 g∙kg-1BM of SB (4 trials) or placebo (PL, 2 trials). Blood pH, bicarbonate, base excess and lactate were determined at baseline, pre-exercise, post-exercise and 5-min post-exercise. Total work done (TWD) was recorded as the exercise outcome. SB supplementation increased blood pH, bicarbonate and base excess prior to every trial (all p ≤ 0.001); absolute changes in pH, bicarbonate and base excess from baseline to pre-exercise were similar in all SB trials (all p > 0.05). Blood lactate was elevated following exercise in all trials (p ≤ 0.001), and was higher in some, but not all, SB trials compared to PL. TWD was not significantly improved with SB vs. PL in any trial (SB1: +3.6%; SB2 +0.3%; SB3: +2.1%; SB4: +6.7%; all p > 0.05), although magnitude-based inferences suggested a 93% likely improvement in SB4. Individual analysis showed ten participants improved in at least one SB trial above the normal variation of the test although five improved in none. The mechanism for improved exercise with SB was consistently in place prior to exercise, although this only resulted in a likely improvement in one trial. SB does not consistently improve high intensity cycling capacity, with results suggesting that caution should be taken when interpreting the results from single trials as to the efficacy of SB supplementation. ClinicalTrials.gov NCT02474628.

  4. Very long Detection Times after High and repeated intake of Heroin and Methadone, measured in Oral Fluid

    Directory of Open Access Journals (Sweden)

    Vindenes V.

    2014-12-01

    Full Text Available When detection times for psychoactive drugs in oral fluid are reported, they are most often based on therapeutic doses administered in clinical studies. Repeated ingestions of high doses, as seen after drug abuse, are however likely to cause positive samples for extended time periods. Findings of drugs of abuse in oral fluid might lead to negative sanctions, and knowledge of the detection times of these drugs is important to ensure correct interpretation. The aim of this study was to investigate the detection times of opioids in oral fluid. 25 patients with a history of heavy drug abuse admitted to a detoxification ward were included. Oral fluid and urine were collected daily and, if the patient gave consent, a blood sample was drawn during the first five days after admission. Morphine, codeine and/or 6-monoacetyl morphine (6-MAM) were found in oral fluid and/or urine from 20 patients. The maximum detection times in oral fluid for codeine, morphine and 6-MAM were 1, 3 and 8 days, respectively. Positive oral fluid samples were interspersed with negative samples, mainly for concentrations around cut-off. Elimination curves for methadone in oral fluid were found for two subjects, and the detection times were 5 and 8 days. Oral fluid is likely to become a good method for detection of drug abuse in the future.

  5. Fixed-flexion knee radiography using a new positioning device produced highly repeatable measurements of joint space width: ELSA-Brasil Musculoskeletal Study (ELSA-Brasil MSK).

    Science.gov (United States)

    Telles, Rosa Weiss; Costa-Silva, Luciana; Machado, Luciana A C; Reis, Rodrigo Citton Padilha Dos; Barreto, Sandhi Maria

    To describe the performance of a non-fluoroscopic fixed-flexion PA radiographic protocol with a new positioning device, developed for the assessment of knee osteoarthritis (OA) in Brazilian Longitudinal Study of Adult Health Musculoskeletal Study (ELSA-Brasil MSK). A test-retest design including 19 adults (38 knee images) was conducted. Feasibility of the radiographic protocol was assessed by image quality parameters and presence of radioanatomic alignment according to intermargin distance (IMD) values. Repeatability was assessed for IMD and joint space width (JSW) measured at three different locations. Approximately 90% of knee images presented excellent quality. Frequencies of nearly perfect radioanatomic alignment (IMD ≤1mm) ranged from 29% to 50%, and satisfactory alignment was found in up to 71% and 76% of the images (IMD ≤1.5mm and ≤1.7mm, respectively). Repeatability analyses yielded the following results: IMD [SD of mean difference=1.08; coefficient of variation (%CV)=54.68%; intraclass correlation coefficient (ICC) (95%CI)=0.59 (0.34-0.77)]; JSW [SD of mean difference=0.34-0.61; %CV=4.48%-9.80%; ICC (95%CI)=0.74 (0.55-0.85)-0.94 (0.87-0.97)]. Adequately reproducible measurements of IMD and JSW were found in 68% and 87% of the images, respectively. Despite the difficulty in achieving consistent radioanatomic alignment between subsequent radiographs in terms of IMD, the protocol produced highly repeatable JSW measurements when these were taken at midpoint and 10mm from the medial extremity of the medial tibial plateau. Therefore, measurements of JSW at these locations can be considered adequate for the assessment of knee OA in ELSA-Brasil MSK. Copyright © 2016. Published by Elsevier Editora Ltda.

  6. Fixed-flexion knee radiography using a new positioning device produced highly repeatable measurements of joint space width: ELSA-Brasil Musculoskeletal Study (ELSA-Brasil MSK

    Directory of Open Access Journals (Sweden)

    Rosa Weiss Telles

    Full Text Available Abstract Objective: To describe the performance of a non-fluoroscopic fixed-flexion PA radiographic protocol with a new positioning device, developed for the assessment of knee osteoarthritis (OA) in Brazilian Longitudinal Study of Adult Health Musculoskeletal Study (ELSA-Brasil MSK). Material and methods: A test–retest design including 19 adults (38 knee images) was conducted. Feasibility of the radiographic protocol was assessed by image quality parameters and presence of radioanatomic alignment according to intermargin distance (IMD) values. Repeatability was assessed for IMD and joint space width (JSW) measured at three different locations. Results: Approximately 90% of knee images presented excellent quality. Frequencies of nearly perfect radioanatomic alignment (IMD ≤1 mm) ranged from 29% to 50%, and satisfactory alignment was found in up to 71% and 76% of the images (IMD ≤1.5 mm and ≤1.7 mm, respectively). Repeatability analyses yielded the following results: IMD [SD of mean difference = 1.08; coefficient of variation (%CV) = 54.68%; intraclass correlation coefficient (ICC) (95%CI) = 0.59 (0.34–0.77)]; JSW [SD of mean difference = 0.34–0.61; %CV = 4.48%–9.80%; ICC (95%CI) = 0.74 (0.55–0.85)–0.94 (0.87–0.97)]. Adequately reproducible measurements of IMD and JSW were found in 68% and 87% of the images, respectively. Conclusions: Despite the difficulty in achieving consistent radioanatomic alignment between subsequent radiographs in terms of IMD, the protocol produced highly repeatable JSW measurements when these were taken at midpoint and 10 mm from the medial extremity of the medial tibial plateau. Therefore, measurements of JSW at these locations can be considered adequate for the assessment of knee OA in ELSA-Brasil MSK.

  7. Complete clinical responses to cancer therapy caused by multiple divergent approaches: a repeating theme lost in translation

    International Nuclear Information System (INIS)

    Coventry, Brendon J; Ashdown, Martin L

    2012-01-01

    Over 50 years of cancer therapy history reveals complete clinical responses (CRs) from remarkably divergent forms of therapies (eg, chemotherapy, radiotherapy, surgery, vaccines, autologous cell transfers, cytokines, monoclonal antibodies) for advanced solid malignancies occur with an approximately similar frequency of 5%–10%. This has remained frustratingly almost static. However, CRs usually underpin strong durable 5-year patient survival. How can this apparent paradox be explained? Over some 20 years, realization that (1) chronic inflammation is intricately associated with cancer, and (2) the immune system is delicately balanced between responsiveness and tolerance of cancer, provides a greatly significant insight into ways cancer might be more effectively treated. In this review, divergent aspects from the largely segmented literature and recent conferences are drawn together to provide observations revealing some emerging reasoning, in terms of “final common pathways” of cancer cell damage, immune stimulation, and auto-vaccination events, ultimately leading to cancer cell destruction. Created from this is a unifying overarching concept to explain why multiple approaches to cancer therapy can provide complete responses at almost equivalent rates. This “missing” aspect provides a reasoned explanation for what has, and is being, increasingly reported in the mainstream literature – that inflammatory and immune responses appear intricately associated with, if not causative of, complete responses induced by divergent forms of cancer therapy. Curiously, whether by chemotherapy, radiation, surgery, or other means, therapy-induced cell injury results, leaving inflammation and immune system stimulation as a final common denominator across all of these mechanisms of cancer therapy. This aspect has been somewhat obscured and has been “lost in translation” to date

  8. Complete clinical responses to cancer therapy caused by multiple divergent approaches: a repeating theme lost in translation

    Directory of Open Access Journals (Sweden)

    Coventry BJ

    2012-05-01

    Full Text Available Brendon J Coventry, Martin L Ashdown; Discipline of Surgery, University of Adelaide, Royal Adelaide Hospital and Faculty of Medicine, University of Melbourne, Australia. Abstract: Over 50 years of cancer therapy history reveals complete clinical responses (CRs) from remarkably divergent forms of therapies (eg, chemotherapy, radiotherapy, surgery, vaccines, autologous cell transfers, cytokines, monoclonal antibodies) for advanced solid malignancies occur with an approximately similar frequency of 5%–10%. This has remained frustratingly almost static. However, CRs usually underpin strong durable 5-year patient survival. How can this apparent paradox be explained? Over some 20 years, realization that (1) chronic inflammation is intricately associated with cancer, and (2) the immune system is delicately balanced between responsiveness and tolerance of cancer, provides a greatly significant insight into ways cancer might be more effectively treated. In this review, divergent aspects from the largely segmented literature and recent conferences are drawn together to provide observations revealing some emerging reasoning, in terms of "final common pathways" of cancer cell damage, immune stimulation, and auto-vaccination events, ultimately leading to cancer cell destruction. Created from this is a unifying overarching concept to explain why multiple approaches to cancer therapy can provide complete responses at almost equivalent rates. This "missing" aspect provides a reasoned explanation for what has, and is being, increasingly reported in the mainstream literature – that inflammatory and immune responses appear intricately associated with, if not causative of, complete responses induced by divergent forms of cancer therapy. Curiously, whether by chemotherapy, radiation, surgery, or other means, therapy-induced cell injury results, leaving inflammation and immune system stimulation as a final common denominator across all of these mechanisms of cancer

  9. New approach for measuring the microwave Hall mobility of semiconductors

    International Nuclear Information System (INIS)

    Murthy, D. V. B.; Subramanian, V.; Murthy, V. R. K.

    2006-01-01

    Measurement of Hall mobility in semiconductor samples using the bimodal cavity method offers distinct advantages owing to its noncontact nature and its ability to measure anisotropic mobility. However, the measurement approaches followed until now suffer from high error values, primarily due to the difficulty of evaluating the calibration constant of the whole experimental arrangement. This article brings out a new approach that removes this disadvantage and determines the calibration constant with 1% accuracy. The overall error in the carrier mobility values is within 5%.

  10. MEASURING INFLATION THROUGH STOCHASTIC APPROACH TO INDEX NUMBERS FOR PAKISTAN

    Directory of Open Access Journals (Sweden)

    Zahid Asghar

    2010-09-01

    Full Text Available This study estimates the rate of inflation in Pakistan through the stochastic approach to index numbers, which provides not only a point estimate but also a confidence interval for the rate of inflation. There are two types of approaches to index number theory, namely the functional economic approaches and the stochastic approach. The attraction of the stochastic approach is that it estimates the rate of inflation in a framework in which uncertainty and statistical ideas play a major role in screening index numbers. We have used the extended stochastic approach to index numbers for measuring inflation by allowing for systematic changes in relative prices. We use CPI data covering the period July 2001–March 2008 for Pakistan.
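
    The core of the stochastic approach can be sketched in a few lines: each log price relative is treated as a noisy observation of a common inflation rate, so the sample mean gives a point estimate and its standard error gives a confidence interval. The basket and figures below are invented for illustration, not taken from the study.

```python
import math

# Hypothetical price relatives (p_t / p_0) for a small basket of items;
# data and equal weighting are illustrative, not from the paper.
relatives = [1.12, 1.08, 1.15, 1.05, 1.10, 1.09]

# Stochastic approach: each log price relative estimates the common
# inflation rate pi plus noise; the mean of the logs estimates pi.
logs = [math.log(r) for r in relatives]
n = len(logs)
pi_hat = sum(logs) / n
var = sum((x - pi_hat) ** 2 for x in logs) / (n - 1)
se = math.sqrt(var / n)

# 95% confidence interval using the normal approximation
ci = (pi_hat - 1.96 * se, pi_hat + 1.96 * se)
```

    With these invented relatives the point estimate is roughly 9.3% with an interval of about (6.8%, 11.8%), illustrating how the approach reports uncertainty alongside the estimate.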

  11. The reliability of repeated TMS measures in older adults and in patients with subacute and chronic stroke

    Directory of Open Access Journals (Sweden)

    Heidi M. Schambra

    2015-09-01

    Full Text Available The reliability of transcranial magnetic stimulation (TMS measures in healthy older adults and stroke patients has been insufficiently characterized. We determined whether common TMS measures could reliably evaluate change in individuals and in groups using the smallest detectable change (SDC, or could tell subjects apart using the intraclass correlation coefficient (ICC. We used a single-rater test-retest design in older healthy, subacute stroke, and chronic stroke subjects. At twice daily sessions on two consecutive days, we recorded resting motor threshold, test stimulus intensity, recruitment curves, short-interval intracortical inhibition and facilitation, and long-interval intracortical inhibition. Using variances estimated from a random effects model, we calculated the SDC and ICC for each TMS measure. For all TMS measures in all groups, SDCs for single subjects were large; only with modest group sizes did the SDCs become low. Thus, while these TMS measures cannot be reliably used as a biomarker to detect individual change, they can reliably detect change exceeding measurement noise in moderate-sized groups. For several of the TMS measures, ICCs were universally high, suggesting that they can reliably discriminate between subjects. Though most TMS measures have sufficient reliability in particular contexts, work establishing their validity, responsiveness, and clinical relevance is still needed.
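
    As a rough illustration of the two reliability quantities used above, a one-way random-effects ANOVA on test-retest data yields the ICC, and the within-subject error term yields the SEM and the smallest detectable change. The paired measurements below are invented, not TMS data.

```python
import statistics

# Illustrative test-retest data: each tuple is one subject's two sessions.
data = [(50.1, 51.0), (47.8, 48.5), (55.2, 54.1), (60.0, 61.3), (45.5, 46.2)]

n = len(data)
k = 2  # sessions per subject

grand = sum(x for pair in data for x in pair) / (n * k)
# One-way random-effects ANOVA mean squares
ms_between = k * sum((statistics.mean(p) - grand) ** 2 for p in data) / (n - 1)
ms_within = sum((x - statistics.mean(p)) ** 2 for p in data for x in p) / (n * (k - 1))

# ICC(1,1): proportion of total variance attributable to subjects
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Standard error of measurement and smallest detectable change
sem = ms_within ** 0.5
sdc = 1.96 * (2 ** 0.5) * sem
```

    A high ICC with a large SDC, as in the study's findings, means the measure separates subjects well while still being too noisy to track change in any single individual.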

  12. A new measure-correlate-predict approach for resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Landberg, L [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark); Madsen, H [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    In order to find reasonable candidate sites for wind farms, it is of great importance to be able to calculate the wind resource at potential sites. One way to solve this problem is to measure wind speed and direction at the site and use these measurements to predict the resource. If the measurements at the potential site cover less than, e.g., one year, which will most likely be the case, it is not possible to get a reliable estimate of the long-term resource using this approach. If long-term measurements from, e.g., a nearby meteorological station are available, however, then statistical methods can be used to find a relation between the measurements at the site and at the meteorological station. This relation can then be used to transform the long-term measurements to the potential site, and the resource can be calculated using the transformed measurements. Here, a varying-coefficient model, estimated using local regression, is applied in order to establish a relation between the measurements. The approach is evaluated using measurements from two sites located approximately two kilometres apart, and the results show that the resource in this case can be predicted accurately, although this approach has serious shortcomings. (au)
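
    The basic measure-correlate-predict idea (before the varying-coefficient refinement used in the paper) reduces to regressing concurrent site measurements on reference-station measurements and applying the fitted relation to the long-term reference record. The wind speeds below are synthetic.

```python
# Minimal measure-correlate-predict sketch with a plain linear relation.
site = [5.1, 6.3, 4.8, 7.2, 5.9, 6.8]  # short-term site wind speeds (m/s)
ref = [4.9, 6.0, 4.5, 7.0, 5.6, 6.5]   # concurrent reference-station speeds

# Ordinary least-squares fit: site = alpha + beta * ref
n = len(site)
mx = sum(ref) / n
my = sum(site) / n
beta = sum((x - mx) * (y - my) for x, y in zip(ref, site)) / \
    sum((x - mx) ** 2 for x in ref)
alpha = my - beta * mx

# Transform the long-term reference record to the candidate site
long_term_ref = [5.5, 6.1, 4.2, 7.4, 5.0]
predicted_site = [alpha + beta * x for x in long_term_ref]
```

    The varying-coefficient model in the paper lets alpha and beta depend on, e.g., wind direction, estimated by local regression, rather than being global constants as in this sketch.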

  13. IMPLANTABLE RESONATORS – A TECHNIQUE FOR REPEATED MEASUREMENT OF OXYGEN AT MULTIPLE DEEP SITES WITH IN VIVO EPR

    Science.gov (United States)

    Li, Hongbin; Hou, Huagang; Sucheta, Artur; Williams, Benjamin B.; Lariviere, Jean P.; Khan, Nadeem; Lesniewski, Piotr N.; Swartz, Harold M.

    2013-01-01

    EPR oximetry using implantable resonators allows measurements at much deeper sites than are possible with surface resonators (> 80 mm vs. 10 mm) and has greater sensitivity at any depth. We report here the development of an improvement of the technique that now enables us to obtain the information from multiple sites and at a variety of depths. The measurements from the various sites are resolved using a simple magnetic field gradient. In the rat brain, implanted multi-probe resonators measured pO2 at several sites simultaneously for over 6 months, recording under normoxic, hypoxic and hyperoxic conditions. This technique also facilitates measurements in moving parts of the animal such as the heart, because the orientation of the paramagnetic material relative to the sensitive small loop is not altered by the motion. The measured response is very fast, enabling real-time measurements of physiological and pathological changes such as experimental cardiac ischemia in the mouse heart. The technique also is quite useful for following changes in tumor pO2, including applications with simultaneous measurements in tumors and adjacent normal tissues. PMID:20204802

  14. Repeated measurements of NT-pro-B-type natriuretic peptide, troponin T or C-reactive protein do not predict future allograft rejection in heart transplant recipients.

    Science.gov (United States)

    Battes, Linda C; Caliskan, Kadir; Rizopoulos, Dimitris; Constantinescu, Alina A; Robertus, Jan L; Akkerhuis, Martijn; Manintveld, Olivier C; Boersma, Eric; Kardys, Isabella

    2015-03-01

    Studies on the prognostic value of serial biomarker assays for future occurrence of allograft rejection (AR) are scarce. We examined whether repeated measurements of NT-pro-B-type natriuretic peptide (NT-proBNP), troponin T (TropT) and C-reactive protein (CRP) predict AR. From 2005 to 2010, 77 consecutive heart transplantation (HTx) recipients were included. The NT-proBNP, TropT, and CRP were measured at 16 ± 4 (mean ± standard deviation) consecutive routine endomyocardial biopsy surveillance visits during the first year of follow-up. Allograft rejection was defined as International Society for Heart and Lung Transplantation (ISHLT) grade 2R or higher at endomyocardial biopsy. Joint modeling was used to assess the association between repeated biomarker measurements and occurrence of future AR. Joint modeling accounts for dependence among repeated observations in individual patients. The mean age of the patients at HTx was 49 ± 9.2 years, and 68% were men. During the first year of follow-up, 1,136 biopsies and concurrent blood samples were obtained, and 56 patients (73%) experienced at least one episode of AR. All biomarkers were elevated directly after HTx and achieved steady-state after ∼ 12 weeks, both in patients with or without AR. No associations were present between the repeated measurements of NT-proBNP, TropT, or CRP and AR both early (weeks 0-12) and late (weeks 13-52) in the course after HTx (hazard ratios for weeks 13-52: 0.96 (95% confidence interval, 0.55-1.68), 0.67 (0.27-1.69), and 1.44 (0.90-2.30), respectively, per ln[unit]). Combining the three biomarkers in one model also rendered null results. The temporal evolution of NT-proBNP, TropT, and CRP before AR did not predict occurrence of acute AR both in the early and late course of the first year after HTx.

  15. A probit/log-skew-normal mixture model for repeated measures data with excess zeros, with application to a cohort study of paediatric respiratory symptoms

    Directory of Open Access Journals (Sweden)

    Johnston Neil W

    2010-06-01

    Full Text Available Abstract Background A zero-inflated continuous outcome is characterized by occurrence of "excess" zeros, more than a single distribution can explain, with the positive observations forming a skewed distribution. Mixture models are employed for regression analysis of zero-inflated data. Moreover, for repeated-measures zero-inflated data the clustering structure should also be modeled for an adequate analysis. Methods The Diary of Asthma and Viral Infections Study (DAVIS) was a one-year (2004) cohort study conducted at McMaster University to monitor viral infection and respiratory symptoms in children aged 5-11 years with and without asthma. Respiratory symptoms were recorded daily using either an Internet or paper-based diary. Changes in symptoms were assessed by study staff and led to collection of nasal fluid specimens for virological testing. The study objectives included investigating the response of respiratory symptoms to respiratory viral infection in children with and without asthma over a one-year period. Due to sparse data, daily respiratory symptom scores were aggregated into weekly average scores. More than 70% of the weekly average scores were zero, with the positive scores forming a skewed distribution. We propose a random-effects probit/log-skew-normal mixture model to analyze the DAVIS data. The model parameters were estimated using a maximum marginal likelihood approach. A simulation study was conducted to assess the performance of the proposed mixture model if the underlying distribution of the positive response is different from log-skew-normal. Results Viral infection status was highly significant in both the probit and log-skew-normal model components. The probability of being symptom free was much lower for the week a child was viral positive relative to the week she/he was viral negative. The severity of the symptoms was also greater for the week a child was viral positive. The probability of being symptom free was
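
    The two-part structure of such a mixture model can be sketched very simply: one component models the probability of a zero week, the other models the positive, skewed scores on the log scale. A plain log-normal stands in here for the paper's log-skew-normal, the model is fitted without random effects, and the weekly scores are invented.

```python
import math
import statistics

# Invented zero-inflated weekly symptom scores (most weeks symptom free).
scores = [0, 0, 0, 0.4, 0, 1.2, 0, 0, 2.5, 0, 0.8, 0, 0, 3.1, 0]

# Part 1: probability of a zero (in the paper, a probit regression).
p_zero = sum(1 for s in scores if s == 0) / len(scores)

# Part 2: positive scores modeled on the log scale (here a log-normal,
# standing in for the log-skew-normal used in the paper).
positives = [math.log(s) for s in scores if s > 0]
mu = statistics.mean(positives)
sigma = statistics.stdev(positives)
```

    In the full model both parts would include covariates (e.g., viral infection status) and subject-level random effects to capture the repeated-measures clustering.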

  16. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    Science.gov (United States)

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum likelihood, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we can easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data, our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  17. Discontinuous Patterns of Cigarette Smoking From Ages 18 to 50 in the United States: A Repeated-Measures Latent Class Analysis.

    Science.gov (United States)

    Terry-McElrath, Yvonne M; O'Malley, Patrick M; Johnston, Lloyd D

    2017-12-13

    Effective cigarette smoking prevention and intervention programming is enhanced by accurate understanding of developmental smoking pathways across the life span. This study investigated within-person patterns of cigarette smoking from ages 18 to 50 among a US national sample of high school graduates, focusing on identifying ages of particular importance for smoking involvement change. Using data from approximately 15,000 individuals participating in the longitudinal Monitoring the Future study, trichotomous measures of past 30-day smoking obtained at 11 time points were modeled using repeated-measures latent class analyses. Sex differences in latent class structure and membership were examined. Twelve latent classes were identified: three characterized by consistent smoking patterns across age (no smoking; smoking developing effective smoking prevention and intervention programming. This study examined cigarette smoking among a national longitudinal US sample of high school graduates from ages 18 to 50 and identified distinct latent classes characterized by patterns of movement between no cigarette use, light-to-moderate smoking, and the conventional definition of heavy smoking at 11 time points via repeated-measures latent class analysis. Membership probabilities for each smoking class were estimated, and critical ages of susceptibility to change in smoking behaviors were identified. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Analysis of Longitudinal Studies With Repeated Outcome Measures: Adjusting for Time-Dependent Confounding Using Conventional Methods.

    Science.gov (United States)

    Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn

    2018-05-01

    Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
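
    A minimal sketch of the data layout behind a sequential conditional mean model: each outcome is paired with the current exposure, the lagged exposure, the lagged outcome, and current time-varying covariates, after which a GEE routine can be applied to the stacked rows. The function and variable names here are illustrative, not from the paper.

```python
# Build (response, predictors) rows with one-step lags, the structure an
# SCMM conditions on: prior exposure, prior outcome, current covariates.
def scmm_rows(exposure, outcome, covariate):
    """Return one row per follow-up time t >= 1."""
    rows = []
    for t in range(1, len(outcome)):
        predictors = {
            "x_t": exposure[t],      # current exposure
            "x_lag": exposure[t - 1],  # prior exposure
            "y_lag": outcome[t - 1],   # prior outcome
            "l_t": covariate[t],       # current time-varying covariate
        }
        rows.append((outcome[t], predictors))
    return rows

rows = scmm_rows(exposure=[0, 1, 1, 0],
                 outcome=[2.0, 2.5, 3.1, 2.8],
                 covariate=[1, 0, 1, 1])
```

    Fitting these rows with generalized estimating equations (and optionally a propensity score for x_t as an extra column) gives the total-effect estimate the paper describes.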

  19. Pragmatic Approach for Multistage Phasor Measurement Unit Placement

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thoegersen, Poul

    2016-01-01

    Effective phasor measurement unit (PMU) placement is a key to the implementation of efficient and economically feasible wide area measurement systems in modern power systems. This paper proposes a pragmatic approach for cost-effective stage-wise deployment of PMUs while considering realistic constraints. Inspired by real-world experience, the proposed approach optimally allocates PMU placement in a stage-wise manner. The proposed approach also considers large-scale wind integration for effective grid state monitoring of wind generation dynamics. The proposed approach is implemented on the Danish power system projected for the year 2040. Furthermore, practical experience learnt from an optimal PMU placement project in the Danish power system is presented, which is expected to provide insight into the practical challenges at ground level that could be considered by PMU

  20. Productivity Measurement in Manufacturing and the Expenditure Approach

    DEFF Research Database (Denmark)

    Schjerning, Bertel; Sørensen, Anders

    2008-01-01

    This paper studies conversion factors based on the expenditure approach and evaluates their appropriateness for international comparisons of output levels in manufacturing. We apply a consistency check based on the insight that relative productivity levels should be invariant to the choice of base. The conclusion is insensitive to the applied method for developing conversion factors. The implication is that we cannot measure relative productivity levels in manufacturing across countries using the expenditure approach.

  1. Repeatability of Retinal Sensitivity Measurements Using a Medmont Dark-Adapted Chromatic Perimeter in Healthy and Age-Related Macular Degeneration Cases.

    Science.gov (United States)

    Tan, Rose S; Guymer, Robyn H; Luu, Chi D

    2018-05-01

    To determine the intrasession and intersession test-retest repeatability of retinal sensitivity measurements using a dark-adapted chromatic perimeter (DACP). For intrasession testing, retinal sensitivity within the central 24° for the 505-nm stimulus was measured after 20, 30, and 40 minutes of dark adaptation (DA) and for the 625-nm stimulus was measured after the first and second 505-nm tests. For intersession testing, retinal sensitivity for both stimuli was measured after 30 minutes of DA at baseline and 1 month. The point-wise sensitivity (PWS) difference and coefficient of repeatability (CoR) of each stimulus and group were determined. For intrasession testing, 10 age-related macular degeneration (AMD) and eight control subjects were recruited. The overall CoR for the 505-nm stimulus was 8.4 dB for control subjects and 9.1 dB for AMD cases, and for the 625-nm stimulus was 6.7 dB for control subjects and 9.5 dB for AMD cases. For intersession testing, seven AMD cases and 13 control subjects returned an overall CoR for the 505-nm stimulus of 8.2 dB for the control and 11.7 dB for the AMD group. For the 625-nm stimulus the CoR was 6.2 dB for the control group and 8.4 dB for the AMD group. Approximately 80% of all test points had a PWS difference of ±5 dB between the two intrasession or intersession measurements for both stimuli. The CoR for the DACP is larger than that reported for scotopic perimeters; however, the majority of test points had a PWS difference of ±5 dB between tests. The DACP offers an opportunity to measure static and dynamic rod function at multiple locations with an acceptable reproducibility level.
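
    The coefficient of repeatability reported above is conventionally computed Bland-Altman style as 1.96 times the standard deviation of the within-subject test-retest differences. The paired sensitivities below are invented, not DACP data.

```python
import statistics

# Invented paired test-retest retinal sensitivities (dB) for one stimulus.
test1 = [28.0, 30.5, 25.0, 27.5, 31.0]
test2 = [30.0, 29.0, 27.5, 26.0, 33.5]

# Coefficient of repeatability: 95% of test-retest differences are
# expected to fall within +/- cor of each other.
diffs = [a - b for a, b in zip(test1, test2)]
sd_diff = statistics.stdev(diffs)
cor = 1.96 * sd_diff
```

    A CoR of 8-12 dB, as found for the DACP, therefore means single-point changes smaller than that cannot be distinguished from measurement noise.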

  2. Repeatability and reproducibility of in situ measurements of sound reflection and airborne sound insulation index of noise barriers

    NARCIS (Netherlands)

    Garai, M.; Schoen, E.; Behler, G.; Bragado, B.; Chudalla, M.; Conter, M.; Defrance, J.; Demizieux, P.; Glorieux, C.; Guidorzi, P.

    2014-01-01

    In Europe, in situ measurements of sound reflection and airborne sound insulation of noise barriers are usually done according to CEN/TS 1793-5. This method has been improved substantially during the EU funded QUIESST collaborative project. Within the same framework, an inter-laboratory test has

  3. In-field radon measurement in water: a novel approach

    International Nuclear Information System (INIS)

    Talha, S.A.; Meijer, R.J. de; Lindsay, R.; Newman, R.T.; Maleka, P.P.; Hlatshwayo, I.N.

    2010-01-01

    This paper presents a novel approach for measuring radon in water in the field by inserting a MEDUSA gamma-ray detector into a 210 L or 1000 L container. The experimental measurements include investigating the effect of ambient background gamma-rays on in-field radon measurement, calibrating the detector efficiency using several amounts of KCl salt dissolved in tap water, and measuring radon in borehole water. The results showed fairly good agreement between the field measurements and laboratory measurements of radon in water, the latter based on measurements with Marinelli beakers on a HPGe detector. The MDA of the method is 0.5 Bq L⁻¹ radon in water. -- Research highlights: → Radon in water, large-volume container, in-field measurements, MEDUSA gamma-ray detection system.

  4. A PRACTICAL APPROACH TO THE GROUND OSCILLATION VELOCITY MEASUREMENT METHOD

    Directory of Open Access Journals (Sweden)

    Siniša Stanković

    2017-01-01

    Full Text Available The use of an explosive’s energy during blasting brings undesired effects on the environment. The seismic influence of a blast, as a major undesired effect, is assessed by many national standards, recommendations and calculations, where the main parameter is the ground oscillation velocity at the measurement location in the field. There are a few approaches and methods for calculating the expected ground oscillation velocity from the charge weight per delay and the distance from the blast to the point of interest. These methods and formulas do not provide satisfactory results, so the values measured at various distances from the blast field differ more or less from the values given by prior calculation. Since blasting works are executed in diverse geological conditions, the aim of this research is the development of a practical and reliable approach which gives a different model for each construction site where blasting works have been or will be executed. The approach is based on a greater number of measuring points placed in a line from the blast field at predetermined distances. This new approach has been compared with other generally used methods and formulas using measurements taken during the research along with measurements from several previously executed projects. The results confirmed that the suggested model gives more accurate values.
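
    A common form of the calculations the abstract refers to is the scaled-distance attenuation law v = K * SD**(-n), with SD = R / sqrt(Q) (R the distance in m, Q the charge per delay in kg); fitting log v against log SD to a site's own measurements gives a site-specific K and n. The constants and measurements below are invented for illustration.

```python
import math

# Invented blast monitoring data for one site.
R = [50, 100, 150, 200, 300]    # distance from blast (m)
Q = 25.0                        # charge per delay (kg)
v = [12.0, 4.2, 2.3, 1.5, 0.8]  # measured peak particle velocity (mm/s)

# Scaled distance and log-log least-squares fit of v = K * SD**(-n)
sd = [r / math.sqrt(Q) for r in R]
xs = [math.log(s) for s in sd]
ys = [math.log(x) for x in v]
n_pts = len(xs)
mx = sum(xs) / n_pts
my = sum(ys) / n_pts
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
n_exp = -slope                  # site attenuation exponent
K = math.exp(my - slope * mx)   # site constant
```

    Refitting K and n per construction site, from measuring points in a line at predetermined distances, is essentially the site-specific modelling the paper argues for.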

  5. A Novel Measurement Matrix Optimization Approach for Hyperspectral Unmixing

    Directory of Open Access Journals (Sweden)

    Su Xu

    2017-01-01

    Full Text Available Each pixel in the hyperspectral unmixing process is modeled as a linear combination of endmembers, i.e., of a number of pure spectral signatures that are known in advance. However, the computational complexity and limited sparsity of Gaussian random measurement matrices affect the efficiency and accuracy of unmixing. This paper proposes a novel approach for the optimization of the measurement matrix in compressive sensing (CS) theory for hyperspectral unmixing. Firstly, a new Toeplitz-structured chaotic measurement matrix (TSCMM) is formed from pseudo-random chaotic elements, which can be implemented by simple hardware; secondly, rank-revealing QR factorization with eigenvalue decomposition is presented to speed up the measurement time; finally, an orthogonal gradient descent method for measurement matrix optimization is used to achieve optimal incoherence. Experimental results demonstrate that the proposed approach can lead to better CS reconstruction performance with low extra computational cost in hyperspectral unmixing.
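
    The first ingredient above can be sketched directly: a logistic map supplies the pseudo-random chaotic entries, and the Toeplitz layout means each entry depends only on the row-column difference, so one short generating sequence defines the whole matrix. Sizes and map parameters here are illustrative, not the paper's.

```python
# Toeplitz-structured chaotic measurement matrix sketch.
def logistic_sequence(x0, length, mu=4.0):
    """Chaotic logistic map x <- mu*x*(1-x), rescaled to (-1, 1)."""
    seq, x = [], x0
    for _ in range(length):
        x = mu * x * (1 - x)
        seq.append(2 * x - 1)
    return seq

def toeplitz_chaotic_matrix(m, n, x0=0.3):
    """m x n Toeplitz matrix: entry (i, j) depends only on i - j."""
    gen = logistic_sequence(x0, m + n - 1)
    return [[gen[i - j + n - 1] for j in range(n)] for i in range(m)]

A = toeplitz_chaotic_matrix(4, 8)
```

    Because only m + n - 1 chaotic values are needed (versus m * n independent Gaussians), the matrix is cheap to generate and store, which is the hardware advantage the abstract mentions.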

  6. Dynamic Pain Phenotypes are Associated with Spinal Cord Stimulation-Induced Reduction in Pain: A Repeated Measures Observational Pilot Study.

    Science.gov (United States)

    Campbell, Claudia M; Buenaver, Luis F; Raja, Srinivasa N; Kiley, Kasey B; Swedberg, Lauren J; Wacnik, Paul W; Cohen, Steven P; Erdek, Michael A; Williams, Kayode A; Christo, Paul J

    2015-07-01

    Spinal cord stimulation (SCS) has become a widely used treatment option for a variety of pain conditions. Substantial variability exists in the degree of benefit obtained from SCS and patient selection is a topic of expanding interest and importance. However, few studies have examined the potential benefits of dynamic quantitative sensory testing (QST) to develop objective measures of SCS outcomes or as a predictive tool to help patient selection. Psychological characteristics have been shown to play an important role in shaping individual differences in the pain experience and may aid in predicting responses to SCS. Static laboratory pain-induction measures have also been examined in their capacity for predicting SCS outcomes. The current study evaluated clinical, psychological and laboratory pain measures at baseline, during trial SCS lead placement, as well as 1 month and 3 months following permanent SCS implantation in chronic pain patients who received SCS treatment. Several QST measures were conducted, with specific focus on examination of dynamic models (central sensitization and conditioned pain modulation [CPM]) and their association with pain outcomes 3 months post SCS implantation. Results suggest few changes in QST over time. However, central sensitization and CPM at baseline were significantly associated with clinical pain at 3 months following SCS implantation, controlling for psycho/behavioral factors and pain at baseline. Specifically, enhanced central sensitization and reduced CPM were associated with less self-reported pain 3 months following SCS implantation. These findings suggest a potentially important role for dynamic pain assessment in individuals undergoing SCS, and hint at potential mechanisms through which SCS may impart its benefit. Wiley Periodicals, Inc.

  7. A measure theoretic approach to traffic flow optimization on networks

    OpenAIRE

    Cacace, Simone; Camilli, Fabio; De Maio, Raul; Tosin, Andrea

    2018-01-01

    We consider a class of optimal control problems for measure-valued nonlinear transport equations describing traffic flow problems on networks. The objective is to minimise/maximise macroscopic quantities, such as traffic volume or average speed, controlling few agents, for example smart traffic lights and automated cars. The measure-theoretic approach allows us to study in the same setting local and nonlocal driver interactions and to consider the control variables as additional measures interacting ...

  8. On the equivalence of generalized least-squares approaches to the evaluation of measurement comparisons

    Science.gov (United States)

    Koo, A.; Clare, J. F.

    2012-06-01

    Analysis of CIPM international comparisons is increasingly being carried out using a model-based approach that leads naturally to a generalized least-squares (GLS) solution. While this method offers the advantages of being easier to audit and having general applicability to any form of comparison protocol, there is a lack of consensus over aspects of its implementation. Two significant results are presented that show the equivalence of three differing approaches discussed by or applied in comparisons run by Consultative Committees of the CIPM. Both results depend on a mathematical condition equivalent to the requirement that any two artefacts in the comparison are linked through a sequence of measurements of overlapping pairs of artefacts. The first result is that a GLS estimator excluding all sources of error common to all measurements of a participant is equal to the GLS estimator incorporating all sources of error, including those associated with any bias in the standards or procedures of the measuring laboratory. The second result identifies the component of uncertainty in the estimate of bias that arises from possible systematic effects in the participants' measurement standards and procedures. The expression so obtained is a generalization of an expression previously published for a one-artefact comparison with no inter-participant correlations, to one for a comparison comprising any number of repeat measurements of multiple artefacts and allowing for inter-laboratory correlations.
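
    The model-based GLS solution referred to above has a compact closed form: for a linear model y = Xb + e with error covariance V, the estimate is b = (X'V⁻¹X)⁻¹X'V⁻¹y. The toy design below (two artefact values, four measurements with stated variances) is illustrative only, not a CIPM comparison protocol.

```python
import numpy as np

# Toy comparison: two unknowns measured by overlapping observations.
X = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # design matrix
y = np.array([10.1, 9.9, 5.2, 15.3])  # observed values
V = np.diag([0.04, 0.04, 0.09, 0.09])  # measurement variances

# Generalized least-squares estimate and its covariance
Vi = np.linalg.inv(V)
cov_b = np.linalg.inv(X.T @ Vi @ X)
b = cov_b @ X.T @ Vi @ y
```

    The linking condition discussed in the paper corresponds to X'V⁻¹X being invertible: every unknown must be reachable through a chain of overlapping measurements, otherwise the GLS system is singular.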

  9. A 3-Dimensional Approach for Analysis in Orthognathic Surgery-Using Free Software for Voxel-Based Alignment and Semiautomatic Measurement

    DEFF Research Database (Denmark)

    Stokbro, Kasper; Thygesen, Torben

    2018-01-01

    PURPOSE: In orthognathic surgery, the repeatability of 3-dimensional (3D) measurements is limited by the need for manual reidentification of reference points, which can incorporate errors greater than 1 mm for every 4 repeated measurements. This report describes a semiautomatic approach to decrease the manual reidentification error. This study evaluated the repeatability of surgical outcome measurements using the semiautomatic approach. Furthermore, a step-by-step guide is provided to enable researchers and clinicians to perform the 3D analysis by themselves. MATERIALS AND METHODS: Evaluating surgical ..., the reference points should be identical if the pre- and postoperative scans are aligned at the maxilla. Therefore, the authors propose the insertion of reference points on the preoperative scan and then repositioning a copy of the preoperative reference points relative to the postoperative scan. To align

  10. Repeated measures of body mass index and C-reactive protein in relation to all-cause mortality and cardiovascular disease

    DEFF Research Database (Denmark)

    O'Doherty, Mark G; Jørgensen, Torben; Borglykke, Anders

    2014-01-01

    Obesity has been linked with elevated levels of C-reactive protein (CRP), and both have been associated with increased risk of mortality and cardiovascular disease (CVD). Previous studies have used a single 'baseline' measurement, and such analyses cannot account for possible changes in these, which may lead to a biased estimation of risk. Using four cohorts from CHANCES which had repeated measures in participants 50 years and older, multivariate time-dependent Cox proportional hazards was used to estimate hazard ratios (HR) and 95% confidence intervals (CI) to examine the relationship between ..., they may participate in distinct/independent pathways. ... Accounting for independent changes in risk factors over time may be crucial for unveiling their effects on mortality and disease morbidity.

  11. Two Approaches to Measuring Task Interdependence in Elementary Schools.

    Science.gov (United States)

    Charters, W. W., Jr.

    This report compares two approaches to measuring task interdependence, a theoretically fruitful concept for analyzing an organization's technical system. Task interdependence exists among operating personnel in the degree that task performance of one operative constrains, augments, or otherwise poses contingencies for the performance of another.…

  12. Tolerability of the Oscar 2 ambulatory blood pressure monitor among research participants: a cross-sectional repeated measures study

    Directory of Open Access Journals (Sweden)

    Hinderliter Alan L

    2011-04-01

    Full Text Available Abstract Background Ambulatory blood pressure monitoring (ABPM) is increasingly used to measure blood pressure (BP) in research studies. We examined ease of use, comfort, degree of disturbance, reported adverse effects, factors associated with poor tolerability, and association of poor tolerability with data acquisition of 24-hour ABPM using the Oscar 2 monitor in the research setting. Methods Sixty adults participating in a research study of people with a history of borderline clinic BP reported on their experience with ABPM on two occasions one week apart. Poor tolerability was operationalized as an overall score at or above the 75th percentile using responses to questions adapted from a previously developed questionnaire. In addition to descriptive statistics (means for responses to Likert-scaled "0 to 10" questions and proportions for Yes/No questions), we examined reproducibility of poor tolerability as well as associations with poor tolerability and whether poor tolerability was associated with removal of the monitor or an inadequate number of BP measurements. Results The mean ambulatory BP of participants by an initial ABPM session was 148/87 mm Hg. After wearing the monitor the first time, the degree to which the monitor was felt to be cumbersome ranged from a mean of 3.0 to 3.8, depending on whether at work, home, driving, or other times. The most bother was interference with normal sleeping pattern (mean 4.2). Wearers found the monitor straightforward to use (mean 7.5). Nearly 67% reported that the monitor woke them after falling asleep, and 8.6% removed it at some point during the night. Reported adverse effects included pain (32%), skin irritation (37%), and bruising (7%). Those categorized as having poor tolerability (kappa = 0.5 between sessions, p = 0.0003) were more likely to report being in fair/poor health (75% vs 22%, p = 0.01) and to have elevated 24-hour BP average (systolic: 28% vs 17%, p = 0.56; diastolic: 30% vs 17%, p = 0.37). They were

  13. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Campos, E [Argonne National Lab. (ANL), Argonne, IL (United States); Sisterson, Douglas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-12-01

    The approach involves three steps: 1) identifying how the uncertainty of individual ARM measurements is currently expressed, 2) identifying a consistent approach to measurement uncertainty, and then 3) reclassifying ARM instrument measurement uncertainties in a common framework.

  14. A statistical approach designed for finding mathematically defined repeats in shotgun data and determining the length distribution of clone-inserts

    DEFF Research Database (Denmark)

    Zhong, Lan; Zhang, Kunlin; Huang, Xiangang

    2003-01-01

    that repeats of different copy number have different probabilities of appearance in shotgun data, so based on this principle, we constructed a statistical model and inferred criteria for mathematically defined repeats (MDRs) at different shotgun coverages. According to these criteria, we developed software...... MDRmasker to identify and mask MDRs in shotgun data. With repeats masked prior to assembly, the speed of assembly was increased with lower error probability. In addition, because clone-insert size affects the accuracy of repeat assembly and scaffold construction, we also designed length distribution of clone......

  15. Test–retest repeatability of quantitative cardiac 11C-meta-hydroxyephedrine measurements in rats by small animal positron emission tomography

    International Nuclear Information System (INIS)

    Thackeray, James T.; Renaud, Jennifer M.; Kordos, Myra; Klein, Ran; Kemp, Robert A. de; Beanlands, Rob S.B.; DaSilva, Jean N.

    2013-01-01

    Introduction: The norepinephrine analogue 11C-meta-hydroxyephedrine (HED) has been used to interrogate sympathetic neuronal reuptake in cardiovascular disease. Application for longitudinal studies in small animal models of disease necessitates an understanding of test–retest variability. This study evaluated the repeatability of multiple quantitative cardiac measurements of HED retention and washout and the pharmacological response to reuptake blockade and enhanced norepinephrine levels. Methods: Small animal PET images were acquired over 60 min following HED administration to healthy male Sprague Dawley rats. Paired test and retest scans were undertaken in individual animals over 7 days. Additional HED scans were conducted following administration of the norepinephrine reuptake inhibitor desipramine or continuous infusion of exogenous norepinephrine. HED retention was quantified by retention index, standardized uptake value (SUV), monoexponential and one-compartment washout. Plasma and cardiac norepinephrine were measured by high performance liquid chromatography. Results: Test–retest variability was lower for retention index (15% ± 12%) and SUV (19% ± 15%) than for monoexponential washout rates (21% ± 13%). Desipramine pretreatment reduced myocardial HED retention index by 69% and SUV by 85%. Chase treatment with desipramine increased monoexponential HED washout by 197% compared to untreated controls. Norepinephrine infusion dose-dependently reduced HED accumulation, reflected by both retention index and SUV, with a corresponding increase in monoexponential washout. Plasma and cardiac norepinephrine levels correlated with HED quantitative measurements. Conclusion: The repeatability of HED retention index, SUV, and monoexponential washout supports its suitability for longitudinal PET studies in rats. Uptake and washout of HED are sensitive to acute increases in norepinephrine concentration.

  16. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp Homan

    2013-06-01

    Full Text Available Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left STG and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  17. Alternative Measuring Approaches in Gamma Scanning on Spent Nuclear Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Sihm Kvenangen, Karen

    2007-06-15

    In the future, the demand for energy is predicted to grow, and more countries plan to utilize nuclear energy as their source of electric energy. This gives rise to many important issues connected to nuclear energy, such as finding methods that can verify that spent nuclear fuel has been handled safely and used in ordinary power-producing cycles as stated by the operators. Gamma-ray spectroscopy is one method used for identification and verification of spent nuclear fuel. In the specific gamma-ray spectroscopy method called gamma scanning, the gamma radiation from the fission products Cs-137, Cs-134, and Eu-154 is measured in a spent fuel assembly. From the results, conclusions can be drawn about the fuel's characteristics. This degree project examines the possibilities of using alternative measuring approaches with the gamma scanning method. The focus is on examining how to increase the quality of the measured data. How to decrease the measuring time compared with the present measuring strategy has also been investigated. The main part of the study comprises computer simulations of gamma scanning measurements. The simulations have been validated with actual measurements on spent nuclear fuel at the central interim storage facility, Clab. The results show that, concerning the quality of the measured data, the conventional strategy is preferable, but with other starting positions and with more optimized equipment. When focusing on the time aspect, the helical measuring strategy can be an option, but this needs further investigation.

  18. Alternative Measuring Approaches in Gamma Scanning on Spent Nuclear Fuel

    International Nuclear Information System (INIS)

    Sihm Kvenangen, Karen

    2007-06-01

    In the future, the demand for energy is predicted to grow, and more countries plan to utilize nuclear energy as their source of electric energy. This gives rise to many important issues connected to nuclear energy, such as finding methods that can verify that spent nuclear fuel has been handled safely and used in ordinary power-producing cycles as stated by the operators. Gamma-ray spectroscopy is one method used for identification and verification of spent nuclear fuel. In the specific gamma-ray spectroscopy method called gamma scanning, the gamma radiation from the fission products Cs-137, Cs-134, and Eu-154 is measured in a spent fuel assembly. From the results, conclusions can be drawn about the fuel's characteristics. This degree project examines the possibilities of using alternative measuring approaches with the gamma scanning method. The focus is on examining how to increase the quality of the measured data. How to decrease the measuring time compared with the present measuring strategy has also been investigated. The main part of the study comprises computer simulations of gamma scanning measurements. The simulations have been validated with actual measurements on spent nuclear fuel at the central interim storage facility, Clab. The results show that, concerning the quality of the measured data, the conventional strategy is preferable, but with other starting positions and with more optimized equipment. When focusing on the time aspect, the helical measuring strategy can be an option, but this needs further investigation.

  19. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Smedslund Geir

    2013-02-01

    Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS; pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, 96 to 71 (73) for fatigue, 57 to 51 (48) for disease activity, 59 to 44 (45) for self-care, and 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
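The sample-size reduction reported in this record follows directly from the variance of a mean of correlated repeated measures. Below is a minimal sketch, not the authors' computation: it assumes compound symmetry (equal correlation rho between repeats), and the effect size, SD, and rho = 0.7 are illustrative values chosen only to show the diminishing-returns pattern.

```python
from math import ceil, sqrt, erf

def z(p):
    """Standard normal quantile via bisection on Phi (stdlib-only sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + erf(mid / sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_group(delta, sigma, k, rho, alpha=0.05, power=0.80):
    """Per-group n for a two-arm trial whose outcome is the mean of k
    repeated measures per subject, assuming compound symmetry.

    delta: group difference to detect; sigma: per-measure SD;
    rho: within-subject correlation between repeats (assumed).
    """
    # variance of the subject-level mean of k equally correlated measures
    var_mean = sigma**2 * (1 + (k - 1) * rho) / k
    return ceil(2 * (z(1 - alpha / 2) + z(power))**2 * var_mean / delta**2)

# Extra measures shrink n, but with diminishing returns:
sizes = [n_per_group(delta=0.5, sigma=1.0, k=k, rho=0.7) for k in (1, 2, 3, 5)]
```

With these assumed inputs the per-group requirement falls steeply from one to two measures and only modestly thereafter, the same qualitative pattern the trial reports.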

  20. Experimental evaluation of rigor mortis. VIII. Estimation of time since death by repeated measurements of the intensity of rigor mortis on rats.

    Science.gov (United States)

    Krompecher, T

    1994-10-21

    The development of the intensity of rigor mortis was monitored in nine groups of rats. The measurements were initiated after 2, 4, 5, 6, 8, 12, 15, 24, and 48 h post mortem (p.m.) and lasted 5-9 h, which ideally should correspond to the usual procedure after the discovery of a corpse. The experiments were carried out at an ambient temperature of 24 degrees C. Measurements initiated early after death resulted in curves with a rising portion, a plateau, and a descending slope. Delaying the initial measurement translated into shorter rising portions, and curves initiated 8 h p.m. or later comprised a plateau and/or a downward slope only. Three different phases were observed suggesting simple rules that can help estimate the time since death: (1) if an increase in intensity was found, the initial measurements were conducted not later than 5 h p.m.; (2) if only a decrease in intensity was observed, the initial measurements were conducted not earlier than 7 h p.m.; and (3) at 24 h p.m., the resolution is complete, and no further changes in intensity should occur. Our results clearly demonstrate that repeated measurements of the intensity of rigor mortis allow a more accurate estimation of the time since death of the experimental animals than the single measurement method used earlier. A critical review of the literature on the estimation of time since death on the basis of objective measurements of the intensity of rigor mortis is also presented.

  1. Automated genotyping of dinucleotide repeat markers

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Hoffman, E.P. [Carnegie Mellon Univ., Pittsburgh, PA (United States)]|[Univ. of Pittsburgh, PA (United States)

    1994-09-01

    The dinucleotide repeats (i.e., microsatellites) such as CA-repeats are a highly polymorphic, highly abundant class of PCR-amplifiable markers that have greatly streamlined genetic mapping experimentation. It is expected that over 30,000 such markers (including tri- and tetranucleotide repeats) will be characterized for routine use in the next few years. Since only size determination, and not sequencing, is required to determine alleles, in principle, dinucleotide repeat genotyping is easily performed on electrophoretic gels, and can be automated using DNA sequencers. Unfortunately, PCR stuttering with these markers generates not one band for each allele, but a pattern of bands. Since closely spaced alleles must be disambiguated by human scoring, this poses a key obstacle to full automation. We have developed methods that overcome this obstacle. Our model is that the observed data is generated by arithmetic superposition (i.e., convolution) of multiple allele patterns. By quantitatively measuring the size of each component band, and exploiting the unique stutter pattern associated with each marker, closely spaced alleles can be deconvolved; this unambiguously reconstructs the "true" allele bands, with stutter artifact removed. We used this approach in a system for automated diagnosis of (X-linked) Duchenne muscular dystrophy; four multiplexed CA-repeats within the dystrophin gene were assayed on a DNA sequencer. Our method accurately detected small variations in gel migration that shifted the allele size estimate. In 167 nonmutated alleles, 89% (149/167) showed no size variation, 9% (15/167) showed 1 bp variation, and 2% (3/167) showed 2 bp variation. We are currently developing a library of dinucleotide repeat patterns; together with our deconvolution methods, this library will enable fully automated genotyping of dinucleotide repeats from sizing data.
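The superposition model in this record can be illustrated with a toy greedy deconvolution: if each allele contributes a known stutter ladder at smaller fragment sizes, scanning from the largest size downward peels off one allele pattern at a time. This is a hedged sketch of the convolution idea only, with a made-up stutter pattern, not the authors' quantitative method.

```python
def deconvolve(profile, stutter):
    """Greedy deconvolution of a stutter-convolved lane profile.

    profile[i]: band intensity at the i-th size position (ascending size).
    stutter[j]: relative intensity contributed j positions BELOW a true
    allele (stutter[0] == 1.0 is the allele band itself).
    """
    alleles = [0.0] * len(profile)
    residual = list(profile)
    for i in range(len(profile) - 1, -1, -1):  # largest fragment first
        w = residual[i]
        if w > 1e-9:                 # unexplained band => a true allele here
            alleles[i] = w
            for j, frac in enumerate(stutter):   # remove its stutter ladder
                if i - j >= 0:
                    residual[i - j] -= w * frac
    return alleles

# Two overlapping alleles two positions apart, 30%/10% stutter (assumed):
pattern = [1.0, 0.3, 0.1]
true = [0.0, 0.0, 1.0, 0.0, 0.8]
observed = [0.0] * 5
for i, a in enumerate(true):
    for j, f in enumerate(pattern):
        if a and i - j >= 0:
            observed[i - j] += a * f
recovered = deconvolve(observed, pattern)
```

Even though the stutter ladders of the two alleles overlap in `observed`, the peeling pass recovers the original allele weights.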

  2. A Generalized Approach for Measuring Relationships Among Genes.

    Science.gov (United States)

    Wang, Lijun; Ahsan, Md Asif; Chen, Ming

    2017-07-21

    Several methods for identifying relationships among pairs of genes have been developed. In this article, we present a generalized approach for measuring relationships between any pairs of genes, which is based on statistical prediction. We derive two particular versions of the generalized approach, least squares estimation (LSE) and nearest neighbors prediction (NNP). According to mathematical proof, LSE is equivalent to the methods based on correlation, and NNP approximates one popular method, the maximal information coefficient (MIC), according to performance in simulations and on a real dataset. Moreover, the approach based on statistical prediction can be extended from two-gene relationships to multi-gene relationships. This application would help to identify relationships among multiple genes.

  3. Methods for measuring denitrification: Diverse approaches to a difficult problem

    DEFF Research Database (Denmark)

    Groffman, Peter M.; Altabet, Mark A.; Böhlke, J. K.

    2006-01-01

    , and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments...... based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows...... for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass...

  4. A non-Gaussian approach to risk measures

    Science.gov (United States)

    Bormetti, Giacomo; Cisana, Enrica; Montagna, Guido; Nicrosini, Oreste

    2007-03-01

    Reliable calculations of financial risk require that the fat-tailed nature of price changes is included in risk measures. To this end, a non-Gaussian approach to financial risk management is presented, modelling the power-law tails of the returns distribution in terms of a Student-t distribution. Non-Gaussian closed-form solutions for value-at-risk and expected shortfall are obtained, and standard formulae known in the literature under the normality assumption are recovered as a special case. The implications of the approach for risk management are demonstrated through an empirical analysis of financial time series from the Italian stock market and in comparison with the results of the most widely used procedures of quantitative finance. Particular attention is paid to quantifying the size of the errors affecting the market risk measures obtained according to different methodologies, by employing a bootstrap technique.
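The record's central claim, that Gaussian risk measures understate tail risk for fat-tailed returns, can be checked with a quick Monte Carlo sketch. This is illustrative only: the paper derives closed-form Student-t formulae, which are not reproduced here, and ν = 3 is an assumed tail index.

```python
import random
from math import sqrt

def student_t_sample(nu, rng):
    """One Student-t variate via the normal / chi-square ratio construction."""
    z = rng.gauss(0.0, 1.0)
    chi2 = 2.0 * rng.gammavariate(nu / 2.0, 1.0)  # chi-square with nu dof
    return z / sqrt(chi2 / nu)

def empirical_var_es(losses, alpha):
    """Empirical value-at-risk and expected shortfall at tail level alpha."""
    s = sorted(losses, reverse=True)      # largest losses first
    k = max(1, int(alpha * len(s)))
    var = s[k - 1]                        # alpha-quantile of the loss
    es = sum(s[:k]) / k                   # mean loss beyond the VaR
    return var, es

rng = random.Random(42)
n, alpha, nu = 100_000, 0.01, 3.0
t_losses = [student_t_sample(nu, rng) for _ in range(n)]
g_losses = [rng.gauss(0.0, 1.0) for _ in range(n)]
var_t, es_t = empirical_var_es(t_losses, alpha)
var_g, es_g = empirical_var_es(g_losses, alpha)
# Fat tails: the Student-t VaR and ES exceed their Gaussian counterparts.
```

At the 1% level the Student-t estimates are roughly twice the Gaussian ones for ν = 3, which is the size of error the paper's bootstrap analysis quantifies more carefully.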

  5. The effect of technical replicate (repeats) on Nix Pro Color Sensor™ measurement precision for meat: A case-study on aged beef colour stability.

    Science.gov (United States)

    Holman, Benjamin W B; Collins, Damian; Kilgannon, Ashleigh K; Hopkins, David L

    2018-01-01

    The Nix Pro Colour Sensor™ (NIX) can potentially be used to measure meat colour, but procedural guidelines that assure measurement reproducibility and repeatability (precision) must first be established. Technical replicate number (r) will minimise response variation, measurable as standard error of predicted mean (SEM), and contribute to improved precision. Consequently, we aimed to explore the effects of r on NIX precision when measuring aged beef colour (colorimetrics; L*, a*, b*, hue and chroma values). Each colorimetric SEM declined with increasing r to indicate improved precision and followed a diminishing rate of improvement that allowed us to recommend r=7 for meat colour studies using the NIX. This definition was based on practical limitations and a* variability, as additional r would be required if other colorimetrics or advanced levels of precision are necessary. Beef ageing and display period, holding temperature, loin and sampled portion were also found to contribute to colorimetric variation, but were incorporated within our definition of r. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
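If technical replicates are treated as independent reads, the diminishing-returns behaviour of the SEM described in this record follows the 1/√r law. A minimal sketch under that independence assumption (the per-read SD of 1.0 is an arbitrary illustrative value, not a NIX specification):

```python
from math import sqrt

def sem(sd, r):
    """Standard error of the mean of r independent technical replicates."""
    return sd / sqrt(r)

# Marginal precision gained by one more replicate shrinks as r grows:
gains = [sem(1.0, r) - sem(1.0, r + 1) for r in range(1, 10)]
# e.g. going 1 -> 2 replicates buys far more than going 7 -> 8,
# which is the diminishing-returns argument behind recommending a fixed r.
```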

  6. Evidence of radiation-induced reduction of height and body weight from repeated measurements of adults exposed in childhood to the atomic bombs

    International Nuclear Information System (INIS)

    Otake, Masanori; Funamoto, Sachiyo; Fujikoshi, Yasunori; Schull, W.J.

    1994-01-01

    Reduction of growth from exposure to atomic bomb radiation has been examined using individuals under 10 years old at the time of the bombing (ATB) and a growth curve analysis based on measurements of height and weight made in the course of the 4th-7th cycles of the Adult Health Study examinations (1964-1972). As expected, the largest difference in growth to emerge is between males and females. However, a highly significant reduction of growth associated with dose (DS86) was observed among those survivors for whom four repeated measurements of height and weight were available. Longitudinal analysis of a more extended data set (n = 821), using expected values based on simple linear regression models fitted to the three available sets of measurements of height and weight on the 254 individuals with a missing measurement, also indicates a significant radiation-related growth reduction. The possible contribution of such factors as poor nutrition and disruption of normal family life in the years immediately after the war is difficult to evaluate, but the effects of socioeconomic factors on the analysis of these data are discussed. 33 refs., 5 figs., 3 tabs

  7. Approach for measuring the angle of hallux valgus

    Directory of Open Access Journals (Sweden)

    Jin Zhou

    2013-01-01

    Materials and Methods: Fifteen age-, body weight-, and height-matched male students were included, and those with foot disorders, deformities, or injuries were excluded from the study. The dorsal protrusions of the first metatarsal and the hallux were marked by palpation by three experienced observers; then their barefoot model in standing was collected by a three-dimensional laser scanning system. The AoH was defined in the X-Y plane by the angle between the line joining the marks of the centre of the head and the centre of the base of the metatarsal shaft and the one connecting the marks of the centre of the metatarsal head and the hallux. The same procedure was repeated a week later. Besides, other measures based on the footprint, outline, and radiography were also available for comparisons. Paired t-test, linear regression, and reliability analysis were applied for statistical analysis with a significance level of 0.05 and 95% confidence intervals. Results: There were no significant differences recorded between the new method and the radiographic method (P = 0.069). The AoH was superior to the methods of footprint and outline, and it displayed a relatively high correlation with the radiographic method (r = 0.94, r² = 0.89). Moreover, both the inter- and intraobserver reliabilities of this method were proved to be good. Conclusion: This new method can be used for hallux valgus inspection and evaluation.
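The angle definition in this record, two lines through three marked landmarks projected onto the X-Y plane, can be sketched as follows. The point names and coordinates are illustrative, not the study's actual landmark data:

```python
from math import atan2, degrees

def hallux_valgus_angle(base, head, hallux):
    """Angle (degrees) between the metatarsal shaft axis (base -> head)
    and the head -> hallux line, in the X-Y plane.

    Each argument is an (x, y) landmark mark.
    """
    a1 = atan2(head[1] - base[1], head[0] - base[0])
    a2 = atan2(hallux[1] - head[1], hallux[0] - head[0])
    diff = abs(degrees(a1) - degrees(a2)) % 360
    return min(diff, 360 - diff)   # smallest angle between the two lines

# Collinear landmarks give 0 degrees; a lateral deviation of the hallux
# shows up directly as the angle:
angle = hallux_valgus_angle((0.0, 0.0), (0.0, 5.0), (1.0, 7.0))
```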

  8. Measuring and monitoring IT using a balanced scorecard approach.

    Science.gov (United States)

    Gash, Deborah J; Hatton, Todd

    2007-01-01

    Ensuring that the information technology department is aligned with the overall health system strategy and is performing at a consistently high level is a priority at Saint Luke's Health System in Kansas City, Mo. The information technology department of Saint Luke's Health System has been using the balanced scorecard approach described in this article to measure and monitor its performance for four years. This article will review the structure of the IT department's scorecard; the categories and measures used; how benchmarks are determined; how linkage to the organizational scorecard is made; how results are reported; how changes are made to the scorecard; and tips for using a scorecard in other IT departments.

  9. A New Laser Based Approach for Measuring Atmospheric Greenhouse Gases

    Directory of Open Access Journals (Sweden)

    Jeremy Dobler

    2013-11-01

    Full Text Available In 2012, we developed a proof-of-concept system for a new open-path laser absorption spectrometer concept for measuring atmospheric CO2. The measurement approach utilizes high-reliability all-fiber-based, continuous-wave laser technology, along with a unique all-digital lock-in amplifier method that, together, enables simultaneous transmission and reception of multiple fixed wavelengths of light. This new technique, which utilizes very little transmitted energy relative to conventional lidar systems, provides high signal-to-noise (SNR) measurements, even in the presence of a large background signal. This proof-of-concept system, tested in both a laboratory environment and a limited number of field experiments over path lengths of 680 m and 1,600 m, demonstrated SNR values >1,000 for received signals of ~18 picowatts averaged over 60 s. An SNR of 1,000 is equivalent to a measurement precision of ±0.001 or ~0.4 ppmv. The measurement method is expected to provide new capability for automated monitoring of greenhouse gas at fixed sites, such as carbon sequestration facilities, volcanoes, the short- and long-term assessment of urban plumes, and other similar applications. In addition, this concept enables active measurements of column amounts from a geosynchronous orbit for a network of ground-based receivers/stations that would complement other current and planned space-based measurement capabilities.

  10. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  11. Technology and education: First approach for measuring temperature with Arduino

    Science.gov (United States)

    Carrillo, Alejandro

    2017-04-01

    This poster session presents some ideas and approaches for understanding the concepts of thermal equilibrium, temperature, and heat in order to build a man-nature relationship in a harmonious and responsible manner, emphasizing the interaction between science and technology without neglecting the relationship of the environment and society, an approach to sustainability. It proposes the development of practices that involve the use of modern technology, of easy access and low cost, to measure temperature. We believe that the Arduino microcontroller and some temperature sensors can open the doors of innovation to carry out such practices. In this work we present some results of simple practices presented to a population of students between the ages of 16 and 17 years old. The practices in this proposal are: the zeroth law of thermodynamics and the concept of temperature, calibration of thermometers, and measurement of temperature during heating and cooling of three different substances under the same physical conditions. Finally the student is asked to make an application that involves measuring temperature and other physical parameters. Some suggestions are: to determine the temperature at which we take some food, measure the temperature difference between different rooms of a house, housing constructions that favour optimal conditions, measure the temperature of different regions, measure temperature through different colour filters, solar activity and UV, and propose applications to understand current problems such as global warming. It is concluded that the Arduino practices and electrical sensors broaden the cultural horizon of the students while awakening their interest in understanding their operation, basic physics, and its application from a modern perspective.

  12. Comparison of Different Approaches for Measuring Tibial Cartilage Thickness

    Directory of Open Access Journals (Sweden)

    Maier Jennifer

    2017-07-01

    Full Text Available Osteoarthritis is a degenerative disease affecting bones and cartilage especially in the human knee. In this context, cartilage thickness is an indicator for knee cartilage health. Thickness measurements are performed on medical images acquired in-vivo. Currently, there is no standard method agreed upon that defines a distance measure in articular cartilage. In this work, we present a comparison of different methods commonly used in literature. These methods are based on nearest neighbors, surface normal vectors, local thickness and potential field lines. All approaches were applied to manual segmentations of tibia and lateral and medial tibial cartilage performed by experienced raters. The underlying data were contrast agent-enhanced cone-beam C-arm CT reconstructions of one healthy subject’s knee. The subject was scanned three times, once in supine position and two times in a standing weight-bearing position. A comparison of the resulting thickness maps shows similar distributions and high correlation coefficients between the approaches above 0.90. The nearest neighbor method results on average in the lowest cartilage thickness values, while the local thickness approach assigns the highest values. We showed that the different methods agree in their thickness distribution. The results will be used for a future evaluation of cartilage change under weight-bearing conditions.
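The nearest-neighbour method compared in this record reduces, in its simplest form, to a closest-point distance between the two cartilage surfaces. A brute-force sketch (real segmented meshes would use a k-d tree; the parallel planar patches below are toy data, not CT reconstructions):

```python
from math import dist

def nn_thickness(inner, outer):
    """Nearest-neighbour thickness map: for each vertex on the inner
    (bone-cartilage) surface, the distance to the closest vertex on the
    outer (articular) surface. Surfaces are lists of (x, y, z) points.
    """
    return [min(dist(p, q) for q in outer) for p in inner]

# Two parallel planar patches 2 mm apart -> uniform 2 mm thickness:
inner = [(float(x), float(y), 0.0) for x in range(3) for y in range(3)]
outer = [(float(x), float(y), 2.0) for x in range(3) for y in range(3)]
thick = nn_thickness(inner, outer)
```

Because the closest point is taken over the whole opposing surface, this measure is a lower bound on any normal-based distance, consistent with the record's observation that the nearest-neighbour method yields the lowest thickness values on average.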

  13. All-photonic quantum repeaters

    Science.gov (United States)

    Azuma, Koji; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2015-01-01

    Quantum communication holds promise for unconditionally secure transmission of secret messages and faithful transfer of unknown quantum states. Photons appear to be the medium of choice for quantum communication. Owing to photon losses, robust quantum communication over long lossy channels requires quantum repeaters. It is widely believed that a necessary and highly demanding requirement for quantum repeaters is the existence of matter quantum memories. Here we show that such a requirement is, in fact, unnecessary by introducing the concept of all-photonic quantum repeaters based on flying qubits. In particular, we present a protocol based on photonic cluster-state machine guns and a loss-tolerant measurement equipped with local high-speed active feedforwards. We show that, with such all-photonic quantum repeaters, the communication efficiency scales polynomially with the channel distance. Our result paves a new route towards quantum repeaters with efficient single-photon sources rather than matter quantum memories. PMID:25873153

  14. An Approach for Measurement of Company Business Effectiveness

    International Nuclear Information System (INIS)

    Nenkova, B.; Manchev, B.; Tomov, E.

    2016-01-01

    meeting the objectives or that the company is being effective and efficient. The performance measurement system is composed of three elements: performance criteria - relative elements used for comparison in the performance evaluation; performance indicators - specific values of the performance criteria over some specified time period, or 'numerical or quantitative indices that show how well each objective is being met'; and performance standards - accepted levels of performance for each criterion. The purpose of this report is to present an approach for measurement of the business effectiveness of a consultancy company operating in the energy sector. (author).

  15. Baseline repeated measures from controlled human exposure studies: associations between ambient air pollution exposure and the systemic inflammatory biomarkers IL-6 and fibrinogen.

    Science.gov (United States)

    Thompson, Aaron M S; Zanobetti, Antonella; Silverman, Frances; Schwartz, Joel; Coull, Brent; Urch, Bruce; Speck, Mary; Brook, Jeffrey R; Manno, Michael; Gold, Diane R

    2010-01-01

    Systemic inflammation may be one of the mechanisms mediating the association between ambient air pollution and cardiovascular morbidity and mortality. Interleukin-6 (IL-6) and fibrinogen are biomarkers of systemic inflammation that are independent risk factors for cardiovascular disease. We investigated the association between ambient air pollution and systemic inflammation using baseline measurements of IL-6 and fibrinogen from controlled human exposure studies. In this retrospective analysis we used repeated-measures data on 45 nonsmoking subjects. Hourly and daily moving averages were calculated for ozone, nitrogen dioxide, sulfur dioxide, and particulate matter, and the effects of these pollutants on systemic IL-6 and fibrinogen were estimated. Effect modification by season was considered. We observed a positive association between IL-6 and O3 [0.31 SD per O3 interquartile range (IQR); 95% confidence interval (CI), 0.08-0.54] and between IL-6 and SO2 (0.25 SD per SO2 IQR; 95% CI, 0.06-0.43). We observed the strongest effects using 4-day moving averages. Responses to pollutants varied by season and tended to be higher in the summer, particularly for O3 and PM2.5. Fibrinogen was not associated with pollution. This study demonstrates a significant association between ambient pollutant levels and baseline levels of systemic IL-6. These findings have potential implications for controlled human exposure studies. Future research should consider whether ambient pollution exposure before chamber exposure modifies the IL-6 response.
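
The exposure metrics described above are trailing moving averages of the pollutant time series. A minimal sketch with an illustrative `moving_average` helper and made-up readings (not the study's code or data):

```python
def moving_average(series, window):
    """Trailing moving average; returns None until a full window is available."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)  # not enough history yet
        else:
            chunk = series[i + 1 - window : i + 1]
            out.append(sum(chunk) / window)
    return out

# Hourly O3 readings in ppb (illustrative); a 4-hour trailing average
hourly_o3 = [30, 32, 35, 40, 38, 36, 34, 33]
print(moving_average(hourly_o3, 4))
```

The same helper applied to daily means with `window=4` would give the 4-day moving averages for which the strongest effects were reported.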

  16. Measuring inflation under rationing: A virtual price approach

    OpenAIRE

    Christophe Starzec; François Gardes

    2014-01-01

    Working paper URL: http://ces.univ-paris1.fr/cesdp/cesdp2014.html; Documents de travail du Centre d'Economie de la Sorbonne 2014.01 - ISSN: 1955-611X. The presence of rationing, or more generally of situations of constrained demand, can make the traditional methods of measuring inflation questionable and give an erroneous image of reality. In this paper, we use the virtual price approach (Neary, Roberts, 1980) to estimate the real inflation level in a centrally planned econ...

  17. An Electrostatics Approach to the Determination of Extremal Measures

    Energy Technology Data Exchange (ETDEWEB)

    Meinguet, Jean [Universite Catholique de Louvain, Institut Mathematique, Chemin du Cyclotron 2 (Belgium)], E-mail: meinguet@anma.ucl.ac.be

    2000-12-15

    One of the most important aspects of the minimal energy (or induced equilibrium) problem in the presence of an external field - sometimes referred to as the Gauss variation problem - is the determination of the support of its solution (the so-called extremal measure associated with the field). A simple electrostatic interpretation is presented here, which is apparently new and anyway suggests a novel, rather systematic approach to the solution. By way of illustration, the classical results for Jacobi, Laguerre and Freud weights are explicitly recovered by this alternative method.

  18. An Electrostatics Approach to the Determination of Extremal Measures

    International Nuclear Information System (INIS)

    Meinguet, Jean

    2000-01-01

    One of the most important aspects of the minimal energy (or induced equilibrium) problem in the presence of an external field - sometimes referred to as the Gauss variation problem - is the determination of the support of its solution (the so-called extremal measure associated with the field). A simple electrostatic interpretation is presented here, which is apparently new and anyway suggests a novel, rather systematic approach to the solution. By way of illustration, the classical results for Jacobi, Laguerre and Freud weights are explicitly recovered by this alternative method.

  19. A nuclear data approach for the Hubble constant measurements

    Directory of Open Access Journals (Sweden)

    Pritychenko Boris

    2017-01-01

    An extraordinary number of Hubble constant measurements challenges physicists with the selection of the best numerical value. The standard U.S. Nuclear Data Program (USNDP) codes and procedures have been applied to resolve this issue. The nuclear data approach has produced the most probable or recommended Hubble constant value of 67.2(69) km/sec/Mpc. This recommended value is based on the last 20 years of experimental research and includes contributions from different types of measurements. The present result implies (14.55 ± 1.51) × 10^9 years as a rough estimate for the age of the Universe. The complete list of recommended results is given and possible implications are discussed.
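
At its core, a nuclear-data-style evaluation rests on uncertainty-weighted averaging of independent measurements. A minimal sketch with invented inputs (not the actual USNDP procedure or measurement set):

```python
import math

def weighted_average(values, sigmas):
    """Inverse-variance weighted mean and its standard uncertainty."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    unc = 1.0 / math.sqrt(sum(weights))
    return mean, unc

# Hypothetical Hubble constant measurements in km/s/Mpc with 1-sigma errors
h0_values = [67.4, 73.2, 69.8, 67.8]
h0_sigmas = [0.5, 1.3, 1.9, 1.3]
mean, unc = weighted_average(h0_values, h0_sigmas)
print(f"H0 = {mean:.1f} +/- {unc:.1f} km/s/Mpc")
```

Real evaluation procedures add outlier rejection and inflate the uncertainty when the data set is discrepant (e.g., by a Birge ratio), which the sketch omits.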

  20. A new approach in measuring graduate employability skills

    Science.gov (United States)

    Zakaria, Mohd Hafiz; Yatim, Bidin; Ismail, Suzilah

    2014-06-01

    Globalization makes graduate recruitment for an organization more complex, because employers believe that a holistic workforce is the key to the success of an organization. Currently, although graduates are said to possess specific skills, they still lack employability skills, and this leads to increased training costs for government and employers alike. Therefore, graduates' level of employability skills should be evaluated before they enter the labor market. In this study, a valid and reliable instrument embedding a new approach to measuring employability skills was developed using the Situational Judgment Test (SJT). The instrument comprises twelve (12) items measuring communication skill, professional ethics and morality, entrepreneurial skill, critical thinking in problem solving and personal quality. The instrument's validity was established through expert opinion, and its reliability (in terms of stability) was based on the chi-square test for homogeneity. Generally, the instrument is beneficial to graduates, employers, government agencies, universities, and workforce recruitment agencies when evaluating the level of employability skills.

  1. Measuring core inflation in India: An asymmetric trimmed mean approach

    Directory of Open Access Journals (Sweden)

    Naresh Kumar Sharma

    2015-12-01

    The paper seeks to obtain an optimal asymmetric trimmed mean-based core inflation measure in the class of trimmed mean measures when the distribution of price changes is leptokurtic and skewed to the right for any given period. Several estimators based on the asymmetric trimmed mean approach are constructed, and the estimates they generate are evaluated on the basis of certain established empirical criteria. The paper also expresses the trimmed mean in terms of percentile scores. This study uses 69 monthly price indices which are constituent components of the Wholesale Price Index for the period April 1994 to April 2009, with 1993-1994 as the base year. Results of the study indicate that an optimally trimmed estimator is found when we trim 29.5% from the left-hand tail and 20.5% from the right-hand tail of the distribution of price changes.
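
An asymmetric trimmed mean of the kind optimized in the paper can be sketched as follows. For simplicity this version trims by observation count, whereas the paper trims by cumulative index weight; the price changes are illustrative:

```python
def asymmetric_trimmed_mean(changes, trim_left, trim_right):
    """Drop the lowest trim_left and highest trim_right fractions, then average."""
    data = sorted(changes)
    n = len(data)
    lo = int(n * trim_left)           # observations dropped from the left tail
    hi = n - int(n * trim_right)      # observations dropped from the right tail
    kept = data[lo:hi]
    return sum(kept) / len(kept)

# Monthly price changes in % (illustrative, right-skewed)
price_changes = [-2.0, -0.5, 0.1, 0.3, 0.4, 0.6, 0.8, 1.2, 3.5, 9.0]
core = asymmetric_trimmed_mean(price_changes, 0.295, 0.205)
print(round(core, 3))
```

With the paper's optimal trims (29.5% left, 20.5% right), the large right-tail observations are excluded, pulling the core measure well below the ordinary mean.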

  2. Energy Approach to Measure the Region’s Assimilative Capacity

    Directory of Open Access Journals (Sweden)

    Irina Stepanovna Belik

    2017-12-01

    One of the important problems of environmental economics is the development of a methodology for quantifying the assimilative capacity (AC) of a territory. The article analyzes the existing approaches to determining and assessing the AC of a territory. We justify the advantages of using the energy approach. The authors' method consists of using the maximum permissible energy load (MPEL) for quantitative assessment of the AC of a territory. MPEL is a value that the ecological and economic system can withstand for a long time without changing its properties. We determine MPEL on the basis of data on the ability of various categories of land to absorb greenhouse gases (GHG), as well as the specific GHG emissions per ton of conventional fuel. Further, we compare the calculated value of MPEL in fuel equivalents with the actual consumption of fuel resources for the needs of the national economy. The ratio of these values can serve as a standard for measuring and balancing the environmental and economic system. The authors have validated the described method on the example of the Sverdlovsk region, which is characterized by a high level of human impact. Calculations show that the actual consumption of fossil fuels in the region exceeds MPEL. This indicates an imbalance in the ecological and economic system and may lead to further deterioration of environmental quality in the region. The proposed methodological approach and calculations can be used when developing strategic planning documents for a territory, including its energy strategy.

  3. A two-stage DEA approach for environmental efficiency measurement.

    Science.gov (United States)

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, when measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out systematic research on the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage solves the "dependence" problem of outputs, namely that desirable outputs cannot be increased without producing undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of decision-making units.

  4. Usefulness of repeated N-terminal pro-B-type natriuretic peptide measurements as incremental predictor for long-term cardiovascular outcome after vascular surgery.

    Science.gov (United States)

    Goei, Dustin; van Kuijk, Jan-Peter; Flu, Willem-Jan; Hoeks, Sanne E; Chonchol, Michel; Verhagen, Hence J M; Bax, Jeroen J; Poldermans, Don

    2011-02-15

    Plasma N-terminal pro-B-type natriuretic peptide (NT-pro-BNP) levels improve preoperative cardiac risk stratification in vascular surgery patients. However, single preoperative measurements of NT-pro-BNP cannot take into account the hemodynamic stress caused by anesthesia and surgery. Therefore, the aim of the present study was to assess the incremental predictive value of changes in NT-pro-BNP during the perioperative period for long-term cardiac mortality. Detailed cardiac histories, rest left ventricular echocardiography, and NT-pro-BNP levels were obtained in 144 patients before vascular surgery and before discharge. The study end point was the occurrence of cardiovascular death during a median follow-up period of 13 months (interquartile range 5 to 20). Preoperatively, the median NT-pro-BNP level in the study population was 314 pg/ml (interquartile range 136 to 1,351), which increased to a median level of 1,505 pg/ml (interquartile range 404 to 6,453) before discharge. During the follow-up period, 29 patients (20%) died, 27 (93%) from cardiovascular causes. The median difference in NT-pro-BNP in the survivors was 665 pg/ml, compared to 5,336 pg/ml in the patients who died (p = 0.01). Multivariate Cox regression analyses, adjusted for cardiac history and cardiovascular risk factors (age, angina pectoris, myocardial infarction, stroke, diabetes mellitus, renal dysfunction, body mass index, type of surgery and the left ventricular ejection fraction), demonstrated that the difference in NT-pro-BNP level between the pre- and postoperative measurements was the strongest independent predictor of cardiac outcome (hazard ratio 3.06, 95% confidence interval 1.36 to 6.91). In conclusion, the change in NT-pro-BNP, indicated by repeated measurements before surgery and before discharge, is the strongest predictor of cardiac outcomes in patients who undergo vascular surgery.

  5. Using the American alligator and a repeated-measures design to place constraints on in vivo shoulder joint range of motion in dinosaurs and other fossil archosaurs.

    Science.gov (United States)

    Hutson, Joel D; Hutson, Kelda N

    2013-01-15

    Using the extant phylogenetic bracket of dinosaurs (crocodylians and birds), recent work has reported that elbow joint range of motion (ROM) studies of fossil dinosaur forearms may be providing conservative underestimates of fully fleshed in vivo ROM. As humeral ROM occupies a more central role in forelimb movements, the placement of quantitative constraints on shoulder joint ROM could improve fossil reconstructions. Here, we investigated whether soft tissues affect the more mobile shoulder joint in the same manner in which they affect elbow joint ROM in an extant archosaur. This test involved separately and repeatedly measuring humeral ROM in Alligator mississippiensis as soft tissues were dissected away in stages to bare bone. Our data show that the ROMs of humeral flexion and extension, as well as abduction and adduction, both show a statistically significant increase as flesh is removed, but then decrease when the bones must be physically articulated and moved until they separate from one another and/or visible joint surfaces. A similar ROM pattern is inferred for humeral pronation and supination. All final skeletonized ROMs were less than initial fully fleshed ROMs. These results are consistent with previously reported elbow joint ROM patterns from the extant phylogenetic bracket of dinosaurs. Thus, studies that avoid separation of complementary articular surfaces may be providing fossil shoulder joint ROMs that underestimate in vivo ROM in dinosaurs, as well as other fossil archosaurs.

  6. The Queensland study of Melanoma: Environmental and Genetic Associations (Q-MEGA). Study design, baseline characteristics, and repeatability of phenotype and sun exposure measures

    Science.gov (United States)

    Baxter, Amanda J.; Hughes, Maria Celia; Kvaskoff, Marina; Siskind, Victor; Shekar, Sri; Aitken, Joanne F.; Green, Adele C.; Duffy, David L.; Hayward, Nicholas K.; Martin, Nicholas G.; Whiteman, David C.

    2013-01-01

    Cutaneous malignant melanoma (CMM) is a major health issue in Queensland, Australia, which has the world's highest incidence. Recent molecular and epidemiologic studies suggest that CMM arises through multiple etiological pathways involving gene-environment interactions. Understanding the potential mechanisms leading to CMM requires larger studies than those previously conducted. This article describes the design and baseline characteristics of Q-MEGA, the Queensland study of Melanoma: Environmental and Genetic Associations, which followed up four population-based samples of CMM patients in Queensland, including children, adolescents, men aged over 50, and a large sample of adult cases and their families, including twins. Q-MEGA aims to investigate the roles of genetic and environmental factors, and their interaction, in the etiology of melanoma. 3,471 participants took part in the follow-up study and were administered a computer-assisted telephone interview in 2002–2005. Updated data on environmental and phenotypic risk factors were collected, along with 2,777 blood samples from interviewed participants as well as a subset of relatives. This study provides a large and well-described population-based sample of CMM cases with follow-up data. Characteristics of the cases, and the repeatability of sun exposure and phenotype measures between the baseline and follow-up surveys, from six to 17 years later, are also described. PMID:18361720

  7. Measuring and understanding soil water repellency through novel interdisciplinary approaches

    Science.gov (United States)

    Balshaw, Helen; Douglas, Peter; Doerr, Stefan; Davies, Matthew

    2017-04-01

    Food security and production is one of the key global issues faced by society. It has become ever more essential to work the land efficiently, through better soil management and agronomy, whilst protecting the environment from air and water pollution. The failure of soil to absorb water - soil water repellency - can lead to major environmental problems such as increased overland flow and soil erosion, poor uptake of agricultural chemicals and increased risk of groundwater pollution due to the rapid transfer of contaminants and nutrient leaching through uneven wetting and preferential flow pathways. Understanding the causes of soil hydrophobicity is essential for the development of effective methods for its amelioration, supporting environmental stability and food security. Organic compounds deposited on soil mineral or aggregate surfaces have long been recognised as a major factor in causing soil water repellency. It is widely accepted that the main groups of compounds responsible are long-chain acids, alkanes and other organic compounds with hydrophobic properties. However, when reapplied to sands and soils, the degree of water repellency induced by these compounds and mixtures varies widely with compound type, amount and mixture, in a seemingly unpredictable way. Our research to date involves two new approaches for studying soil wetting. 1) We challenge the theoretical basis of current ideas on measured water/soil contact angles. Much past and current discussion invokes the Wenzel and Cassie-Baxter models to explain anomalously high contact angles for organics on soils; however, here we propose that these anomalously high measured contact angles are a consequence of measuring a water drop on an irregular non-planar surface rather than of the thermodynamic factors of the Cassie-Baxter and Wenzel models. In our analysis we have successfully used a much simpler geometric approach for non-flat surfaces such as soil. 
2) Fluorescent and phosphorescent

  8. Effect of exposure to evening light on sleep initiation in the elderly: a longitudinal analysis for repeated measurements in home settings.

    Science.gov (United States)

    Obayashi, Kenji; Saeki, Keigo; Iwamoto, Junko; Okamoto, Nozomi; Tomioka, Kimiko; Nezu, Satoko; Ikada, Yoshito; Kurumatani, Norio

    2014-05-01

    Epidemiologic data have demonstrated associations of sleep-onset insomnia with a variety of diseases, including depression, dementia, diabetes and cardiovascular diseases. Sleep initiation is controlled by the suprachiasmatic nucleus of the hypothalamus and endogenous melatonin, both of which are influenced by environmental light. Exposure to evening light is hypothesized to cause circadian phase delay and melatonin suppression before bedtime, resulting in circadian misalignment and sleep-onset insomnia; however, whether exposure to evening light disturbs sleep initiation in home settings remains unclear. In this longitudinal analysis of 192 elderly individuals (mean age: 69.9 years), we measured evening light exposure and sleep-onset latency for 4 days using a wrist actigraph incorporating a light meter and an accelerometer. Mixed-effect linear regression analysis for repeated measurements was used to evaluate the effect of evening light exposure on subsequent sleep-onset latency. The median intensity of evening light exposure and the median sleep-onset latency were 27.3 lux (interquartile range, 17.9-43.4) and 17 min (interquartile range, 7-33), respectively. Univariate models showed significant associations between sleep-onset latency and age, gender, daytime physical activity, in-bed time, day length and the average intensity of evening and nighttime light exposure. In a multivariate model, the log-transformed average intensity of evening light exposure was significantly associated with log-transformed sleep-onset latency independent of the aforementioned potential confounding factors (regression coefficient, 0.133; 95% CI, 0.020-0.247; p = 0.021). Day length and nighttime light exposure were also significantly associated with log-transformed sleep-onset latency (p = 0.001 and p < 0.001, respectively). In conclusion, exposure to evening light in home settings prolongs subsequent sleep-onset latency in the elderly.
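
As a rough illustration of the log-log association reported above, an ordinary least-squares fit of log latency on log light can be sketched in closed form. This is a simplification, not the study's model: the study used mixed-effect regression for repeated measurements (per-subject random effects, e.g. via statsmodels MixedLM), and the data and names below are invented:

```python
import math

def ols_slope_intercept(x, y):
    """Closed-form simple linear regression: slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

light_lux = [18, 25, 30, 45, 60]     # evening light exposure (illustrative)
latency_min = [10, 14, 17, 25, 33]   # sleep-onset latency (illustrative)
lx = [math.log(v) for v in light_lux]
ly = [math.log(v) for v in latency_min]
slope, intercept = ols_slope_intercept(lx, ly)
print(f"log-log slope = {slope:.2f}")
```

A positive slope on the log-log scale corresponds to the reported pattern of longer sleep-onset latency at higher evening light intensity.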

  9. Telomere shortening unrelated to smoking, body weight, physical activity, and alcohol intake: 4,576 general population individuals with repeat measurements 10 years apart.

    Directory of Open Access Journals (Sweden)

    Maren Weischer

    2014-03-01

    Cross-sectional studies have associated short telomere length with smoking, body weight, physical activity, and possibly alcohol intake; however, whether these associations are due to confounding is unknown. We tested these hypotheses in 4,576 individuals from the general population cross-sectionally, and with repeat measurement of relative telomere length 10 years apart. We also tested whether change in telomere length is associated with mortality and morbidity in the general population. Relative telomere length was measured with quantitative polymerase chain reaction. Cross-sectionally at the first examination, short telomere length was associated with increased age (P for trend across quartiles = 3 × 10^-77), current smoking (P = 8 × 10^-3), increased body mass index (P = 7 × 10^-14), physical inactivity (P = 4 × 10^-17), but not with increased alcohol intake (P = 0.10). At the second examination 10 years later, 56% of participants had lost and 44% had gained telomere length, with a mean loss of 193 basepairs. Change in leukocyte telomere length during 10 years was associated inversely with baseline telomere length (P < 1 × 10^-300) and age at baseline (P = 1 × 10^-27), but not with baseline or 10-year inter-observational tobacco consumption, body weight, physical activity, or alcohol intake. Prospectively during a further 10 years of follow-up after the second examination, quartiles of telomere length change did not associate with risk of all-cause mortality, cancer, chronic obstructive pulmonary disease, diabetes mellitus, ischemic cerebrovascular disease, or ischemic heart disease. In conclusion, smoking, increased body weight, and physical inactivity were associated with short telomere length cross-sectionally, but not with telomere length change during 10 years of observation, and alcohol intake was associated with neither. Also, change in telomere length did not associate prospectively with mortality or morbidity in the general population.

  10. Measuring energy efficiency in economics: Shadow value approach

    Science.gov (United States)

    Khademvatani, Asgar

    For decades, academic scholars and policy makers have commonly applied a simple average measure, energy intensity, for studying energy efficiency. In contrast, we introduce a distinctive marginal measure called energy shadow value (SV) for modeling energy efficiency drawn on economic theory. This thesis demonstrates energy SV advantages, conceptually and empirically, over the average measure recognizing marginal technical energy efficiency and unveiling allocative energy efficiency (energy SV to energy price). Using a dual profit function, the study illustrates how treating energy as quasi-fixed factor called quasi-fixed approach offers modeling advantages and is appropriate in developing an explicit model for energy efficiency. We address fallacies and misleading results using average measure and demonstrate energy SV advantage in inter- and intra-country energy efficiency comparison. Energy efficiency dynamics and determination of efficient allocation of energy use are shown through factors impacting energy SV: capital, technology, and environmental obligations. To validate the energy SV, we applied a dual restricted cost model using KLEM dataset for the 35 US sectors stretching from 1958 to 2000 and selected a sample of the four sectors. Following the empirical results, predicted wedges between energy price and the SV growth indicate a misallocation of energy use in stone, clay and glass (SCG) and communications (Com) sectors with more evidence in the SCG compared to the Com sector, showing overshoot in energy use relative to optimal paths and cost increases from sub-optimal energy use. The results show that energy productivity is a measure of technical efficiency and is void of information on the economic efficiency of energy use. Decomposing energy SV reveals that energy, capital and technology played key roles in energy SV increases helping to consider and analyze policy implications of energy efficiency improvement. Applying the marginal measure, we also

  11. Identification of genomic biomarkers for anthracycline-induced cardiotoxicity in human iPSC-derived cardiomyocytes: an in vitro repeated exposure toxicity approach for safety assessment

    NARCIS (Netherlands)

    Chaudhari, U.; Nemade, H.; Wagh, V.; Ellis, J.K.; Srinivasan, S.; Louisse, J.

    2016-01-01

    The currently available techniques for the safety evaluation of candidate drugs are usually cost-intensive and time-consuming and are often insufficient to predict human relevant cardiotoxicity. The purpose of this study was to develop an in vitro repeated exposure toxicity methodology allowing the

  12. An Approach for Measuring the Dielectric Strength of OLED Materials

    Directory of Open Access Journals (Sweden)

    Sujith Sudheendran Swayamprabha

    2018-06-01

    Surface roughness of electrodes plays a key role in the dielectric breakdown of thin-film organic devices. The rate of breakdown increases when there are stochastic sharp spikes on the surface of the electrodes. Additionally, a surface with spiking morphology makes the determination of dielectric strength very challenging, specifically when the layer is relatively thin. We demonstrate here a new approach to investigating the dielectric strength of organic thin films for organic light-emitting diodes (OLEDs). The thin films were deposited on a substrate using physical vapor deposition (PVD) under high vacuum. The device architectures used were glass substrate/indium tin oxide (ITO)/organic material/aluminum (Al) and glass substrate/Al/organic material/Al. The dielectric strength of the OLED materials was evaluated from the measured breakdown voltage and layer thickness.

  13. Facultative Stabilization Pond: Measuring Biological Oxygen Demand using Mathematical Approaches

    Science.gov (United States)

    Wira S, Ihsan; Sunarsih, Sunarsih

    2018-02-01

    Pollution is a man-made phenomenon. Some pollutants discharged directly into the environment can create serious pollution problems. Untreated wastewater will cause contamination and even pollution of the water body. Biological Oxygen Demand (BOD) is the amount of oxygen required for oxidation by bacteria. The higher the BOD concentration, the greater the organic matter content. The purpose of this study was to predict the value of BOD contained in wastewater. Mathematical modeling methods were chosen in this study to depict and predict the BOD values contained in facultative wastewater stabilization ponds. Measurements of sampling data were carried out to validate the model. The results of this study indicated that a mathematical approach can be applied to predict the BOD contained in facultative wastewater stabilization ponds. The model was validated using the Absolute Mean Error (AME) with a 10% tolerance limit; the AME for the model was 7.38% (< 10%), so the model is valid. Furthermore, a mathematical approach can also be applied to illustrate and predict the contents of wastewater.
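
The validation criterion described above can be sketched as a short calculation. The formulation of AME as a mean absolute relative deviation is an assumption (the study does not spell out its formula), and the BOD values are illustrative:

```python
def absolute_mean_error(observed, simulated):
    """AME as mean absolute relative deviation from observations, in percent
    (one common formulation; an assumption here)."""
    errors = [abs(o - s) / o for o, s in zip(observed, simulated)]
    return 100.0 * sum(errors) / len(errors)

observed_bod = [120.0, 95.0, 80.0, 60.0]    # measured BOD, mg/L (illustrative)
simulated_bod = [112.0, 101.0, 76.0, 63.0]  # model output, mg/L (illustrative)
ame = absolute_mean_error(observed_bod, simulated_bod)
print(f"AME = {ame:.2f}% -> {'valid' if ame < 10.0 else 'not valid'}")
```

With the 10% tolerance used in the study, a model passing this check would be accepted as valid.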

  14. Developing a multicriteria approach for the measurement of sustainable performance

    Energy Technology Data Exchange (ETDEWEB)

    Ding, G.

    2005-02-01

    In Australia, cost-benefit analysis (CBA) is one of the conventional tools used widely by the public and private sectors in the appraisal of projects. It measures and compares, in monetary terms, the total costs and benefits of projects that are competing for scarce resources. Growing concerns that the values of environmental goods and services are often ignored or underestimated in the CBA approach have led to the overuse and depletion of environmental assets. A model of a sustainability index as an evaluation tool that combines economic, social and environmental criteria into an indexing algorithm is presented and described. The sustainability index uses monetary and non-monetary approaches to rank projects and facilities on their contribution to sustainability. This process enables the principle of trade-off to occur in the decision-making process and thereby allows environmental values to be considered when selecting a development option. This makes it possible to optimize financial return and resource consumption, and to minimize detrimental effects on the natural and man-made world. A case study is used to demonstrate the model. (author)

  15. Development of Standardized Mobile Tracer Correlation Approach for Large Area Emission Measurements (DRAFT UNDER EPA REVIEW)

    Science.gov (United States)

    Foster-wittig, T. A.; Thoma, E.; Green, R.; Hater, G.; Swan, N.; Chanton, J.

    2013-12-01

    Improved understanding of air emissions from large area sources such as landfills, waste water ponds, open-source processing, and agricultural operations is a topic of increasing environmental importance. In many cases, the size of the area source, coupled with spatial-heterogeneity, make direct (on-site) emission assessment difficult; methane emissions, from landfills for example, can be particularly complex [Thoma et al, 2009]. Recently, whole-facility (remote) measurement approaches based on tracer correlation have been utilized [Scheutz et al, 2011]. The approach uses a mobile platform to simultaneously measure a metered-release of a conservative gas (the tracer) along with the target compound (methane in the case of landfills). The known-rate tracer release provides a measure of atmospheric dispersion at the downwind observing location allowing the area source emission to be determined by a ratio calculation [Green et al, 2010]. Although powerful in concept, the approach has been somewhat limited to research applications due to the complexities and cost of the high-sensitivity measurement equipment required to quantify the part-per billion levels of tracer and target gas at kilometer-scale distances. The advent of compact, robust, and easy to use near-infrared optical measurement systems (such as cavity ring down spectroscopy) allow the tracer correlation approach to be investigated for wider use. Over the last several years, Waste Management Inc., the U.S. EPA, and collaborators have conducted method evaluation activities to determine the viability of a standardized approach through execution of a large number of field measurement trials at U.S. landfills. As opposed to previous studies [Scheutz et al, 2011] conducted at night (optimal plume transport conditions), the current work evaluated realistic use-scenarios; these scenarios include execution by non-scientist personnel, daylight operation, and full range of atmospheric condition (all plume transport
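
The ratio calculation at the heart of tracer correlation can be sketched as follows, assuming concentrations are measured as above-background mixing-ratio enhancements (ppb), so a molar-mass factor converts the mole ratio to a mass ratio. The function name and the numbers are illustrative, not from the cited studies:

```python
M_CH4 = 16.04   # g/mol, methane (target gas at landfills)
M_C2H2 = 26.04  # g/mol, acetylene (a commonly used tracer)

def emission_rate(q_tracer_kg_h, d_target_ppb, d_tracer_ppb,
                  m_target=M_CH4, m_tracer=M_C2H2):
    """Target emission rate from the metered tracer release rate, the ratio of
    plume enhancements, and a molar-mass correction (mixing-ratio inputs)."""
    return q_tracer_kg_h * (d_target_ppb / d_tracer_ppb) * (m_target / m_tracer)

# 0.5 kg/h acetylene release; downwind plume enhancements of
# 120 ppb CH4 and 6 ppb C2H2 above background (illustrative)
q_ch4 = emission_rate(0.5, 120.0, 6.0)
print(f"estimated CH4 emission: {q_ch4:.1f} kg/h")
```

In practice the enhancements are integrated across repeated plume transects rather than taken from single readings, but the ratio principle is the same.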

  16. Seasonal variation in objectively measured physical activity, sedentary time, cardio-respiratory fitness and sleep duration among 8–11 year-old Danish children: a repeated-measures study

    DEFF Research Database (Denmark)

    Hjorth, Mads F.; Chaput, Jean-Philippe; Michaelsen, Kim

    2013-01-01

    BACKGROUND: Understanding fluctuations in lifestyle indicators is important to identify relevant time periods to intervene in order to promote a healthy lifestyle; however, objective assessment of multiple lifestyle indicators has never been done using a repeated-measures design. The primary aim...... was, therefore, to examine between-season and within-week variation in physical activity, sedentary behaviour, cardio-respiratory fitness and sleep duration among 8–11 year-old children. METHODS: A total of 1021 children from nine Danish schools were invited to participate and 834 accepted. Due...... to missing data, 730 children were included in the current analytical sample. An accelerometer was worn for 7 days and 8 nights during autumn, winter and spring, from which physical activity, sedentary time and sleep duration were measured. Cardio-respiratory fitness was assessed using a 10-min intermittent...

  17. Repeatability of quantitative 18F-FLT uptake measurements in solid tumors: an individual patient data multi-center meta-analysis.

    Science.gov (United States)

    Kramer, G M; Liu, Y; de Langen, A J; Jansma, E P; Trigonis, I; Asselin, M-C; Jackson, A; Kenny, L; Aboagye, E O; Hoekstra, O S; Boellaard, R

    2018-06-01

    3'-Deoxy-3'-[18F]fluorothymidine (18F-FLT) positron emission tomography (PET) provides a non-invasive method to assess cellular proliferation and response to antitumor therapy. Quantitative 18F-FLT uptake metrics are being used for evaluation of proliferative response in investigational settings; however, multi-center repeatability needs to be established. The aim of this study was to determine the repeatability of 18F-FLT tumor uptake metrics by re-analyzing individual patient data from previously published reports using the same tumor segmentation method and repeatability metrics across cohorts. A systematic search in PubMed, EMBASE.com and the Cochrane Library from inception to October 2016 yielded five 18F-FLT repeatability cohorts in solid tumors. 18F-FLT-avid lesions were delineated using a 50% isocontour adapted for local background on test and retest scans. SUVmax, SUVmean, SUVpeak, proliferative volume and total lesion uptake (TLU) were calculated. Repeatability was assessed using the repeatability coefficient (RC = 1.96 × SD of test-retest differences), linear regression analysis, and the intra-class correlation coefficient (ICC). The impact of different lesion selection criteria was also evaluated. Images from four cohorts containing 30 patients with 52 lesions were obtained and analyzed (ten in breast cancer, nine in head and neck squamous cell carcinoma, and 33 in non-small cell lung cancer patients). A good correlation was found between test-retest data for all 18F-FLT uptake metrics (R² ≥ 0.93; ICC ≥ 0.96). Best repeatability was found for SUVpeak (RC: 23.1%), without significant differences in RC between different SUV metrics. Repeatability of proliferative volume (RC: 36.0%) and TLU (RC: 36.4%) was worse than that of the SUV metrics. Lesion selection methods based on SUVmax ≥ 4.0 improved the repeatability of volumetric metrics (RC: 26-28%), but did not affect the repeatability of SUV metrics. In multi-center studies
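    The repeatability coefficient used here (RC = 1.96 × SD of test-retest differences) is simple to compute from paired scans. The sketch below applies it to percentage differences relative to the pair mean, a common convention in test-retest PET work; the SUV pairs are synthetic illustrations, not data from the study.

```python
import math

# RC = 1.96 x SD of test-retest differences, applied here to percentage
# differences relative to the pair mean. The test/retest SUV values are
# synthetic, for illustration only.

def repeatability_coefficient(test, retest):
    """RC in % based on paired percentage differences."""
    diffs = [200.0 * (a - b) / (a + b) for a, b in zip(test, retest)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd

test   = [4.1, 6.3, 5.2, 7.8, 3.9]
retest = [4.4, 5.9, 5.5, 7.1, 4.2]
print(f"RC: {repeatability_coefficient(test, retest):.1f}%")
```

    A smaller RC means a real change in uptake can be distinguished from measurement noise at a smaller effect size.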

  18. New approach to radiation monitoring: citizen based radiation measurement

    International Nuclear Information System (INIS)

    Kuca, P.; Helebrant, J.

    2016-01-01

    The Fukushima Dai-ichi NPP accident in Japan in 2011, like the Chernobyl NPP accident in the USSR in 1986 before it, has shown the necessity of finding ways to improve public confidence in official authorities. This is especially important in cases of severe accidents with significant consequences for large inhabited areas around the damaged NPP. The lack of public confidence in officials was caused mostly by rather poor communication between official authorities and the public, as well as by restricted public access to information. This can have extremely negative impacts on the public's understanding of the actual situation and its possible risks, on public acceptance of necessary protective measures, and on participation of the public in remediation of the affected areas. One possible way to improve the situation is the implementation of citizen radiation monitoring on a voluntary basis. Once assured that official results are compatible with their own self-measured ones, the public is likely to have more confidence in them. In the Czech Republic the implementation of such an approach is being tested within security research funded by the Czech Ministry of the Interior - the research project RAMESIS carried out by SURO. (authors)

  19. Measurement approaches to support future warhead arms control transparency

    International Nuclear Information System (INIS)

    Olinger, C.T.; Frankle, C.M.; Johnson, M.W.; Poths, J.

    1998-01-01

    Transparency on warhead stockpiles, warhead dismantlement, and fissile material stockpiles in nuclear weapons states will become increasingly important in the move beyond START II toward lower quantities of warheads. Congressional support for further warhead reductions will likely depend on the degree of irreversibility, or in other words, the rapidity with which warhead inventories could be reconstituted. Whether irreversibility considerations can be satisfied will depend on monitoring dismantlement as well as constraining the available stockpile of fissile materials for possible refabrication into warheads. Measurement techniques designed to address the above problems will need to consider NPT Article 1 obligations as well as Russian and US classification regulations, which prohibit or restrict the transfer of nuclear warhead design information to other states. Classification considerations currently limit the potential completeness of future inspections of weapons materials. Many conventional international safeguards approaches are not currently viable for arms control applications because they would reveal weapons design information. The authors discuss a variety of technical measures that may help to improve transparency of warhead and fissile material stockpiles and may enable limited warhead dismantlement transparency.

  20. Synthetic food coloring and behavior: a dose response effect in a double-blind, placebo-controlled, repeated-measures study.

    Science.gov (United States)

    Rowe, K S; Rowe, K J

    1994-11-01

    To establish whether there is an association between the ingestion of synthetic food colorings and behavioral change in children referred for assessment of "hyperactivity." From approximately 800 children referred to the Royal Children's Hospital (Melbourne) for assessment of suspected hyperactivity, 200 were included in a 6-week open trial of a diet free of synthetic food coloring. The parents of 150 children reported behavioral improvement with the diet, and deterioration on the introduction of foods noted to contain synthetic coloring. A 30-item behavioral rating inventory was devised from an examination of the clinical histories of 50 suspected reactors. Thirty-four other children (23 suspected reactors, 11 uncertain reactors) and 20 control subjects, aged 2 to 14 years, were studied. A 21-day, double-blind, placebo-controlled, repeated-measures study used each child as his or her own control. Placebo, or one of six dose levels of tartrazine (1, 2, 5, 10, 20, 50 mg), was administered randomly each morning, and behavioral ratings were recorded by parents at the end of each 24 hours. The study identified 24 children as clear reactors (19 of 23 "suspected reactors," 3 of 11 "uncertain reactors," and 2 of 20 "control subjects"). They were irritable and restless and had sleep disturbance. Significant reactions were observed at all six dose levels. A dose response effect was obtained. With a dose increase greater than 10 mg, the duration of effect was prolonged. Behavioral changes in irritability, restlessness, and sleep disturbance are associated with the ingestion of tartrazine in some children. A dose response effect was observed.

  1. Knowledge and Skill Retention of In-Service versus Preservice Nursing Professionals following an Informal Training Program in Pediatric Cardiopulmonary Resuscitation: A Repeated-Measures Quasiexperimental Study

    Directory of Open Access Journals (Sweden)

    Jhuma Sankar

    2013-01-01

    Full Text Available Our objective was to compare the impact of a training program in pediatric cardiopulmonary resuscitation (CPR) on the knowledge and skills of in-service and preservice nurses at prespecified time points. This repeated-measures quasiexperimental study was conducted in the pediatric emergency and ICU of a tertiary care teaching hospital between January and March 2011. We assessed the baseline knowledge and skills of nursing staff (in-service nurses) and final year undergraduate nursing students (preservice nurses) using a validated questionnaire and a skill checklist, respectively. The participants were then trained in pediatric CPR using standard guidelines. The knowledge and skills were reassessed immediately after training and at 6 weeks after training. A total of 74 participants—28 in-service and 46 preservice professionals—were enrolled. At initial assessment, in-service nurses were found to have nonsignificantly higher mean knowledge scores (6.6 versus 5.8, P=0.08), while the preservice nurses had significantly higher skill scores (6.5 versus 3.2, P<0.001). Immediately after training, the scores improved in both groups. At 6 weeks, however, we observed a nonuniform decline in performance in both groups—in-service nurses performing better on the knowledge test (10.5 versus 9.1, P=0.01) and the preservice nurses performing better on the skill test (9.8 versus 7.4, P<0.001). Thus, knowledge and skills of in-service and preservice nurses in pediatric CPR improved with training. In comparison to preservice nurses, the in-service nurses seemed to retain knowledge better over time than skills.

  2. DESIGNING COMPANY PERFORMANCE MEASUREMENT SYSTEM USING BALANCE SCORECARD APPROACH

    Directory of Open Access Journals (Sweden)

    Cecep Mukti Soleh

    2015-05-01

    Full Text Available This research aimed to design a company performance measurement system using the balanced scorecard approach in the coal transportation services industry. In-depth interviews were used to obtain qualitative data for the determination of strategic objectives, key performance indicators, strategic initiatives, and the unit in charge of each balanced scorecard perspective, while quantitative data were obtained from weighting through questionnaires and analyzed using paired comparison to determine which perspective most affected the performance of the company. To measure the achievement of corporate performance, each KPI used (1) a scoring system with higher-is-better, lower-is-better and precise-is-better methods; (2) a traffic light system using green, yellow and red for identification of target achievement. The results show that the balanced scorecard perspectives with the most influence on the overall performance of the company are the customer perspective (31%), the financial perspective (29%), internal business processes (21%), and learning and growth (19%). Keywords: balance scorecard, paired comparison, coal transportation service
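    A weighted scorecard roll-up of this kind can be sketched in a few lines. The perspective weights follow the abstract; the KPI scores and traffic-light thresholds are assumptions for illustration, not values from the study.

```python
# Balanced scorecard roll-up sketch: combine per-perspective scores
# (0-100, already normalized by higher/lower/precise-is-better scoring)
# into one weighted company score, then classify it with a traffic light.
# Weights are taken from the abstract; scores and thresholds are assumed.

WEIGHTS = {"customer": 0.31, "financial": 0.29,
           "internal_process": 0.21, "learning_growth": 0.19}

def overall_score(scores):
    return sum(WEIGHTS[p] * scores[p] for p in WEIGHTS)

def traffic_light(score, green=80.0, yellow=60.0):
    """Green: target met; yellow: near target; red: below target."""
    if score >= green:
        return "green"
    return "yellow" if score >= yellow else "red"

scores = {"customer": 85, "financial": 70,
          "internal_process": 65, "learning_growth": 75}
total = overall_score(scores)
print(total, traffic_light(total))
```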

  3. An approach for the accurate measurement of social morality levels.

    Science.gov (United States)

    Liu, Haiyan; Chen, Xia; Zhang, Bo

    2013-01-01

    In the social sciences, computer-based modeling has become an increasingly important tool receiving widespread attention. However, the derivation of the quantitative relationships linking individual moral behavior and social morality levels, so as to provide a useful basis for social policy-making, remains a challenge in the scholarly literature today. A quantitative measurement of morality from the perspective of complexity science constitutes an innovative attempt. Based on the NetLogo platform, this article examines the effect of various factors on social morality levels, using agents modeling moral behavior, immoral behavior, and a range of environmental social resources. Threshold values for the various parameters are obtained through sensitivity analysis; and practical solutions are proposed for reversing declines in social morality levels. The results show that: (1) Population size may accelerate or impede the speed with which immoral behavior comes to determine the overall level of social morality, but it has no effect on the level of social morality itself; (2) The impact of rewards and punishment on social morality levels follows the "5∶1 rewards-to-punishment rule," which is to say that 5 units of rewards have the same effect as 1 unit of punishment; (3) The abundance of public resources is inversely related to the level of social morality; (4) When the cost of population mobility reaches 10% of the total energy level, immoral behavior begins to be suppressed (i.e. the 1/10 moral cost rule). The research approach and methods presented in this paper successfully address the difficulties involved in measuring social morality levels, and promise extensive application potentials.

  4. Validating polyphenol intake estimates from a food-frequency questionnaire by using repeated 24-h dietary recalls and a unique method-of-triads approach with 2 biomarkers.

    Science.gov (United States)

    Burkholder-Cooley, Nasira M; Rajaram, Sujatha S; Haddad, Ella H; Oda, Keiji; Fraser, Gary E; Jaceldo-Siegl, Karen

    2017-03-01

    Background: The assessment of polyphenol intake in free-living subjects is challenging, mostly because of the difficulty in accurately measuring phenolic content and the wide presence of phenolics in foods. Objective: The aims of this study were to evaluate the validity of polyphenol intake estimated from food-frequency questionnaires (FFQs) by using the mean of 6 measurements of a 24-h dietary recall (24-HR) as a reference and to apply a unique method-of-triads approach to assess validity coefficients (VCs) between latent "true" dietary estimates, total urinary polyphenol (TUP) excretion, and a surrogate biomarker (plasma carotenoids). Design: Dietary intake data from 899 adults of the Adventist Health Study 2 (AHS-2; 43% blacks and 67% women) were obtained. Pearson correlation coefficients (r), corrected for attenuation from within-person variation in the recalls, were calculated between 24-HRs and FFQs and between 24-HRs and TUPs. VCs and 95% CIs between true intake and polyphenol intakes from FFQs, 24-HRs, and the biomarkers TUPs and plasma carotenoids were calculated. Results: Mean ± SD polyphenol intakes were 717 ± 646 mg/d from FFQs and 402 ± 345 mg/d from 24-HRs. The total polyphenol intake from 24-HRs was correlated with FFQs in crude (r = 0.51, P < 0.001) and deattenuated (r = 0.63; 95% CI: 0.61, 0.69) models. In the triad model, the VC between the FFQs and theoretical true intake was 0.46 (95% CI: 0.20, 0.93) and between 24-HRs and true intake was 0.61 (95% CI: 0.38, 1.00). Conclusions: The AHS-2 FFQ is a reasonable indicator of total polyphenol intake in the AHS-2 cohort. Urinary polyphenol excretion is limited by genetic variance, metabolism, and bioavailability and should be used in addition to rather than as a replacement for dietary intake assessment. © 2017 American Society for Nutrition.
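    The method-of-triads calculation behind such validity coefficients is short: given the three pairwise correlations among questionnaire (Q), reference recalls (R) and biomarker (M), each instrument's VC against latent true intake is the square root of a ratio of correlation products. The sketch below uses assumed correlations, not the study's data.

```python
import math

# Method of triads: from pairwise correlations among three measurements
# of intake (Q = FFQ, R = 24-h recalls, M = biomarker), estimate each
# instrument's validity coefficient (VC) against latent "true" intake.
# The input correlations below are assumed for illustration.

def triad_vc(r_qr, r_qm, r_rm):
    """Return (VC_Q, VC_R, VC_M); values > 1 (Heywood cases) truncated to 1."""
    vc_q = math.sqrt(r_qr * r_qm / r_rm)
    vc_r = math.sqrt(r_qr * r_rm / r_qm)
    vc_m = math.sqrt(r_qm * r_rm / r_qr)
    return tuple(min(v, 1.0) for v in (vc_q, vc_r, vc_m))

vc_q, vc_r, vc_m = triad_vc(r_qr=0.51, r_qm=0.20, r_rm=0.28)
print(f"VC(FFQ)={vc_q:.2f}, VC(24-HR)={vc_r:.2f}, VC(biomarker)={vc_m:.2f}")
```

    The triangular structure is why a weak biomarker can still help validate an FFQ: only the ratios of correlations enter, not the biomarker's absolute accuracy.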

  5. Reproducibility of repeated measures of deuterium substituted [11C]L-deprenyl ([11C]L-deprenyl-D2) binding in the human brain

    International Nuclear Information System (INIS)

    Logan, Jean; Fowler, Joanna S.; Volkow, Nora D.; Wang, Gene-Jack; MacGregor, Robert R.; Shea, Colleen

    2000-01-01

    The purpose of this study was to assess the reproducibility of repeated positron emission tomography (PET) measures of brain monoamine oxidase B (MAO B) using deuterium-substituted [11C]L-deprenyl ([11C]L-deprenyl-D2) in normal subjects and to validate the method used for estimating the kinetic constants from the irreversible 3-compartment model applied to the tracer binding. Five normal healthy subjects (age range 23-73 years) each received two PET scans with [11C]L-deprenyl-D2. The time interval between scans was 7-27 days. Time-activity data from eight regions of interest and an arterial plasma input function were used to calculate λk3, a model term proportional to MAO B, and K1, the plasma-to-brain transfer constant that is related to blood flow. Linear (LIN) and nonlinear least-squares (NLLSQ) estimation methods were used to calculate the optimum model constants. A comparison of time-activity curves for scan 1 and scan 2 showed that the percent change for peak uptake varied from -18.5 to 15.0% and that increases and decreases in uptake on scan 2 were associated with increases and decreases in the value of the arterial input of the tracer. Calculation of λk3 showed a difference between scan 1 and scan 2 in the global value ranging between -6.97 and 4.5% (average -2.1±4.7%). The average percent change across eight brain regions for the five subjects was -2.84±7.07%. Values of λk3 for scan 1 and scan 2 were highly correlated (r²=0.98). K1 also showed a significant correlation between scan 1 and scan 2 (r²=0.61). Uptake of [11C]L-deprenyl-D2 varied between scan 1 and scan 2, driven by the differences in arterial tracer input. Application of a 3-compartment model to regional time-activity data and arterial input function yielded λk3 values for scan 1 and scan 2 with an average difference of -2.84±7.07%. Linear regression applied to values of λk3 from the LIN and NLLSQ methods validated the use of the linear method for calculating λk3.
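    The linear-estimation idea for an irreversibly trapped tracer can be illustrated with a Patlak-style graphical analysis: for late times Ct(t)/Cp(t) = K·(∫Cp dt)/Cp(t) + V, so the trapping constant falls out of a straight-line fit. This is a generic sketch on synthetic data, not the paper's exact LIN formulation.

```python
import math

# Patlak-style linear estimation for an irreversibly trapped PET tracer.
# Synthetic tissue data are built to satisfy Ct = K*integral(Cp) + V*Cp,
# then K is recovered by ordinary least squares on the transformed axes.
K_TRUE, V_TRUE = 0.05, 0.8                         # assumed constants

times = [float(t) for t in range(1, 61)]           # minutes
cp = [10.0 * math.exp(-0.05 * t) for t in times]   # plasma input (a.u.)

# Cumulative integral of Cp via the trapezoid rule.
cum, integ, prev_t, prev_c = 0.0, [], 0.0, 10.0
for t, c in zip(times, cp):
    cum += 0.5 * (c + prev_c) * (t - prev_t)
    integ.append(cum)
    prev_t, prev_c = t, c

ct = [K_TRUE * I + V_TRUE * c for I, c in zip(integ, cp)]  # tissue curve

x = [I / c for I, c in zip(integ, cp)]   # "stretched time" axis
y = [a / c for a, c in zip(ct, cp)]      # tissue-to-plasma ratio
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
print(f"estimated K = {slope:.4f} (true {K_TRUE})")
```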

  6. Evaluation of a Direct-Instruction Intervention to Improve Movement and Preliteracy Skills among Young Children: A Within-Subject Repeated-Measures Design.

    Science.gov (United States)

    Bedard, Chloe; Bremer, Emily; Campbell, Wenonah; Cairney, John

    2017-01-01

    School readiness involves the development of foundational skills such as emergent literacy and fundamental movement skills as well as the capacity to attentively engage in instructional situations. Children do not develop these skills naturally; therefore, they need the opportunity to develop these skills in their early years prior to entering school. The objective of the current study was to evaluate the effectiveness and feasibility of a direct-instruction movement and preliteracy intervention in children aged 3-4 years. A within-subject repeated-measures design, embedded within a wait-list control study, was used to evaluate the intervention. The intervention was run across 10 weeks with 1 h weekly sessions. Each weekly session consisted of 30 min of movement skill instruction (e.g., through single-step acquisition strategies), 15 min of free play during which time children had access to a variety of equipment (e.g., balls, hula hoops, etc.) or toys (e.g., puzzles, building blocks), and a 15-min interactive reading circle during which children read a storybook and were taught 1-2 preliteracy skills (e.g., alphabet knowledge, narrative knowledge, etc.). A convenience sample of 11 children (mean age = 45.6 months, SD = 7.3) was recruited. All children were assessed four times: baseline (Time 1), pre-intervention (Time 2), post-intervention (Time 3), and 5-week follow-up (Time 4). Gross motor skills and preliteracy skills were assessed at each time point. There was a statistically significant effect of time on the change in gross motor skills (Wilks' lambda = 0.09, p = .002), print-concept skills (Wilks' lambda = 0.09, p = .001), and alphabet knowledge (Wilks' lambda = 0.29, p = .046). Post hoc analyses reveal non-significant changes between time 1 and 2 for motor and print-concept skills and significant changes in all three outcomes between time 2 and time 3. Participation in a direct-instruction movement and preliteracy

  7. Evaluation of a Direct-Instruction Intervention to Improve Movement and Preliteracy Skills among Young Children: A Within-Subject Repeated-Measures Design

    Directory of Open Access Journals (Sweden)

    Chloe Bedard

    2018-01-01

    Full Text Available Objective: School readiness involves the development of foundational skills such as emergent literacy and fundamental movement skills as well as the capacity to attentively engage in instructional situations. Children do not develop these skills naturally; therefore, they need the opportunity to develop these skills in their early years prior to entering school. The objective of the current study was to evaluate the effectiveness and feasibility of a direct-instruction movement and preliteracy intervention in children aged 3–4 years. Methods: A within-subject repeated-measures design, embedded within a wait-list control study, was used to evaluate the intervention. The intervention was run across 10 weeks with 1 h weekly sessions. Each weekly session consisted of 30 min of movement skill instruction (e.g., through single-step acquisition strategies), 15 min of free play during which time children had access to a variety of equipment (e.g., balls, hula hoops, etc.) or toys (e.g., puzzles, building blocks), and a 15-min interactive reading circle during which children read a storybook and were taught 1–2 preliteracy skills (e.g., alphabet knowledge, narrative knowledge, etc.). A convenience sample of 11 children (mean age = 45.6 months, SD = 7.3) was recruited. All children were assessed four times: baseline (Time 1), pre-intervention (Time 2), post-intervention (Time 3), and 5-week follow-up (Time 4). Gross motor skills and preliteracy skills were assessed at each time point. Results: There was a statistically significant effect of time on the change in gross motor skills (Wilks' lambda = 0.09, p = .002), print-concept skills (Wilks' lambda = 0.09, p = .001), and alphabet knowledge (Wilks' lambda = 0.29, p = .046). Post hoc analyses reveal non-significant changes between time 1 and 2 for motor and print-concept skills and significant changes in all three outcomes between time 2 and time 3. Conclusion: Participation in a

  8. A Combined Approach to Measure Micropollutant Behaviour during Riverbank Filtration

    Science.gov (United States)

    van Driezum, Inge; Saracevic, Ernis; Derx, Julia; Kirschner, Alexander; Sommer, Regina; Farnleitner, Andreas; Blaschke, Alfred Paul

    2016-04-01

    Riverbank filtration (RBF) systems are widely used as a natural treatment process. The advantages of RBF over surface water abstraction are the elimination of, for example, suspended solids, biodegradable compounds (like specific micropollutants), bacteria and viruses (Hiscock and Grischek, 2002). However, in contrast to its importance, remarkably little is known about the respective external (e.g. industrial or municipal sewage) and internal (e.g. wildlife and agricultural influence) sources of contaminants, the environmental availability and fate of the various hazardous substances, and their potential transport during soil and aquifer passage. The goal of this study is to gain insight into the behaviour of various micropollutants and microbial indicators during riverbank filtration. Field measurements were combined with numerical modelling approaches. The study area comprises an alluvial backwater and floodplain area downstream of Vienna. The river is highly dynamic, with discharges ranging from 900 m3/s during low flow to 11000 m3/s during flood events. Samples were taken in several monitoring wells along a transect extending from the river towards a backwater river in the floodplain. Three of the piezometers were situated in the first 20 meters away from the river in order to obtain information about micropollutant behaviour close to the river. A total of 9 different micropollutants were analysed in grab samples taken under different river flow conditions (n=33). Following enrichment using SPE, analysis was performed using high performance liquid chromatography-tandem mass spectrometry. Faecal indicators (E. coli and enterococci) and bacterial spores were enumerated in sample volumes of 1 L each using cultivation based methods (ISO 16649-1, ISO 7899-2:2000 and ISO 6222). The analysis showed that some compounds, e.g. ibuprofen and diclofenac, were only found in the river. These compounds were already degraded within the first ten meters of the river. Analysis of

  9. CO2 Capacity Sorbent Analysis Using Volumetric Measurement Approach

    Science.gov (United States)

    Huang, Roger; Richardson, Tra-My Justine; Belancik, Grace; Jan, Darrell; Knox, Jim

    2017-01-01

    In support of air revitalization system sorbent selection for future space missions, Ames Research Center (ARC) has performed CO2 capacity tests on various solid sorbents to complement structural strength tests conducted at Marshall Space Flight Center (MSFC). The materials of interest are: Grace Davison Grade 544 13X, Honeywell UOP APG III, LiLSX VSA-10, BASF 13X, and Grace Davison Grade 522 5A. CO2 capacity was measured for all sorbent materials using a Micromeritics ASAP 2020 Physisorption Volumetric Analysis machine to produce 0 °C, 10 °C, 25 °C, 50 °C, and 75 °C isotherms. These data are to be used for modeling and to provide a basis for continued sorbent research. The volumetric analysis method proved to be effective in generating consistent and repeatable data for the 13X sorbents, but the method needs to be refined to be tailored to different sorbents.
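    Isotherm data of this kind are commonly reduced to a model fit. The sketch below fits a Langmuir isotherm to assumed loading data via the linearized form (P/q versus P); this is one conventional reduction for zeolite CO2 isotherms, not necessarily the treatment used at ARC, and the data points are synthetic.

```python
# Langmuir isotherm fit sketch: q(P) = qmax * b * P / (1 + b * P).
# Linearized form: P/q = P/qmax + 1/(qmax*b), a straight line in P,
# fit here with ordinary least squares. Pressures (kPa) and loadings
# (mol/kg) are assumed values, not measured isotherm data.

def fit_langmuir(pressures, loadings):
    """Return (qmax, b) from a linearized Langmuir fit."""
    x = pressures
    y = [p / q for p, q in zip(pressures, loadings)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    intercept = my - slope * mx
    q_max = 1.0 / slope
    b = slope / intercept
    return q_max, b

# Synthetic data generated from qmax = 3.0 mol/kg, b = 0.05 1/kPa:
P = [5.0, 10.0, 25.0, 50.0, 100.0]
q = [3.0 * 0.05 * p / (1 + 0.05 * p) for p in P]
q_max, b = fit_langmuir(P, q)
print(f"qmax = {q_max:.2f} mol/kg, b = {b:.3f} 1/kPa")
```

    Repeating the fit for each temperature's isotherm gives the temperature dependence of b, from which an isosteric heat of adsorption can be estimated.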

  10. Repeated intermittent administration of psychomotor stimulant drugs alters the acquisition of Pavlovian approach behavior in rats: differential effects of cocaine, d-amphetamine and 3,4- methylenedioxymethamphetamine ("Ecstasy").

    Science.gov (United States)

    Taylor, J R; Jentsch, J D

    2001-07-15

    Psychomotor stimulant drugs can produce long-lasting changes in neurochemistry and behavior after multiple doses. In particular, neuroadaptations within corticolimbic brain structures that mediate incentive learning and motivated behavior have been demonstrated after chronic exposure to cocaine, d-amphetamine, and 3,4-methylenedioxymethamphetamine (MDMA). As stimulus-reward learning is likely relevant to addictive behavior (i.e., augmented conditioned reward and stimulus control of behavior), we have investigated whether prior repeated administration of psychomotor stimulant drugs of abuse, including cocaine, d-amphetamine, or MDMA, would affect the acquisition of Pavlovian approach behavior. Water-deprived rats were tested for the acquisition of Pavlovian approach behavior after 5 days of treatment with cocaine (15-20 mg/kg once or twice daily), d-amphetamine (2.5 mg/kg once or twice daily), or MDMA (2.5 mg/kg twice daily) followed by a 7-day, drug-free period. Prior repeated treatment with cocaine or d-amphetamine produced a significant enhancement of acquisition of Pavlovian approach behavior, indicating accelerated stimulus-reward learning, whereas MDMA administration produced increased inappropriate responding, indicating impulsivity. Abnormal drug-induced approach behavior was found to persist throughout the testing period. These studies demonstrate that psychomotor stimulant-induced sensitization can produce long-term alterations in stimulus-reward learning and impulse control that may contribute to the compulsive drug taking that typifies addiction.

  11. A new approach to polarimetric measurements based on birefringent crystals and diode lasers

    Energy Technology Data Exchange (ETDEWEB)

    Ribeiro, Lívia Paulia Dias; Rohwedder, Jarbas José Rodrigues [Chemistry Institute, Department of Analytical Chemistry, UNICAMP, Caixa Postal 6154, CEP: 13087-971 Campinas, SP (Brazil); Pasquini, Celio, E-mail: pasquini@iqm.unicamp.br [Chemistry Institute, Department of Analytical Chemistry, UNICAMP, Caixa Postal 6154, CEP: 13087-971 Campinas, SP (Brazil)

    2013-04-10

    Highlights: ► New approach to polarimetric measurements is evaluated. ► A robust polarimeter with no mechanical moving parts is presented. ► The performance of the instrument was evaluated for saccharimetric measurements. ► The uncertainty of the instrument was evaluated as a function of the measured angle. ► The polarimeter allows the use of low cost lasers while obtaining precision as good as 0.003°. -- Abstract: A new polarimetric instrument and measurement method is described based on the use of diode lasers as radiation source (532, 650 and 1064 nm) and birefringent prisms, such as Glan-Laser and Wollaston, as analyzers. The laser radiation is passed through a dichroic polarizer film for orientation of its polarization plane at 45° in relation to the polarization plane of the analyzer. The polarized beam, oriented in that way, passes the sample cell, impinges on the prism surface, and the intensities of the two emerging beams are detected by two twin silicon detectors. Ideally, in the absence of any optically active substances, the crystal produces two orthogonally polarized refracted beams of equal intensity. In the presence of an optically active substance, the arctangent of the square root of the beam intensities ratio is equal to the new polarization angle (β) of the laser beam. The rotation angle imposed by any optically active substance present in the sample cell is then given by: α = (45 – β)°. Because the rotation is obtained from the ratio of the intensities of two beams, it is independent of the laser intensity, which can vary up to ±15% with no significant effect on the accuracy of the polarimetric measurement. The instrument has been evaluated for measurement of optically active substances such as sucrose and fructose. The instrument employs low cost components, is capable of attaining a repeatability of ±0.003° and can measure the rotation angle, over a ±45° range, in less than 2 s. Because it does not present any moving
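    The angle recovery described above reduces to a two-intensity ratio. A minimal sketch of the arithmetic, with assumed detector readings:

```python
import math

# Polarization-angle recovery from the two beams emerging from the
# birefringent analyzer: beta = arctan(sqrt(I1/I2)), and the optical
# rotation of the sample is alpha = (45 - beta) degrees. Only the
# intensity ratio enters, so laser power drift cancels out.
# Detector readings below are assumed values for illustration.

def rotation_angle(i1, i2):
    """Optical rotation (degrees) from the two detector intensities."""
    beta = math.degrees(math.atan(math.sqrt(i1 / i2)))
    return 45.0 - beta

# Blank cell: equal intensities give zero rotation.
print(rotation_angle(1.00, 1.00))
# Optically active sample (assumed readings):
print(f"alpha = {rotation_angle(0.81, 1.21):.2f} deg")
```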

  12. Measurements of Repeated Tightening and Loosening Torque of Seven Different Implant/Abutment Connection Designs and Their Modifications: An In Vitro Study.

    Science.gov (United States)

    Butkevica, Alena; Nathanson, Dan; Pober, Richard; Strating, Herman

    2018-02-01

    Repeated tightening and loosening of the abutment screw may alter its mechanical and physical properties, affecting the optimal torque and ultimate reliability of an implant/abutment connection. The purpose of this study was to evaluate the effect of repeated tightening and loosening of implant/abutment screws on the loosening torque of implant/abutment connections of commercially available implant systems. Seven different implant/abutment connections and their modifications were tested. The screws of each system were tightened according to the manufacturer's specifications. After 20 minutes the screws were loosened. This procedure was repeated ten times, and the differences between the 1st and 10th cycle were expressed as a percentage change RTq(%) and correlated with initial torque, the number of threads, the length of shank, and thread surface area employing Spearman's analysis. All systems showed significant differences in residual torque (RTq) values (p < 0.05). All connections but group 3 (p = 1.000) showed a significant change from the initial torque (ITq) to the RTq values. The first successive RTq values increased in two connection groups, 1 and 2. The remaining connections showed reduced RTq values ranging from -1.2% (group 5) to -23.5% (group 6). The RTq values declined gradually with every repeated tightening in groups 1, 2, 3, 8, 9, 11, 12. In group 2, after the tenth tightening the RTq was still above the ITq value. Only length of shank demonstrated a correlation with the RTq(%) change over the successive tightening/loosening cycles. Repeated tightening and loosening of abutment screws caused varying torque level changes among the different systems. These observations can probably be attributed to connection design. Limiting the number of tightening/loosening cycles in clinical and laboratory procedures is advisable for most of the implant systems tested. © 2016 by the American College of Prosthodontists.

  13. Identification of genomic biomarkers for anthracycline-induced cardiotoxicity in human iPSC-derived cardiomyocytes: an in vitro repeated exposure toxicity approach for safety assessment.

    Science.gov (United States)

    Chaudhari, Umesh; Nemade, Harshal; Wagh, Vilas; Gaspar, John Antonydas; Ellis, James K; Srinivasan, Sureshkumar Perumal; Spitkovski, Dimitry; Nguemo, Filomain; Louisse, Jochem; Bremer, Susanne; Hescheler, Jürgen; Keun, Hector C; Hengstler, Jan G; Sachinidis, Agapios

    2016-11-01

The currently available techniques for the safety evaluation of candidate drugs are usually cost-intensive and time-consuming and are often insufficient to predict human-relevant cardiotoxicity. The purpose of this study was to develop an in vitro repeated exposure toxicity methodology allowing the identification of predictive genomic biomarkers of functional relevance for drug-induced cardiotoxicity in human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs). The hiPSC-CMs were incubated with 156 nM doxorubicin, which is a well-characterized cardiotoxicant, for 2 or 6 days followed by washout of the test compound and further incubation in compound-free culture medium until day 14 after the onset of exposure. An xCELLigence Real-Time Cell Analyser was used to monitor doxorubicin-induced cytotoxicity while also monitoring functional alterations of cardiomyocytes by counting the beating frequency of cardiomyocytes. Unlike single exposure, repeated doxorubicin exposure resulted in long-term arrhythmic beating in hiPSC-CMs accompanied by significant cytotoxicity. Global gene expression changes were studied using microarrays and bioinformatics tools. Analysis of the transcriptomic data revealed early expression signatures of genes involved in formation of sarcomeric structures, regulation of ion homeostasis and induction of apoptosis. Eighty-four significantly deregulated genes related to cardiac functions, stress and apoptosis were validated using real-time PCR. The expression of the 84 genes was further studied by real-time PCR in hiPSC-CMs incubated with daunorubicin and mitoxantrone, two further anthracycline family members that are also known to induce cardiotoxicity. A panel of 35 genes was deregulated by all three anthracycline family members and can therefore be expected to predict the cardiotoxicity of compounds acting by similar mechanisms as doxorubicin, daunorubicin or mitoxantrone. The identified gene panel can be applied in the safety

  14. A gas turbine diagnostic approach with transient measurements.

    OpenAIRE

    Li, Y. G.

    2003-01-01

    Most gas turbine diagnostic methods based on performance analysis use information from steady-state measurements. Unfortunately, steady-state measurements may not be obtained easily in some situations, and some types of gas turbine fault contribute little to performance deviation at steady-state operating conditions but significantly during transient processes. Therefore, gas turbine diagnostics with transient measurements is superior to diagnostics with steady-state measurements. In this paper, an ac...

  15. Is a participatory approach effective to stimulate using ergonomic measures?

    NARCIS (Netherlands)

    Molen, H.F. van der; Sluiter, J.K.; Hulshof, C.T.J.; Vink, P.; Duivenbooden, J.C. van; Holman, R.; Frings-Dresen, M.H.W.

    2006-01-01

    The objective of this study was to examine the effect of a participatory ergonomics (PE) implementation strategy on the use of ergonomic measures reducing the physical work demands of construction work. The ergonomic measures consisted of adjusting working height (two measures) and mechanising the

  16. A Multi-Dimensional Approach to Measuring News Media Literacy

    Science.gov (United States)

    Vraga, Emily; Tully, Melissa; Kotcher, John E.; Smithson, Anne-Bennett; Broeckelman-Post, Melissa

    2015-01-01

    Measuring news media literacy is important in order for it to thrive in a variety of educational and civic contexts. This research builds on existing measures of news media literacy and two new scales are presented that measure self-perceived media literacy (SPML) and perceptions of the value of media literacy (VML). Research with a larger sample…

  17. Negative Measures Are Not Enough, A Constructive Approach Is Needed

    International Nuclear Information System (INIS)

    Broda, E.

    1983-01-01

    This text is Broda’s contribution to the Pugwash Symposium 1983 in Bucharest. In this document Broda analyses the paramount problem of his time: peace, and especially the avoidance of atomic war. He further explains that ‘negative measures’ like demands for an atomic freeze and disarmament are not sufficient; a constructive approach is needed. For this constructive approach Broda assigns an important role to international organizations such as the IAEA, among others. (nowak)

  18. Alternativas de análises em dados de medidas repetidas de bovinos de corte Alternative analyses of repeated weight measurements of beef cattle

    Directory of Open Access Journals (Sweden)

    Alfredo Ribeiro de Freitas

    2005-12-01

    Full Text Available The objective was to study two alternative variance and covariance analyses for cattle weight records. Data were from Nelore, Guzerá, Gir and Indubrasil animals, males and females, registered with the Brazilian Zebu Breeders Association (ABCZ). For each animal, nine repeated weight measurements were obtained at quarterly intervals, from birth to two years of age. In the first analysis, the response variable y_i was transformed using the Box-Cox family of transformations, y_i(lambda) = (y_i^lambda - 1)/lambda for lambda ≠ 0, or y_i = log y_i for lambda = 0. This transformation was effective in reducing the skewness coefficients and the variance heterogeneity for all weighings and breeds. In the second analysis, the most adequate covariance structure to represent the within-individual variability was selected, considering a standard mixed model for repeated measures. Using the criteria provided by the MIXED procedure of SAS (the chi-square distribution, Akaike's Information Criterion [AIC] and Schwarz's Bayesian Criterion [SBC]), the most adequate covariance structure for all breeds was Unstructured, followed by the Factor-Analytic structure for Nelore, Gir and Indubrasil and Heterogeneous Compound Symmetry for Guzerá.
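The first analysis step summarised above can be sketched in Python: SciPy's `boxcox` estimates the transformation parameter lambda by maximum likelihood and should reduce skewness. The weights below are simulated, and the study's SAS PROC MIXED covariance-structure selection is not reproduced here:

```python
import numpy as np
from scipy.stats import boxcox, skew

# Hypothetical right-skewed body weights (kg) at one measurement age.
rng = np.random.default_rng(42)
weights = rng.lognormal(mean=5.0, sigma=0.4, size=200)

# Box-Cox: y(lambda) = (y**lambda - 1)/lambda for lambda != 0, log(y) for
# lambda == 0; scipy picks lambda by maximum likelihood when none is given.
transformed, lam = boxcox(weights)

# Skewness should shrink toward zero after the transformation.
print(f"lambda = {lam:.3f}, "
      f"skew before = {skew(weights):.3f}, after = {skew(transformed):.3f}")
```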

  19. Online learning in repeated auctions

    OpenAIRE

    Weed, Jonathan; Perchet, Vianney; Rigollet, Philippe

    2015-01-01

    Motivated by online advertising auctions, we consider repeated Vickrey auctions where goods of unknown value are sold sequentially and bidders only learn (potentially noisy) information about a good's value once it is purchased. We adopt an online learning approach with bandit feedback to model this problem and derive bidding strategies for two models: stochastic and adversarial. In the stochastic model, the observed values of the goods are random variables centered around the true value of t...
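A minimal sketch of the stochastic setting described above, assuming a UCB-style optimistic bidder in a repeated second-price auction who observes a noisy value only on wins (this is our illustration of the bandit-feedback idea, not the paper's exact strategy or constants):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value, rounds = 0.7, 2000   # assumed good value and horizon
n_wins, value_sum = 0, 0.0

for t in range(1, rounds + 1):
    competing_bid = rng.uniform(0, 1)          # highest rival bid this round
    # Optimism in the face of uncertainty: empirical mean plus a UCB bonus.
    bonus = np.sqrt(2 * np.log(t) / n_wins) if n_wins else np.inf
    bid = min(1.0, value_sum / n_wins + bonus) if n_wins else 1.0
    if bid >= competing_bid:                   # win: pay second price,
        n_wins += 1                            # observe a noisy value draw
        value_sum += true_value + rng.normal(0, 0.05)

print(f"estimated value after {rounds} rounds: {value_sum / n_wins:.2f}")
```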

  20. Personal selling constructs and measures: Emic versus etic approaches to cross-national research

    NARCIS (Netherlands)

    J. Herché (Joel); M.J. Swenson (Michael); W.J.M.I. Verbeke (Willem)

    1996-01-01

    Evaluates transportability of personal selling measures across cultural boundaries. Concept of measurement development; Emic and etic approaches to developing measures for cross-cultural applications; Cross-national dimensionality, reliability and construct validity of adaptive selling

  1. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Campos, E [Argonne National Laboratory; Sisterson, DL [Argonne National Laboratory

    2015-10-01

    The Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily-accessible, well-articulated estimate of ARM measurement uncertainty is needed.

  2. PERFORMANCE MEASURES OF STUDENTS IN EXAMINATIONS: A STOCHASTIC APPROACH

    OpenAIRE

    Goutam Saha; GOUTAM SAHA

    2013-01-01

    Data on Secondary and Higher Secondary examination (science stream) results from schools in Tripura (North-East India) are analyzed to measure student performance based on tests; performance measures of schools based on final results and continuous assessment processes are also obtained. The variation in results, in terms of grade points in the Secondary and Higher Secondary examinations, is analysed using different sets of performance measures. The transition probabilities from one g...

  3. A situational approach to the measurement of safety culture

    International Nuclear Information System (INIS)

    Semmer, N.; Regennass, A.

    1997-01-01

    Values and social norms are the main target of most approaches to the study of safety culture, and many existing survey methodologies directly ask about these norms and values. However, a number of considerations point to the dangers of limiting the evaluation of safety culture to the analysis of such responses. The necessity is therefore stressed to also consider how actual situations activate norms and behaviours. This relates to the fact that in any given situation both aspects of the appraisal of reality are present: the objective definition of the situation and its personal evaluation. The latter reflects not only the "official" norms and values but also "basic underlying assumptions". The situational approach introduced in this paper confronts people with situations which contain a dilemma with conflicting social norms and where various costs and benefits are associated with different types of behaviour. In addition, the prerequisites and limitations of the situational approach are discussed. (author). 9 refs, 1 fig

  4. An Approach to Noise Reduction in Human Skin Admittance Measurements

    National Research Council Canada - National Science Library

    Kondakci, Suleyman

    2001-01-01

    This paper presents the development of a signal averaging algorithm for recovering excitation responses contaminated by overwhelming amount of various types of interference in skin admittance measurements...

  5. Solid-state dosimeters: A new approach for mammography measurements

    International Nuclear Information System (INIS)

    Brateman, Libby F.; Heintz, Philip H.

    2015-01-01

    Purpose: To compare responses of modern commercially available solid-state dosimeters (SStDs) used in mammography medical physics surveys for two major vendors of current digital mammography units. To compare differences in dose estimates among SStD responses with ionization chamber (IC) measurements for several target/filter (TF) combinations and report their characteristics. To review scientific bases for measurements of quantities required for mammography for traditional measurement procedures and SStDs. Methods: SStDs designed for use with modern digital mammography units were acquired for evaluation from four manufacturers. Each instrument was evaluated under similar conditions with the available mammography beams provided by two modern full-field digital mammography units in clinical use: a GE Healthcare Senographe Essential (Essential) and a Hologic Selenia Dimensions 5000 (Dimensions), with TFs of Mo/Mo, Mo/Rh; and Rh/Rh and W/Rh, W/Ag, and W/Al, respectively. Measurements were compared among the instruments for the TFs over their respective clinical ranges of peak tube potentials for kVp and half-value layer (HVL) measurements. Comparisons for air kerma (AK) and their associated relative calculated average glandular doses (AGDs), i.e., using fixed mAs, were evaluated over the limited range of 28–30 kVp. Measurements were compared with reference IC measurements for AK, reference HVLs and calculated AGD, for two compression paddle heights for AK, to evaluate scatter effects from compression paddles. SStDs may require different positioning from current mammography measurement protocols. Results: Measurements of kVp were accurate in general for the SStDs (within −1.2 and +1.1 kVp) for all instruments over a wide range of set kVp’s and TFs and most accurate for Mo/Mo and W/Rh. Discrepancies between measurements and reference values were greater for HVL and AK. Measured HVL values differed from reference values by −6.5% to +3.5% depending on the SStD and

  6. The Relative Importance of Job Factors: A New Measurement Approach.

    Science.gov (United States)

    Nealey, Stanley M.

    This paper reports on a new two-phase measurement technique that permits a direct comparison of the perceived relative importance of economic vs. non-economic factors in a job situation in accounting for personnel retention, the willingness to produce, and job satisfaction. The paired comparison method was used to measure the preferences of 91…

  7. Solving Complex Problems: A Convergent Approach to Cognitive Load Measurement

    Science.gov (United States)

    Zheng, Robert; Cook, Anne

    2012-01-01

    The study challenged the current practices in cognitive load measurement involving complex problem solving by manipulating the presence of pictures in multiple rule-based problem-solving situations and examining the cognitive load resulting from both off-line and online measures associated with complex problem solving. Forty-eight participants…

  8. Revisiting the TALE repeat.

    Science.gov (United States)

    Deng, Dong; Yan, Chuangye; Wu, Jianping; Pan, Xiaojing; Yan, Nieng

    2014-04-01

    Transcription activator-like (TAL) effectors specifically bind to double stranded (ds) DNA through a central domain of tandem repeats. Each TAL effector (TALE) repeat comprises 33-35 amino acids and recognizes one specific DNA base through a highly variable residue at a fixed position in the repeat. Structural studies have revealed the molecular basis of DNA recognition by TALE repeats. Examination of the overall structure reveals that the basic building block of TALE protein, namely a helical hairpin, is one-helix shifted from the previously defined TALE motif. Here we wish to suggest a structure-based re-demarcation of the TALE repeat which starts with the residues that bind to the DNA backbone phosphate and concludes with the base-recognition hyper-variable residue. This new numbering system is consistent with the α-solenoid superfamily to which TALE belongs, and reflects the structural integrity of TAL effectors. In addition, it confers integral number of TALE repeats that matches the number of bound DNA bases. We then present fifteen crystal structures of engineered dHax3 variants in complex with target DNA molecules, which elucidate the structural basis for the recognition of bases adenine (A) and guanine (G) by reported or uncharacterized TALE codes. Finally, we analyzed the sequence-structure correlation of the amino acid residues within a TALE repeat. The structural analyses reported here may advance the mechanistic understanding of TALE proteins and facilitate the design of TALEN with improved affinity and specificity.

  9. Reconfigurable multiport EPON repeater

    Science.gov (United States)

    Oishi, Masayuki; Inohara, Ryo; Agata, Akira; Horiuchi, Yukio

    2009-11-01

    An extended-reach EPON repeater is one solution for effectively expanding FTTH service areas. In this paper, we propose a reconfigurable multi-port EPON repeater for effective accommodation of multiple ODNs with a single OLT line card. The proposed repeater, which has multiple ports on both the OLT and ODN sides and consists of TRs, BTRs with the CDR function, and a reconfigurable electrical matrix switch, can accommodate multiple ODNs on a single OLT line card by controlling the connections of the matrix switch. Whereas conventional EPON repeaters require full OLT line cards to accommodate subscribers from the initial installation stage, the proposed repeater can dramatically reduce the number of required line cards, especially when the number of subscribers is less than half of the maximum registerable users per OLT. Numerical calculation results show that the extended-reach EPON system with the proposed repeater can save 17.5% of the initial installation cost compared with a conventional repeater, and can remain less expensive than conventional systems up to the maximum number of subscribers, especially when the percentage of ODNs in lightly populated areas is higher.

  10. Discrepancies in reporting the CAG repeat lengths for Huntington's disease

    DEFF Research Database (Denmark)

    Quarrell, Oliver W; Handley, Olivia; O'Donovan, Kirsty

    2011-01-01

    Huntington's disease results from a CAG repeat expansion within the Huntingtin gene; this is measured routinely in diagnostic laboratories. The European Huntington's Disease Network REGISTRY project centrally measures CAG repeat lengths on fresh samples; these were compared with the original...

  11. Coordinated hybrid automatic repeat request

    KAUST Repository

    Makki, Behrooz

    2014-11-01

    We develop a coordinated hybrid automatic repeat request (HARQ) approach. With the proposed scheme, if a user message is correctly decoded in the first HARQ rounds, its spectrum is allocated to other users, to improve the network outage probability and the users' fairness. The results, which are obtained for single- and multiple-antenna setups, demonstrate the efficiency of the proposed approach in different conditions. For instance, with a maximum of M retransmissions and single transmit/receive antennas, the diversity gain of a user increases from M to (J+1)(M-1)+1, where J is the number of users helping that user.
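The quoted diversity-gain expression can be checked numerically; note that with J = 0 helping users it reduces to the uncoordinated gain M (the M, J values below are illustrative, not from the paper):

```python
def coordinated_diversity(M, J):
    """Diversity gain quoted in the abstract: (J+1)*(M-1)+1
    for M max retransmissions and J helping users."""
    return (J + 1) * (M - 1) + 1

# Sanity check: J = 0 recovers the uncoordinated gain M.
for M, J in [(2, 1), (3, 2), (4, 3)]:
    print(f"M={M}, J={J}: {M} -> {coordinated_diversity(M, J)}")
```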

  12. Blind Measurement Selection: A Random Matrix Theory Approach

    KAUST Repository

    Elkhalil, Khalil

    2016-12-14

    This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$ dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists of applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution for the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and confirm the efficiency of the proposed blind methods in reaching the performances of channel-aware algorithms.
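A greedy measurement selection of the general kind described can be sketched as follows, here minimizing an A-optimality proxy, trace of the inverse (regularized) information matrix, for a random measurement matrix. This is an illustration of the greedy idea under assumed choices (error measure, regularization), not the paper's exact algorithm:

```python
import numpy as np

def greedy_select(H, k, eps=1e-6):
    """Greedily pick k of the n sensor rows of H (n x m) to minimize the
    estimation-error proxy trace((eps*I + H_S^T H_S)^{-1}) (A-optimality)."""
    n, m = H.shape
    selected, remaining = [], list(range(n))
    M = eps * np.eye(m)  # regularized information matrix of the selected set
    for _ in range(k):
        best, best_cost = None, np.inf
        for i in remaining:
            cand = M + np.outer(H[i], H[i])      # add row i tentatively
            cost = np.trace(np.linalg.inv(cand))  # A-optimality criterion
            if cost < best_cost:
                best, best_cost = i, cost
        selected.append(best)
        remaining.remove(best)
        M += np.outer(H[best], H[best])
    return selected

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 4))   # n=20 candidate measurements, m=4 parameters
idx = greedy_select(H, k=6)
print(sorted(idx))
```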

  13. Incremental Value of Repeated Risk Factor Measurements for Cardiovascular Disease Prediction in Middle-Aged Korean Adults: Results From the NHIS-HEALS (National Health Insurance System-National Health Screening Cohort).

    Science.gov (United States)

    Cho, In-Jeong; Sung, Ji Min; Chang, Hyuk-Jae; Chung, Namsik; Kim, Hyeon Chang

    2017-11-01

    Increasing evidence suggests that repeatedly measured cardiovascular disease (CVD) risk factors may have an additive predictive value compared with single measured levels. Thus, we evaluated the incremental predictive value of incorporating periodic health screening data for CVD prediction in a large nationwide cohort with periodic health screening tests. A total of 467 708 persons aged 40 to 79 years and free from CVD were randomly divided into development (70%) and validation subcohorts (30%). We developed 3 different CVD prediction models: a single measure model using single time point screening data; a longitudinal average model using average risk factor values from periodic screening data; and a longitudinal summary model using average values and the variability of risk factors. The development subcohort included 327 396 persons who had 3.2 health screenings on average and 25 765 cases of CVD over 12 years. The C statistics (95% confidence interval [CI]) for the single measure, longitudinal average, and longitudinal summary models were 0.690 (95% CI, 0.682-0.698), 0.695 (95% CI, 0.687-0.703), and 0.752 (95% CI, 0.744-0.760) in men and 0.732 (95% CI, 0.722-0.742), 0.735 (95% CI, 0.725-0.745), and 0.790 (95% CI, 0.780-0.800) in women, respectively. The net reclassification index from the single measure model to the longitudinal average model was 1.78% in men and 1.33% in women, and the index from the longitudinal average model to the longitudinal summary model was 32.71% in men and 34.98% in women. Using averages of repeatedly measured risk factor values modestly improves CVD predictability compared with single measurement values. Incorporating the average and variability information of repeated measurements can lead to great improvements in disease prediction. URL: https://www.clinicaltrials.gov. Unique identifier: NCT02931500. © 2017 American Heart Association, Inc.
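The longitudinal summary features described above (within-person average plus variability of repeated risk factor measurements) can be sketched with pandas; the data and column names below are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical periodic screening data: one row per person per visit.
df = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "sbp":       [120, 132, 128, 150, 158, 110, 112, 111],  # systolic BP, mmHg
})

# Longitudinal summary features: within-person mean and variability (SD),
# which would then enter the prediction model as covariates.
features = df.groupby("person_id")["sbp"].agg(sbp_mean="mean", sbp_sd="std")
print(features)
```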

  14. Mature clustered, regularly interspaced, short palindromic repeats RNA (crRNA) length is measured by a ruler mechanism anchored at the precursor processing site.

    Science.gov (United States)

    Hatoum-Aslan, Asma; Maniv, Inbal; Marraffini, Luciano A

    2011-12-27

    Precise RNA processing is fundamental to all small RNA-mediated interference pathways. In prokaryotes, clustered, regularly interspaced, short palindromic repeats (CRISPR) loci encode small CRISPR RNAs (crRNAs) that protect against invasive genetic elements by antisense targeting. CRISPR loci are transcribed as a long precursor that is cleaved within repeat sequences by CRISPR-associated (Cas) proteins. In many organisms, this primary processing generates crRNA intermediates that are subject to additional nucleolytic trimming to render mature crRNAs of specific lengths. The molecular mechanisms underlying this maturation event remain poorly understood. Here, we defined the genetic requirements for crRNA primary processing and maturation in Staphylococcus epidermidis. We show that changes in the position of the primary processing site result in extended or diminished maturation to generate mature crRNAs of constant length. These results indicate that crRNA maturation occurs by a ruler mechanism anchored at the primary processing site. We also show that maturation is mediated by specific cas genes distinct from those genes involved in primary processing, showing that this event is directed by CRISPR/Cas loci.

  15. Repeated cycles of chemical and physical disinfection and their influence on Mycobacterium avium subsp. paratuberculosis viability measured by propidium monoazide F57 quantitative real time PCR.

    Science.gov (United States)

    Kralik, Petr; Babak, Vladimir; Dziedzinska, Radka

    2014-09-01

    Mycobacterium avium subsp. paratuberculosis (MAP) has a high degree of resistance to chemical and physical procedures frequently used for the elimination of other bacteria. Recently, a method for the determination of viability by exposure of MAP to propidium monoazide (PMA) and subsequent real time quantitative PCR (qPCR) was established and found to be comparable with culture. The aim of this study was to apply the PMA qPCR method to determine the impact of increasing concentration or time and repeated cycles of the application of selected disinfectants on MAP viability. Different MAP isolates responded to the same type of stress in different ways. The laboratory strain CAPM 6381 had the highest tolerance, while the 8819 low-passage field isolate was the most sensitive. Ultraviolet exposure caused only a partial reduction in MAP viability; all MAP isolates were relatively resistant to chlorine. Only the application of peracetic acid led to the total elimination of MAP. Repeated application of the treatments resulted in more significant decreases in MAP viability compared to single increases in the concentration or time of exposure to the disinfectant. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Pharmacokinetics of taurolidine following repeated intravenous infusions measured by HPLC-ESI-MS/MS of the derivatives taurultame and taurinamide in glioblastoma patients.

    Science.gov (United States)

    Stendel, Ruediger; Scheurer, Louis; Schlatterer, Kathrin; Stalder, Urs; Pfirrmann, Rolf W; Fiss, Ingo; Möhler, Hanns; Bigler, Laurent

    2007-01-01

    Taurolidine is known to have antimicrobial activity. Furthermore, at lower concentrations, it has been found to exert a selective antineoplastic effect in vitro and in vivo. The aim of this study was to investigate the pharmacokinetics of taurolidine in vivo following repeated intravenous infusion in a schedule used for the treatment of glioblastoma. As a prerequisite, the pharmacokinetics of taurolidine in human blood plasma and whole blood in vitro was investigated. The pharmacokinetics of taurolidine and its derivatives taurultame and taurinamide were investigated in human blood plasma and in whole blood in vitro using blood from a healthy male volunteer. During repeated intravenous infusion therapy with taurolidine, plasma samples were taken every hour for a period of 13 hours per day in seven patients (three male, four female; mean age 48.4 ± 12.8 years, range 27-66 years) with a glioblastoma. Following dansyl derivatisation, the concentrations of taurultame and taurinamide were determined using a new method based on high-performance liquid chromatography (HPLC) online coupled to electrospray ionisation tandem mass spectrometry (ESI-MS/MS) in the multiple reaction monitoring mode. Under the experimental conditions used, taurolidine could not be determined directly and was back-calculated from the taurultame and taurinamide values. The new HPLC-ESI-MS/MS method demonstrated high accuracy and reproducibility. In vitro plasma concentrations of taurultame and taurinamide remained constant over the incubation period. In whole blood in vitro, a time-dependent formation of taurinamide was observed. At the start of the incubation, the taurultame-taurinamide ratio (TTR) was 0.95 at an initial taurolidine concentration of 50 microg/mL, and 1.69 at 100 microg/mL. The concentration of taurultame decreased at the same rate as the taurinamide concentration increased, showing logarithmic kinetics. The calculated taurolidine concentration remained largely constant over the

  17. GRAPH THEORY APPROACH TO QUANTIFY UNCERTAINTY OF PERFORMANCE MEASURES

    Directory of Open Access Journals (Sweden)

    Sérgio D. Sousa

    2015-03-01

    Full Text Available In this work, the performance measurement process is studied to quantify the uncertainty induced in the resulting performance measure (PM). To that end, the causes of uncertainty are identified by analysing the activities undertaken in the three stages of the performance measurement process: design and implementation; data collection and recording; and determination and analysis. A quantitative methodology based on graph theory and on the sources of uncertainty of the performance measurement process is used to calculate an uncertainty index to evaluate the level of uncertainty of a given PM or key performance indicator (KPI). An application example is presented. The quantification of PM uncertainty could contribute to better representing the risk associated with a given decision and also to improving the PM to increase its precision and reliability.

  18. Hysteresis of magnetostructural transitions: Repeatable and non-repeatable processes

    Science.gov (United States)

    Provenzano, Virgil; Della Torre, Edward; Bennett, Lawrence H.; ElBidweihy, Hatem

    2014-02-01

    The Gd5Ge2Si2 alloy and the off-stoichiometric Ni50Mn35In15 Heusler alloy belong to a special class of metallic materials that exhibit first-order magnetostructural transitions near room temperature. The magnetic properties of this class of materials have been extensively studied due to their interesting magnetic behavior and their potential for a number of technological applications such as refrigerants for near-room-temperature magnetic refrigeration. The thermally driven first-order transitions in these materials can be field-induced in the reverse order by applying a strong enough field. The field-induced transitions are typically accompanied by the presence of large magnetic hysteresis, the characteristics of which are a complicated function of temperature, field, and magneto-thermal history. In this study we show that the virgin curve, the major loop, and sequentially measured MH loops are the results of both repeatable and non-repeatable processes, in which the starting magnetostructural state, prior to the cycling of field, plays a major role. Using the Gd5Ge2Si2 and Ni50Mn35In15 alloys, as model materials, we show that a starting single phase state results in fully repeatable processes and large magnetic hysteresis, whereas a mixed phase starting state results in non-repeatable processes and smaller hysteresis.

  19. Hysteresis of magnetostructural transitions: Repeatable and non-repeatable processes

    International Nuclear Information System (INIS)

    Provenzano, Virgil; Della Torre, Edward; Bennett, Lawrence H.; ElBidweihy, Hatem

    2014-01-01

    The Gd5Ge2Si2 alloy and the off-stoichiometric Ni50Mn35In15 Heusler alloy belong to a special class of metallic materials that exhibit first-order magnetostructural transitions near room temperature. The magnetic properties of this class of materials have been extensively studied due to their interesting magnetic behavior and their potential for a number of technological applications such as refrigerants for near-room-temperature magnetic refrigeration. The thermally driven first-order transitions in these materials can be field-induced in the reverse order by applying a strong enough field. The field-induced transitions are typically accompanied by the presence of large magnetic hysteresis, the characteristics of which are a complicated function of temperature, field, and magneto-thermal history. In this study we show that the virgin curve, the major loop, and sequentially measured MH loops are the results of both repeatable and non-repeatable processes, in which the starting magnetostructural state, prior to the cycling of field, plays a major role. Using the Gd5Ge2Si2 and Ni50Mn35In15 alloys, as model materials, we show that a starting single phase state results in fully repeatable processes and large magnetic hysteresis, whereas a mixed phase starting state results in non-repeatable processes and smaller hysteresis

  20. A general approach to welfare measurement through national income accounting

    OpenAIRE

    Asheim, Geir B.; Buchholz, Wolfgang

    2002-01-01

    We develop a framework for analyzing national income accounting using a revealed welfare approach that is sufficiently general to cover, e.g., both the standard discounted utilitarian and maximin criteria as special cases. We show that the basic welfare properties of comprehensive national income accounting, which were previously ascribed only to the discounted utilitarian case, in fact extend to this more general framework. In particular, it holds under a wide range of circumstances that rea...

  1. An Approach to Measuring Provisions for Collateralised Lending

    OpenAIRE

    Cho-hoi Hui; Tom Fong

    2006-01-01

    Under the framework of Basel II, banks which adopt the internal ratings-based approach will be required to compare their actual provisions with expected losses. Any shortfall (i.e., the expected loss exceeds the provision) should be deducted from capital of the bank. It is therefore important to ensure banks make adequate provisions against expected losses. In addition, both sound policy and the Banking Ordinance require banks to take a forward-looking view of provisions. These requirements r...

  2. A stitch in time saves nine? A repeated cross-sectional case study on the implementation of the intersectoral community approach Youth At a Healthy Weight

    NARCIS (Netherlands)

    Kleij, R.M.J.J. van der; Crone, M.R.; Paulussen, T.G.W.M.; Gaar, V.M. van de; Reis, R.

    2015-01-01

    Background The implementation of complex programs, such as the intersectoral community approach Youth At a Healthy Weight (JOGG), often deviates from the intended application. There is limited knowledge of their implementation processes, making it difficult to formulate sound

  3. UK 2009-2010 repeat station report

    Directory of Open Access Journals (Sweden)

    Thomas J.G. Shanahan

    2013-03-01

    The British Geological Survey is responsible for conducting the UK geomagnetic repeat station programme. Measurements made at the UK repeat station sites are used in conjunction with the three UK magnetic observatories: Hartland, Eskdalemuir and Lerwick, to produce a regional model of the local field each year. The UK network of repeat stations comprises 41 stations, which are occupied at approximately 3-4 year intervals. Practices for conducting repeat station measurements continue to evolve as advances are made in survey instrumentation and as the usage of the data continues to change. Here, a summary of the 2009 and 2010 UK repeat station surveys is presented, highlighting the measurement process and techniques, network density, the reduction process and recent results.

  4. A Thermographic Measurement Approach to Assess Supercapacitor Electrical Performances

    Directory of Open Access Journals (Sweden)

    Stanislaw Galla

    2017-12-01

    This paper proposes a qualitative assessment of the condition of supercapacitors based on thermographic measurements. The measurement stand is presented together with the proposed test methodology. The conditions needed to minimize the influence of disturbing factors on thermal imaging measurements, arising both from hardware limitations and from the need to prepare samples, are also indicated. The algorithm used to determine the basic assessment parameters is presented, and additional factors that may facilitate the analysis of the results are suggested. The usefulness of the proposed methodology was tested on commercial supercapacitor samples. All tests were performed in conjunction with classical methods based on capacitance (C) and equivalent series resistance (ESR) measurements, which are also presented in the paper. Selected results showing the observed changes in the basic parameters of the supercapacitors and the accompanying changes in their thermal fields are presented, along with analysis. The observed limitations of the proposed assessment method and suggestions for its development are also described.

  5. Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Calyam, Prasad

    2014-09-15

    The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.

  6. LOCAL PUBLIC EXPENDITURE AUTONOMY – MEASURING APPROACH

    Directory of Open Access Journals (Sweden)

    Irina BILAN

    2013-06-01

    The decentralization process has been continuous in Romania since 1990, involving local authorities in local public finance through exclusive, shared and delegated competences, and thus creating the need to ensure good management of resources and expenditures. The decentralization of competences and responsibilities from the State to local governments has therefore been a major Romanian political theme and a first-rank component of local public finance management, as a main driving instrument for local development. The specific legal framework for local responsibilities is established at both the European and national levels. Researchers, drawing on regulation and practice, have tried to quantify these responsibilities by developing different models to measure local revenue and expenditure autonomy. The paper aims to identify models for measuring local expenditure autonomy and to apply them to Romania. The study measures local expenditure autonomy in Romania using the model of Bell, Ebel, Kaiser and Rojchaichainthorn.

  7. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    Science.gov (United States)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We show how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
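    Once a photoplethysmography signal has been recovered, the pulse rate is essentially the dominant spectral peak in the physiological band. A minimal sketch of that final step follows; the band limits, sampling rate and synthetic signal are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def pulse_rate_bpm(ppg, fs, lo=0.75, hi=4.0):
    """Estimate pulse rate as the dominant frequency of a PPG signal.

    ppg: 1-D signal (arbitrary units); fs: sampling rate in Hz.
    Searches only the physiologically plausible band lo..hi Hz
    (45..240 bpm).
    """
    ppg = np.asarray(ppg, dtype=float)
    ppg = ppg - ppg.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(ppg))
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)         # restrict to heart-rate band
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak                           # Hz -> beats per minute

# Synthetic 72-bpm PPG: a 1.2 Hz sinusoid plus noise, 30 s at a 30 fps webcam rate
fs = 30.0
t = np.arange(0, 30, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
print(round(pulse_rate_bpm(signal, fs)))  # → 72
```

    A 30 s window gives a frequency resolution of 1/30 Hz, i.e. 2 bpm, which is comparable to the 3.5 bpm error reported above.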

  8. Repeat migration and disappointment.

    Science.gov (United States)

    Grant, E K; Vanderkamp, J

    1986-01-01

    This article investigates the determinants of repeat migration among the 44 regions of Canada, using information from a large micro-database which spans the period 1968 to 1971. The explanation of repeat migration probabilities is a difficult task, and this attempt is only partly successful. Many of the explanatory variables are not significant, and the overall explanatory power of the equations is not high. In the area of personal characteristics, the variables related to age, sex, and marital status are generally significant and with expected signs. The distance variable has a strongly positive effect on onward move probabilities. Variables related to prior migration experience have an important impact that differs between return and onward probabilities. In particular, the occurrence of prior moves has a striking effect on the probability of onward migration. The variable representing disappointment, or relative success of the initial move, plays a significant role in explaining repeat migration probabilities. The disappointment variable represents the ratio of actual versus expected wage income in the year after the initial move, and its effect on both types of repeat migration probabilities is always negative and almost always highly significant. The repeat probabilities diminish after a year's stay in the destination region, but disappointment in the most recent year still has a bearing on the delayed repeat probabilities. While the quantitative impact of the disappointment variable is not large, it is difficult to draw comparisons since similar estimates are not available elsewhere.

  9. Suggested Approaches to the Measurement of Computer Anxiety.

    Science.gov (United States)

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  10. Measuring highway efficiency: A DEA approach and the Malmquist index

    NARCIS (Netherlands)

    Sarmento, Joaquim Miranda; Renneboog, Luc; Verga-Matos, Pedro

    A growing concern exists regarding the efficiency of public resources spent in transport infrastructures. In this paper, we measure the efficiency of seven highway projects in Portugal over the past decade by means of a data envelopment analysis and the Malmquist productivity and efficiency indices.
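    The Malmquist index referred to in the title is the geometric mean of two distance-function ratios between consecutive periods. A minimal arithmetic sketch, with entirely hypothetical efficiency scores for one project:

```python
import math

def malmquist_index(d_t_xt, d_t_xt1, d_t1_xt, d_t1_xt1):
    """Malmquist productivity index between periods t and t+1.

    d_a_xb: distance function of the period-a technology evaluated at the
    period-b input/output bundle. MI > 1 indicates productivity growth.
    """
    return math.sqrt((d_t_xt1 / d_t_xt) * (d_t1_xt1 / d_t1_xt))

# Hypothetical efficiency scores for one highway project across two years
mi = malmquist_index(d_t_xt=0.80, d_t_xt1=0.88, d_t1_xt=0.76, d_t1_xt1=0.95)
print(round(mi, 3))  # → 1.173, i.e. roughly 17% productivity growth
```

    In a full DEA study the four distance values would themselves come from solving linear programs, one per project and period.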

  11. Design of psychosocial factors questionnaires: a systematic measurement approach

    Science.gov (United States)

    Vargas, Angélica; Felknor, Sarah A

    2012-01-01

    Background: Evaluation of psychosocial factors requires instruments that measure dynamic complexities. This study explains the design of a set of questionnaires to evaluate work and non-work psychosocial risk factors for stress-related illnesses. Methods: The measurement model was based on a review of the literature. Content validity was assessed by experts and through cognitive interviews. Pilot testing was carried out with a convenience sample of 132 workers. Cronbach's alpha evaluated internal consistency, and concurrent validity was estimated by Spearman correlation coefficients. Results: Three questionnaires were constructed to evaluate exposure to work and non-work risk factors. Content validation improved the questionnaires' coherence with the measurement model. Internal consistency was adequate (α=0.85–0.95). Concurrent validity resulted in moderate correlations of psychosocial factors with stress symptoms. Conclusions: The questionnaires' content reflected a wide spectrum of sources of psychosocial factors. Cognitive interviews improved understanding of questions and dimensions. The structure of the measurement model was confirmed. PMID:22628068

  12. Corruption in Higher Education: Conceptual Approaches and Measurement Techniques

    Science.gov (United States)

    Osipian, Ararat L.

    2007-01-01

    Corruption is a complex and multifaceted phenomenon. Forms of corruption are multiple. Measuring corruption is necessary not only for getting ideas about the scale and scope of the problem, but for making simple comparisons between the countries and conducting comparative analysis of corruption. While the total impact of corruption is indeed…

  13. Validity of the Neuromuscular Recovery Scale: a measurement model approach.

    Science.gov (United States)

    Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L

    2015-08-01

    Objective: To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Design: Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality. The Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Setting: Seven NeuroRecovery Network clinical sites. Participants: Outpatients (N=188) with spinal cord injury. Interventions: Not applicable. Main outcome measure: NRS. Results: While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Items not meeting the rating scale criteria appear to be related to the low frequency counts. Conclusions: The NRS met many of the Rasch model criteria for construct validity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
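    In the partial-credit model fitted here, the probability of each rating category is driven by person ability and item step difficulties. A minimal sketch of the category probabilities; the ability and threshold values below are hypothetical, not estimates from the NRS data:

```python
import math

def pcm_probs(theta, thresholds):
    """Category probabilities under the Rasch partial-credit model.

    theta: person ability (logits); thresholds: item step difficulties
    (logits). P(X = k) is proportional to exp(sum_{j<=k} (theta - tau_j)),
    with the empty sum for category 0.
    """
    logits = [0.0]                       # cumulative logit for category 0
    for tau in thresholds:
        logits.append(logits[-1] + (theta - tau))
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 3-category item with step difficulties -0.5 and 1.0 logits,
# scored for a person of ability 0.8 logits
probs = pcm_probs(theta=0.8, thresholds=[-0.5, 1.0])
print([round(p, 3) for p in probs])  # middle category is most likely here
```

    Fitting the model means estimating the thetas and taus jointly from the rating matrix; the probability structure above is what the fit statistics in the abstract are evaluated against.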

  14. National policy measures. Right approach to foreign direct investment flows

    Directory of Open Access Journals (Sweden)

    Cătălin-Emilian HUIDUMAC-PETRESCU

    2013-02-01

    2011 was a difficult year for all countries, developed and emerging alike. To overcome the negative effects of the financial crisis, many economies adopted new economic policies regarding foreign direct investment (FDI) flows, either to stimulate them or to restrict them (protectionist measures). Two categories of national policies can thus be identified: measures to stimulate FDI flows, and measures aimed at restraining FDI growth through restriction and regulation. The first category includes liberalization measures and promotional and facilitation policies. This study shows that the second category of policies rests on the belief that outward FDI leads to job exports, higher unemployment and a weakening of the industrial base. Many reports on FDI flows, such as those produced by UNCTAD, show that regulation and restriction policies can amount to protectionism, especially in the agricultural and extractive industries, where nationalization processes and divestments have been required. Moreover, the economies that adopted such policies have been less interested in investing abroad, so outward FDI was affected and total global outflows decreased.

  15. Measuring local autonomy: A decision-making approach

    NARCIS (Netherlands)

    Fleurke, F.; Willemse, R.

    2006-01-01

    In studies on central-local relations it is common to assess local autonomy in a deductive way. The extent of local autonomy is determined by measuring the central legal and financial competence, after which the remaining room for local decision-making is determined. The outcome of this indirect

  16. Flux Measurements in Trees: Methodological Approach and Application to Vineyards

    Directory of Open Access Journals (Sweden)

    Francesca De Lorenzi

    2008-03-01

    In this paper a review of two sap flow methods for measuring transpiration in vineyards is presented. The objective of this work is to examine the potential for detecting transpiration in trees in response to environmental stresses, particularly high concentrations of ozone (O3) in the troposphere. The methods described are the stem heat balance and the thermal dissipation probe; advantages and disadvantages of each method are detailed. Applications of both techniques are shown in two large commercial vineyards in Southern Italy (Apulia and Sicily) under a semi-arid climate. Sap flow techniques measure transpiration at the plant scale, so an upscaling procedure is necessary to calculate transpiration at the whole-stand level. Here a general technique for linking transpiration at the plant level to the canopy value is presented, based on experimental relationships between transpiration and biometric characteristics of the trees. In both vineyards, transpiration measured by sap flow methods compares well with evapotranspiration measured by micrometeorological techniques at the canopy scale. Moreover, the soil evaporation component has been quantified. In conclusion, comments on the suitability of sap flow methods for studying the interactions between trees and ozone are given.

  17. Individual match approach to Bowling performance measures in ...

    African Journals Online (AJOL)

    Match conditions can play a significant role in player performances in a cricket match. If the pitch is in a good condition, the batsmen can achieve good scores, making it difficult for the bowlers. In the case of an uneven pitch or adverse weather conditions, the bowlers may have the upper hand. In order to measure bowlers' ...

  18. Quantitative approach to measuring the cerebrospinal fluid space with CT

    Energy Technology Data Exchange (ETDEWEB)

    Zeumer, H.; Hacke, W.; Hartwich, P.

    1982-01-01

    A method for measuring the subarachnoid space by using an independent CT evaluation unit is described. Normal values have been calculated according to patient age, and three examples are presented demonstrating reversible decrease of brain volume in patients suffering from anorexia nervosa and chronic alcoholism.

  19. Measuring real exchange rate misalignment in Croatia: cointegration approach

    Directory of Open Access Journals (Sweden)

    Irena Palić

    2014-12-01

    The purpose of the paper is to analyze misalignment of the real exchange rate in Croatia. The misalignment analysis is conducted using the permanent equilibrium exchange rate approach. The equilibrium real exchange rate is computed using the cointegration approach, whereby the real exchange rate and its fundamentals, namely terms of trade, net foreign assets and the ratio of prices of tradables to non-tradables, are included in the cointegration analysis. The Hodrick-Prescott filter is used to obtain permanent values of the equilibrium real exchange rate. The real exchange rate misalignment is computed as the deviation of the RER from its permanent equilibrium level. Four overvaluation periods and three undervaluation periods are recorded in Croatia in the observed period. Overvaluation periods are more frequent and of longer duration than undervaluation periods. However, the real exchange rate does not deviate largely from its estimated equilibrium value in the observed period, and it is neither overvalued nor undervalued constantly; rather, the periods alternate. Considering the results of the analysis, together with the empirical characteristics of the Croatian economy, namely high foreign currency indebtedness, a highly euroized economy and an underdeveloped export-oriented sector, depreciation of the real exchange rate is not recommended to economic policy makers and the current Croatian exchange rate policy is appropriate.
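    The Hodrick-Prescott decomposition used here to extract the permanent component can be computed directly from its normal equations. A sketch under stated assumptions: the series below is synthetic, and λ=1600 is the conventional smoothing value for quarterly data, not necessarily the one used in the paper:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: the trend tau minimizes
    sum (y - tau)^2 + lam * sum (second differences of tau)^2,
    which gives the linear system (I + lam * D'D) tau = y,
    where D is the (n-2) x n second-difference operator."""
    y = np.asarray(y, dtype=float)
    n = y.size
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend   # permanent component, cyclical deviation

# Hypothetical quarterly log real exchange rate: slow drift plus noise
rng = np.random.default_rng(1)
rer = np.linspace(0.0, 0.1, 80) + 0.01 * rng.standard_normal(80)
trend, misalignment = hp_filter(rer, lam=1600.0)
```

    In the paper's terms, `misalignment` is the deviation of the (equilibrium) RER from its permanent level; its sign over a sub-period marks an overvaluation or undervaluation episode.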

  20. Measure theoretical approach to recurrent properties for quantum dynamics

    International Nuclear Information System (INIS)

    Otobe, Yoshiki; Sasaki, Itaru

    2011-01-01

    Poincaré's recurrence theorem, which states that every Hamiltonian dynamics enclosed in a finite volume returns to its initial position as close as one wishes, is a mathematical basis of statistical mechanics. It is Liouville's theorem that guarantees that the dynamics preserves the volume on the state space. A quantum version of Poincaré's theorem was obtained in the middle of the 20th century without any volume structures of the state space (Hilbert space). One of our aims in this paper is to establish such properties of quantum dynamics from an analog of Liouville's theorem, namely, we will construct a natural probability measure on the Hilbert space from a Hamiltonian defined on the space. Then we will show that the measure is invariant under the corresponding Schrödinger flow. Moreover, we show that the dynamics naturally causes an infinite-dimensional Weyl transformation. It also enables us to discuss the ergodic properties of such dynamics. (paper)

  1. Experimental approaches to the measurement of dielectronic recombination

    International Nuclear Information System (INIS)

    Datz, S.

    1984-01-01

    In dielectronic recombination, the first step involves a continuum electron which excites a previously bound electron and, in so doing, loses just enough energy to be captured in a bound state (nl). This results in a doubly excited ion of a lower charge state which may either autoionize or emit a photon resulting in a stabilized recombination. The complete signature of the event is an ion of reduced charge and an emitted photon. Methods of measuring this event are discussed

  2. Cervical length measurement: comparison of transabdominal and transvaginal approach

    DEFF Research Database (Denmark)

    Westerway, Sue C; Pedersen, Lars Henning; Hyett, Jon

    2015-01-01

    to external cervical os. Bland-Altman plots and Wilcoxon signed rank test were used to evaluate differences between TA and TV measurements. Results: The validity of the TA method depended on cervical length. Although the TA method underestimated cervical length by 2.0 mm on average (P ... plots showed an inverse trend with shorter cervixes. In women with a cervix test to detect cervical length

  3. A repeating fast radio burst.

    Science.gov (United States)

    Spitler, L G; Scholz, P; Hessels, J W T; Bogdanov, S; Brazier, A; Camilo, F; Chatterjee, S; Cordes, J M; Crawford, F; Deneva, J; Ferdman, R D; Freire, P C C; Kaspi, V M; Lazarus, P; Lynch, R; Madsen, E C; McLaughlin, M A; Patel, C; Ransom, S M; Seymour, A; Stairs, I H; Stappers, B W; van Leeuwen, J; Zhu, W W

    2016-03-10

    Fast radio bursts are millisecond-duration astronomical radio pulses of unknown physical origin that appear to come from extragalactic distances. Previous follow-up observations have failed to find additional bursts at the same dispersion measure (that is, the integrated column density of free electrons between source and telescope) and sky position as the original detections. The apparent non-repeating nature of these bursts has led to the suggestion that they originate in cataclysmic events. Here we report observations of ten additional bursts from the direction of the fast radio burst FRB 121102. These bursts have dispersion measures and sky positions consistent with the original burst. This unambiguously identifies FRB 121102 as repeating and demonstrates that its source survives the energetic events that cause the bursts. Additionally, the bursts from FRB 121102 show a wide range of spectral shapes that appear to be predominantly intrinsic to the source and which vary on timescales of minutes or less. Although there may be multiple physical origins for the population of fast radio bursts, these repeat bursts with high dispersion measure and variable spectra specifically seen from the direction of FRB 121102 support an origin in a young, highly magnetized, extragalactic neutron star.
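    The dispersion measure referred to above maps directly to a frequency-dependent arrival delay through the standard cold-plasma relation. A sketch: the receiver band is an illustrative assumption, while the DM of roughly 557 pc cm^-3 is the value reported for FRB 121102:

```python
def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time delay (ms) between a low and a high observing
    frequency for a given dispersion measure DM (pc cm^-3), using the
    cold-plasma dispersion constant ~4.149 ms GHz^2 per (pc cm^-3)."""
    k_dm = 4.149  # ms GHz^2 / (pc cm^-3)
    return k_dm * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# Delay for FRB 121102 (DM ~ 557 pc cm^-3) across a hypothetical
# 1.2-1.5 GHz receiver band
delay = dispersion_delay_ms(557.0, 1.2, 1.5)
print(round(delay, 1), "ms")
```

    Delays of this size (hundreds of milliseconds across an L-band receiver) are what make the DM a precise fingerprint for associating repeat bursts with the original detection.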

  4. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    Science.gov (United States)

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimating measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and to compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated the measurement uncertainty of the concentration of glucose according to the CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach: we identified the sources of uncertainty, made an uncertainty budget, assessed the measurement functions, determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. We then estimated and corrected systematic bias using certified reference material for glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples of estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches are approximately equivalent and interchangeable, and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
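    The top-down combination described above can be sketched in a few lines: long-term IQC imprecision and the uncertainty of the bias correction are combined in quadrature, then multiplied by the coverage factor. The CV and bias-uncertainty values below are hypothetical, chosen only to land in the same range as the paper's low-concentration result:

```python
import math

def topdown_expanded_uncertainty(cv_iqc_percent, mean_conc, u_bias, k=2.0):
    """Top-down expanded uncertainty: combine long-term IQC imprecision
    with the uncertainty of the bias correction in quadrature, then
    apply coverage factor k (k = 2 gives ~95% coverage)."""
    u_imprecision = cv_iqc_percent / 100.0 * mean_conc  # standard uncertainty
    u_combined = math.sqrt(u_imprecision ** 2 + u_bias ** 2)
    return k * u_combined

# Hypothetical inputs: 1.5% IQC CV at 5.57 mmol/L, bias uncertainty 0.02 mmol/L
U = topdown_expanded_uncertainty(1.5, 5.57, 0.02)
print(f"U = \u00b1{U:.2f} mmol/L (k=2)")
```

    The bottom-up budget combines more terms (calibrator, volume, temperature, etc.) the same way, which is why the two routes converge when the IQC data capture the dominant sources.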

  5. Measurement approaches to the sense of humor: Introduction and overview

    OpenAIRE

    Ruch, Willibald

    1996-01-01

    There has been a renaissance of research interest in the "sense of humor" in recent years, partly as an attempt to define the concept but more strenuously to provide instruments for its measurement. A quick count of recent publications shows an average of two to three new sense-of-humor instruments per year, or one every four to six months. This intensity of research is unparalleled in the history of humor research and contrasts sharply with 25 years ago when the renewal of interest in hum...

  6. Exploring Approaches How to Measure a Lean Process

    Directory of Open Access Journals (Sweden)

    Österman Christer

    2014-08-01

    Purpose: The purpose of the research is to explore a practical method of measuring the implementation of lean in a process. The method is based on examining the abilities of a group. At this scale, the ability to work in a standardized way and to solve problems is important. These two abilities depend on each other and are fundamental to the group's ability to create a stable result. In this context, standardized work (SW) is defined as the methods used in a process to generate stable results. Problem solving (PS) is defined as the methods used to return a process to a condition where SW is possible.

  7. How to measure soundscapes. A theoretical and practical approach

    Science.gov (United States)

    Schulte-Fortkamp, Brigitte

    2002-11-01

    Noise sources interact with the specific acoustic and environmental makeup, topography, meteorology, land use pattern, and lifestyle. The evaluation of soundscapes needs subject-related methodological procedures. With such suitable measurements, a way has to be found that allows us to rely on different dimensions of reaction to noise. Improving the soundscape of an urban environment requires accounting for the qualitative appreciation, as a cognitive judgment given by listeners, and particularly for the interaction between acoustic dimensions and other sensory modalities in qualitative judgments of an urban environment (Maffiolo). The structure of the residential area, that is, the combination of noise sources, is important for the judgment of a soundscape, as are the subjective parameters that are relevant from people's point of view. Moreover, the relationship between the two defines the background for assessments. Studies are needed on the subject and its capability in perception and interpretation; studies on the subject within society; studies on the social and cultural context; and field studies including physical measurements. Soundscapes may be defined in terms of their effects on man and vice versa, and acoustical ecology will probably serve to explain the function of soundscapes.

  8. Measuring healthcare preparedness: an all-hazards approach

    Directory of Open Access Journals (Sweden)

    Marcozzi David E

    2012-10-01

    In a paper appearing in this issue, Adini et al. describe a struggle familiar to many emergency planners: the challenge of planning for all scenarios. The authors contend that all-hazards, or capabilities-based, planning, in which a set of core capabilities applicable to numerous types of events is developed, is a more efficient way to achieve general health care system emergency preparedness than scenario-based planning. Essentially, the core of what is necessary to plan for and respond to one kind of disaster (e.g. a biologic event) is also necessary for planning and responding to other types of disasters, allowing for improvements in planning and maximizing efficiencies. While Adini et al. have advanced the science of health care emergency preparedness through their consideration of 490 measures to assess preparedness, a shorter set of validated preparedness measures would support the dual goals of accountability and improved outcomes and could provide the basis for determining which actions in the name of preparedness really matter.

  9. Measure theoretical approach to recurrent properties for quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Otobe, Yoshiki [Department of Mathematical Sciences, Shinshu University, Asahi 3-1-1, Matsumoto 390-8621 (Japan); Sasaki, Itaru, E-mail: otobe@math.shinshu-u.ac.jp, E-mail: isasaki@shinshu-u.ac.jp [Fiber-Nanotech Young Researcher Empowerment Center, Shinshu University, Asahi 3-1-1, Matsumoto 390-8621 (Japan)

    2011-11-18

    Poincaré's recurrence theorem, which states that every Hamiltonian dynamics enclosed in a finite volume returns to its initial position as close as one wishes, is a mathematical basis of statistical mechanics. It is Liouville's theorem that guarantees that the dynamics preserves the volume on the state space. A quantum version of Poincaré's theorem was obtained in the middle of the 20th century without any volume structures of the state space (Hilbert space). One of our aims in this paper is to establish such properties of quantum dynamics from an analog of Liouville's theorem, namely, we will construct a natural probability measure on the Hilbert space from a Hamiltonian defined on the space. Then we will show that the measure is invariant under the corresponding Schrödinger flow. Moreover, we show that the dynamics naturally causes an infinite-dimensional Weyl transformation. It also enables us to discuss the ergodic properties of such dynamics. (paper)

  10. Measuring market share of petrol stations using conditional probability approach

    Science.gov (United States)

    Sharif, Shamshuritawati; Lwee, Xue Yin

    2017-05-01

Oil and gas production has been a pillar of Malaysia's growth over the past decades, and the oil industry is one of the most strategic economic sectors in the world. Although the oil industry is essential for a country's economic growth, establishing a new petrol station is a risky undertaking, so a dealer must have some information in hand before setting up a new business plan. Understanding the current business situation is an important strategy for avoiding risky ventures. In this study, the aim is to deliver a very simple but essential way to identify market share based on customers' choice factors. This approach is presented so that non-statisticians can easily use it to help their business performance. The study found that the most important factors differ from one station to another. The results show that the factors of customer choice for BHPetrol, Caltex, PETRON, PETRONAS and SHELL are site location, service quality, service quality, size of the petrol station, and brand image, respectively.
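The conditional-probability idea in this abstract can be sketched in a few lines of code. The survey data below are invented for illustration (the station names follow the abstract; the counts and factor labels do not come from the study): market share is estimated as the marginal probability of choosing a station, and each station's key factor as the argmax of P(factor | station).

```python
from collections import Counter

# Hypothetical survey responses: (station chosen, customer's main choice factor).
responses = [
    ("PETRONAS", "size"), ("PETRONAS", "size"), ("PETRONAS", "price"),
    ("SHELL", "brand image"), ("SHELL", "brand image"),
    ("PETRON", "service quality"), ("PETRON", "service quality"),
    ("Caltex", "service quality"), ("BHPetrol", "site location"),
    ("BHPetrol", "site location"),
]

n = len(responses)
station_counts = Counter(s for s, _ in responses)

# Market share = marginal probability P(station).
market_share = {s: c / n for s, c in station_counts.items()}

# Dominant factor per station = argmax of conditional probability P(factor | station).
pair_counts = Counter(responses)
dominant = {}
for (station, factor), c in pair_counts.items():
    p = c / station_counts[station]          # P(factor | station)
    if p > dominant.get(station, ("", 0.0))[1]:
        dominant[station] = (factor, p)

for s in sorted(market_share):
    f, p = dominant[s]
    print(f"{s}: share={market_share[s]:.2f}, top factor={f} (P={p:.2f})")
```

With real survey data, the same two counters would be filled from questionnaire responses instead of the hard-coded list.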

  11. In vitro systems toxicology approach to investigate the effects of repeated cigarette smoke exposure on human buccal and gingival organotypic epithelial tissue cultures.

    Science.gov (United States)

    Schlage, Walter K; Iskandar, Anita R; Kostadinova, Radina; Xiang, Yang; Sewer, Alain; Majeed, Shoaib; Kuehn, Diana; Frentzel, Stefan; Talikka, Marja; Geertz, Marcel; Mathis, Carole; Ivanov, Nikolai; Hoeng, Julia; Peitsch, Manuel C

    2014-10-01

Smoking has been associated with diseases of the lung, pulmonary airways and oral cavity. Cytologic, genomic and transcriptomic changes in oral mucosa correlate with oral pre-neoplasia, cancer and inflammation (e.g. periodontitis). Alteration of smoking-related gene expression changes in oral epithelial cells is similar to that in bronchial and nasal epithelial cells. Using a systems toxicology approach, we have previously assessed the impact of cigarette smoke (CS) seen as perturbations of biological processes in human nasal and bronchial organotypic epithelial culture models. Here, we report our further assessment using in vitro human oral organotypic epithelium models. We exposed the buccal and gingival organotypic epithelial tissue cultures to CS at the air-liquid interface. CS exposure was associated with increased secretion of inflammatory mediators, induction of cytochrome P450 activity and overall weak toxicity in both tissues. Using microarray technology, gene-set analysis and a novel computational modeling approach leveraging causal biological network models, we identified a CS impact on xenobiotic metabolism-related pathways accompanied by a more subtle alteration in inflammatory processes. Gene-set analysis further indicated that the CS-induced pathways in the in vitro buccal tissue models resembled those in the in vivo buccal biopsies of smokers from a published dataset. These findings support the translatability of systems responses from in vitro to in vivo and demonstrate the applicability of oral organotypic tissue models for the impact assessment of CS on the various tissues exposed during smoking, as well as for the impact assessment of reduced-risk products.

  12. A measure theoretical approach to quantum stochastic processes

    Energy Technology Data Exchange (ETDEWEB)

    Waldenfels, Wilhelm von

    2014-04-01

Authored by a leading researcher in the field. Self-contained presentation of the subject matter. Examines a number of worked examples in detail. This monograph takes as its starting point that abstract quantum stochastic processes can be understood as a quantum field theory in one space and one time coordinate. As a result it is appropriate to represent operators as power series of creation and annihilation operators in normal-ordered form, which can be achieved using classical measure theory. Considering in detail four basic examples (e.g. a two-level atom coupled to a heat bath of oscillators), in each case the Hamiltonian of the associated one-parameter strongly continuous group is determined and the spectral decomposition is explicitly calculated in the form of generalized eigenvectors. Advanced topics include the theory of the Hudson-Parthasarathy equation and the amplified oscillator problem. To that end, a chapter on white noise calculus has also been included.

  13. Theoretical information measurement in nonrelativistic time-dependent approach

    Science.gov (United States)

    Najafizade, S. A.; Hassanabadi, H.; Zarrinkamar, S.

    2018-02-01

The information-theoretic measures of the time-dependent Schrödinger equation are investigated via the Shannon information entropy, variance and local Fisher quantities. In our calculations, we consider the first two states n = 0,1 and obtain the position Sx (t) and momentum Sp (t) Shannon entropies as well as the Fisher information Ix (t) in position and Ip (t) in momentum space. Using the Fourier-transformed wave function, we obtain the results in momentum space. Some interesting features of the information entropy densities ρs (x,t) and γs (p,t), as well as the probability densities ρ (x,t) and γ (p,t) for time-dependent states, are demonstrated. We establish a general relation between variance and Fisher information. The Bialynicki-Birula-Mycielski inequality is tested and verified for the states n = 0,1.
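As a side illustration of the quantities involved (not the paper's time-dependent system), one can check numerically that a Gaussian wave packet saturates the Bialynicki-Birula-Mycielski bound Sx + Sp ≥ 1 + ln π (with ħ = 1). The width `sigma` below is an arbitrary choice for the sketch:

```python
import numpy as np

# Gaussian (n = 0, harmonic-oscillator-like) wave packet with hbar = 1:
# position density is N(0, sigma^2); its momentum density is N(0, 1/(4 sigma^2)).
sigma = 0.7
dx = 1e-4
x = np.arange(-15, 15, dx)
rho = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def shannon_entropy(density, step):
    """Differential Shannon entropy S = -integral of rho*ln(rho), Riemann sum."""
    mask = density > 1e-300            # avoid log(0) in the far tails
    return -np.sum(density[mask] * np.log(density[mask])) * step

Sx = shannon_entropy(rho, dx)

sigma_p = 1.0 / (2 * sigma)            # momentum-space width of the Gaussian
p = np.arange(-15, 15, dx)
gamma = np.exp(-p**2 / (2 * sigma_p**2)) / np.sqrt(2 * np.pi * sigma_p**2)
Sp = shannon_entropy(gamma, dx)

# Bialynicki-Birula-Mycielski bound: Sx + Sp >= 1 + ln(pi); Gaussians saturate it.
bound = 1 + np.log(np.pi)
print(f"Sx + Sp = {Sx + Sp:.4f}, BBM bound = {bound:.4f}")
```

Narrowing the packet lowers Sx and raises Sp by the same amount, so the sum stays pinned at the bound, which is the entropic form of the uncertainty principle.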

  14. Innovative approaches for addressing old challenges in component importance measures

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel

    2012-01-01

Importance measures (IM) are component-related indices that allow assessing how a component in a system affects one or more system-level performance functions. While several IM have been presented in the literature, challenges still remain with respect to the following: (1) multiple ranking—multiple perspective, (2) multi-component importance and, (3) multi-function importance. To address these challenges, this paper proposes a set of innovative solutions based on several available techniques: Hasse diagrams, the Copeland score and multi-objective optimization. As such, the purpose of this research is twofold: first, to propose solutions and, second, to foster new research to address these challenges. Each of the proposed solutions is exemplified with a working example.
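The Copeland-score idea for reconciling multiple importance rankings can be sketched as follows. The component labels and the three rankings are hypothetical, not taken from the paper: each ranking plays the role of one importance measure's ordering, and a component's Copeland score is its pairwise wins minus losses across all rankings.

```python
from itertools import combinations

# Hypothetical example: three importance measures rank four components A-D,
# each list ordered from most to least important.
rankings = [
    ["A", "B", "C", "D"],
    ["B", "A", "C", "D"],
    ["A", "C", "B", "D"],
]
components = ["A", "B", "C", "D"]

def copeland_scores(rankings, items):
    """Copeland score: +1 for each pairwise majority win, -1 for each loss."""
    score = {c: 0 for c in items}
    for a, b in combinations(items, 2):
        wins_a = sum(r.index(a) < r.index(b) for r in rankings)
        wins_b = len(rankings) - wins_a
        if wins_a > wins_b:
            score[a] += 1; score[b] -= 1
        elif wins_b > wins_a:
            score[b] += 1; score[a] -= 1
    return score

scores = copeland_scores(rankings, components)
order = sorted(components, key=lambda c: -scores[c])
print(scores, order)
```

The aggregated order resolves the disagreement between the individual rankings by majority rule over component pairs; ties (zero net score) would indicate components the measures genuinely cannot separate.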

  15. A measure theoretical approach to quantum stochastic processes

    CERN Document Server

    Von Waldenfels, Wilhelm

    2014-01-01

    This monograph takes as starting point that abstract quantum stochastic processes can be understood as a quantum field theory in one space and in one time coordinate. As a result it is appropriate to represent operators as power series of creation and annihilation operators in normal-ordered form, which can be achieved using classical measure theory. Considering in detail four basic examples (e.g. a two-level atom coupled to a heat bath of oscillators), in each case the Hamiltonian of the associated one-parameter strongly continuous group is determined and the spectral decomposition is explicitly calculated in the form of generalized eigen-vectors. Advanced topics include the theory of the Hudson-Parthasarathy equation and the amplified oscillator problem. To that end, a chapter on white noise calculus has also been included.

  16. Measuring efficiency of international crude oil markets: A multifractality approach

    Science.gov (United States)

    Niere, H. M.

    2015-01-01

The three major international crude oil markets are treated as complex systems and their multifractal properties are explored. The study covers daily prices of Brent crude, the OPEC reference basket and West Texas Intermediate (WTI) crude from January 2, 2003 to January 2, 2014. A multifractal detrended fluctuation analysis (MFDFA) is employed to extract the generalized Hurst exponents in each of the time series. The generalized Hurst exponent is used to measure the degree of multifractality, which in turn is used to quantify the efficiency of the three international crude oil markets. To identify whether the source of multifractality is long-range correlations or broad fat-tail distributions, shuffled data and surrogate data corresponding to each of the time series are generated. Shuffled data are obtained by randomizing the order of the price returns data, which destroys any long-range correlation of the time series. Surrogate data are produced using the Fourier-detrended fluctuation analysis (F-DFA), by randomizing the phases of the price returns data in Fourier space, which normalizes the distribution of the time series. The study found that for the three crude oil markets, there is a strong dependence of the generalized Hurst exponents on the order of fluctuations. This shows that the daily price time series of the markets under study have signs of multifractality. Using the degree of multifractality as a measure of efficiency, the results show that WTI is the most efficient while OPEC is the least efficient market. This implies that OPEC has the highest likelihood to be manipulated among the three markets. This reflects the fact that Brent and WTI are very competitive markets and hence have a higher level of complexity compared with OPEC, which has a large monopoly power. Comparing with the shuffled and surrogate data, the findings suggest that for all three crude oil markets, the multifractality is mainly due to long-range correlations.
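The shuffling step described above is easy to demonstrate: shuffling preserves the distribution of returns exactly while destroying temporal correlations. A minimal sketch with a synthetic AR(1) series standing in for correlated price returns (the series and its parameters are illustrative, not oil data, and this is not the full MFDFA):

```python
import numpy as np

rng = np.random.default_rng(42)

# Correlated synthetic "returns" series (AR(1) with coefficient phi).
n, phi = 20000, 0.8
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

def lag1_autocorr(series):
    """Sample lag-1 autocorrelation of a series."""
    s = series - series.mean()
    return np.dot(s[:-1], s[1:]) / np.dot(s, s)

shuffled = rng.permutation(x)   # same values, random temporal order

print(f"original lag-1 autocorr: {lag1_autocorr(x):.3f}")
print(f"shuffled lag-1 autocorr: {lag1_autocorr(shuffled):.3f}")
# The shuffle keeps the distribution (identical sorted values) but kills memory.
assert np.allclose(np.sort(x), np.sort(shuffled))
```

In the paper's logic, if multifractality survives shuffling it must come from the fat-tailed distribution; if it vanishes, as the correlation does here, it came from the long-range temporal ordering.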

  17. An isotopic approach to measuring nitrogen balance in caribou

    Science.gov (United States)

    Gustine, David D.; Barboza, Perry S.; Adams, Layne G.; Farnell, Richard G.; Parker, Katherine L.

    2011-01-01

Nutritional restrictions in winter may reduce the availability of protein for reproduction and survival in northern ungulates. We refined a technique that uses recently voided excreta on snow to assess protein status in wild caribou (Rangifer tarandus) in late winter. Our study was the first application of this non-invasive, isotopic approach to assess the protein status of wild caribou by determining dietary and endogenous contributions of nitrogen (N) to urinary urea. We used isotopic ratios of N (δ15N) in urine and fecal samples to estimate the proportion of urea N derived from body N (p-UN) in pregnant, adult females of the Chisana Herd, a small population that ranged across the Alaska-Yukon border. We took advantage of a predator-exclosure project to examine the N status of penned caribou in April 2006. Lichens were the primary forage (>40%) consumed by caribou in the pen, and the δ15N of fiber tracked the major forages in their diets. The δ15N of urinary urea for females in the pen was depleted (−1.3 ± 1.0‰, mean ± SD) relative to the δ15N of body N (2.7 ± 0.7‰). A similar proportion of animals in the exclosure lost core body mass (excluding estimates of fetal and uterine tissues; 55%) and body protein (estimated by isotope ratios; 54%). This non-invasive technique could be applied at various spatial and temporal scales to assess trends in the protein status of free-ranging populations of northern ungulates. Intra- and inter-annual estimates of protein status could help managers monitor effects of foraging conditions on nutritional constraints in ungulates, increase the efficiency and efficacy of management actions, and help prepare stakeholders for potential changes in population trends.

  18. A high stability and repeatability electrochemical scanning tunneling microscope

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Zhigang; Wang, Jihao; Lu, Qingyou, E-mail: qxl@ustc.edu.cn [High Magnetic Field Laboratory, Chinese Academy of Sciences and University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei National Laboratory for Physical Sciences at Microscale, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hou, Yubin [High Magnetic Field Laboratory, Chinese Academy of Sciences and University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2014-12-15

We present a home-built electrochemical scanning tunneling microscope (ECSTM) with very high stability and repeatability. Its coarse approach is driven by a closely stacked piezo motor of GeckoDrive type with four rigid clamping points, which greatly enhances the rigidity, compactness, and stability. It can give high-clarity atomic resolution images without sound and vibration isolation. Its drift rates in the XY and Z directions in solution are as low as 84 pm/min and 59 pm/min, respectively. In addition, repeatable coarse approaches in solution within a 2 mm travel distance show a lateral deviation of less than 50 nm. The gas environment can be well controlled to lower the evaporation rate of the cell, thus reducing contamination and extending the measurement time. An atomically resolved SO{sub 4}{sup 2−} image on an Au (111) working electrode is demonstrated to show the performance of the ECSTM.

  19. A high stability and repeatability electrochemical scanning tunneling microscope.

    Science.gov (United States)

    Xia, Zhigang; Wang, Jihao; Hou, Yubin; Lu, Qingyou

    2014-12-01

We present a home-built electrochemical scanning tunneling microscope (ECSTM) with very high stability and repeatability. Its coarse approach is driven by a closely stacked piezo motor of GeckoDrive type with four rigid clamping points, which greatly enhances the rigidity, compactness, and stability. It can give high-clarity atomic resolution images without sound and vibration isolation. Its drift rates in the XY and Z directions in solution are as low as 84 pm/min and 59 pm/min, respectively. In addition, repeatable coarse approaches in solution within a 2 mm travel distance show a lateral deviation of less than 50 nm. The gas environment can be well controlled to lower the evaporation rate of the cell, thus reducing contamination and extending the measurement time. An atomically resolved SO4(2-) image on an Au (111) working electrode is demonstrated to show the performance of the ECSTM.

  20. The Pentapeptide Repeat Proteins

    OpenAIRE

    Vetting, Matthew W.; Hegde, Subray S.; Fajardo, J. Eduardo; Fiser, Andras; Roderick, Steven L.; Takiff, Howard E.; Blanchard, John S.

    2006-01-01

    The Pentapeptide Repeat Protein (PRP) family has over 500 members in the prokaryotic and eukaryotic kingdoms. These proteins are composed of, or contain domains composed of, tandemly repeated amino acid sequences with a consensus sequence of [S,T,A,V][D,N][L,F]-[S,T,R][G]. The biochemical function of the vast majority of PRP family members is unknown. The three-dimensional structure of the first member of the PRP family was determined for the fluoroquinolone resistance protein (MfpA) from Myc...

  1. Measuring Filament Orientation: A New Quantitative, Local Approach

    Energy Technology Data Exchange (ETDEWEB)

    Green, C.-E.; Cunningham, M. R.; Jones, P. A. [School of Physics, University of New South Wales, Sydney, NSW, 2052 (Australia); Dawson, J. R. [CSIRO Astronomy and Space Science, Australia Telescope National Facility, P.O. Box 76, Epping, NSW 1710 (Australia); Novak, G. [Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208 (United States); Fissel, L. M. [National Radio Astronomy Observatory (NRAO), 520 Edgemont Road, Charlottesville, VA, 22903 (United States)

    2017-09-01

The relative orientation between filamentary structures in molecular clouds and the ambient magnetic field provides insight into filament formation and stability. To calculate the relative orientation, a measurement of filament orientation is first required. We propose a new method to calculate the orientation of the one-pixel-wide filament skeleton that is output by filament identification algorithms such as filfinder. We derive the local filament orientation from the direction of the intensity gradient in the skeleton image using the Sobel filter and a few simple post-processing steps. We call this the “Sobel-gradient method.” The resulting filament orientation map can be compared quantitatively on a local scale with the magnetic field orientation map to then find the relative orientation of the filament with respect to the magnetic field at each point along the filament. It can also be used for constructing radial profiles for filament width fitting. The proposed method facilitates automation in analyses of filament skeletons, which is imperative in this era of “big data.”
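A minimal sketch of the Sobel-gradient idea, using scipy rather than the authors' pipeline: on a synthetic one-pixel-wide diagonal skeleton, the intensity gradient is perpendicular to the filament, so rotating the gradient direction by 90° (modulo 180°) recovers the local filament orientation. The gradient is sampled at a pixel adjacent to the skeleton, where it is nonzero; the paper's post-processing steps are omitted here.

```python
import numpy as np
from scipy import ndimage

# Synthetic one-pixel-wide "skeleton": a diagonal filament in a blank image.
img = np.zeros((9, 9))
for i in range(9):
    img[i, i] = 1.0

gx = ndimage.sobel(img, axis=1)   # intensity gradient along columns (x)
gy = ndimage.sobel(img, axis=0)   # intensity gradient along rows (y)

# The gradient is perpendicular to the filament, so rotate by 90 degrees;
# orientation is only defined modulo 180 degrees.
grad_angle = np.degrees(np.arctan2(gy, gx))
filament_angle = (grad_angle + 90.0) % 180.0

# Sample next to the skeleton, where the gradient is nonzero: a 45-degree
# diagonal filament should yield a 45-degree local orientation there.
print(filament_angle[4, 5])
```

On a real skeleton image the same maps would be computed once and then read off along the skeleton, giving the per-pixel orientation to compare against a magnetic field angle map.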

  2. Ozone Measurements Monitoring Using Data-Based Approach

    KAUST Repository

    Harrou, Fouzi; Kadri, Farid; Khadraoui, Sofiane; Sun, Ying

    2016-01-01

The complexity of ozone (O3) formation mechanisms in the troposphere makes fast and accurate modeling of ozone very challenging. In the absence of a process model, principal component analysis (PCA) has been extensively used as a data-based monitoring technique for highly correlated process variables; however, conventional PCA-based detection indices often fail to detect small or moderate anomalies. In this work, we propose an innovative method for detecting small anomalies in highly correlated multivariate data. The developed method combines the multivariate exponentially weighted moving average (MEWMA) monitoring scheme with PCA modelling in order to enhance anomaly detection performance. This choice is mainly motivated by the greater ability of the MEWMA monitoring scheme to detect small changes in the process mean. The proposed PCA-based MEWMA monitoring scheme is successfully applied to ozone measurement data collected from the Upper Normandy region, France, via the network of air quality monitoring stations. The detection results of the proposed method are compared with those declared by the Air Normand air monitoring association.
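The flavor of the method can be sketched as follows. This is a simplification (a scalar EWMA applied to the PCA squared prediction error rather than the full MEWMA statistic) with synthetic data in place of ozone measurements; the threshold rule is also an assumption for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: two highly correlated variables under normal conditions.
n_train, n_test = 500, 200
latent = rng.normal(size=n_train)
X = np.column_stack([latent, latent * 0.9 + rng.normal(scale=0.1, size=n_train)])

# PCA via SVD on centered data; retain one principal component.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:1].T                       # loadings of the retained component

def spe(x):
    """Squared prediction error: residual after projecting onto the PCA model."""
    r = (x - mu) - P @ (P.T @ (x - mu))
    return float(r @ r)

# Test stream: normal at first, then a small persistent shift in one variable.
latent2 = rng.normal(size=n_test)
Xt = np.column_stack([latent2, latent2 * 0.9 + rng.normal(scale=0.1, size=n_test)])
Xt[100:, 1] += 0.3                 # small anomaly, hard to see point-by-point

# EWMA smoothing of the SPE (scalar analogue of the MEWMA recursion).
lam, z, z_hist = 0.2, 0.0, []
for x in Xt:
    z = lam * spe(x) + (1 - lam) * z
    z_hist.append(z)

threshold = np.quantile([spe(x) for x in X], 0.99)  # crude training-based limit
alarms = [t for t, v in enumerate(z_hist) if v > threshold]
print("first alarm at t =", alarms[0] if alarms else None)
```

The point of the exponential smoothing is visible here: individual SPE values after the shift barely clear the noise, but their smoothed average crosses the limit within a handful of samples.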

  3. Ozone Measurements Monitoring Using Data-Based Approach

    KAUST Repository

    Harrou, Fouzi

    2016-02-01

The complexity of ozone (O3) formation mechanisms in the troposphere makes fast and accurate modeling of ozone very challenging. In the absence of a process model, principal component analysis (PCA) has been extensively used as a data-based monitoring technique for highly correlated process variables; however, conventional PCA-based detection indices often fail to detect small or moderate anomalies. In this work, we propose an innovative method for detecting small anomalies in highly correlated multivariate data. The developed method combines the multivariate exponentially weighted moving average (MEWMA) monitoring scheme with PCA modelling in order to enhance anomaly detection performance. This choice is mainly motivated by the greater ability of the MEWMA monitoring scheme to detect small changes in the process mean. The proposed PCA-based MEWMA monitoring scheme is successfully applied to ozone measurement data collected from the Upper Normandy region, France, via the network of air quality monitoring stations. The detection results of the proposed method are compared with those declared by the Air Normand air monitoring association.

  4. Measurement of airborne 218Po - A Bayesian approach

    International Nuclear Information System (INIS)

    Groer, P.G.; Lo, Y.

    1996-01-01

The standard mathematical treatment of the buildup and decay of airborne radionuclides on a filter paper uses the solutions of the so-called Bateman equations adapted to the sampling process. The equations can be interpreted as differential equations for the expectation of an underlying stochastic process, which describes the random fluctuations in the accumulation and decay of the sampled radioactive atoms. The process for the buildup and decay of airborne 218Po can be characterized as an "immigration-death process" in the widely adopted, biologically based jargon. The probability distribution for the number of 218Po atoms accumulated after sampling time t is Poisson. We show that the distribution of the number of counts registered by a detector with efficiency ε during a counting period T after the end of sampling is also Poisson, with mean dependent on ε, t, T, the flow rate and N0, the number of airborne 218Po atoms per unit volume. This Poisson distribution was used to construct the likelihood given the observed number of counts. After inversion with Bayes' theorem we obtained the posterior density for N0. This density characterizes the remaining uncertainty about the measured number of 218Po atoms per unit volume of air. 6 refs., 3 figs., 1 tab
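The Bayesian step can be illustrated with a conjugate Gamma prior: since the count likelihood is Poisson with mean proportional to N0, a Gamma prior on N0 yields a Gamma posterior in closed form. All numbers below (counts, efficiency, the proportionality factor) are invented for illustration and do not come from the paper, which works with the full sampling-and-counting model:

```python
from scipy import stats

# Hypothetical inputs for illustration only:
counts = 120          # counts registered during the counting period
eff = 0.3             # detector efficiency epsilon
k = 2.5               # assumed factor linking N0 to the Poisson mean,
                      # i.e. mean = eff * k * N0 (absorbs flow rate, t and T)

# Gamma(a0, rate b0) prior on N0; Gamma is conjugate to the Poisson likelihood.
a0, b0 = 1.0, 0.001   # vague prior

# Conjugate update: posterior is Gamma(a0 + counts, rate b0 + eff * k).
a_post = a0 + counts
b_post = b0 + eff * k

posterior = stats.gamma(a=a_post, scale=1.0 / b_post)
print(f"posterior mean N0 = {posterior.mean():.1f}")
print(f"95% credible interval = {posterior.interval(0.95)}")
```

The posterior density plays exactly the role described in the abstract: it quantifies the remaining uncertainty about N0 after the counts have been observed.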

  5. A preliminary approach to identify irradiated foods by thermoluminescence measurements

    International Nuclear Information System (INIS)

    Shin, Choonshik; Kim, Hyoung-Ook; Lim, Yoongho

    2012-01-01

Thermoluminescence (TL) is one of the physical methods for the identification of irradiated foods. Among the currently developed methods, TL is the most widely used for the identification of irradiated foods. However, in order to use this method, silicate minerals must be isolated from food samples, and the process for the isolation of silicate minerals is time-consuming and laborious. In this work, we have investigated the applicability of the TL method using iron-containing minerals instead of silicate minerals. In the TL analyses of dried spices, the TL glow curves of iron-containing minerals showed maximum temperatures between 150 and 250 °C, the same as those of silicate minerals. The mineral separation process of the proposed method is simple, fast, easy, and reliable. Moreover, the analysis results, including the TL ratio, did not show significant differences compared with the silicate minerals method. As a result, TL measurements using iron-containing minerals could be an excellent method for the identification of irradiated foods, including dried spices. - Highlights: ► A thermoluminescence method using iron-containing minerals is proposed. ► The current method using silicate minerals is time consuming and laborious. ► However, the proposed method is simple, fast, easy, and reliable. ► Analysis results are similar to those of the silicate minerals method.

  6. Measuring Filament Orientation: A New Quantitative, Local Approach

    Science.gov (United States)

    Green, C.-E.; Dawson, J. R.; Cunningham, M. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-09-01

    The relative orientation between filamentary structures in molecular clouds and the ambient magnetic field provides insight into filament formation and stability. To calculate the relative orientation, a measurement of filament orientation is first required. We propose a new method to calculate the orientation of the one-pixel-wide filament skeleton that is output by filament identification algorithms such as filfinder. We derive the local filament orientation from the direction of the intensity gradient in the skeleton image using the Sobel filter and a few simple post-processing steps. We call this the “Sobel-gradient method.” The resulting filament orientation map can be compared quantitatively on a local scale with the magnetic field orientation map to then find the relative orientation of the filament with respect to the magnetic field at each point along the filament. It can also be used for constructing radial profiles for filament width fitting. The proposed method facilitates automation in analyses of filament skeletons, which is imperative in this era of “big data.”

  7. Measuring Filament Orientation: A New Quantitative, Local Approach

    International Nuclear Information System (INIS)

    Green, C.-E.; Cunningham, M. R.; Jones, P. A.; Dawson, J. R.; Novak, G.; Fissel, L. M.

    2017-01-01

    The relative orientation between filamentary structures in molecular clouds and the ambient magnetic field provides insight into filament formation and stability. To calculate the relative orientation, a measurement of filament orientation is first required. We propose a new method to calculate the orientation of the one-pixel-wide filament skeleton that is output by filament identification algorithms such as filfinder. We derive the local filament orientation from the direction of the intensity gradient in the skeleton image using the Sobel filter and a few simple post-processing steps. We call this the “Sobel-gradient method.” The resulting filament orientation map can be compared quantitatively on a local scale with the magnetic field orientation map to then find the relative orientation of the filament with respect to the magnetic field at each point along the filament. It can also be used for constructing radial profiles for filament width fitting. The proposed method facilitates automation in analyses of filament skeletons, which is imperative in this era of “big data.”

  8. Non-Invasive Ocular Rigidity Measurement: A Differential Tonometry Approach

    Directory of Open Access Journals (Sweden)

    Efstathios T. Detorakis

    2015-12-01

Full Text Available Purpose: Taking into account the fact that Goldmann applanation tonometry (GAT) geometrically deforms the corneal apex and displaces volume from the anterior segment whereas dynamic contour tonometry (DCT) does not, we aimed at developing an algorithm for the calculation of ocular rigidity (OR) based on the differences in pressure and volume between the deformed and non-deformed status, according to the general Friedenwald principle of differential tonometry. Methods: To avoid deviations of GAT IOP from true IOP in eyes with corneas different from the “calibration cornea”, we applied the previously described Orssengo-Pye algorithm to calculate an error coefficient “C/B”. To test the feasibility of the proposed model, we calculated the OR coefficient (r) in 17 cataract surgery candidates (9 males and 8 females). Results: The calculated r according to our model (mean ± SD) was 0.0174 ± 0.010 (range 0.0123–0.022) mmHg/μL. A negative, statistically significant correlation between axial length and r was detected, whereas correlations between r and the other biometric parameters examined were not statistically significant. Conclusions: The proposed method may prove a valid non-invasive tool for the measurement of OR, which could help introduce OR into routine clinical decision-making.
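The Friedenwald principle underlying the algorithm, ΔlogP = r·ΔV, can be illustrated directly. The IOP readings and displaced volume below are invented for illustration, and the Orssengo-Pye correction used in the actual study is omitted:

```python
import math

# Illustrative numbers only (not patient data): DCT reads pressure on the
# undeformed globe, GAT applanation displaces a small volume and raises it.
p_dct = 16.0   # IOP by dynamic contour tonometry (mmHg), undeformed state
p_gat = 17.0   # IOP by Goldmann applanation (mmHg), deformed state
dV = 1.5       # volume displaced by applanation (microliters), assumed

# Friedenwald's differential-tonometry relation: log P2 - log P1 = r * (V2 - V1),
# so the rigidity coefficient r is in log10(mmHg) per microliter.
r = (math.log10(p_gat) - math.log10(p_dct)) / dV
print(f"ocular rigidity coefficient r = {r:.4f} per microliter")
```

A stiffer eye converts the same displaced volume into a larger pressure rise, which is why r falls out of the two paired tonometry readings.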

  9. Conceptualizing and measuring energy security: A synthesized approach

    International Nuclear Information System (INIS)

    Sovacool, Benjamin K.; Mukherjee, Ishani

    2011-01-01

This article provides a synthesized, workable framework for analyzing national energy security policies and performance. Drawing on research interviews, survey results, a focused workshop, and an extensive literature review, this article proposes that energy security ought to comprise five dimensions related to availability, affordability, technology development, sustainability, and regulation. We then break these five dimensions down into 20 components related to security of supply and production, dependency, and diversification for availability; price stability, access and equity, decentralization, and low prices for affordability; innovation and research, safety and reliability, resilience, energy efficiency, and investment for technology development; land use, water, climate change, and air pollution for sustainability; and governance, trade, competition, and knowledge for sound regulation. Further still, our synthesis lists 320 simple indicators and 52 complex indicators that policymakers and scholars can use to analyze, measure, track, and compare national performance on energy security. The article concludes by offering implications for energy policy more broadly. -- Highlights: → Energy security should consist of five dimensions related to availability, affordability, technology development, sustainability, and regulation. → The dimensions of energy security can be broken down into 20 components. → These components can be distilled into 320 simple indicators and 52 complex indicators.

  10. A dynamic approach to real-time performance measurement in design projects

    DEFF Research Database (Denmark)

    Skec, Stanko; Cash, Philip; Storga, Mario

    2017-01-01

    Recent developments in engineering design management point to the need for more dynamic, fine-grain measurement approaches able to deal with multi-dimensional, cross-level process performance in product design. Thus, this paper proposes a new approach to the measurement and management of individu...

  11. Measuring customer service quality in international marketing channels: a multimethod approach

    NARCIS (Netherlands)

    Wetzels, M.G.M.; Ruyter, de J.C.; Lemmink, J.G.A.M.; Koelemeijer, K.

    1995-01-01

The measurement of perceived service quality using the SERVQUAL approach has been criticized by a number of authors recently. This criticism concerns the conceptual basis of the methodology as well as its empirical operationalization. Presents a complementary approach to measuring service quality.

  12. Measuring the quality of MDT working: an observational approach

    Directory of Open Access Journals (Sweden)

    Taylor Cath

    2012-05-01

Full Text Available Abstract Background Cancer multidisciplinary teams (MDTs) are established in many countries but little is known about how well they function. A core activity is regular MDT meetings (MDMs) where treatment recommendations are agreed. A mixed-methods descriptive study was conducted to develop and test quality criteria for observational assessment of MDM performance, calibrated against consensus from over 2000 MDT members about the “characteristics of an effective MDT”. Methods Eighteen of the 86 ‘Characteristics of Effective MDTs’ were considered relevant and feasible to observe. They were collated into 15 aspects of MDT working covering four domains: the team (e.g. attendance, chairing, teamworking); infrastructure for meetings (venue, equipment); meeting organisation and logistics; and patient-centred clinical decision-making (patient-centredness, clarity of recommendations). Criteria for rating each characteristic from ‘very poor’ to ‘very good’ were derived from literature review, observation of MDMs and expert input. The criteria were applied to 10 bowel cancer MDTs to assess acceptability and to measure variation between and within teams. Feasibility and inter-rater reliability were assessed by comparing three observers. Results Observational assessment was acceptable to teams and feasible to implement. Total scores from 29 to 50 (out of 58) highlighted wide diversity in quality between teams. Eight teams were rated either ‘very good/good’ or ‘very poor/poor’ for at least three domains, demonstrating some internal consistency. ‘Very good’ ratings were most likely for attendance and administrative preparation, and least likely for patient-centredness of decision-making and prioritisation of complex cases. All except two characteristics had intra-class correlations of ≥0.50. Conclusions This observational tool (MDT-OARS) may contribute to the assessment of MDT performance. Further testing to confirm validity and reliability is required.

  13. Fine-resolution repeat topographic surveying of dryland landscapes using UAS-based structure-from-motion photogrammetry: Assessing accuracy and precision against traditional ground-based erosion measurements

    Science.gov (United States)

    Gillian, Jeffrey K.; Karl, Jason W.; Elaksher, Ahmed; Duniway, Michael C.

    2017-01-01

Structure-from-motion (SfM) photogrammetry from unmanned aerial system (UAS) imagery is an emerging tool for repeat topographic surveying of dryland erosion. These methods are particularly appealing due to the ability to cover large landscapes compared to field methods, and at reduced costs and finer spatial resolution compared to airborne laser scanning. The accuracy and precision of high-resolution digital terrain models (DTMs) derived from UAS imagery have been explored in many studies, typically by comparing image coordinates to surveyed check points or LiDAR datasets. In addition to traditional check points, this study compared 5 cm resolution DTMs derived from fixed-wing UAS imagery with a traditional ground-based method of measuring soil surface change called erosion bridges. We assessed accuracy by comparing the elevation values between DTMs and erosion bridges along thirty topographic transects, each 6.1 m long. Comparisons occurred at two points in time (June 2014, February 2015), which enabled us to assess vertical accuracy with 3314 data points and vertical precision (i.e., repeatability) with 1657 data points. We found strong vertical agreement (accuracy) between the methods (RMSE 2.9 and 3.2 cm in June 2014 and February 2015, respectively) and high vertical precision for the DTMs (RMSE 2.8 cm). Our results from comparing SfM-generated DTMs to check points, together with the strong agreement with erosion bridge measurements, suggest that repeat UAS imagery and SfM processing could replace erosion bridges for a more synoptic landscape assessment of shifting soil surfaces for some studies. However, while collecting the UAS imagery and generating the SfM DTMs for this study was faster than collecting erosion bridge measurements, technical challenges related to the need for ground control networks and image processing requirements must be addressed before this technique could be applied effectively to large landscapes.

  14. Repeated Causal Decision Making

    Science.gov (United States)

    Hagmayer, York; Meder, Bjorn

    2013-01-01

    Many of our decisions refer to actions that have a causal impact on the external environment. Such actions may not only allow for the mere learning of expected values or utilities but also for acquiring knowledge about the causal structure of our world. We used a repeated decision-making paradigm to examine what kind of knowledge people acquire in…

  15. simple sequence repeat (SSR)

    African Journals Online (AJOL)

    In the present study, 78 mapped simple sequence repeat (SSR) markers representing 11 linkage groups of adzuki bean were evaluated for transferability to mungbean and related Vigna spp. Forty-one markers amplified characteristic bands in at least one Vigna species. The transferability percentage across the genotypes ranged ...

  16. 3D measurement using combined Gray code and dual-frequency phase-shifting approach

    Science.gov (United States)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin

    2018-04-01

    The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
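
    The period-jump correction described above amounts to temporal phase unwrapping: the coarse absolute phase from the added low-frequency fringes predicts how many whole periods the high-frequency phase has wrapped, and the residual is rounded away. A minimal sketch (scalar interface and function name are illustrative, not the paper's implementation):

```python
import math

def unwrap_dual_frequency(phi_high, phi_low, ratio):
    """Correct period jump errors in a wrapped high-frequency phase.

    phi_high -- wrapped high-frequency phase, in [0, 2*pi)
    phi_low  -- absolute low-frequency phase; its fringe period is an odd
                integer multiple (`ratio`) of the high-frequency period
    """
    predicted = phi_low * ratio                        # coarse absolute phase
    k = round((predicted - phi_high) / (2 * math.pi))  # whole periods wrapped
    return phi_high + 2 * math.pi * k                  # corrected absolute phase
```

Because `k` is obtained by rounding, errors in the low-frequency phase smaller than half a high-frequency period cannot change the recovered integer, which is the mechanism that eliminates period jump errors.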

  17. Developing risk management dashboards using risk and quality measures: A visual best practices approach.

    Science.gov (United States)

    Bunting, Robert F; Siegal, Dana

    2017-10-01

    Because quality measures are ubiquitous, health care risk management leaders often use them as a proxy for risk management measures. While certain quality measures adequately reflect some aspects of risk management, they are neither a perfect nor complete substitute for well-developed and comprehensive risk management measures. Using a comprehensive approach consisting of quality measures, risk measures, and measures that are less amenable to classification would be the best approach. Identifying the most powerful and informative measures, designing the most appropriate dashboards, and incorporating visual best practices are crucial steps required for evaluating the effectiveness and value of an enterprise risk management program. The authors explain the terms and concepts, review the measures available in the literature, propose new measures, discuss visual best practices, and provide sample dashboard components. © 2017 American Society for Healthcare Risk Management of the American Hospital Association.

  18. Accessibility of green space in urban areas: an examination of various approaches to measure it

    OpenAIRE

    Zhang, Xin

    2007-01-01

    In the present research, we attempt to improve the methods used for measuring accessibility of green spaces by combining two components of accessibility: distance, and demand relative to supply. Three modified approaches (the Joseph and Bantock gravity model measure, the two-step floating catchment area measure and a measure based on kernel densities) are applied for measuring accessibility to green spaces. We select parks and public open spaces (metropolitan open land) of south London as a cas...
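
    The two-step floating catchment area (2SFCA) measure named above can be sketched as follows (toy data structures, not the thesis's implementation): step one computes a supply-to-demand ratio within each site's catchment, step two sums those ratios over the sites reachable from each demand location.

```python
def two_step_fca(demand, supply, dist, d0):
    """Two-step floating catchment area accessibility scores.

    demand: {location: population}, supply: {site: capacity},
    dist:   {(location, site): distance}, d0: catchment radius.
    """
    # Step 1: capacity-to-population ratio within each site's catchment.
    ratio = {}
    for j, cap in supply.items():
        pop = sum(p for i, p in demand.items() if dist[(i, j)] <= d0)
        ratio[j] = cap / pop if pop else 0.0
    # Step 2: sum the ratios of all sites reachable from each location.
    return {i: sum(r for j, r in ratio.items() if dist[(i, j)] <= d0)
            for i in demand}
```

Locations outside every catchment score zero, which is exactly the sharp-cutoff behaviour the kernel-density variant is designed to soften.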

  19. Development and evaluation of a quantitative video-fluorescence imaging system and fluorescent tracer for measuring transfer of pesticide residues from surfaces to hands with repeated contacts

    Science.gov (United States)

    A video imaging system and the associated quantification methods have been developed for measurement of the transfers of a fluorescent tracer from surfaces to hands. The highly fluorescent compound riboflavin (Vitamin B2), which is also water soluble and non-toxic, was chosen as...

  20. Empirical Approaches to Measuring the Intelligibility of Different Varieties of English in Predicting Listener Comprehension

    Science.gov (United States)

    Kang, Okim; Thomson, Ron I.; Moran, Meghan

    2018-01-01

    This study compared five research-based intelligibility measures as they were applied to six varieties of English. The objective was to determine which approach to measuring intelligibility would be most reliable for predicting listener comprehension, as measured through a listening comprehension test similar to the Test of English as a Foreign…

  1. Personal selling constructs and measures: Emic versus etic approaches to cross-national research

    OpenAIRE

    Herché, Joel; Swenson, Michael; Verbeke, Willem

    1996-01-01

    Evaluates transportability of personal selling measures across cultural boundaries. Concept of measurement development; emic and etic approaches to developing measures for cross-cultural applications; cross-national dimensionality, reliability and construct validity of adaptive selling (ADAPTS) and customer-oriented selling (SOCO).

  2. Using and Developing Measurement Instruments in Science Education: A Rasch Modeling Approach. Science & Engineering Education Sources

    Science.gov (United States)

    Liu, Xiufeng

    2010-01-01

    This book meets a demand in the science education community for a comprehensive and introductory measurement book in science education. It describes measurement instruments reported in refereed science education research journals, and introduces the Rasch modeling approach to developing measurement instruments in common science assessment domains,…

  3. Comparative use of different emission measurement approaches to determine methane emissions from a biogas plant

    DEFF Research Database (Denmark)

    Reinelt, Torsten; Delre, Antonio; Westerkamp, Tanja

    2017-01-01

    A sustainable anaerobic biowaste treatment has to mitigate methane emissions from the entire biogas production chain, but the exact quantification of these emissions remains a challenge. This study presents a comparative measurement campaign carried out with on-site and ground-based remote sensing measurement approaches conducted by six measuring teams at a Swedish biowaste treatment plant. The measured emissions showed high variations, amongst others caused by different periods of measurement performance in connection with varying operational states of the plant. The overall methane emissions measured by the on-site approaches varied (corresponding to a methane loss of 0.6 and 2.1%) from team to team, depending on the number of measured emission points, operational state during the measurements and the measurement method applied. Taking the operational conditions into account, the deviation between different approaches and teams could be explained.

  4. Classifying individuals based on a densely captured sequence of vital signs: An example using repeated blood pressure measurements during hemodialysis treatment.

    Science.gov (United States)

    Goldstein, Benjamin A; Chang, Tara I; Winkelmayer, Wolfgang C

    2015-10-01

    Electronic Health Records (EHRs) present the opportunity to observe serial measurements on patients. While potentially informative, analyzing these data can be challenging. In this work we present a means to classify individuals based on a series of measurements collected by an EHR. Using patients undergoing hemodialysis, we categorized people based on their intradialytic blood pressure. Our primary criteria were that the classifications were time dependent and independent of other subjects. We fit a curve of intradialytic blood pressure using regression splines and then calculated first and second derivatives to come up with four mutually exclusive classifications at different time points. We show that these classifications relate to near term risk of cardiac events and are moderately stable over a succeeding two-week period. This work has general application for analyzing dense EHR data. Copyright © 2015 Elsevier Inc. All rights reserved.
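
    The paper's classification rests on the signs of the first and second derivatives of a fitted intradialytic blood pressure curve. As a simplified stand-in for the spline derivatives (finite differences on the raw series; labels and interface are illustrative, not the authors'):

```python
def classify_series(values):
    """Label each interior point of a series by the sign of its first and
    second finite differences -- a simple proxy for the spline first/second
    derivatives used to classify intradialytic blood pressure curves."""
    labels = []
    for i in range(1, len(values) - 1):
        d1 = (values[i + 1] - values[i - 1]) / 2.0          # central first difference
        d2 = values[i + 1] - 2 * values[i] + values[i - 1]  # second difference
        trend = "rising" if d1 >= 0 else "falling"
        curve = "accelerating" if d2 >= 0 else "decelerating"
        labels.append(trend + "/" + curve)
    return labels
```

The two signs yield the four mutually exclusive, time-dependent classes the abstract describes, and each patient's series is labelled independently of all other subjects.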

  5. Changing the size of a mirror-reflected hand moderates the experience of embodiment but not proprioceptive drift: a repeated measures study on healthy human participants.

    Science.gov (United States)

    Wittkopf, Priscilla G; Lloyd, Donna M; Johnson, Mark I

    2017-06-01

    Mirror visual feedback is used for reducing pain, and visually distorting the size of the reflection may improve efficacy. The findings of studies investigating size distortion are inconsistent. The influence of the size of the reflected hand on embodiment of the mirror reflection is not known. The aim of this study was to compare the effect of magnifying and minifying mirror reflections of the hand on embodiment, measured using an eight-item questionnaire, and on proprioceptive drift. During the experiment, participants (n = 45) placed their right hand behind a mirror and their left hand in front of a mirror. Participants watched a normal-sized, a magnified and a minified reflection of the left hand while performing synchronised finger movements for 3 min (adaptive phase). Measurements of embodiment were taken before (pre) and after (post) synchronous movements of the fingers of both hands (adaptive phase). Results revealed larger proprioceptive drift post-adaptive phase (p = 0.001). Participants agreed more strongly with questionnaire items associated with location, ownership and agency of the reflection of the hand post-adaptive phase (p < 0.05), indicating embodiment of the reflection of the hand. Magnifying and minifying the reflection of the hand has little effect on proprioceptive drift, but it weakens the subjective embodiment experience. Such factors need to be taken into account in future studies using this technique, particularly when assessing mirror visual feedback for pain management.

  6. Repeatability and minimum number of measurements for characters of bacabi palm (Oenocarpus mapora) racemes

    Directory of Open Access Journals (Sweden)

    Maria do Socorro Padilha de Oliveira

    2010-12-01

    The bacabi palm has potential for commercial and food uses, but has not been well studied. The objectives of this study were to estimate repeatability coefficients and to determine the predictability and the number of measurements needed for raceme characters of this palm. Twenty-seven individuals of bacabi belonging to the Germplasm Bank of Oenocarpus/Jessenia at Embrapa Eastern Amazon, in Belém, PA, Brazil, were evaluated. Three fully matured racemes from each plant were sampled to measure six characters: total weight of raceme (TWR), fruit weight per raceme (FWR), number of rachillae per raceme (NRR), rachis length per raceme (RLR), weight of 100 fruits (WHF) and fruit yield per raceme (FER). The repeatability estimates were obtained by three statistical methods: analysis of variance, principal components, and structural analysis. For all characters, the estimates of repeatability coefficients presented values of very similar magnitude across the three methods. The estimates of repeatability coefficients and determination coefficients were relatively high (r ≥ 0.60 and R² ≥ 81.7%) for the characters FER and NRR, showing genotype regularity for these raceme measurements. For these characters, the minimum number of racemes necessary to estimate the true character value of the genotypes was thirteen (FER) and five (NRR), with 95% reliability. The remaining characters showed repeatabilities and determination coefficients with medium to low values, indicating the need for better environmental control when making the measurements.
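
    The ANOVA route to a repeatability coefficient, and the classical formula for the number of measurements needed to reach a target determination, can be sketched as follows (balanced toy data; function names are illustrative, not from the paper):

```python
import math

def repeatability(data):
    """One-way ANOVA estimate of the repeatability coefficient r for a
    balanced genotypes-by-measurements table: r = var_g / (var_g + var_e)."""
    k = len(next(iter(data.values())))   # measurements per genotype
    g = len(data)                        # number of genotypes
    grand = sum(sum(v) for v in data.values()) / (g * k)
    ms_between = k * sum((sum(v) / k - grand) ** 2
                         for v in data.values()) / (g - 1)
    ms_within = sum((x - sum(v) / k) ** 2
                    for v in data.values() for x in v) / (g * (k - 1))
    var_g = max((ms_between - ms_within) / k, 0.0)
    return var_g / (var_g + ms_within) if (var_g + ms_within) > 0 else 0.0

def min_measurements(r, r2_target=0.95):
    """Measurements needed so the genotype mean reaches determination r2_target."""
    return math.ceil(r2_target * (1 - r) / ((1 - r2_target) * r))

# With r = 0.60 and 95% reliability this gives 13 measurements,
# consistent with the thirteen racemes reported for FER above.
```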

  7. The effects of repeated testing, simulated malingering, and traumatic brain injury on high-precision measures of simple visual reaction time

    Directory of Open Access Journals (Sweden)

    David L Woods

    2015-11-01

    Simple reaction time (SRT), the latency to respond to a stimulus, has been widely used as a basic measure of processing speed. In the current experiments, we examined clinically-relevant properties of a new SRT test that presents visual stimuli to the left or right hemifield at varying stimulus onset asynchronies. Experiment 1 examined test-retest reliability in participants who underwent three test sessions at weekly intervals. In the first test, log-transformed (log-SRT) z-scores, corrected for the influence of age and computer-use, were well predicted by regression functions derived from a normative population of 189 control participants. Test-retest reliability of log-SRT z-scores was measured with an intraclass correlation coefficient (ICC = 0.83) and equaled or exceeded those of other SRT tests and other widely used tests of processing speed that are administered manually. No significant learning effects were observed across test sessions. Experiment 2 investigated the same participants when instructed to malinger during a fourth testing session: 94% showed abnormal log-SRT z-scores, with 83% producing log-SRT z-scores exceeding a cutoff of 3.0, a degree of abnormality never seen in full-effort conditions. Thus, a log-SRT z-score cutoff of 3.0 had a sensitivity (83%) and specificity (100%) that equaled or exceeded that of existing symptom validity tests. We argue that even expert malingerers, fully informed of the malingering-detection metric, would be unable to successfully feign impairments on the SRT test because of the precise control of SRT latencies that would be required. Experiment 3 investigated 26 patients with traumatic brain injury (TBI) tested more than one year post-injury. The 22 patients with mild TBI showed insignificantly faster SRTs than controls, but a small group of four patients with severe TBI showed slowed SRTs. Simple visual reaction time is a reliable measure of processing speed that is sensitive to the effects of

  8. [Biocybernetic approach to the thermometric methods of blood supply measurements of periodontal tissues].

    Science.gov (United States)

    Pastusiak, J; Zakrzewski, J

    1988-11-01

    A biocybernetic approach to determining the blood supply of periodontal tissues by means of thermometric methods is presented in the paper. Compartment models of the measuring procedure are given, and a dilutodynamic methodology and classification are applied. Such an approach enables the selection of appropriate biophysical parameters describing the state of blood supply of periodontal tissues and the optimal design of transducers and measuring methods.

  9. County level socioeconomic position, work organization and depression disorder: a repeated measures cross-classified multilevel analysis of low-income nursing home workers.

    Science.gov (United States)

    Muntaner, Carles; Li, Yong; Xue, Xiaonan; Thompson, Theresa; O'Campo, Patricia; Chung, Haejoo; Eaton, William W

    2006-12-01

    This study simultaneously tests the effect of county, organizational, workplace, and individual level variables on depressive disorders among low-income nursing assistants employed in US nursing homes. A total of 482 observations are used from two waves of survey data collection, with an average two-year interval between initial and follow-up surveys. The overall response rate was 62 percent. The hierarchically structured data were analyzed using multilevel modeling to account for cross-classifications across levels of data. Nursing assistants working in nursing homes covered by a single union in three states were asked about aspects of their working conditions, job stress, physical and mental health status, individual and family health-care needs, household economics and household strain. The 241 nursing assistants who participated in this study were employed in 34 nursing homes and lived in 49 counties of West Virginia, Ohio and Kentucky. The study finds that emotional strain, related to providing direct care to elderly and disabled clients, is associated with depressive disorder, as is nursing home ownership type (for-profit versus not-for-profit). However, when controlling for county level socioeconomic variables (Gini index and proportion of African Americans living in the county), neither workplace nor organizational level variables were found to be statistically significantly associated with depressive disorder. This study supports previous findings that emotional demand in health-care environments is an important correlate of mental health. It also adds empirical evidence to support a link between financial strain and depression in US women. While this study does not find that lack of seniority wage benefits--a factor that can conceivably exacerbate financial strain over time--is associated with depressive disorder among low-income health-care workers, it does find county level measures of poverty to be statistically significant predictors of depressive

  10. Long-term aerobic exercise and omega-3 supplementation modulate osteoporosis through inflammatory mechanisms in post-menopausal women: a randomized, repeated measures study

    Directory of Open Access Journals (Sweden)

    Kanaley Jill

    2011-10-01

    Background: Evidence indicates that dietary fats and physical activity influence bone health. The purpose of this study was to examine the effects of long-term aerobic exercise and omega-3 (N-3) supplementation on serum inflammatory markers, bone mineral density (BMD), and bone biomarkers in post-menopausal women. Methods: Seventy-nine healthy sedentary post-menopausal women aged 58-78 years participated in this study. Subjects were randomized to one of 4 groups: exercise + supplement (E+S, n = 21), exercise (E, n = 20), supplement (S, n = 20), and control (Con, n = 18). The subjects in the E+S and E groups performed aerobic exercise training (walking and jogging up to 65% of HRmax) three times a week for 24 weeks. Subjects in the E+S and S groups consumed 1000 mg/d N-3 for 24 weeks. The lumbar spine (L2-L4) and femoral neck BMD, serum tumor necrosis factor (TNF)-α, interleukin (IL)-6, prostaglandin (PG) E2, estrogen, osteocalcin, 1,25-dihydroxyvitamin D3 (1,25 Vit D), C-telopeptide (CTX), parathyroid hormone (PTH) and calcitonin (CT) were measured at baseline and at the end of weeks 12 and 24. Results: Serum estrogen, osteocalcin, 1,25 Vit D, CT, and L2-L4 and femoral neck BMD measures increased (P < 0.05), while TNF-α, IL-6 and PGE2 decreased (P < 0.05). L2-L4 and femoral neck BMD, estrogen, osteocalcin, and CT were negatively correlated (P < 0.05) with TNF-α, IL-6 and PGE2. PTH and CT were correlated positively and negatively with IL-6, respectively (P < 0.05). Conclusions: The present study demonstrates that long-term aerobic exercise training plus N-3 supplementation has a synergistic effect in attenuating inflammation and augmenting BMD in post-menopausal osteoporosis.

  11. Accuracy of repeated measurements of late-night salivary cortisol to screen for early-stage recurrence of Cushing's disease following pituitary surgery.

    Science.gov (United States)

    Danet-Lamasou, Marie; Asselineau, Julien; Perez, Paul; Vivot, Alexandre; Nunes, Marie-Laure; Loiseau, Hugues; San-Galli, François; Cherifi-Gatta, Blandine; Corcuff, Jean-Benoît; Tabarin, Antoine

    2015-02-01

    The performance of late-night salivary cortisol (LNSC) to accurately screen for postoperative recurrence of Cushing's disease (CD) at an early stage is unknown. The aim of this study was to compare the accuracy of multiple sampling strategies to suggest the optimal number of LNSC samples needed for diagnosing post-surgical recurrences of CD at an early stage. Retrospective analysis in a single centre. Thirty-six patients in surgical remission of CD had successive measurements of LNSC, defined as 'sequences', using a locally modified RIA assay as part of long-term follow-up (69.2 ± 10.6 months). Patients underwent an extensive biochemical evaluation within 3 months before or after a sequence of saliva sampling and were classified as being in remission or in early-stage recurrence. The accuracy of three diagnostic strategies combining two, three or four LNSC results from a sequence was estimated using areas under the ROC curves (AUC), sensitivity, specificity and predictive values. Forty-four sequences of LNSC measurements were available. Fifty-two percent of sequences were performed during early-stage recurrence. The intrasequence variability of LNSC was higher during recurrence than during remission (medians of SDs: 2.1 vs 0.5 nM; P < 0.05), and repeated LNSC measurements accurately detected early-stage recurrence of CD. However, due to a major within-patient variability of LNSC from one day to another, a screening strategy using three or four samples collected on successive days may be recommended to detect early-stage recurrence of CD with a high accuracy. © 2014 John Wiley & Sons Ltd.

  12. Comparative use of different emission measurement approaches to determine methane emissions from a biogas plant.

    Science.gov (United States)

    Reinelt, Torsten; Delre, Antonio; Westerkamp, Tanja; Holmgren, Magnus A; Liebetrau, Jan; Scheutz, Charlotte

    2017-10-01

    A sustainable anaerobic biowaste treatment has to mitigate methane emissions from the entire biogas production chain, but the exact quantification of these emissions remains a challenge. This study presents a comparative measurement campaign carried out with on-site and ground-based remote sensing measurement approaches conducted by six measuring teams at a Swedish biowaste treatment plant. The measured emissions showed high variations, amongst others caused by different periods of measurement performance in connection with varying operational states of the plant. The overall methane emissions measured by ground-based remote sensing varied from 5 to 25 kg h-1 (corresponding to a methane loss of 0.6-3.0% of upgraded methane produced), depending on operating conditions and the measurement method applied. Overall methane emissions measured by the on-site measuring approaches varied between 5 and 17 kg h-1 (corresponding to a methane loss of 0.6 and 2.1%) from team to team, depending on the number of measured emission points, operational state during the measurements and the measurement method applied. Taking the operational conditions into account, the deviation between different approaches and teams could be explained, in that the two largest methane-emitting sources, contributing about 90% of the entire site's emissions, were found to be the open digestate storage tank and a pressure release valve on the compressor station. Copyright © 2017. Published by Elsevier Ltd.

  13. Standardized Approach to Quantitatively Measure Residual Limb Skin Health in Individuals with Lower Limb Amputation.

    Science.gov (United States)

    Rink, Cameron L; Wernke, Matthew M; Powell, Heather M; Tornero, Mark; Gnyawali, Surya C; Schroeder, Ryan M; Kim, Jayne Y; Denune, Jeffrey A; Albury, Alexander W; Gordillo, Gayle M; Colvin, James M; Sen, Chandan K

    2017-07-01

    Objective: (1) Develop a standardized approach to quantitatively measure residual limb skin health. (2) Report reference residual limb skin health values in people with transtibial and transfemoral amputation. Approach: Residual limb health outcomes in individuals with transtibial (n = 5) and transfemoral (n = 5) amputation were compared to able-limb controls (n = 4) using noninvasive imaging (hyperspectral imaging and laser speckle flowmetry) and probe-based approaches (laser Doppler flowmetry, transcutaneous oxygen, transepidermal water loss, surface electrical capacitance). Results: A standardized methodology that employs noninvasive imaging and probe-based approaches to measure residual limb skin health is described. Compared to able-limb controls, individuals with transtibial and transfemoral amputation have significantly lower transcutaneous oxygen tension, higher transepidermal water loss, and higher surface electrical capacitance in the residual limb. Innovation: Residual limb health, as a critical component of prosthesis rehabilitation for individuals with lower limb amputation, is understudied in part due to a lack of clinical measures. Here, we present a standardized approach to measure residual limb health in people with transtibial and transfemoral amputation. Conclusion: Technology advances in noninvasive imaging and probe-based measures are leveraged to develop a standardized approach to quantitatively measure residual limb health in individuals with lower limb loss. Compared to able-limb controls, resting residual limb physiology in people who have had transfemoral or transtibial amputation is characterized by lower transcutaneous oxygen tension and poorer skin barrier function.

  14. Evaluation of repeated measurements of radon-222 concentrations in well water sampled from bedrock aquifers of the Piedmont near Richmond, Virginia, USA: Effects of lithology and well characteristics

    International Nuclear Information System (INIS)

    Harris, Shelley A.; Billmeyer, Ernest R.; Robinson, Michael A.

    2006-01-01

    Radon (222Rn) concentrations in 26 ground water wells of two distinct lithologies in the Piedmont of Virginia were measured to assess variation in ground water radon concentrations (GWRC), to evaluate differences in concentrations related to well characteristics, lithology, and spatial distributions, and to assess the feasibility of predicting GWRC. Wells were sampled in accordance with American Public Health Association Method 7500 Rn-B, with modifications to include a well shaft profile analysis that determined the minimum purge time sufficient to remove the equivalent of one column of water from each well. Statistically significant differences in GWRC were found between the Trssu (1482 ± 1711 pCi/L) and Mpg (7750 ± 5188 pCi/L) lithologies; however, no significant differences were found among GWRC at each well over time. Using multiple regression, 86% of the variability (R²) in the GWRC was explained by the lithology, latitudinal class, and water table elevation of the wells. The GWRC in a majority of the wells studied exceed the US Environmental Protection Agency designated maximum contaminant level and AMCL. Results support modifications to sampling procedures and indicate that, in previous studies, variations in GWRC over time may have been due in part to differences in sampling procedures and not in source water.

  15. A novel approach to reduce environmental noise in microgravity measurements using a Scintrex CG5

    Science.gov (United States)

    Boddice, Daniel; Atkins, Phillip; Rodgers, Anthony; Metje, Nicole; Goncharenko, Yuriy; Chapman, David

    2018-05-01

    The accuracy and repeatability of microgravity measurements for surveying purposes are affected by two main sources of noise: instrument noise from the sensor and electronics, and environmental sources of noise from anthropogenic activity, wind, microseismic activity and other sources of vibrational noise. There is little information in the literature on the quantitative values of these different noise sources and their significance for microgravity measurements. Experiments were conducted to quantify these sources of noise with multiple instruments, and to develop methodologies to reduce these unwanted signals thereby improving the accuracy or speed of microgravity measurements. External environmental sources of noise were found to be concentrated at higher frequencies (> 0.1 Hz), well within the instrument's bandwidth. In contrast, the internal instrumental noise was dominant at frequencies much lower than the reciprocal of the maximum integration time, and was identified as the limiting factor for current instruments. The optimum time for integration was found to be between 120 and 150 s for the instruments tested. In order to reduce the effects of external environmental noise on microgravity measurements, a filtering and despiking technique was created using data from noisy environments next to a main road and outside on a windy day. The technique showed a significant improvement in the repeatability of measurements, with between 40% and 50% lower standard deviations being obtained over numerous different data sets. The filtering technique was then tested in field conditions by using an anomaly of known size, and a comparison made between different filtering methods. Results showed improvements with the proposed method performing better than a conventional, or boxcar, averaging process. The proposed despiking process was generally found to be ineffective, with greater gains obtained when complete measurement records were discarded. Field survey results were
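
    A common form of the despiking step evaluated above replaces samples that deviate too far from a local robust statistic. A minimal sketch, assuming a median/MAD criterion (the paper's exact despiking algorithm is not specified here):

```python
import statistics

def despike(samples, window=5, threshold=3.0):
    """Replace points lying more than `threshold` robust standard deviations
    (1.4826 * MAD) from their local median -- a simple despiking pass."""
    out = list(samples)
    half = window // 2
    for i in range(len(samples)):
        local = samples[max(0, i - half):i + half + 1]
        med = statistics.median(local)
        mad = statistics.median([abs(x - med) for x in local]) or 1e-12
        if abs(samples[i] - med) > threshold * 1.4826 * mad:
            out[i] = med   # replace the spike with the local median
    return out
```

A pass like this removes isolated vibration spikes but, as the abstract notes, discarding whole contaminated records can outperform it when the disturbance spans most of a record.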

  16. Measuring economy-wide energy efficiency performance: A parametric frontier approach

    International Nuclear Information System (INIS)

    Zhou, P.; Ang, B.W.; Zhou, D.Q.

    2012-01-01

    This paper proposes a parametric frontier approach to estimating economy-wide energy efficiency performance from a production efficiency point of view. It uses the Shephard energy distance function to define an energy efficiency index and adopts the stochastic frontier analysis technique to estimate the index. A case study of measuring the economy-wide energy efficiency performance of a sample of OECD countries using the proposed approach is presented. It is found that the proposed parametric frontier approach has higher discriminating power in energy efficiency performance measurement compared to its nonparametric frontier counterparts.
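
    The stochastic frontier formulation behind such an index can be sketched as follows (notation illustrative; the paper's exact specification may differ):

```latex
\ln\big(1/E_{it}\big) \;=\; \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{it} + v_{it} - u_{it},
\qquad u_{it} \ge 0,
```

    where \(E_{it}\) is energy input, \(\mathbf{x}_{it}\) collects the other inputs and output, \(v_{it}\) is two-sided statistical noise, and \(u_{it}\) is one-sided inefficiency. The energy efficiency index is then \(\mathrm{EEI}_{it} = \exp(-\hat{u}_{it}) \in (0, 1]\), with a score of 1 indicating performance on the frontier.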

  17. Performance evaluation of broiler genotypes by repeated measures

    Directory of Open Access Journals (Sweden)

    Millor Fernandes do Rosário

    2005-12-01

    Four genotypes (A, B, C, and D) and two sexes were evaluated at six ages (7, 14, 21, 28, 35, and 42 days) for average feed intake (AFI), average body weight (ABW) and feed:gain ratio (F/G) using an unbalanced incomplete blocks 4x2 factorial design. Five error co(variance) structures were tested using the MIXED procedure of SAS® for statistical analyses. Averages were estimated by least squares and compared by the Tukey-Kramer test. Quadratic profile analyses for AFI and F/G and the Gompertz growth model for ABW, with respective coefficients of determination, were estimated by the NLIN procedure of SAS®. Significant effects of some triple or double interactions were observed for all response variables. Genotypes significantly differed at each age and sex for AFI and ABW after 21 days of age, and differences in F/G between genotypes and sexes, at each age, were verified only at 42 days of age. The largest averages for AFI and ABW were observed for genotype D and the smallest F/G was verified in genotypes C and B. Estimated profile analyses explained each response variable adequately as a function of age. AFI and ABW for males of genotype D were larger after 14 days, and from 28 to 42 days their performance differed from the other genotypes. The best F/G was observed in males of genotype C. Overall, genotypes C and B presented the best performance. The repeated measures approach was appropriate to evaluate differences in the performance of broiler genotypes.

  18. Measurement

    NARCIS (Netherlands)

    Boumans, M.; Durlauf, S.N.; Blume, L.E.

    2008-01-01

    Measurement theory takes measurement as the assignment of numbers to properties of an empirical system so that a homomorphism between the system and a numerical system is established. To avoid operationalism, two approaches can be distinguished. In the axiomatic approach it is asserted that if the

  19. Assessment of denitrification gaseous end-products in the soil profile under two water table management practices using repeated measures analysis.

    Science.gov (United States)

    Elmi, Abdirashid A; Astatkie, Tess; Madramootoo, Chandra; Gordon, Robert; Burton, David

    2005-01-01

    The denitrification process and nitrous oxide (N2O) production in the soil profile are poorly documented because most research into denitrification has concentrated on the upper soil layer (0-0.15 m). This study, undertaken during the 1999 and 2000 growing seasons, was designed to examine the effects of water table management (WTM), nitrogen (N) application rate, and depth (0.15, 0.30, and 0.45 m) on soil denitrification end-products (N2O and N2) from a corn (Zea mays L.) field. Water table management treatments were free drainage (FD) with open drains and subirrigation (SI) with a target water table depth of 0.6 m. Fertility treatments (ammonium nitrate) were 120 kg N ha(-1) (N120) and 200 kg N ha(-1) (N200). During both growing seasons greater denitrification rates were measured in SI than in FD, particularly in the surface soil (0-0.15 m) and at the intermediate (0.15-0.30 m) soil depths under N200 treatment. Greater denitrification rates under the SI treatment, however, were not accompanied with greater N2O production. The decrease in N2O production under SI was probably caused by a more complete reduction of N2O to N2, which resulted in lower N2O to (N2O + N2) ratios. Denitrification rate, N2O production and N2O to (N2O + N2) ratios were only minimally affected by N treatments, irrespective of sampling date and soil depth. Overall, half of the denitrification occurred at the 0.15- to 0.30- and 0.30- to 0.45-m soil layers, and under SI, regardless of fertility treatment level. Consequently, sampling of the 0- to 0.15-m soil layer alone may not give an accurate estimation of denitrification losses under SI practice.

  20. Real-World Massage Therapy Produces Meaningful Effectiveness Signal for Primary Care Patients with Chronic Low Back Pain: Results of a Repeated Measures Cohort Study.

    Science.gov (United States)

    Elder, William G; Munk, Niki; Love, Margaret M; Bruckner, Geza G; Stewart, Kathryn E; Pearce, Kevin

    2017-07-01

    While efficacy of massage and other nonpharmacological treatments for chronic low back pain is established, stakeholders have called for pragmatic studies of effectiveness in "real-world" primary health care. The Kentucky Pain Research and Outcomes Study evaluated massage impact on pain, disability, and health-related quality of life for primary care patients with chronic low back pain. We report effectiveness and feasibility results, and make comparisons with established minimal clinically important differences. Primary care providers referred eligible patients for 10 massage sessions with community practicing licensed massage therapists. Oswestry Disability Index and SF-36v2 measures obtained at baseline and postintervention at 12 and 24 weeks were analyzed with mixed linear models and Tukey's tests. Additional analyses examined clinically significant improvement and predictive patient characteristics. Of 104 enrolled patients, 85 and 76 completed 12 and 24 weeks of data collection, respectively. Group means improved at 12 weeks for all outcomes and at 24 weeks for SF-36v2's Physical Component Summary and Bodily Pain Domain. Of those with clinically improved disability at 12 weeks, 75% were still clinically improved at 24 weeks ( P  < 0.01). For SF-36v2 Physical and Mental Component Summaries, 55.4% and 43.4%, respectively, showed clinically meaningful improvement at 12 weeks, 46.1% and 30.3% at 24 weeks. For Bodily Pain Domain, 49.4% were clinically improved at 12 weeks, 40% at 24 weeks. Adults older than age 49 years had better pain and disability outcomes than younger adults. Results provide a meaningful signal of massage effect for primary care patients with chronic low back pain and call for further research in practice settings using pragmatic designs with control groups. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  1. Meta-Analysis for Sociology – A Measure-Driven Approach

    Science.gov (United States)

    Roelfs, David J.; Shor, Eran; Falzon, Louise; Davidson, Karina W.; Schwartz, Joseph E.

    2013-01-01

    Meta-analytic methods are becoming increasingly important in sociological research. In this article we present an approach for meta-analysis which is especially helpful for sociologists. Conventional approaches to meta-analysis often prioritize “concept-driven” literature searches. However, in disciplines with high theoretical diversity, such as sociology, this search approach might constrain the researcher’s ability to fully exploit the entire body of relevant work. We explicate a “measure-driven” approach, in which iterative searches and new computerized search techniques are used to increase the range of publications found (and thus the range of possible analyses) and to traverse time and disciplinary boundaries. We demonstrate this measure-driven search approach with two meta-analytic projects, examining the effects of various social variables on all-cause mortality. PMID:24163498

  2. Effects of two hospital bed design features on physical demands and usability during brake engagement and patient transportation: a repeated measures experimental study.

    Science.gov (United States)

    Kim, Sunwook; Barker, Linsey M; Jia, Bochen; Agnew, Michael J; Nussbaum, Maury A

    2009-03-01

    Work-related musculoskeletal disorders (WMSDs) are prevalent among healthcare workers worldwide. While existing research has focused on patient-handling techniques during activities that require direct patient contact (e.g., patient transfer), nursing tasks also involve other patient-handling activities, such as engaging bed brakes and transporting patients in beds, which could place healthcare workers at risk of developing WMSDs. The effectiveness of hospital bed design features (brake pedal location and steering assistance) was evaluated in terms of physical demands and usability during brake engagement and patient transportation tasks. Two laboratory-based studies were conducted. In simulated brake engagement tasks, three brake pedal locations (head-end vs. foot-end vs. side of a bed) and two hands conditions (hands-free vs. hands-occupied) were manipulated. Additionally, both in-room and corridor patient transportation tasks were simulated, in which activation of steering-assistance features (5th wheel and/or front wheel caster lock) and two patient masses were manipulated. Nine novice participants were recruited from the local student population and community for each study. During brake engagement, trunk flexion angle, task completion time, and questionnaires were used to quantify postural comfort and usability. For patient transportation, the dependent measures were hand forces and questionnaire responses. Brake pedal locations and steering-assistance features in hospital beds had significant effects on physical demands and usability during brake engagement and patient transportation tasks. Specifically, a brake pedal at the head-end of a bed increased trunk flexion by 74-224% and completion time by 53-74%, compared to the other pedal locations. Participants reported greater overall perceived difficulty and less postural comfort with the brake pedal at the head-end. During in-room transportation, participants generally reported "Neither Low nor High" physical demands

  3. Repeatability of Cryogenic Multilayer Insulation

    Science.gov (United States)

    Johnson, W. L.; Vanderlaan, M.; Wood, J. J.; Rhys, N. O.; Guo, W.; Van Sciver, S.; Chato, D. J.

    2017-12-01

    Due to the variety of requirements across aerospace platforms and one-off projects, the repeatability of cryogenic multilayer insulation (MLI) has never been fully established. The objective of this test program is to provide a more basic understanding of the thermal performance repeatability of MLI systems that are applicable to large-scale tanks. Several different types of repeatability can be accounted for: repeatability between identical blankets, repeatability of installation of the same blanket, and repeatability of a test apparatus. The focus of the work in this report is on the first two types. Statistically, repeatability can mean many different things. In its simplest form, it refers to the range of performance that a population exhibits around the population average. However, as more and more identical components are made (i.e., as the population of concern grows), the simple range morphs into a standard deviation from an average performance. Initial repeatability testing on MLI blankets has been completed at Florida State University. Repeatability of five Glenn Research Center (GRC)-provided coupons with 25 layers was shown to be +/- 8.4%, whereas the repeatability of repeatedly installing a single coupon was shown to be +/- 8.0%. A second group of 10 coupons has been fabricated by Yetispace and tested by Florida State University; the repeatability between these coupons has been shown to be +/- 15-25%. Based on detailed statistical analysis, the data have been shown to be statistically significant.
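The range-versus-standard-deviation distinction drawn in the abstract can be made concrete. The sketch below computes both descriptors of scatter for a small population of coupons; the heat-flux values are invented for illustration and are not the GRC/Yetispace data.

```python
# Sketch: blanket-to-blanket repeatability expressed two ways, following
# the abstract's distinction: half the range for a small population, and
# the sample standard deviation as the population grows. Values invented.
import numpy as np

heat_flux = np.array([1.02, 0.95, 1.08, 0.98, 1.01])  # W/m^2, five coupons

mean = heat_flux.mean()
half_range_pct = 100.0 * (heat_flux.max() - heat_flux.min()) / (2.0 * mean)
std_pct = 100.0 * heat_flux.std(ddof=1) / mean  # 1-sigma scatter about the mean

print(f"+/- {half_range_pct:.1f}% (half range), +/- {std_pct:.1f}% (1 sigma)")
```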

  4. A super-resolution approach for uncertainty estimation of PIV measurements

    NARCIS (Netherlands)

    Sciacchitano, A.; Wieneke, B.; Scarano, F.

    2012-01-01

    A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of

  5. A new approach to counting measurements: Addressing the problems with ISO-11929

    Science.gov (United States)

    Klumpp, John; Miller, Guthrie; Poudel, Deepesh

    2018-06-01

    We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: "what is the probability distribution of the true amount in the sample, given the data?" The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the "measurement strength", which depends only on measurement-stage count quantities. We show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an "action threshold" on the measurement strength, similar to the decision threshold recommended by the current standard. We further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
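The posterior question the abstract poses ("what is the probability distribution of the true amount, given the data?") can be sketched numerically for a simple gross-count measurement. This is a generic Bayesian illustration in the spirit of the approach, not the paper's analytical formulas; the prior, efficiency, background and count values are all assumptions.

```python
# Sketch: Bayesian interpretation of a single counting measurement.
# Gross counts ~ Poisson(background + efficiency * time * activity);
# an exponential prior is assumed on the true activity. All numbers
# below are illustrative, not from the paper.
import math
import numpy as np

eff = 0.3          # counting efficiency (counts per decay), assumed known
t = 60.0           # count time (s)
bkg_rate = 0.5     # background rate (counts/s), assumed known
gross = 45         # observed gross counts

# Discretized exponential prior on the true activity A (Bq)
A = np.linspace(0.0, 5.0, 2001)
prior = np.exp(-A / 1.0)
prior /= prior.sum()

# Poisson likelihood of the gross counts given activity A
mu = bkg_rate * t + eff * t * A
log_like = gross * np.log(mu) - mu - math.lgamma(gross + 1)
post = prior * np.exp(log_like - log_like.max())
post /= post.sum()

eps = 0.5  # decision level on true activity (Bq)
p_exceed = post[A > eps].sum()
print(f"P(true activity > {eps} Bq | data) = {p_exceed:.3f}")
```

The posterior odds that the activity exceeds ε, divided by the prior odds, is the quantity the paper's "measurement strength" summarizes analytically.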

  6. Navigational Traffic Conflict Technique: A Proactive Approach to Quantitative Measurement of Collision Risks in Port Waters

    Science.gov (United States)

    Debnath, Ashim Kumar; Chin, Hoong Chor

    Navigational safety analysis relying on collision statistics is often hampered by the low number of observations. A promising alternative approach that overcomes this problem is proposed in this paper. By analyzing critical vessel interactions, this approach proactively measures collision risk in port waters. The proposed method is illustrated for quantitative measurement of collision risks in Singapore port fairways, and validated by examining correlations between the measured risks and those perceived by pilots. This method is an ethically appealing alternative to collision-based analysis for fast, reliable and effective safety assessment, and thus has great potential for managing collision risks in port waters.

  7. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background-noise effects. The proposed method integrates a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system, and improves the insulation provided by the passive system alone at low frequencies. The developed measurement technique has been validated by means of numerical simulations and by measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.

  8. Statistical approaches to assessing single and multiple outcome measures in dry eye therapy and diagnosis.

    Science.gov (United States)

    Tomlinson, Alan; Hair, Mario; McFadyen, Angus

    2013-10-01

    Dry eye is a multifactorial disease which would require a broad spectrum of test measures in the monitoring of its treatment and diagnosis. However, studies have typically reported improvements in individual measures with treatment. Alternative approaches involve multiple, combined outcomes being assessed by different statistical analyses. In order to assess the effect of various statistical approaches to the use of single and combined test measures in dry eye, this review reanalyzed measures from two previous studies (osmolarity, evaporation, tear turnover rate, and lipid film quality). These analyses assessed the measures as single variables within groups, pre- and post-intervention with a lubricant supplement, by creating combinations of these variables and by validating these combinations with the combined sample of data from all groups of dry eye subjects. The effectiveness of single measures and combinations in diagnosis of dry eye was also considered. Copyright © 2013. Published by Elsevier Inc.

  9. Repeatability of ultrasound image measurements of carcass traits in Nellore cattle

    Directory of Open Access Journals (Sweden)

    Maria Eugênia Zerlotti Mercadante

    2010-04-01

    Full Text Available The repeatability of ultrasound image measurements of the longissimus dorsi muscle area (AOL) and of the subcutaneous backfat (EGL) and rump fat (EGG) thicknesses was evaluated. Ultrasound images taken at the loin (between the 12th and 13th ribs) and at the rump (between the gluteus medium and biceps femoris muscles) of Nelore heifers aged 14 to 22 months were classified as acceptable, marginal, or rejectable. The acceptable and marginal images were measured twice by three technicians at different levels of training. Repeatabilities among and within technicians were estimated by image quality class, in order to determine the effect of image quality and of technician on the absolute difference between the first and second measurements of these traits. Repeatability for acceptable images was higher than for marginal images, both among and within technicians. In the analysis of the absolute difference between the first and second interpretations, the effects of technician (for AOL and EGL) and of image quality class (for AOL) were significant. In general, the most experienced technician showed the highest repeatability values. It is recommended that images of animals from the same contemporary group be measured by a single technician.

  10. The transverse technique; a complementary approach to the measurement of first-trimester uterine artery Doppler.

    Science.gov (United States)

    Drouin, Olivier; Johnson, Jo-Ann; Chaemsaithong, Piya; Metcalfe, Amy; Huber, Janie; Schwarzenberger, Jill; Winters, Erin; Stavness, Lesley; Tse, Ada W T; Lu, Jing; Lim, Wan Teng; Leung, Tak Yeung; Bujold, Emmanuel; Sahota, Daljit; Poon, Liona C

    2017-10-04

    The objectives of this study were to 1) define the protocol for first-trimester assessment of the uterine artery pulsatility index (UtA-PI) using the new transverse technique, 2) evaluate UtA-PI measured by the transverse approach versus that obtained by the conventional sagittal approach, and 3) determine whether accelerated on-site training (in both methods) of inexperienced sonographers can achieve UtA-PI measurements reproducible against those of an experienced sonographer. The study consists of two parts conducted in two centers (Part 1, Calgary, Canada; Part 2, Hong Kong). Part 1: A prospective observational study of women with singleton pregnancies between 11-13+6 weeks' gestation. UtA-PI measurements were performed using the two techniques (4 sonographers trained in both methods, 10 cases each), and measurement indices (PI), the time required, and the subjective difficulty of obtaining satisfactory measurements were compared. One-sample t tests and Wilcoxon signed-rank tests were used as appropriate. Bland-Altman difference plots were used to assess measurement agreement, and the intra-class correlation coefficient (ICC) was used to evaluate measurement reliability. A target plot was used to assess measures of central tendency and dispersion. Part 2: One experienced and three inexperienced sonographers prospectively measured the UtA-PI at 11-13+6 weeks' gestation in two groups of women (42 and 35, respectively) with singleton pregnancies, using both approaches. The inexperienced sonographers underwent accelerated on-site training by the experienced sonographer. Measurement approach and sonographer order were randomized. ICC, Bland-Altman and Passing-Bablok analyses were performed to assess measurement agreement, reliability and the effect of accelerated training. Part 1: We observed no difference in the mean time to acquire the measurements (sagittal: 118 seconds vs transverse: 106 seconds, p=0.38). The 4 sonographers reported the transverse technique was subjectively easier to perform (p=0
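The agreement statistics named in the abstract (Bland-Altman limits of agreement, ICC) are easy to sketch. The paired readings below are invented for illustration; the study's data are not reproduced here, and the one-way ICC(1,1) shown is one common variant rather than necessarily the one the authors used.

```python
# Sketch: Bland-Altman bias/limits of agreement and a one-way ICC(1,1)
# for two raters' UtA-PI measurements. The paired readings are invented.
import numpy as np

a = np.array([1.60, 1.75, 1.52, 1.90, 1.68, 1.81])  # sonographer 1 PI
b = np.array([1.58, 1.80, 1.49, 1.94, 1.70, 1.78])  # sonographer 2 PI

# Bland-Altman: bias and 95% limits of agreement
diff = a - b
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

# One-way random-effects ICC(1,1) from a simple variance decomposition
x = np.stack([a, b], axis=1)                # subjects x raters
n, k = x.shape
grand = x.mean()
bms = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)              # between-subject MS
wms = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1)) # within-subject MS
icc = (bms - wms) / (bms + (k - 1) * wms)

print(f"bias={bias:.3f}, LoA={loa[0]:.3f}..{loa[1]:.3f}, ICC(1,1)={icc:.3f}")
```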

  11. Selecting measures to prevent deleterious alkali-silica reaction in concrete : rationale for the AASHTO PP65 prescriptive approach.

    Science.gov (United States)

    2012-10-01

    PP65-11 provides two approaches for selecting preventive measures: (i) a performance approach based on laboratory testing, and (ii) a prescriptive approach based on a consideration of the reactivity of the aggregate, type and size of structure, expos...

  12. Repeat Customer Success in Extension

    Science.gov (United States)

    Bess, Melissa M.; Traub, Sarah M.

    2013-01-01

    Four multi-session research-based programs were offered by two Extension specialists in one rural Missouri county. Eleven participants who came to multiple Extension programs could be called "repeat customers." Based on the total number of participants for all four programs, 25% could be deemed repeat customers. Repeat customers had…

  13. 78 FR 65594 - Vehicular Repeaters

    Science.gov (United States)

    2013-11-01

    ... coordinators estimate the effect on coordination fees? Does the supposed benefit that mobile repeater stations... allow the licensing and operation of vehicular repeater systems and other mobile repeaters by public... email: [email protected] or phone: 202-418- 0530 or TTY: 202-418-0432. For detailed instructions for...

  14. Measurement of student attitudes in first year engineering - A mixed methods approach

    Science.gov (United States)

    Malik, Qaiser Hameed

    This research study focused on freshman attitudes towards engineering in a newly implemented cornerstone sequence that emphasized holistic design experiences. The students' initial attitudes and changes in these attitudes were examined with the explanatory mixed methods approach that allows a sequential examination of the target population with two methods, using two sets of data, to investigate the treatment effects. In the quantitative phase, the study compared changes in freshman attitude towards engineering, between the new 'design sequence' group (composed of freshmen in the cornerstone sequence) and the prior 'traditional sequence' group (composed of all other freshmen), over the course of one semester. The data were collected in fall 2008 at two time intervals and changes in the two groups' attitudes were examined with repeated measures analysis of covariance models. The analyses reported here include data from 389 students out of the total population of 722 freshmen. The analyses revealed that engineering freshmen joined the program with positive or strongly positive attitudes towards engineering. Those strong attitudes were durable and resistant to change. Students in the design sequence group had higher ACT scores, enjoyed math and science the most, and did not believe engineering to be an exact science. However, no appreciable time-group interaction was observed. To validate the quantitative results, an interview protocol was developed to investigate initial freshman attitudes and changes, if any, that took place as a result of the new cornerstone sequence. One-on-one interviews with a sample of ten students out of the population of 272 freshmen revealed that freshmen in the cornerstone sequence entered the program full of enthusiasm and idealism, and with strongly positive attitudes towards engineering. The strong motivational factors included parental/teacher influences, childhood motivations, and high school extra-curricular experiences. The

  15. Ultrasonic fluid quantity measurement in dynamic vehicular applications a support vector machine approach

    CERN Document Server

    Terzic, Jenny; Nagarajah, Romesh; Alamgir, Muhammad

    2013-01-01

    Accurate fluid level measurement in dynamic environments can be assessed using a Support Vector Machine (SVM) approach. SVM is a supervised learning model that analyzes and recognizes patterns. It is a signal classification technique which has far greater accuracy than conventional signal averaging methods. Ultrasonic Fluid Quantity Measurement in Dynamic Vehicular Applications: A Support Vector Machine Approach describes the research and development of a fluid level measurement system for dynamic environments. The measurement system is based on a single ultrasonic sensor. A Support Vector Machines (SVM) based signal characterization and processing system has been developed to compensate for the effects of slosh and temperature variation in fluid level measurement systems used in dynamic environments including automotive applications. It has been demonstrated that a simple ν-SVM model with Radial Basis Function (RBF) Kernel with the inclusion of a Moving Median filter could be used to achieve the high levels...
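The pipeline the abstract describes (moving-median filtering to suppress slosh, then a ν-SVM regression with an RBF kernel) can be sketched on synthetic data. Everything below is an illustrative assumption: the signal model, noise levels, and hyperparameters are not from the book.

```python
# Sketch: moving-median filter to suppress slosh-like transients in an
# ultrasonic level signal, followed by a nu-SVM regressor (RBF kernel)
# mapping the filtered signal to fluid level. All data are synthetic.
import numpy as np
from scipy.signal import medfilt
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
level = np.linspace(10.0, 60.0, 200)                 # true fluid level (mm)
echo = level + rng.normal(0.0, 2.0, level.size)      # noisy echo-derived reading
echo[::17] += 15.0                                   # occasional slosh spikes

smoothed = medfilt(echo, kernel_size=5)              # moving median filter
X = smoothed.reshape(-1, 1)

model = NuSVR(nu=0.5, kernel="rbf", C=10.0, gamma="scale").fit(X, level)
pred = model.predict(X)
mae = np.abs(pred - level).mean()
print(f"mean absolute error: {mae:.2f} mm")
```

The median filter removes isolated spikes that a moving average would smear, which is why it pairs well with the SVM stage.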

  16. Brain tissues volume measurements from 2D MRI using parametric approach

    Science.gov (United States)

    L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.

    2018-04-01

    The purpose of this paper is to propose a fully automated method for volume assessment of structures within the human brain. Our statistical approach uses the maximum interdependency principle in the decision-making process for measurement consistency and unequal observations. Outlier detection is performed using the maximum normalized residual test. We propose a statistical model that utilizes knowledge of tissue distribution in the human brain and applies partial data restoration to improve precision. The approach is fully automated, computationally efficient, and independent of the segmentation algorithm used in the application.
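The maximum normalized residual test named in the abstract is the classical Grubbs test. A minimal sketch on invented measurement values, using the standard t-based critical value at α = 0.05 (one outlier at a time):

```python
# Sketch: maximum normalized residual (Grubbs) test on a toy set of
# volume measurements. Data values are illustrative, not from the paper.
import numpy as np
from scipy import stats

x = np.array([102.0, 99.5, 101.2, 100.4, 98.9, 120.0, 100.1])  # one outlier

n = x.size
g = np.abs(x - x.mean()).max() / x.std(ddof=1)   # max normalized residual
idx = int(np.abs(x - x.mean()).argmax())         # candidate outlier index

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))

print(f"G={g:.2f}, critical={g_crit:.2f}, outlier index={idx}")
```

In practice the test is applied iteratively: remove the flagged point and repeat until G falls below the critical value.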

  17. Evaluating airline energy efficiency: An integrated approach with Network Epsilon-based Measure and Network Slacks-based Measure

    International Nuclear Information System (INIS)

    Xu, Xin; Cui, Qiang

    2017-01-01

    This paper focuses on evaluating airline energy efficiency, which is first divided into four stages: Operations Stage, Fleet Maintenance Stage, Services Stage and Sales Stage. The new four-stage network structure of airline energy efficiency is a modification of existing models. A new approach, integrating the Network Epsilon-based Measure with the Network Slacks-based Measure, is applied to assess the overall energy efficiency and divisional efficiency of 19 international airlines from 2008 to 2014. The influencing factors of airline energy efficiency are analyzed through regression analysis. The results indicate the following: 1. The integrated model can identify the benchmarking airlines in the overall system and in each stage. 2. Most airlines' energy efficiencies remain steady during the period, apart from some sharp fluctuations. The efficiency decreases are mainly concentrated in 2008–2011, reflecting the financial crisis in the USA. 3. The average age of the fleet is positively correlated with overall energy efficiency, and each divisional efficiency has different significant influencing factors. - Highlights: • An integrated approach with Network Epsilon-based Measure and Network Slacks-based Measure is developed. • 19 airlines' energy efficiencies are evaluated. • Garuda Indonesia has the highest overall energy efficiency.

  18. Initial study of stability and repeatability of measuring R2' and oxygen extraction fraction values in the healthy brain with gradient-echo sampling of spin-echo sequence

    International Nuclear Information System (INIS)

    Hui Lihong; Zhang Xiaodong; He Chao; Xie Sheng; Xiao Jiangxi; Zhang jue; Wang Xiaoying; Jiang Xuexiang

    2010-01-01

    Objective: To evaluate the stability and repeatability of the gradient-echo sampling of spin-echo (GESSE) sequence in measuring the R2' value in volunteers, by comparison with traditional GRE sequences (T2* map and T2 map). Methods: Eight healthy volunteers were enrolled in this study and written informed consent was obtained from all subjects. MR scanning including the GESSE, T2 map and T2* map sequences was performed in these subjects at rest, and the same protocol was repeated one day later. Raw data from the GESSE sequence were transferred to a PC for postprocessing with software built in house, yielding R2' and OEF maps. To obtain quantitative R2' and OEF values in the brain parenchyma, six ROIs were placed evenly in the anterior, middle and posterior parts of both hemispheres. Both the mean and standard deviation of R2' and OEF were recorded. All images from the T2* map and T2 map were transferred to the workstation for postprocessing, with the ROIs placed in the same areas as for the GESSE sequence. R2' is defined as R2' = R2* - R2, with R2* = 1/T2*. The R2' value of the GESSE sequence was compared with that of the GRE sequences. Results: The mean R2' values of GESSE at the first and second scans and those of GRE were (4.21±0.92), (4.45±0.94) Hz and (7.37±1.47), (6.42±2.33) Hz, respectively. The mean OEF values of GESSE at the first and second scans were 0.327±0.036 and 0.336±0.035, respectively. The R2' and OEF values obtained from GESSE did not differ significantly between the first and second scans (t=-0.83, -1.48, P>0.05), whereas the R2' value differed significantly between the first and second GRE scans (t=1.80, P<0.05). The R2' value of the GESSE sequence was lower than that of the GRE sequences, and the difference was statistically significant (t=1.71, P<0.05). Conclusion: The GESSE sequence has good stability and repeatability, with promising clinical practicability
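The abstract's quantitative relation, R2' = R2* - R2 with R2* = 1/T2* (and, by the same convention, R2 = 1/T2), is a one-line computation. The relaxation times below are invented per-ROI values for illustration only.

```python
# Sketch: computing R2' from T2* and T2, per the relation in the abstract.
# The per-ROI relaxation times are illustrative, not measured values.
import numpy as np

t2_star = np.array([0.045, 0.050, 0.048])  # T2* (s) per ROI
t2 = np.array([0.070, 0.080, 0.075])       # T2 (s) per ROI

r2_star = 1.0 / t2_star   # R2* (Hz)
r2 = 1.0 / t2             # R2 (Hz)
r2_prime = r2_star - r2   # R2' (Hz), reversible dephasing rate

print(np.round(r2_prime, 2))
```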

  19. Parallelism measurement for base plate of standard artifact with multiple tactile approaches

    Science.gov (United States)

    Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie

    2018-01-01

    Nowadays, as workpieces become more precise and more specialized which results in more sophisticated structures and higher accuracy for the artifacts, higher requirements have been put forward for measuring accuracy and measuring methods. As an important method to obtain the size of workpieces, coordinate measuring machine (CMM) has been widely used in many industries. In order to achieve the calibration of a self-developed CMM, it is found that the parallelism of the base plate used for fixing the standard artifact is an important factor which affects the measurement accuracy in the process of studying self-made high-precision standard artifact. And aimed to measure the parallelism of the base plate, by using the existing high-precision CMM, gauge blocks, dial gauge and marble platform with the tactile approach, three methods for parallelism measurement of workpieces are employed, and comparisons are made within the measurement results. The results of experiments show that the final accuracy of all the three methods is able to reach micron level and meets the measurement requirements. Simultaneously, these three approaches are suitable for different measurement conditions which provide a basis for rapid and high-precision measurement under different equipment conditions.
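With the dial-gauge-and-marble-platform setup the abstract mentions, plate parallelism is commonly evaluated as the spread of indicator readings swept over the upper face. A minimal sketch with an invented 3x3 grid of readings (the actual procedure and tolerances in the paper are not reproduced):

```python
# Sketch: parallelism of a base plate from dial-gauge readings taken on a
# grid of points while the plate rests on a reference (marble) platform.
# The readings are illustrative values in micrometres.
import numpy as np

readings = np.array([
    [2.0, 3.5, 2.8],
    [1.5, 2.2, 3.0],
    [2.6, 1.9, 2.4],
])  # um, 3x3 grid on the upper face

parallelism_um = readings.max() - readings.min()  # peak-to-valley spread
print(f"parallelism error: {parallelism_um:.1f} um")
```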

  20. A new approach for measuring power spectra and reconstructing time series in active galactic nuclei

    Science.gov (United States)

    Li, Yan-Rong; Wang, Jian-Min

    2018-05-01

    We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in frequency domain and transforms it back to time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.
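The power spectrum that the Bayesian frequency-domain parametrization is fit against can be estimated, in the simplest case of evenly sampled data, with a plain FFT periodogram. The sketch below uses a synthetic evenly sampled light curve with a single injected tone; real AGN light curves are irregularly sampled, which is precisely what motivates the paper's approach.

```python
# Sketch: FFT periodogram of a synthetic, evenly sampled light curve.
# Sampling interval and tone frequency are illustrative assumptions.
import numpy as np

n, dt = 256, 1.0                        # samples, sampling interval (days)
t = np.arange(n) * dt
flux = np.sin(2.0 * np.pi * 0.125 * t)  # pure tone at 0.125 cycles/day

freqs = np.fft.rfftfreq(n, dt)
power = np.abs(np.fft.rfft(flux)) ** 2 / n  # periodogram (arbitrary norm)

peak = freqs[power.argmax()]
print(f"peak frequency: {peak} cycles/day")
```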

  1. Surface-sensitive conductivity measurement using a micro multi-point probe approach

    DEFF Research Database (Denmark)

    Perkins, Edward; Barreto, Lucas; Wells, Justin

    2013-01-01

    An instrument for microscale electrical transport measurements in ultra-high vacuum is presented. The setup is constructed around collinear lithographically-created multi-point probes with a contact spacing down to 500 nm. Most commonly, twelve-point probes are used. These probes are approached...... measurements with an equidistant four-point probe for a wide range of contact spacings. In this way, it is possible to distinguish between bulk-like and surface-like conduction. The paper describes the design of the instrument and the approach to data and error analysis. Application examples are given...

  2. PET functional volume delineation: a robustness and repeatability study

    International Nuclear Information System (INIS)

    Hatt, Mathieu; Cheze-le Rest, Catherine; Albarghach, Nidal; Pradier, Olivier; Visvikis, Dimitris

    2011-01-01

    Current state-of-the-art algorithms for functional uptake volume segmentation in PET imaging consist of threshold-based approaches, whose parameters often require specific optimization for a given scanner and its associated reconstruction algorithms. Advanced image segmentation approaches previously proposed and extensively validated, such as fuzzy C-means (FCM) clustering or the fuzzy locally adaptive Bayesian (FLAB) algorithm, have the potential to improve the robustness of functional uptake volume measurements. The objective of this study was to investigate the robustness and repeatability of these approaches with respect to various scanner models, reconstruction algorithms and acquisition conditions. Robustness was evaluated using a series of IEC phantom acquisitions carried out on different PET/CT scanners (Philips Gemini and Gemini Time-of-Flight, Siemens Biograph and GE Discovery LS) with their associated reconstruction algorithms (RAMLA, TF MLEM, OSEM). A range of acquisition parameters (contrast, duration) and reconstruction parameters (voxel size) was considered for each scanner model, and the repeatability of each method was evaluated on simulated and clinical tumours and compared to manual delineation. For all the scanner models, acquisition parameters and reconstruction algorithms considered, the FLAB algorithm demonstrated higher robustness in delineation of the spheres, with low mean errors (10%) and variability (5%), than the threshold-based methodologies and FCM. The repeatability of all the segmentation algorithms considered was very high, with a negligible variability of <5% compared with that associated with manual delineation (5-35%). The use of advanced image segmentation algorithms may not only allow high accuracy, as previously demonstrated, but also provide a robust and repeatable tool to aid physicians as an initial guess in determining functional volumes in PET. (orig.)

  3. Approaches to statistical analysis of repeated echocardiographic measurements after myocardial infarction and its relation to heart failure : Application of a random-effects model

    NARCIS (Netherlands)

    de Kam, PJ; Voors, AA; Brouwer, J; van Gilst, WH

    Background: Extensive left ventricular (LV) dilatation after myocardial infarction (MI) is associated with increased heart failure risk. Aims: To investigate whether the power to demonstrate the relation between LV dilatation and heart failure depends on the method applied to predict LV dilatation

  4. Repeated causal decision making.

    Science.gov (United States)

    Hagmayer, York; Meder, Björn

    2013-01-01

    Many of our decisions refer to actions that have a causal impact on the external environment. Such actions may not only allow for the mere learning of expected values or utilities but also for acquiring knowledge about the causal structure of our world. We used a repeated decision-making paradigm to examine what kind of knowledge people acquire in such situations and how they use their knowledge to adapt to changes in the decision context. Our studies show that decision makers' behavior is strongly contingent on their causal beliefs and that people exploit their causal knowledge to assess the consequences of changes in the decision problem. A high consistency between hypotheses about causal structure, causally expected values, and actual choices was observed. The experiments show that (a) existing causal hypotheses guide the interpretation of decision feedback, (b) consequences of decisions are used to revise existing causal beliefs, and (c) decision makers use the experienced feedback to induce a causal model of the choice situation even when they have no initial causal hypotheses, which (d) enables them to adapt their choices to changes of the decision problem. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  5. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    Science.gov (United States)

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of the mass concentration of tacrolimus in whole blood are commonly used by clinicians to monitor the status of a transplant patient and to check whether the administered dose of tacrolimus is effective, so clinical laboratories must provide results that are as accurate as possible. Measurement uncertainty can help ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with intermediate imprecision (using long-term internal quality control data) and with bias (using a certified reference material). We then combined them with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way, but using data from internal and external quality control schemes to estimate the uncertainty related to bias. The estimated expanded uncertainties for the single laboratory validation approach and for the proficiency testing approach using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. The two top-down approaches thus yielded quite similar uncertainty estimates, which suggests that either approach can be used to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
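    The combination step described, individual relative standard uncertainties combined in quadrature (the GUM root-sum-of-squares) and then multiplied by a coverage factor, can be sketched as follows. The coverage factor k = 2 and all numerical inputs are illustrative assumptions, not the paper's data:

```python
import math

def expanded_uncertainty(u_imprecision, u_bias, u_cal, k=2.0):
    """Combine independent relative standard uncertainties in quadrature
    and expand with coverage factor k (k = 2 gives ~95% coverage)."""
    u_combined = math.sqrt(u_imprecision**2 + u_bias**2 + u_cal**2)
    return k * u_combined

# hypothetical relative standard uncertainties (fractions of the measured value)
U = expanded_uncertainty(u_imprecision=0.045, u_bias=0.025, u_cal=0.020)
print(f"expanded uncertainty: {U:.1%}")
```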

  6. Production competence revisited:a critique of the literature and a new measurement approach

    OpenAIRE

    Szász, Levente; Demeter, Krisztina; Boer, Harry

    2015-01-01

    Purpose – The purpose of this paper is to seek remedy to two major flaws of the production competence literature, which concern: the way the production competence construct is operationalized and the way its effects on performance are measured.Design/methodology/approach – The paper proposes to measure production competence as the two-dimensional operational level construct it actually is, and to use Slack’s (1994) importance performance matrix to study its business level performance effects....

  7. The grey relational approach for evaluating measurement uncertainty with poor information

    International Nuclear Information System (INIS)

    Luo, Zai; Wang, Yanqing; Zhou, Weihu; Wang, Zhongyu

    2015-01-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) is the master document for measurement uncertainty evaluation. However, the GUM may encounter problems and does not work well when the measurement data carry poor information, which in most cases means a small sample and an unknown probability distribution. In these cases, the evaluation of measurement uncertainty becomes a bottleneck in practical measurement. To solve this problem, a novel method called the grey relational approach (GRA), distinct from classical statistical theory, is proposed in this paper. The GRA requires neither a large sample size nor knowledge of the probability distribution of the measurement data. Mathematically, the GRA can be divided into three parts. First, using grey relational analysis, the grey relational coefficients between the ideal and the practical measurement output series are obtained. Second, the weighting coefficients and the measurement expectation function are derived from the grey relational coefficients. Finally, the measurement uncertainty is evaluated based on grey modeling. To validate the performance of this method, simulation experiments were performed; the results show that the GRA keeps the average error around 5%. The GRA was also compared with the grey method, the Bessel method, and the Monte Carlo method on a real stress measurement. Both the simulation experiments and the real measurement show that the GRA is an appropriate and effective way to evaluate measurement uncertainty with poor information. (paper)
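    The first step, computing grey relational coefficients between an ideal (reference) series and a measured (comparison) series, can be sketched with Deng's classical formula. This is a generic illustration, not the paper's implementation; the distinguishing coefficient rho = 0.5 is the conventional choice, and the series values are made up:

```python
import numpy as np

def grey_relational_coefficients(ideal, measured, rho=0.5):
    """Deng's grey relational coefficient for each point of a comparison
    series against a reference series; rho is the distinguishing
    coefficient, conventionally 0.5."""
    delta = np.abs(np.asarray(ideal, float) - np.asarray(measured, float))
    d_min, d_max = delta.min(), delta.max()
    return (d_min + rho * d_max) / (delta + rho * d_max)

# hypothetical ideal vs. measured output series
xi = grey_relational_coefficients([1.0, 2.0, 3.0], [1.1, 1.9, 3.3])
```

    Points whose deviation equals the minimum deviation get a coefficient of 1; larger deviations give smaller coefficients, which would then drive the weighting step described in the abstract.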

  8. Quality assurance of in-situ measurements of land surface albedo: A model-based approach

    Science.gov (United States)

    Adams, Jennifer; Gobron, Nadine; Widlowski, Jean-Luc; Mio, Corrado

    2016-04-01

    This paper presents the development of a model-based framework for assessing the quality of in-situ measurements of albedo used to validate land surface albedo products. Using a 3D Monte Carlo Ray Tracing (MCRT) radiative transfer model, a quality assurance framework is built based on simulated field measurements of albedo within complex 3D canopies and under various illumination scenarios. This method provides an unbiased approach in assessing the quality of field measurements, and is also able to trace the contributions of two main sources of uncertainty in field-measurements of albedo; those resulting from 1) the field measurement protocol, such as height or placement of field measurement within the canopy, and 2) intrinsic factors of the 3D canopy under specific illumination characteristics considered, such as the canopy structure and landscape heterogeneity, tree heights, ecosystem type and season.

  9. Critical investigation of Jauch's approach to the quantum theory of measurement

    International Nuclear Information System (INIS)

    Herbut, Fedor

    1986-01-01

    To make Jauch's approach more realistic, his assumptions are modified in two ways: (1) On the quantum system plus the measuring apparatus (S + MA) after the measuring interaction has ceased, one can actually measure only operators of the form given. (2) Measurement is defined in the most general way (including, besides first-kind, also second-kind and third-kind or indirect measurements). It is shown that Jauch's basic result that the microstates (statistical operators) of S + MA before and after the collapse correspond to the same macrostate (belong to the same equivalence class of microstates) remains valid under the above modifications, and that the significance of this result goes beyond measurement theory. On the other hand, it is argued that taking the orthodox (i.e. uncompromisingly quantum) view of quantum mechanics, it is not the collapse, but the Jauch-type macrostates that are spurious in a Jauch-type theory. (author)

  10. Real time drift measurement for colloidal probe atomic force microscope: a visual sensing approach

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuliang, E-mail: wangyuliang@buaa.edu.cn; Bi, Shusheng [Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing 100191 (China); Wang, Huimin [Department of Materials Science and Engineering, The Ohio State University, 2041 College Rd., Columbus, OH 43210 (United States)

    2014-05-15

    Drift has long been an issue in atomic force microscope (AFM) systems and limits their ability to make long time period measurements. In this study, a new method is proposed to directly measure and compensate for the drift between AFM cantilevers and sample surfaces in AFM systems. This was achieved by simultaneously measuring z positions for beads at the end of an AFM colloidal probe and on sample surface through an off-focus image processing based visual sensing method. The working principle and system configuration are presented. Experiments were conducted to validate the real time drift measurement and compensation. The implication of the proposed method for regular AFM measurements is discussed. We believe that this technique provides a practical and efficient approach for AFM experiments requiring long time period measurement.

  11. Ocular surface sensitivity repeatability with Cochet-Bonnet esthesiometer.

    Science.gov (United States)

    Chao, Cecilia; Stapleton, Fiona; Badarudin, Ezailina; Golebiowski, Blanka

    2015-02-01

    To determine the repeatability of ocular surface threshold measurements using the Cochet-Bonnet esthesiometer on the same day and 3 months apart. Two separate studies were conducted to determine the repeatability of ocular surface threshold measurements made on the same day (n = 20 subjects) and 3 months apart (n = 29 subjects). The Cochet-Bonnet esthesiometer was used to measure corneal and inferior conjunctival thresholds using the ascending method of limits. The pressure exerted by the Cochet-Bonnet esthesiometer was determined using an analytical balance, for both the 0.08- and 0.12-mm-diameter filaments. This calibration was then used to convert filament length measurements to pressure. Repeatability was determined using a Bland and Altman analysis. The pressure exerted at each filament length differed between the two filament diameters. The measured pressure also differed from values provided by the manufacturer. Repeatability of threshold measurements at the central cornea was shown to be good, with better repeatability for same-day measurements (coefficient of repeatability [CoR] = ±0.23 g/mm²) than for those 3 months apart (CoR = ±0.52 g/mm²). Threshold measurements at the inferior conjunctiva, in contrast, were poorly repeatable (CoR = ±12.78 g/mm²). Cochet-Bonnet esthesiometry is repeatable when performed on the central cornea on the same day and 3 months apart, but this instrument is not recommended for conjunctival threshold measurements.
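    The Bland and Altman coefficient of repeatability used above is commonly computed as 1.96 times the standard deviation of the paired differences between repeat measurements. A minimal sketch, with hypothetical paired corneal thresholds rather than the study's data:

```python
import numpy as np

def coefficient_of_repeatability(first, second):
    """Bland-Altman coefficient of repeatability: 1.96 times the standard
    deviation of the paired differences, so ~95% of repeat differences
    are expected to fall within +/- CoR."""
    diffs = np.asarray(first, float) - np.asarray(second, float)
    return 1.96 * diffs.std(ddof=1)

# hypothetical paired thresholds (g/mm^2) from two sessions
a = np.array([0.40, 0.55, 0.32, 0.61, 0.47])
b = np.array([0.45, 0.50, 0.35, 0.58, 0.52])
cor = coefficient_of_repeatability(a, b)
```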

  12. A Developmental Assets Approach in East Africa: Can Swahili Measures Capture Adolescent Strengths and Supports?

    Science.gov (United States)

    Drescher, Christopher F.; Johnson, Laura R.; Kurz, A. Solomon; Scales, Peter C.; Kiliho, Ray P.

    2018-01-01

    Background: Assets-based approaches are well-suited to youth living in majority world contexts, such as East Africa. However, positive psychology research with African adolescents is rare. One hindering factor is the lack of translated measures for conducting research. Objective: This study builds capacity for positive youth development research…

  13. Measurements of translation, rotation and strain: new approaches to seismic processing and inversion

    NARCIS (Netherlands)

    Bernauer, M.; Fichtner, A.; Igel, H.

    2012-01-01

    We propose a novel approach to seismic tomography based on the joint processing of translation, strain and rotation measurements. Our concept is based on the apparent S and P velocities, defined as the ratios of displacement velocity and rotation amplitude, and displacement velocity and

  14. A pragmatic approach to measuring, monitoring and evaluating interventions for improved tuberculosis case detection

    NARCIS (Netherlands)

    Blok, Lucie; Creswell, Jacob; Stevens, Robert; Brouwer, Miranda; Ramis, Oriol; Weil, Olivier; Klatser, Paul; Sahu, Suvanand; Bakker, Mirjam I.

    2014-01-01

    The inability to detect all individuals with active tuberculosis has led to a growing interest in new approaches to improve case detection. Policy makers and program staff face important challenges measuring effectiveness of newly introduced interventions and reviewing feasibility of scaling-up

  16. A new approach to estimate nuclide ratios from measurements with activities close to background

    International Nuclear Information System (INIS)

    Kirchner, G.; Steiner, M.; Zaehringer, M.

    2009-01-01

    Measurements of low-level radioactivity often give results of the order of the detection limit. For many applications, interest is not only in estimating the activity concentration of a single radioactive isotope; the focus is often on multi-isotope analyses, which enable inference on the source of the detected activity (e.g. from activity ratios). Obviously, such conclusions become questionable if the measurement merely gives a detection limit for a specific isotope. This is particularly relevant if the presence of an isotope that shows only a low signal (e.g. due to a short half-life or a small transition probability) is crucial for gaining the information of interest. This paper discusses a new approach with the potential to solve these problems. Using Bayesian statistics, a method is presented that allows statistical inference on nuclide ratios, taking into account both prior knowledge and all information collected from the measurements. It is shown that our method allows quantitative conclusions to be drawn when counts of single isotopes are low or even become negative after background subtraction. Differences from the traditional statistical approach of specifying decision thresholds or detection limits are highlighted. The application of this new approach is illustrated by a number of examples of environmental low-level radioactivity measurements. The capabilities of our approach for spectrum interpretation and source identification are demonstrated with real spectra from air filters, sewage sludge and soil samples.
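    One simple way to realize Bayesian inference on a nuclide ratio from low-count data, not necessarily the authors' implementation, is Monte Carlo sampling of the posterior source rates given gross counts and known backgrounds, with a flat prior on each non-negative source rate. All counts and backgrounds below are hypothetical:

```python
import numpy as np

def ratio_posterior_samples(n1, n2, b1, b2, n_draws=100_000, rng=None):
    """Posterior samples of the ratio of two source count rates.
    Gross counts n_i are Poisson with rate s_i + b_i (backgrounds b_i
    known). With a flat prior on s_i >= 0 the posterior of the total
    rate is Gamma(n_i + 1) truncated at b_i; we sample it by rejection:
    draw the total rate, subtract the background, keep non-negative
    source rates."""
    rng = np.random.default_rng() if rng is None else rng
    s1 = rng.gamma(n1 + 1, size=n_draws) - b1
    s2 = rng.gamma(n2 + 1, size=n_draws) - b2
    keep = (s1 >= 0) & (s2 > 0)
    return s1[keep] / s2[keep]

# hypothetical: 8 gross counts over an expected background of 5,
# versus 40 gross counts over an expected background of 6
samples = ratio_posterior_samples(8, 40, 5.0, 6.0, rng=np.random.default_rng(0))
lo, hi = np.percentile(samples, [2.5, 97.5])
```

    Even when the first isotope's net count is near zero, the posterior yields a well-defined credible interval for the ratio instead of only a detection limit.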

  17. Measuring Integration of Information and Communication Technology in Education: An Item Response Modeling Approach

    Science.gov (United States)

    Peeraer, Jef; Van Petegem, Peter

    2012-01-01

    This research describes the development and validation of an instrument to measure integration of Information and Communication Technology (ICT) in education. After literature research on definitions of integration of ICT in education, a comparison is made between the classical test theory and the item response modeling approach for the…

  18. Benchmarking in the National Intellectual Capital Measurement: Is It the Best Available Approach?

    Science.gov (United States)

    Januškaite, Virginija; Užiene, Lina

    2016-01-01

    Sustainable economic development is an aspiration of every nation in today's knowledge economy. Scientists for a few decades claim that intellectual capital management is the answer how to reach this goal. Currently, benchmarking methodology is the most common approach in the national intellectual capital measurement intended to provide…

  19. User involvement in measuring service quality of local authority occupational therapy services: a new approach.

    NARCIS (Netherlands)

    Sixma, H.J.; Calnan, S.; Calnan, M.; Groenewegen, P.P.

    2001-01-01

    The aim of this paper is two-fold: (i) to describe the development of a new measuring instrument for quality of care from the perspective of the users of local authority Occupational Therapy (OT) services, and (ii) to evaluate the potential of the new instrument as a standardized approach for the

  20. Preventing School Bullying: Should Schools Prioritize an Authoritative School Discipline Approach over Security Measures?

    Science.gov (United States)

    Gerlinger, Julie; Wo, James C.

    2016-01-01

    A common response to school violence features the use of security measures to deter serious and violent incidents. However, a second approach, based on school climate theory, suggests that schools exhibiting authoritative school discipline (i.e., high structure and support) might more effectively reduce school disorder. We tested these approaches…

  1. ANIMAL BEHAVIOR AND WELL-BEING SYMPOSIUM: The Common Swine Industry Audit: Future steps to assure positive on-farm animal welfare utilizing validated, repeatable and feasible animal-based measures.

    Science.gov (United States)

    Pairis-Garcia, M; Moeller, S J

    2017-03-01

    The Common Swine Industry Audit (CSIA) was developed and scientifically evaluated through the combined efforts of a task force consisting of university scientists, veterinarians, pork producers, packers, processers, and retail and food service personnel to provide stakeholders throughout the pork chain with a consistent, reliable, and verifiable system to ensure on-farm swine welfare and food safety. The CSIA tool was built from the framework of the Pork Quality Assurance Plus (PQA Plus) site assessment program with the purpose of developing a single, common audit platform for the U.S. swine industry. Twenty-seven key aspects of swine care are captured and evaluated in CSIA and cover the specific focal areas of animal records, animal observations, facilities, and caretakers. Animal-based measures represent approximately 50% of CSIA evaluation criteria and encompass critical failure criteria, including observation of willful acts of abuse and determination of timely euthanasia. Objective, science-based measures of animal well-being parameters (e.g., BCS, lameness, lesions, hernias) are assessed within CSIA using statistically validated sample sizes providing a detection ability of 1% with 95% confidence. The common CSIA platform is used to identify care issues and facilitate continuous improvement in animal care through a validated, repeatable, and feasible animal-based audit process. Task force members provide continual updates to the CSIA tool with a specific focus toward 1) identification and interpretation of appropriate animal-based measures that provide inherent value to pig welfare, 2) establishment of acceptability thresholds for animal-based measures, and 3) interpretation of CSIA data for use and improvement of welfare within the U.S. swine industry.
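    A "detection ability of 1% with 95% confidence" corresponds to the standard detection sample-size calculation for a large population: the smallest n such that at least one affected animal is observed with the stated confidence. The sketch below assumes a large herd and perfect detection of an affected animal once sampled, which may be a simplification of the audit's actual statistics:

```python
import math

def detection_sample_size(prevalence, confidence=0.95):
    """Smallest n with 1 - (1 - prevalence)**n >= confidence, i.e. the
    sample size needed to see at least one affected animal with the
    given confidence in a large population."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

n = detection_sample_size(0.01)   # 1% prevalence, 95% confidence -> 299
```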

  2. A suggested approach toward measuring sorption and applying sorption data to repository performance assessment

    International Nuclear Information System (INIS)

    Rundberg, R.S.

    1992-01-01

    The prediction of radionuclide migration for the purpose of assessing the safety of a nuclear waste repository will be based on a collective knowledge of hydrologic and geochemical properties of the surrounding rock and groundwater. This knowledge along with assumption about the interactions of radionuclides with groundwater and minerals form the scientific basis for a model capable of accurately predicting the repository's performance. Because the interaction of radionuclides in geochemical systems is known to be complicated, several fundamental and empirical approaches to measuring the interaction between radionuclides and the geologic barrier have been developed. The approaches applied to the measurement of sorption involve the use of pure minerals, intact, or crushed rock in dynamic and static experiments. Each approach has its advantages and disadvantages. There is no single best method for providing sorption data for performance assessment models which can be applied without invoking information derived from multiple experiments. 53 refs., 12 figs

  3. Thirty years of precise gravity measurements at Mt. Vesuvius: an approach to detect underground mass movements

    Directory of Open Access Journals (Sweden)

    Giovanna Berrino

    2013-11-01

    Since 1982, high precision gravity measurements have been routinely carried out on Mt. Vesuvius. The gravity network consists of selected sites, most of them coinciding with, or very close to, leveling benchmarks so that the effect of elevation changes can be removed from the gravity variations. The reference station is located in Napoli, outside the volcanic area. Since 1986, absolute gravity measurements have been made periodically at a station on Mt. Vesuvius, close to a permanent gravity station established in 1987, and at the reference station in Napoli. The results of the gravity measurements since 1982 are presented and discussed. Moderate gravity changes were generally observed over the short term; over the long term, significant gravity changes occurred, the overall fields displayed well-defined patterns, and several periods of evolution may be recognized. The gravity changes revealed by the relative surveys have been confirmed by repeated absolute measurements, which also confirmed the long-term stability of the reference site. The gravity changes over the recognized periods appear correlated with the seismic crises and with changes in the tidal parameters obtained by continuous measurements. The absence of significant ground deformation implies mass redistribution, essentially density changes without significant volume changes, such as fluid migration at the depth of the seismic foci, i.e. at a few kilometers. The fluid migration may occur through pre-existing geological structures, as also suggested by hydrological studies, and/or through new fractures generated by seismic activity. This interpretation is supported by analyses of the spatial gravity changes overlapping the most significant recent seismic crises.

  4. How to measure renal artery stenosis - a retrospective comparison of morphological measurement approaches in relation to hemodynamic significance

    International Nuclear Information System (INIS)

    Andersson, Malin; Jägervall, Karl; Eriksson, Per; Persson, Anders; Granerus, Göran; Wang, Chunliang; Smedby, Örjan

    2015-01-01

    fuzzy connectedness segmentation. Further studies are required to definitively identify the optimal measurement approach

  5. Composite Measures of Health Care Provider Performance: A Description of Approaches

    Science.gov (United States)

    Shwartz, Michael; Restuccia, Joseph D; Rosen, Amy K

    2015-01-01

    Context: Since the Institute of Medicine’s 2001 report Crossing the Quality Chasm, there has been a rapid proliferation of quality measures used in quality-monitoring, provider-profiling, and pay-for-performance (P4P) programs. Although individual performance measures are useful for identifying specific processes and outcomes for improvement and tracking progress, they do not easily provide an accessible overview of performance. Composite measures aggregate individual performance measures into a summary score. By reducing the amount of data that must be processed, they facilitate (1) benchmarking of an organization’s performance, encouraging quality improvement initiatives to match performance against high-performing organizations, and (2) profiling and P4P programs based on an organization’s overall performance. Methods: We describe different approaches to creating composite measures, discuss their advantages and disadvantages, and provide examples of their use. Findings: The major issues in creating composite measures are (1) whether to aggregate measures at the patient level through all-or-none approaches or at the facility level, using one of several possible weighting schemes; (2) when combining measures on different scales, how to rescale measures (using z scores, range percentages, ranks, or 5-star categorizations); and (3) whether to use shrinkage estimators, which increase precision by smoothing rates from smaller facilities but also decrease transparency. Conclusions: Because provider rankings and rewards under P4P programs may be sensitive to both context and the data, careful analysis is warranted before deciding to implement a particular method. A better understanding of both when and where to use composite measures and of the incentives created by composite measures are likely to be important areas of research as the use of composite measures grows. PMID:26626986
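    The facility-level aggregation described, rescaling measures on different scales to z scores and then taking a weighted mean, can be sketched as follows. The equal weighting and the example rates are illustrative assumptions, not data from the article:

```python
import numpy as np

def composite_scores(measures, weights=None):
    """Facility-level composite: rescale each measure to a z score so
    measures on different scales are comparable, then take a weighted
    mean across measures for each facility."""
    X = np.asarray(measures, float)                 # rows: facilities, cols: measures
    z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    if weights is None:
        weights = np.ones(X.shape[1])               # equal weights by default
    w = np.asarray(weights, float)
    return z @ (w / w.sum())

# hypothetical performance rates for 4 facilities on 3 differently scaled measures
scores = composite_scores([[0.92, 4.1, 88],
                           [0.85, 3.6, 91],
                           [0.97, 4.5, 84],
                           [0.90, 3.9, 95]])
```

    Other rescalings mentioned in the abstract (range percentages, ranks, star categories) would simply replace the z-score step.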

  6. Measuring energy rebound effect in the Chinese economy: An economic accounting approach

    International Nuclear Information System (INIS)

    Lin, Boqiang; Du, Kerui

    2015-01-01

    Estimating the magnitude of China's economy-wide rebound effect has attracted much attention in recent years. Most existing studies measure the rebound effect through the additional energy consumption from technological progress. However, in general technological progress is not equivalent to energy efficiency improvement. Consequently, their estimation may be misleading. To overcome the limitation, this paper develops an alternative approach for estimating energy rebound effect. Based on the proposed approach, China's economy-wide energy rebound effect is revisited. The empirical result shows that during the period 1981–2011 the rebound effects in China are between 30% and 40%, with an average value of 34.3%. - Highlights: • This paper develops an alternative approach for estimating energy rebound effect. • The proposed approach is based on the multilevel–hierarchical (M–H) IDA model. • The energy rebound effects in China are estimated between 30% and 40%
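    The economy-wide rebound effect is conventionally defined as the share of expected energy savings from an efficiency improvement that is offset by increased energy use. A one-line sketch of that textbook definition, with hypothetical numbers rather than the paper's decomposition-based estimates:

```python
def rebound_effect(expected_savings, actual_savings):
    """Share of expected energy savings 'taken back' by increased use:
    (expected - actual) / expected."""
    return (expected_savings - actual_savings) / expected_savings

# hypothetical: efficiency gains expected to save 100 PJ, observed saving 66 PJ
re = rebound_effect(100.0, 66.0)   # 0.34, i.e. a 34% rebound
```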

  7. Comparing Laser Interferometry and Atom Interferometry Approaches to Space-Based Gravitational-Wave Measurement

    Science.gov (United States)

    Baker, John; Thorpe, Ira

    2012-01-01

    Thoroughly studied classic space-based gravitational-wave mission concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present a comparative analysis of how each type of instrument responds to some of the noise sources that may limit gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.

  8. Expansion of protein domain repeats.

    Directory of Open Access Journals (Sweden)

    Asa K Björklund

    2006-08-01

    Full Text Available Many proteins, especially in eukaryotes, contain tandem repeats of several domains from the same family. These repeats have a variety of binding properties and are involved in protein-protein interactions as well as binding to other ligands such as DNA and RNA. The rapid expansion of protein domain repeats is assumed to have evolved through internal tandem duplications. However, the exact mechanisms behind these tandem duplications are not well-understood. Here, we have studied the evolution, function, protein structure, gene structure, and phylogenetic distribution of domain repeats. For this purpose we have assigned Pfam-A domain families to 24 proteomes with more sensitive domain assignments in the repeat regions. These assignments confirmed previous findings that eukaryotes, and in particular vertebrates, contain a much higher fraction of proteins with repeats compared with prokaryotes. The internal sequence similarity in each protein revealed that the domain repeats are often expanded through duplications of several domains at a time, while the duplication of one domain is less common. Many of the repeats appear to have been duplicated in the middle of the repeat region. This is in strong contrast to the evolution of other proteins that mainly works through additions of single domains at either terminus. Further, we found that some domain families show distinct duplication patterns, e.g., nebulin domains have mainly been expanded with a unit of seven domains at a time, while duplications of other domain families involve varying numbers of domains. Finally, no common mechanism for the expansion of all repeats could be detected. We found that the duplication patterns show no dependence on the size of the domains. Further, repeat expansion in some families can possibly be explained by shuffling of exons. However, exon shuffling could not have created all repeats.

  9. Distance measurement and wave dispersion in a Liouville-string approach to quantum gravity

    CERN Document Server

    Amelino-Camelia, G; Mavromatos, Nikolaos E; Nanopoulos, Dimitri V

    1997-01-01

    Within a Liouville approach to non-critical string theory, we discuss space-time foam effects on the propagation of low-energy particles. We find an induced frequency-dependent dispersion in the propagation of a wave packet, and observe that this would affect the outcome of measurements involving low-energy particles as probes. In particular, the maximum possible order of magnitude of the space-time foam effects would give rise to an error in the measurement of distance comparable to that independently obtained in some recent heuristic quantum-gravity analyses. We also briefly compare these error estimates with the precision of astrophysical measurements.

  10. Hybrid FRC under repeated loading

    International Nuclear Information System (INIS)

    Komlos, K.; Babal, B.; Nuernbergerova, T.

    1993-01-01

    Fibre reinforced concretes (FRC) containing several volume fractions, in different ratios, of two types of fibres, polypropylene and steel, were tested under repeated loading. Mechanical properties of specimens, cubes 150/150/150 mm (for compressive strength), prisms 100/100/400 mm (for flexural strength), and short cylinders 150/60 mm (for impact strength), were experimentally investigated before and after cyclic loading at the age of 28 days curing time. Mix proportions were designed after DIN 1045 with max. aggregate size 8 mm and grading curve B 8. Portland cement PC 400 in the amount of 450 kg·m⁻³ was applied, with a W/C ratio of 0.55. Workability of the mixes was measured by the Vebe method and regulated with the plasticizing admixture Ligoplast Na. The maximum hybrid fibre volume fraction (polypropylene + steel) was 1.0%. Dynamic forces generated in a Schenck testing machine at a frequency of 16 Hz had a sinusoidal wave form varying between 0.7 and 0.1 of the static mechanical characteristics. The number of cycles in all tests was 10⁵. The residual MOR in a static four-point bending test and the force-deflection working diagram were determined as well. The impact properties after repeated loading in compression were tested by means of a falling-weight test. Relationships between the composition of the fibre composites with different combinations of polypropylene (0.2, 0.3, 0.5% by volume) and steel (0.5, 0.7, and 0.8% by volume) fibre content were obtained, as well as the technological properties of the mixes. (author)

  11. Quality control during repeated fryings

    Directory of Open Access Journals (Sweden)

    Cuesta, C.

    1998-08-01

    Full Text Available Most of the debate is about how slow or frequent turnover of fresh fat affects the deterioration of fat used in frying. The modification of different oils used in repeated fryings of potatoes, with or without turnover of fresh oil, under similar frying conditions, was therefore evaluated by two criteria: by measuring the total polar components isolated by column chromatography, and by evaluating the specific compounds related to thermoxidative and hydrolytic alteration by high-performance size exclusion chromatography (HPSEC). The results indicate that with frequent turnover of fresh oil, the critical level of 25% polar material is rarely reached, and there are fewer problems with fat deterioration, because frying tended to increase the level of polar material and thermoxidative compounds (polymers and dimers of triglycerides and oxidized triglycerides) in the fryer oil during the first fryings, followed by minor changes and a tendency to reach a near-steady state in successive fryings. However, in repeated frying of potatoes using null turnover the alteration rate was higher, with a linear relationship found between polar material, or the different thermoxidative compounds, and the number of fryings. On the other hand, chemical reactions produced during deep-fat frying can be minimized by using proper oils. In addition, the increased level of consumer awareness of fat composition and its impact on human health could have an impact on the selection of fats for snacks and for industry. In this respect monoenoic fats are the most adequate from a nutritional point of view and for their oxidative stability during frying.

  12. Approach to determine measurement uncertainty in complex nanosystems with multiparametric dependencies and multivariate output quantities

    Science.gov (United States)

    Hampel, B.; Liu, B.; Nording, F.; Ostermann, J.; Struszewski, P.; Langfahl-Klabes, J.; Bieler, M.; Bosse, H.; Güttler, B.; Lemmens, P.; Schilling, M.; Tutsch, R.

    2018-03-01

    In many cases, the determination of the measurement uncertainty of complex nanosystems presents unexpected challenges. This is in particular true for complex systems with many degrees of freedom, i.e. nanosystems with multiparametric dependencies and multivariate output quantities. The aim of this paper is to address specific questions arising during the uncertainty calculation of such systems. This includes the division of the measurement system into subsystems and the distinction between systematic and statistical influences. We demonstrate that, even if the physical systems under investigation are very different, the corresponding uncertainty calculation can always be realized in a similar manner. This is shown in detail for two example experiments, namely magnetic nanosensors and ultrafast electro-optical sampling of complex time-domain signals. For these examples the approach for uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) is explained, in which correlations between multivariate output quantities are captured. To illustrate the versatility of the proposed approach, its application to other experiments, namely nanometrological instruments for terahertz microscopy, dimensional scanning probe microscopy, and measurement of the concentration of molecules using surface-enhanced Raman scattering, is briefly discussed in the appendix. We believe that the proposed approach provides a simple but comprehensive orientation for uncertainty calculation in the discussed measurement scenarios and can also be applied to similar or related situations.

  13. Measuring leading placental edge to internal cervical os: Transabdominal versus transvaginal approach

    DEFF Research Database (Denmark)

    Westerway, Susan Campbell; Hyett, Jon; Henning Pedersen, Lars

    2017-01-01

    We aimed to compare the value of transabdominal (TA) and transvaginal (TV) approaches for assessing the risk of a low-lying placenta. This involved a comparison of TA and TV measurements between the leading placental edge and the internal cervical os. We also assessed the intra-/interobserver variation for these measurements and the efficacy of TA measures in screening for a low placenta. Methodology: TA and TV measurements of the leading placental edge to the internal cervical os were performed on 369 consecutive pregnancies of 16–41 weeks' gestation. The difference (TA-TV) from … the area under the receiver operator characteristics (ROC) curve. Intra-/interobserver variations were also calculated. Results: Of the pregnancies, 278 had a leading placental edge that was visible with the TV approach. Differences (TA-TV) ranged from −50 mm to +57 mm. Bland-Altman plot shows that TA …

  14. Measuring the Usability of Augmented Reality e-Learning Systems: A User-Centered Evaluation Approach

    Science.gov (United States)

    Pribeanu, Costin; Balog, Alexandru; Iordache, Dragoş Daniel

    The development of Augmented Reality (AR) systems is creating new challenges and opportunities for the designers of e-learning systems. The mix of real and virtual requires appropriate interaction techniques that have to be evaluated with users in order to avoid usability problems. Formative usability aims at finding usability problems as early as possible in the development life cycle and is suitable to support the development of such novel interactive systems. This work presents an approach to the user-centered usability evaluation of an e-learning scenario for Biology developed on an Augmented Reality educational platform. The evaluation was carried out during and after a summer school held within the ARiSE research project. The basic idea was to perform usability evaluation twice. In this respect, we conducted user testing with a small number of students during the summer school in order to get fast feedback from users having good knowledge of Biology. Then, we repeated the user testing in different conditions and with a relatively larger number of representative users. In this paper we describe both experiments and compare the usability evaluation results.

  15. Measuring IC following a semi-qualitative approach: An integrated framework

    Directory of Open Access Journals (Sweden)

    Chiara Verbano

    2013-09-01

    Full Text Available Purpose: Considering the different IC measures adopted in the literature, the advantages of adopting semi-qualitative measures, and the lack of an agreed system for IC evaluation, the purpose of this paper is to analyse the literature on IC measurement following a semi-qualitative approach, with the final intent to build an IC measurement framework. Design/methodology/approach: A literature review of IC measurement systems following a semi-qualitative approach has been conducted and analysed, in order to re-organize and synthesize all items used in previous research. Findings: An integrated framework emerged from this research; it constitutes an IC measurement system created by gathering and integrating different items previously adopted in the literature. Each of these variables has been organized into categories belonging to one of the three main components of IC: human capital, internal structural capital, and relational capital. Originality/value: This research provides an integrated tool for IC evaluation, fostering progress toward a well-agreed measurement system that is still lacking in the literature. This framework could be interesting not only for the academic world, which over the last two decades has shown increasing attention to IC, but also for the management of companies, which with IC measurement can increase awareness of the firm's value and develop internal auditing systems to support the management of these assets. Moreover, it could be a useful instrument for the communication of IC value to external stakeholders, such as customers, suppliers and especially shareholders, and to investors and financial analysts.

  16. Comparison of step-by-step kinematics in repeated 30m sprints in female soccer players.

    Science.gov (United States)

    van den Tillaar, Roland

    2018-01-04

    The aim of this study was to compare kinematics in repeated 30 m sprints in female soccer players. Seventeen subjects performed seven 30 m sprints every 30 s in one session. Kinematics were measured with an infrared contact mat and laser gun, and running times with an electronic timing device. The main findings were that sprint times increased over the repeated sprint ability test. The main changes in kinematics over the repeated sprint ability test were increased contact time and decreased step frequency, while no change in step length was observed. Within each sprint, the step velocity increased with almost every step until the 14th step, which occurred around 22 m. After this, the velocity was stable until the last step, when it decreased. This increase in step velocity was mainly caused by increased step length and decreased contact times. It was concluded that the fatigue induced by repeated 30 m sprints in female soccer players resulted in decreased step frequency and increased contact time. Employing this approach, in combination with a laser gun and an infrared mat over 30 m, makes it very easy to analyse running kinematics in repeated sprints in training. This extra information gives the athlete, coach and sports scientist the opportunity to give more detailed feedback and to target these changes in kinematics better to enhance repeated sprint performance.
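
    The step-level quantities tracked in this record are linked by simple kinematic identities: step frequency is the reciprocal of step time (contact time plus flight time), and step velocity is step length times step frequency. A minimal sketch with illustrative values, not data from the study:

```python
def step_kinematics(step_length_m: float, contact_s: float, flight_s: float):
    """Return (step frequency in Hz, step velocity in m/s) for one step.

    Step time = contact time + flight time; velocity = length * frequency.
    """
    step_time = contact_s + flight_s
    frequency = 1.0 / step_time
    velocity = step_length_m * frequency
    return frequency, velocity

# A longer contact time at the same step length lowers frequency and velocity,
# matching the fatigue pattern described in the abstract.
print(step_kinematics(1.80, 0.14, 0.11))  # fresh sprint
print(step_kinematics(1.80, 0.16, 0.11))  # fatigued: longer contact time
```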

  17. Resolving the double tension: Toward a new approach to measurement modeling in cross-national research

    Science.gov (United States)

    Medina, Tait Runnfeldt

    The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and

  18. Optimal and safe treatment of spider leg veins measuring less than 1.5 mm on skin type IV patients, using repeated low-fluence Nd:YAG laser pulses after polidocanol injection.

    Science.gov (United States)

    Moreno-Moraga, Javier; Hernández, Esteban; Royo, Josefina; Alcolea, Justo; Isarría, M Jose; Pascu, Mihail Lucian; Smarandache, Adriana; Trelles, Mario

    2013-05-01

    Treatment of micro-veins of less than 1.5 mm with laser and with chemical sclerosis is technically challenging because such vessels are difficult to eliminate. Laser treatment is even more difficult when dark phototypes are involved. Three groups of 30 patients each, of skin type IV and with vessels measuring less than 1.5 mm in diameter, were enrolled for two treatment sessions 8 weeks apart: group A, polidocanol (POL) micro-foam injection; group B, Nd:YAG laser alone; and group C, laser after POL injection. Repeated 8 Hz low-fluence pulses were used, moving the handpiece over a 3 cm vein segment, with an average of five laser passes maximum and a total irradiation time of 1 s. Sixteen weeks after the second treatment, the degree of clearance assessed from photographs and the patient satisfaction index, plotted on a visual analogue scale, were statistically significantly better for group C than in the other two groups. Treatment was safe and satisfactory in 96 % of patients, using low-fluence laser pulses with a total cumulative energy in the 3 cm venous segment lower than that of conventional treatment. Very few, and transient, complications were observed. POL foam injection followed by laser pulses is safe and efficient for vein treatment in dark-skinned patients.

  19. Assessment of the Variation Associated with Repeated Measurement of Gastrointestinal Transit Times and Assessment of the Effect of Oral Ranitidine on Gastrointestinal Transit Times Using a Wireless Motility Capsule System in Dogs

    Directory of Open Access Journals (Sweden)

    Jonathan A. Lidbury

    2012-01-01

    Full Text Available This study aimed to evaluate the variation associated with repeated measurement of gastrointestinal (GI transit times and the effect of oral ranitidine on GI transit times in healthy dogs using a wireless motility capsule (WMC system. Eight privately owned healthy adult dogs were enrolled, and one developed diarrhea and was removed from the study. For the first 3 repetitions, each dog was fed a standard meal followed by oral administration of a WMC. For the 4th repetition, each dog was given ranitidine hydrochloride (75 mg PO every 12 hours prior to and during assessment of GI transit times. Mean between-subject coefficients of variation for gastric emptying time (GET, small and large bowel transit time (SLBTT, and total transit time (TTT were 26.9%, 32.3%, and 19.6%, respectively. Mean within-subject coefficients of variation for GET, SLBTT, and TTT were 9.3%, 19.6%, and 15.9%, respectively. Median GET, SLBTT, and TTT without ranitidine were 719, 1,636, and 2,735 minutes, respectively. Median GET, SLBTT, and TTT with ranitidine were 757, 1,227, and 2,083 minutes, respectively. No significant differences in GI transit times were found between any of the 4 repetitions. Under these experimental conditions, no significant effects of oral ranitidine on GI transit times were observed.
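
    The within- and between-subject coefficients of variation reported above can be computed from a subjects-by-repetitions table of transit times. A minimal sketch with illustrative values, not the study's data:

```python
import statistics

def cv(values):
    """Coefficient of variation: sample standard deviation divided by mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Rows = dogs, columns = repeated gastric emptying times (illustrative, minutes).
get_minutes = [
    [700, 740, 710],
    [650, 690, 670],
    [820, 780, 800],
]

# Within-subject: CV across the repetitions of each dog, then averaged.
within = statistics.mean(cv(dog) for dog in get_minutes)
# Between-subject: CV across dogs at each repetition, then averaged.
between = statistics.mean(cv(col) for col in zip(*get_minutes))
print(f"within-subject CV {within:.1%}, between-subject CV {between:.1%}")
```

    As in the study, the within-subject CV here is smaller than the between-subject CV, i.e. repeated measurements on one dog vary less than measurements across dogs.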

  20. Approach for Self-Calibrating CO2 Measurements with Linear Membrane-Based Gas Sensors

    Directory of Open Access Journals (Sweden)

    Detlef Lazik

    2016-11-01

    Full Text Available Linear membrane-based gas sensors, which can be advantageously applied for the measurement of a single gas component in large heterogeneous systems, e.g., for representative determination of CO2 in the subsurface, can be designed according to the properties of the observation object. A resulting disadvantage is that the permeation-based sensor response depends on the operating conditions, the individual site-adapted sensor geometry, the membrane material, and the target gas component. Therefore, calibration is needed, especially of the slope, which can change over several orders of magnitude. A calibration-free approach based on an internal gas standard is developed to overcome the multi-criterial slope dependency. This results in a normalization of the sensor response and enables the sensor to assess the significance of a measurement. The approach was validated using the example of CO2 analysis in dry air with tubular PDMS membranes for various CO2 concentrations of an internal standard. Negligible temperature dependency was found within an 18 K range. The transformation behavior of the measurement signal and the influence of concentration variations of the internal standard on the measurement signal were shown. Offsets that were adjusted based on the stated theory for the given measurement conditions and material data from the literature were in agreement with the experimentally determined offsets. A measurement comparison with an NDIR reference sensor shows an unexpectedly low bias (<1% of the non-calibrated sensor response) and comparable statistical uncertainty.

  1. Approach for Self-Calibrating CO₂ Measurements with Linear Membrane-Based Gas Sensors.

    Science.gov (United States)

    Lazik, Detlef; Sood, Pramit

    2016-11-17

    Linear membrane-based gas sensors, which can be advantageously applied for the measurement of a single gas component in large heterogeneous systems, e.g., for representative determination of CO₂ in the subsurface, can be designed according to the properties of the observation object. A resulting disadvantage is that the permeation-based sensor response depends on the operating conditions, the individual site-adapted sensor geometry, the membrane material, and the target gas component. Therefore, calibration is needed, especially of the slope, which can change over several orders of magnitude. A calibration-free approach based on an internal gas standard is developed to overcome the multi-criterial slope dependency. This results in a normalization of the sensor response and enables the sensor to assess the significance of a measurement. The approach was validated using the example of CO₂ analysis in dry air with tubular PDMS membranes for various CO₂ concentrations of an internal standard. Negligible temperature dependency was found within an 18 K range. The transformation behavior of the measurement signal and the influence of concentration variations of the internal standard on the measurement signal were shown. Offsets that were adjusted based on the stated theory for the given measurement conditions and material data from the literature were in agreement with the experimentally determined offsets. A measurement comparison with an NDIR reference sensor shows an unexpectedly low bias (<1% of the non-calibrated sensor response) and comparable statistical uncertainty.

  2. A Novel Approach to Measuring Muscle Mechanics in Vehicle Collision Conditions

    Directory of Open Access Journals (Sweden)

    Simon Krašna

    2017-06-01

    Full Text Available The aim of the study was to evaluate a novel approach to measuring neck muscle load and activity in vehicle collision conditions. A series of sled tests was performed on 10 healthy volunteers at three severity levels to simulate low-severity frontal impacts. Electrical activity (electromyography, EMG) and muscle mechanical tension were measured bilaterally on the upper trapezius. A novel mechanical contraction (MC) sensor was used to measure the tension on the muscle surface. The neck extensor loads were estimated based on the inverse dynamics approach. The results showed a strong linear correlation (Pearson's coefficient = 0.821) between the estimated neck muscle load and the muscle tension measured with the MC sensor. The peak of the estimated neck muscle force lagged the peak MC sensor signal by 0.2 ± 30.6 ms on average, compared with an average lag of 61.8 ± 37.4 ms relative to the peak EMG signal. The observed differences between the EMG and MC sensor signals indicate that the MC sensor offers an additional insight into the analysis of neck muscle load and activity in impact conditions. This approach enables a more detailed assessment of the muscle-tendon complex load of a vehicle occupant in pre-impact and impact conditions.
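
    Pearson's linear correlation coefficient, used above to relate estimated neck load to MC sensor tension, can be computed directly from paired samples. A minimal sketch with made-up illustrative pairs, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation: covariance divided by the product of SDs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired samples: estimated neck load (N) vs. MC sensor tension (a.u.).
load = [10, 22, 35, 41, 58]
tension = [1.1, 2.0, 3.4, 4.2, 5.5]
print(round(pearson_r(load, tension), 3))
```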

  3. Repeated nicotine exposure enhances reward-related learning in the rat.

    Science.gov (United States)

    Olausson, Peter; Jentsch, J David; Taylor, Jane R

    2003-07-01

    Repeated exposure to addictive drugs causes neuroadaptive changes in cortico-limbic-striatal circuits that may underlie alterations in incentive-motivational processes and reward-related learning. Such drug-induced alterations may be relevant to drug addiction because enhanced incentive motivation and increased control over behavior by drug-associated stimuli may contribute to aspects of compulsive drug-seeking and drug-taking behaviors. This study investigated the consequences of repeated nicotine treatment on the acquisition and performance of Pavlovian discriminative approach behavior, a measure of reward-related learning, in male rats. Water-restricted rats were trained to associate a compound conditioned stimulus (tone+light) with the availability of water (the unconditioned stimulus) in 15 consecutive daily sessions. In separate experiments, rats were repeatedly treated with nicotine (0.35 mg/kg, s.c.) either (1) prior to the onset of training, (2) after each daily training session was completed (i.e., postsession injections), or (3) both before the onset of training and after each daily training session. In this study, all nicotine treatment schedules increased Pavlovian discriminative approach behavior; thus, prior repeated exposure to nicotine, repeated postsession nicotine injections, or both facilitated reward-related learning.

  4. Film repeats in radiology department

    International Nuclear Information System (INIS)

    Suwan, A. Z.; Al-Shakharah, A. I

    1997-01-01

    During a one-year period, 4910 radiographs of 55780 films were repeated. The objective of our study was to analyse and classify the causes in order to minimize the repeats, cut expenses, and provide optimal radiographs for accurate diagnosis. Analysis of the different factors revealed that 43.6% of film repeats in our service were due to faults in exposure factors, centering comprised 15.9% of the repeats, while too much collimation was responsible for 7.6% of these repeats. All of these can be decreased by awareness and programmed training of technicians. Film blurring caused by patient motion was also responsible for 4.9% of radiograph re-examinations, which can be minimized by detailed explanation to the patient and providing the necessary privacy. Fogging of X-ray films by improper storage, inadequate handling, or processing faults was responsible for 14.5% of repeats in our study. Methods and criteria for proper storage and handling of films are discussed. A recommendation for using modern daylight and laser processors has been highlighted. Artefacts are noticeably high in our cases, due to spinal dresses and the frequent usage of precious metals for cosmetic purposes in this part of the world. The repeated films comprise 8.8% of all films. We conclude that the main factor responsible for repeats, in up to 81.6% of cases, was the technologists, thus emphasizing the importance of adequate training of the technologists. (authors). 15 refs., 9 figs., 1 table

  5. Nifty Nines and Repeating Decimals

    Science.gov (United States)

    Brown, Scott A.

    2016-01-01

    The traditional technique for converting repeating decimals to common fractions can be found in nearly every algebra textbook that has been published, as well as in many precalculus texts. However, students generally encounter repeating decimal numerals earlier than high school when they study rational numbers in prealgebra classes. Therefore, how…
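
    The traditional technique referred to here multiplies by powers of ten so that the repeating block cancels on subtraction: for x = 0.363636…, 100x = 36.3636…, so 99x = 36 and x = 36/99 = 4/11. A minimal sketch of that algorithm:

```python
from fractions import Fraction

def repeating_decimal_to_fraction(non_repeating: str, repeating: str) -> Fraction:
    """Convert 0.<non_repeating><repeating, repeated forever> to a Fraction.

    Example: 0.58333... has non_repeating="58" and repeating="3".
    """
    n, r = len(non_repeating), len(repeating)
    # Multiply by 10**(n+r) and by 10**n, then subtract: the tail cancels.
    numerator = int(non_repeating + repeating) - (int(non_repeating) if non_repeating else 0)
    denominator = 10 ** (n + r) - 10 ** n
    return Fraction(numerator, denominator)

print(repeating_decimal_to_fraction("", "36"))   # 4/11
print(repeating_decimal_to_fraction("58", "3"))  # 7/12
print(repeating_decimal_to_fraction("", "9"))    # 1 -- the "nifty nines": 0.999... = 1
```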

  6. Repeated Prescribed Burning in Aspen

    Science.gov (United States)

    Donald A. Perala

    1974-01-01

    Infrequent burning weather, low flammability of the aspen-hardwood association, and prolific sprouting and seeding of shrubs and hardwoods made repeated dormant-season burning a poor tool for converting good-site aspen to conifers. Repeated fall burns for wildlife habitat maintenance are workable if changes in species composition are not important.

  7. Tevatron serial data repeater system

    International Nuclear Information System (INIS)

    Ducar, R.J.

    1981-01-01

    A ten-megabit-per-second serial data repeater system has been developed for the 6.28 km Tevatron accelerator. The repeaters are positioned at each of the thirty service buildings and accommodate control and abort system communications as well as distribution of the Tevatron time and energy clocks. The repeaters are transparent to the particular protocol of the transmissions. Serial data are encoded locally as unipolar two-volt signals employing the self-clocking Manchester bi-phase code. The repeaters modulate the local signals to low-power bursts of 50 MHz rf carrier for the 260 m transmission between service buildings. The repeaters also demodulate the transmission and restructure the data for local utilization. The employment of frequency discrimination techniques yields high immunity to the characteristic noise spectrum.
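
    Manchester bi-phase coding is self-clocking because every bit cell contains a mid-cell transition, which lets each repeater recover timing from the data stream itself. A minimal sketch using the IEEE 802.3 polarity convention (0 → high-low, 1 → low-high); the record does not state which convention the Tevatron system used:

```python
def manchester_encode(bits):
    """Encode bits as half-bit levels; each bit yields a mid-cell transition."""
    half_bits = []
    for bit in bits:
        half_bits += [0, 1] if bit else [1, 0]  # 1 -> low-high, 0 -> high-low
    return half_bits

def manchester_decode(half_bits):
    """Decode by inspecting the direction of each mid-cell transition."""
    pairs = zip(half_bits[0::2], half_bits[1::2])
    return [1 if pair == (0, 1) else 0 for pair in pairs]

data = [1, 0, 1, 1, 0]
encoded = manchester_encode(data)
assert manchester_decode(encoded) == data  # round-trips losslessly
print(encoded)  # [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```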

  8. Measuring quality of care: considering conceptual approaches to quality indicator development and evaluation.

    Science.gov (United States)

    Stelfox, Henry T; Straus, Sharon E

    2013-12-01

    In this article, we describe one approach for developing and evaluating quality indicators. We focus on describing different conceptual approaches to quality indicator development, review one approach for developing quality indicators, outline how to evaluate quality indicators once developed, and discuss quality indicator maintenance. The key steps for developing quality indicators include specifying a clear goal for the indicators; using methodologies to incorporate evidence, expertise, and patient perspectives; and considering contextual factors and logistics of implementation. The Strategic Framework Board and the National Quality Measure Clearinghouse have developed criteria for evaluating quality indicators that complement traditional psychometric evaluations. Optimal strategies for quality indicator maintenance and dissemination have not been determined, but experiences with clinical guideline maintenance may be informative. For quality indicators to effectively guide quality improvement efforts, they must be developed, evaluated, maintained, and implemented using rigorous evidence-informed practices. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Combining data visualization and statistical approaches for interpreting measurements and meta-data: Integrating heatmaps, variable clustering, and mixed regression models

    Science.gov (United States)

    The advent of new higher throughput analytical instrumentation has put a strain on interpreting and explaining the results from complex studies. Contemporary human, environmental, and biomonitoring data sets are comprised of tens or hundreds of analytes, multiple repeat measures...

  10. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report: Updated in 2016

    Energy Technology Data Exchange (ETDEWEB)

    Sisterson, Douglas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-15

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. ARM currently provides data and supporting metadata (information about the data or data quality) to its users through several sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, ARM relies on Instrument Mentors and the ARM Data Quality Office to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. This report is a continuation of the work presented by Campos and Sisterson (2015) and provides additional uncertainty information from instruments not available in their report. As before, a total measurement uncertainty has been calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). This study will not expand on methods for computing these uncertainties. As before, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available to the ARM community through the ARM Instrument Mentors and their ARM instrument handbooks. This study continues the first steps towards reporting ARM measurement uncertainty as: (1) identifying how the uncertainty of individual ARM measurements is currently expressed, (2) identifying a consistent approach to measurement uncertainty, and then (3) reclassifying ARM instrument measurement uncertainties in a common framework.
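
    The report describes total measurement uncertainty as a function of instrument, field, and retrieval components. A common convention for combining independent uncertainty components, consistent with the GUM, is root-sum-square addition; this is a sketch of that general convention, not necessarily the method used for any particular ARM datastream:

```python
import math

def total_uncertainty(instrument: float, field: float, retrieval: float) -> float:
    """Combine independent uncertainty components in quadrature (root-sum-square).

    Assumes the three components are uncorrelated; correlated components
    would require covariance terms.
    """
    return math.sqrt(instrument ** 2 + field ** 2 + retrieval ** 2)

# Illustrative components (same units as the measurement itself):
print(total_uncertainty(0.3, 0.4, 1.2))  # 1.3
```

    Note that the largest component dominates: here the retrieval term of 1.2 contributes most of the total of 1.3.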

  11. Influence of the measurement location on the resistance index in the umbilical arteries: a hemodynamic approach.

    Science.gov (United States)

    Vieyres, P; Durand, A; Patat, F; Descamps, P; Gregoire, J M; Pourcelot, D; Pourcelot, L

    1991-12-01

    A computer model was used to study the primary factors generating the reduction in resistance index, (S-D)/S, values observed by ultrasonic Doppler measurements in the umbilical artery, from the fetal insertion to the placental insertion (S represents the amplitude of the systolic peak and D the amplitude of the diastolic peak). This hemodynamic approach shows that the placental resistance is the primary factor, the viscosity and the cord length playing secondary roles. Clinically, the position of the measurement along the cord is an important factor. To increase the sensitivity of the index, the Doppler measurement must be performed near the fetal insertion, whereas a measurement near the placental insertion will make the Doppler examination more specific.
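The resistance index defined in the abstract, (S-D)/S, is straightforward to compute from the systolic and diastolic peak amplitudes; a minimal sketch:

```python
def resistance_index(S: float, D: float) -> float:
    """Resistance index (S - D) / S from Doppler waveform peaks.

    S is the amplitude of the systolic peak, D the amplitude of the
    diastolic peak, as defined in the abstract.
    """
    if S <= 0:
        raise ValueError("Systolic peak amplitude must be positive")
    return (S - D) / S

# Hypothetical peak amplitudes (illustration only):
ri = resistance_index(S=40.0, D=10.0)  # → 0.75
```

Note that a lower diastolic amplitude D (higher downstream resistance) drives the index towards 1, consistent with placental resistance being the primary factor.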

  12. Combination of Evidence with Different Weighting Factors: A Novel Probabilistic-Based Dissimilarity Measure Approach

    Directory of Open Access Journals (Sweden)

    Mengmeng Ma

    2015-01-01

Full Text Available To solve the invalidation problem of Dempster-Shafer theory of evidence (DS) under high conflict in multisensor data fusion, this paper presents a novel combination approach for conflicting evidence with different weighting factors, using a new probabilistic dissimilarity measure. Firstly, an improved probabilistic transformation function is proposed to map basic belief assignments (BBAs) to probabilities. Then, a new dissimilarity measure integrating fuzzy nearness and a correlation coefficient is proposed to characterize not only the difference between basic belief assignments (BBAs) but also the divergence degree of the hypotheses that two BBAs support. Finally, the weighting factors used to reassign conflict among BBAs are developed, and Dempster’s rule is chosen to combine the discounted sources. Simple numerical examples are employed to demonstrate the merit of the proposed method. Through analysis and comparison of the results, the new combination approach can effectively solve the problem of conflict management with better convergence performance and robustness.
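The final step combines the discounted sources with Dempster's rule. As a reference point, a minimal sketch of the classical rule for two BBAs (not the paper's weighted variant, whose transformation and dissimilarity measure are not reproduced here) might be:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic belief assignments with Dempster's rule.

    Each BBA is a dict mapping a frozenset of hypotheses to its mass.
    Mass products on empty intersections accumulate as conflict K and
    are redistributed by normalizing with 1 - K.
    """
    combined = {}
    conflict = 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            conflict += a * b
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    k = 1.0 - conflict
    return {A: mass / k for A, mass in combined.items()}

# Two hypothetical sources over the frame {a, b, c}:
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("a"): 0.5, frozenset("abc"): 0.5}
fused = dempster_combine(m1, m2)  # mass concentrates on {a}
```

The invalidation problem the paper addresses arises exactly when the conflict K approaches 1, making the normalization above numerically and semantically unstable.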

  13. Laser patterning: A new approach to measure local magneto-transport properties in multifilamentary superconducting tapes

    International Nuclear Information System (INIS)

    Sanchez Valdes, C.F.; Perez-Penichet, C.; Noda, C.; Arronte, M.; Batista-Leyva, A.J.; Haugen, O.; Johansen, T.H.; Han, Z.; Altshuler, E.

    2007-01-01

The determination of inter- and intra-filament characteristics in superconducting composites such as BSCCO-Ag tapes is of great importance for material evaluation towards applications. Most attempts to separate the two contributions have relied on indirect methods based on magnetic measurements such as SQUID or magneto-optic imaging techniques. Here we show that laser patterning of superconducting BSCCO-Ag tapes constitutes a simple approach to measuring local transport properties in a direct way, and is even able to separate inter- and intra-filament contributions to the overall transport behavior of the sample.

  14. A human rights-consistent approach to multidimensional welfare measurement applied to sub-Saharan Africa

    DEFF Research Database (Denmark)

    Arndt, Channing; Mahrt, Kristi; Hussain, Azhar

    2017-01-01

The rights-based approach to development targets progress towards the realization of 30 articles set forth in the Universal Declaration of Human Rights. Progress is frequently measured using the multidimensional poverty index. While elegant and useful, the multidimensional poverty index is in reality inconsistent with the Universal Declaration of Human Rights principles of indivisibility, inalienability, and equality. We show that a first-order dominance methodology maintains consistency with basic principles, discuss the properties of the multidimensional poverty index and first…

  15. Factor-Analytic and Individualized Approaches to Constructing Brief Measures of ADHD Behaviors

    Science.gov (United States)

    Volpe, Robert J.; Gadow, Kenneth D.; Blom-Hoffman, Jessica; Feinberg, Adam B.

    2009-01-01

    Two studies were performed to examine a factor-analytic and an individualized approach to creating short progress-monitoring measures from the longer "ADHD-Symptom Checklist-4" (ADHD-SC4). In Study 1, teacher ratings on items of the ADHD:Inattentive (IA) and ADHD:Hyperactive-Impulsive (HI) scales of the ADHD-SC4 were factor analyzed in a normative…

  16. A quantitative approach to measure road network information based on edge diversity

    Science.gov (United States)

    Wu, Xun; Zhang, Hong; Lan, Tian; Cao, Weiwei; He, Jing

    2015-12-01

The measurement of map information has been one of the key issues in assessing cartographic quality and map generalization algorithms. It is also important for developing efficient approaches to transferring geospatial information. The road network is the most common linear object in the real world, and approximately describing road network information will benefit road map generalization, navigation map production, and urban planning. Most current approaches have focused on node diversity and assumed that all edges are the same, which is inconsistent with real-life conditions and thus limits their ability to measure network information. Because real-life traffic flows are directed and of different magnitudes, the original undirected vector road map was first converted to a directed topographic connectivity map. Then, in consideration of preferential attachment in complex-network studies and the rich-club phenomenon in social networks, from and to weights were assigned to each edge. The from weight of a given edge is defined as the ratio of the connectivity of its end node to the sum of the connectivities of all the neighbours of the edge's from node. After obtaining the from and to weights of each edge, the edge information, node information, and whole-network structure information entropies can be computed based on information theory. The approach has been applied to several 1-square-mile road network samples. Results show that information entropies based on edge diversity successfully describe the structural differences among road networks. This approach complements current map information measurements and can be extended to measure other kinds of geographical objects.
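Once each edge carries a weight, the entropy computation the abstract refers to reduces to Shannon's formula over the normalized weights. A minimal sketch, assuming the weights are simply normalized into a probability distribution (the paper's exact weighting scheme is not reproduced here):

```python
import math

def shannon_entropy(weights: list) -> float:
    """Shannon information entropy (in bits) of a list of positive weights.

    The weights are normalized to a probability distribution first, so
    the result depends only on their relative diversity.
    """
    total = sum(weights)
    probs = (w / total for w in weights)
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical from-weights for the edges of a small directed road network:
edge_weights = [0.4, 0.3, 0.2, 0.1]
H_edges = shannon_entropy(edge_weights)
```

A network whose edges are all alike maximizes this entropy for a given edge count, while strong edge diversity (a few dominant arterials, many minor streets) lowers it; this is the structural difference the measure captures.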

  17. Safety assessment of inter-channel / inter-system digital communications: A defensive measures approach

    International Nuclear Information System (INIS)

    Thuy, N. N. Q.

    2006-01-01

Inappropriately designed inter-channel and inter-system digital communications could initiate common-cause failure of multiple channels or multiple systems. Defensive measures were introduced in EPRI report TR-1002835 (Guideline for Performing Defense-in-Depth and Diversity Assessments for Digital Upgrades) to assess, on a deterministic basis, the susceptibility of digital system architectures to common-cause failures. This paper suggests how this approach could be applied to assess inter-channel and inter-system digital communications from a safety standpoint. The first step of the approach is to systematically identify the so-called 'influence factors' that one end of the data communication path can have on the other. Potential factors to be considered would typically include data values, data volumes, and data rates. The second step of the approach is to characterize the ways possible failures of a given end of the communication path could affect these influence factors (e.g., incorrect data values, excessive data rates, time-outs, incorrect data volumes). The third step is to analyze the designed-in measures taken to guarantee independence of the other end. In addition to classical error detection and correction codes, typical defensive measures are one-way data communication, fixed-rate data communication, fixed-volume data communication, and validation of data values. (authors)
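The defensive measures listed in the third step can be pictured as receive-side checks. A minimal sketch, with entirely hypothetical limits (the EPRI guideline prescribes the assessment method, not these particular parameters):

```python
def validate_message(value: float, msg_len: int, interval_s: float,
                     value_range=(0.0, 100.0),
                     expected_len=16,
                     min_interval_s=0.1) -> list:
    """Receive-side defensive checks on an inter-channel message.

    Checks the three influence factors named in the abstract:
    data value (range check), data volume (fixed message length),
    and data rate (minimum inter-arrival interval). Returns a list
    of violations; an empty list means the message is accepted.
    """
    errors = []
    lo, hi = value_range
    if not (lo <= value <= hi):
        errors.append("value out of range")
    if msg_len != expected_len:
        errors.append("unexpected message volume")
    if interval_s < min_interval_s:
        errors.append("data rate too high")
    return errors

# A well-formed message passes; an out-of-range value is flagged:
ok = validate_message(value=50.0, msg_len=16, interval_s=0.2)
bad = validate_message(value=250.0, msg_len=16, interval_s=0.2)
```

The point of such checks is that a failed sender can only influence the receiver through these vetted factors, bounding the propagation of a common-cause failure.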

  18. Demonstration of the test-retest reliability and sensitivity of the Lower Limb Functional Index-10 as a measure of functional recovery post burn injury: a cross-sectional repeated measures study design.

    Science.gov (United States)

    Ryland, Margaret E; Grisbrook, Tiffany L; Wood, Fiona M; Phillips, Michael; Edgar, Dale W

    2016-01-01

Lower limb burns can significantly delay recovery of function. Measuring lower limb functional outcomes is challenging in the unique burn patient population and necessitates the use of reliable and valid tools. The aims of this study were to examine the test-retest reliability, sensitivity, and internal consistency of Sections 1 and 3 of the Lower Limb Functional Index-10 (LLFI-10) questionnaire for measuring functional ability in patients with lower limb burns over time. Twenty-nine adult patients who had sustained a lower limb burn injury in the previous 12 months completed the test-retest procedure of the study. In addition, the minimal detectable change (MDC) was calculated for Sections 1 and 3 of the LLFI-10. Section 1 is focused on the activity limitations experienced by patients with a lower limb disorder, whereas Section 3 involves patients indicating their current percentage of pre-injury duties. Section 1 of the LLFI-10 demonstrated excellent test-retest reliability (intra-class correlation coefficient (ICC) 0.98, 95 % CI 0.96-0.99), whilst Section 3 demonstrated high test-retest reliability (ICC 0.88, 95 % CI 0.79-0.94). MDC scores for Sections 1 and 3 were 1.27 points and 30.22 %, respectively. Internal consistency was demonstrated by a significant negative association (rs = -0.83, p …) between Sections 1 and 3 of the LLFI-10. The LLFI-10 is reliable for measuring functional ability in patients who have sustained lower limb burns in the previous 12 months, and, furthermore, Section 1 is sensitive to changes in patient function over time.
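The MDC values reported above are conventionally derived from the test-retest ICC via the standard error of measurement. A minimal sketch of that standard derivation (the study's sample standard deviations are not given in the abstract, so the inputs below are hypothetical):

```python
import math

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement from the sample SD and test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd: float, icc: float) -> float:
    """Minimal detectable change at the 95% confidence level.

    MDC95 = 1.96 * sqrt(2) * SEM, where sqrt(2) accounts for the
    measurement error of two repeated administrations.
    """
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Hypothetical SD of 4.6 points with the reported Section 1 ICC of 0.98:
change_threshold = mdc95(sd=4.6, icc=0.98)
```

A high ICC shrinks the SEM and hence the MDC, which is why Section 1 (ICC 0.98) yields a much tighter change threshold than Section 3 (ICC 0.88).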

  19. Simultaneous overlay and CD measurement for double patterning: scatterometry and RCWA approach

    Science.gov (United States)

    Li, Jie; Liu, Zhuan; Rabello, Silvio; Dasari, Prasad; Kritsun, Oleg; Volkman, Catherine; Park, Jungchul; Singh, Lovejeet

    2009-03-01

As optical lithography advances to the 32 nm technology node and beyond, double patterning technology (DPT) has emerged as an attractive solution to circumvent the fundamental optical limitations. DPT poses unique demands on critical dimension (CD) uniformity and overlay control, making the tolerance decrease much faster than the rate at which the critical dimension shrinks. This, in turn, makes metrology even more challenging. In the past, multi-pad diffraction-based overlay (DBO) using an empirical approach has been shown to be an effective approach to measuring the overlay error associated with double patterning [1]. In this method, registration errors for double patterning were extracted from specially designed dif