WorldWideScience

Sample records for valid strain estimates

  1. Left ventricular strain and its pattern estimated from cine CMR and validation with DENSE

    International Nuclear Information System (INIS)

    Gao, Hao; Luo, Xiaoyu; Allan, Andrew; McComb, Christie; Berry, Colin

    2014-01-01

    Measurement of local strain provides insight into the biomechanical significance of viable myocardium. We attempted to estimate myocardial strain from cine cardiovascular magnetic resonance (CMR) images by using a b-spline deformable image registration method. Three healthy volunteers and 41 patients with either recent or chronic myocardial infarction (MI) were studied at 1.5 Tesla with both cine and DENSE CMR. Regional circumferential and radial left ventricular strains were estimated from cine and DENSE acquisitions. In all healthy volunteers, there was no difference for peak circumferential strain (− 0.18 ± 0.04 versus − 0.18 ± 0.03, p = 0.76) between cine and DENSE CMR, however peak radial strain was overestimated from cine (0.84 ± 0.37 versus 0.49 ± 0.2, p < 0.01). In the patient study, the peak strain patterns predicted by cine were similar to the patterns from DENSE, including the strain evolution related to recovery time and strain patterns related to MI scar extent. Furthermore, cine-derived strain disclosed different strain patterns in MI and non-MI regions, and regions with transmural and non-transmural MI as DENSE. Although there were large variations with radial strain measurements from cine CMR images, useful circumferential strain information can be obtained from routine clinical CMR imaging. Cine strain analysis has potential to improve the diagnostic yield from routine CMR imaging in clinical practice. (paper)

  2. Left ventricular strain and its pattern estimated from cine CMR and validation with DENSE.

    Science.gov (United States)

    Gao, Hao; Allan, Andrew; McComb, Christie; Luo, Xiaoyu; Berry, Colin

    2014-07-07

Measurement of local strain provides insight into the biomechanical significance of viable myocardium. We attempted to estimate myocardial strain from cine cardiovascular magnetic resonance (CMR) images by using a b-spline deformable image registration method. Three healthy volunteers and 41 patients with either recent or chronic myocardial infarction (MI) were studied at 1.5 Tesla with both cine and DENSE CMR. Regional circumferential and radial left ventricular strains were estimated from cine and DENSE acquisitions. In all healthy volunteers, there was no difference for peak circumferential strain (- 0.18 ± 0.04 versus - 0.18 ± 0.03, p = 0.76) between cine and DENSE CMR, however peak radial strain was overestimated from cine (0.84 ± 0.37 versus 0.49 ± 0.2, p < 0.01). In the patient study, the peak strain patterns predicted by cine were similar to the patterns from DENSE, including the strain evolution related to recovery time and strain patterns related to MI scar extent. Furthermore, cine-derived strain disclosed different strain patterns in MI and non-MI regions, and regions with transmural and non-transmural MI as DENSE. Although there were large variations with radial strain measurements from cine CMR images, useful circumferential strain information can be obtained from routine clinical CMR imaging. Cine strain analysis has potential to improve the diagnostic yield from routine CMR imaging in clinical practice.

  3. Intramyocardial strain estimation from cardiac cine MRI.

    Science.gov (United States)

    Elnakib, Ahmed; Beache, Garth M; Gimel'farb, Georgy; El-Baz, Ayman

    2015-08-01

    Functional strain is one of the important clinical indicators for the quantification of heart performance and the early detection of cardiovascular diseases, and functional strain parameters are used to aid therapeutic decisions and follow-up evaluations after cardiac surgery. A comprehensive framework for deriving functional strain parameters at the endocardium, epicardium, and mid-wall of the left ventricle (LV) from conventional cine MRI data was developed and tested. Cine data were collected using short TR-/TE-balanced steady-state free precession acquisitions on a 1.5T Siemens Espree scanner. The LV wall borders are segmented using a level set-based deformable model guided by a stochastic force derived from a second-order Markov-Gibbs random field model that accounts for the object shape and appearance features. Then, the mid-wall of the segmented LV is determined based on estimating the centerline between the endocardium and epicardium of the LV. Finally, a geometrical Laplace-based method is proposed to track corresponding points on successive myocardial contours throughout the cardiac cycle in order to characterize the strain evolutions. The method was tested using simulated phantom images with predefined point locations of the LV wall throughout the cardiac cycle. The method was tested on 30 in vivo datasets to evaluate the feasibility of the proposed framework to index functional strain parameters. The cine MRI-based model agreed with the ground truth for functional metrics to within 0.30 % for indexing the peak systolic strain change and 0.29 % (per unit time) for indexing systolic and diastolic strain rates. The method was feasible for in vivo extraction of functional strain parameters. Strain indexes of the endocardium, mid-wall, and epicardium can be derived from routine cine images using automated techniques, thereby improving the utility of cine MRI data for characterization of myocardial function. Unlike traditional texture-based tracking, the

  4. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

Full Text Available Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about instruments. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits and the set of items that represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of 13 that documented content validity did so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item based on relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of 38 items, those with CVI over 0.75 remained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
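
A quick illustration of the item-level CVI computation described above (a hedged sketch, not the author's scoring sheet): each expert rates an item on the 4-point scale, the item CVI is the proportion of experts rating it 3 or 4, and items with CVI above 0.75 are retained. The ratings below are invented for demonstration.

```python
# Hypothetical illustration of an item-level Content Validity Index (CVI)
# calculation consistent with the procedure described in the abstract:
# each expert rates each item on a 4-point scale, the CVI of an item is the
# proportion of experts giving a rating of 3 or 4, and items with CVI > 0.75
# are retained. Ratings below are invented for demonstration only.

def item_cvi(ratings):
    """Proportion of experts rating the item 3 or 4 on the 4-point scale."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# rows = items, columns = experts (3 experts, as in the reviewed article)
ratings_by_item = {
    "item_01": [4, 4, 3],
    "item_02": [2, 3, 2],
    "item_03": [4, 3, 4],
}

retained = {name: item_cvi(r) for name, r in ratings_by_item.items() if item_cvi(r) > 0.75}
print(retained)  # items kept for the final scale
```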

  5. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  6. Tissue strain rate estimator using ultrafast IQ complex data

    OpenAIRE

    TERNIFI , Redouane; Elkateb Hachemi , Melouka; Remenieras , Jean-Pierre

    2012-01-01

    International audience; Pulsatile motion of brain parenchyma results from cardiac and breathing cycles. In this study, transient motion of brain tissue was estimated using an Aixplorer® imaging system allowing an ultrafast 2D acquisition mode. The strain was computed directly from the ultrafast IQ complex data using the extended autocorrelation strain estimator (EASE), which provides great SNRs regardless of depth. The EASE first evaluates the autocorrelation function at each depth over a set...
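
The EASE itself is not described in this truncated excerpt. As a rough illustration of the general idea of phase-based strain-rate estimation from IQ data, the sketch below uses a generic lag-one (Kasai-type) autocorrelation estimator followed by an axial gradient; the sampling parameters are placeholders, not values from the study.

```python
import numpy as np

# Minimal sketch (not the authors' EASE): lag-one autocorrelation phase
# estimation of axial velocity from beamformed IQ data, followed by an
# axial gradient to obtain a strain-rate-like quantity. iq has shape
# (depth_samples, slow_time_frames); c0, f0 and prf are placeholders.
def strain_rate_from_iq(iq, c0=1540.0, f0=5e6, prf=2000.0):
    # lag-one autocorrelation along slow time, at each depth
    r1 = np.sum(iq[:, 1:] * np.conj(iq[:, :-1]), axis=1)
    v = (c0 * prf / (4.0 * np.pi * f0)) * np.angle(r1)   # axial velocity (m/s)
    dz = c0 / (2.0 * 4 * f0)                              # placeholder depth step (m)
    return np.gradient(v, dz)                             # d(velocity)/d(depth), ~ strain rate

# usage with random data as a stand-in for real IQ acquisitions
iq = np.random.randn(256, 64) + 1j * np.random.randn(256, 64)
sr = strain_rate_from_iq(iq)
```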

  7. Factors affecting finite strain estimation in low-grade, low-strain clastic rocks

    Science.gov (United States)

    Pastor-Galán, Daniel; Gutiérrez-Alonso, Gabriel; Meere, Patrick A.; Mulchrone, Kieran F.

    2009-12-01

The computer strain analysis methods SAPE, MRL and DTNNM have permitted the characterization of finite strain in two different regions with contrasting geodynamic scenarios: (1) the Talas Ala Tau (Tien Shan, Kyrgyz Republic) and (2) the Somiedo Nappe and Narcea Antiform (Cantabrian to West Asturian-Leonese Zone boundary, Variscan Belt, NW Iberia). The performed analyses have revealed low strain values and the regional strain trend in both studied areas. This study also investigates the relationship between lithology (grain size and percentage of matrix) and the strain estimates obtained with the two methodologies used. The results show that these methods are comparable and that there is no significant lithological control on finite strain in rocks deformed under low-grade metamorphic and low-strain conditions.

  8. Strain gauge validation experiments for the Sandia 34-meter VAWT (Vertical Axis Wind Turbine) test bed

    Science.gov (United States)

    Sutherland, Herbert J.

    1988-08-01

Sandia National Laboratories has erected a research-oriented, 34-meter diameter, Darrieus vertical axis wind turbine near Bushland, Texas. This machine, designated the Sandia 34-m VAWT Test Bed, is equipped with a large array of strain gauges that have been placed at critical positions about the blades. This manuscript details a series of four-point bend experiments that were conducted to validate the output of the blade strain gauge circuits. The output of a particular gauge circuit is validated by comparing its output to equivalent gauge circuits (in this stress state) and to theoretical predictions. With only a few exceptions, the difference between measured and predicted strain values for a gauge circuit was found to be of the order of the estimated repeatability for the measurement system.
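
The theoretical predictions mentioned above come from beam theory; for reference (a textbook relation, not the report's own derivation), the surface strain between the load points of a four-point bend is:

```latex
% Generic four-point-bend surface strain (textbook beam theory, not from the report):
% two loads F applied at distance a from the supports give a constant moment M = F a
% between the load points, so the strain at distance c from the neutral axis is
\varepsilon \;=\; \frac{M\,c}{E\,I} \;=\; \frac{F\,a\,c}{E\,I},
% with E the elastic modulus and I the second moment of area of the blade section.
```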

  9. Estimation of piping temperature fluctuations based on external strain measurements

    International Nuclear Information System (INIS)

    Morilhat, P.; Maye, J.P.

    1993-01-01

Due to the difficulty of carrying out measurements at the inner sides of nuclear reactor piping subjected to thermal transients, temperature and stress variations in the pipe walls are estimated by means of external thermocouples and strain-gauges. This inverse problem is solved by spectral analysis. Since the wall harmonic transfer function (response to a harmonic load) is known, the inner-side signal is obtained by convolution of the inverse transfer function of the system with the external measurement. The use of strain measurement enables detection of internal temperature fluctuations in a frequency range beyond the scope of the thermocouples. (authors). 5 figs., 3 refs
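
As a generic illustration of this kind of inverse estimation (a sketch under assumed names and a made-up wall response, not the authors' implementation), the inner-wall signal can be recovered by regularized frequency-domain deconvolution of the external measurement with the wall transfer function:

```python
import numpy as np

# Generic frequency-domain deconvolution sketch (not the authors' code):
# given an outer-wall measurement y(t) and the wall transfer function H(f)
# relating inner-wall fluctuations to the outer-wall response, estimate the
# inner-wall signal by dividing spectra, with a small regularization term
# to keep high-frequency noise from being amplified.
def estimate_inner_signal(y, H, eps=1e-3):
    Y = np.fft.rfft(y)
    Hr = np.asarray(H, dtype=complex)          # H sampled on the same frequency grid
    X = Y * np.conj(Hr) / (np.abs(Hr) ** 2 + eps)
    return np.fft.irfft(X, n=len(y))

# usage with a made-up first-order (low-pass) wall response
n = 1024
y = np.random.randn(n)                          # stand-in for an external strain record
f = np.fft.rfftfreq(n, d=0.1)                   # 10 Hz sampling assumed for illustration
H = 1.0 / (1.0 + 1j * f / 0.05)                 # placeholder wall attenuation
x_hat = estimate_inner_signal(y, H)
```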

  10. How Valid are Estimates of Occupational Illness?

    Science.gov (United States)

    Hilaski, Harvey J.; Wang, Chao Ling

    1982-01-01

    Examines some of the methods of estimating occupational diseases and suggests that a consensus on the adequacy and reliability of estimates by the Bureau of Labor Statistics and others is not likely. (SK)

  11. Experimental validation of pulsed column inventory estimators

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.; Eiben, K.; Dander, T.; Hakkila, E.A.

    1991-01-01

Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs

  12. Validation of Core Temperature Estimation Algorithm

    Science.gov (United States)

    2016-01-20

based on an extended Kalman filter, which was developed using field data from 17 young male U.S. Army soldiers with core temperatures ranging from... The record also reproduces fragments of the accompanying MATLAB estimator:

    ...CTstart, v) %KFMODEL estimate core temperature from heart rate with Kalman filter
    % This version supports both batch mode (operate on entire HR time...
    CTstart = 37.1; % degrees Celsius
    end
    if nargin < 3
        v = 0;
    end
    %Extended Kalman Filter Parameters
    a = 1; gamma = 0.022^2; b_0 = -7887.1; b_1
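
A hedged sketch of this type of estimator is given below: a Kalman filter whose state is core temperature (CT), driven by heart-rate (HR) observations through a quadratic observation model. The values CTstart = 37.1, a = 1, gamma = 0.022^2 and b_0 = -7887.1 appear in the fragment above; the remaining coefficients (b_1, b_2) and the observation variance are this sketch's assumptions, not values taken from the record.

```python
import numpy as np

# Hedged sketch of a Kalman-filter core-temperature (CT) estimator driven by
# heart rate (HR), in the spirit of the KFMODEL fragment above. The state model
# is a random walk (a = 1, process variance gamma), and HR is treated as a
# quadratic observation of CT. ct_start, a, gamma and b_0 appear in the record
# fragment; b_1, b_2 and obs_var are placeholder assumptions for illustration.
def kf_core_temp(hr, ct_start=37.1, a=1.0, gamma=0.022**2,
                 b_0=-7887.1, b_1=384.4286, b_2=-4.5714, obs_var=18.88**2):
    ct, v = ct_start, 0.0                      # state estimate and its variance
    out = []
    for z in hr:
        # time update (random-walk state model)
        ct_pred, v_pred = a * ct, a * v * a + gamma
        # measurement update with linearized quadratic observation model
        h = 2.0 * b_2 * ct_pred + b_1          # d(HR)/d(CT) at the prediction
        hr_pred = b_2 * ct_pred**2 + b_1 * ct_pred + b_0
        k = v_pred * h / (h * v_pred * h + obs_var)
        ct = ct_pred + k * (z - hr_pred)
        v = (1.0 - k * h) * v_pred
        out.append(ct)
    return np.array(out)

# usage with a synthetic heart-rate trace
hr_series = 110 + 10 * np.sin(np.linspace(0, 3, 120))
ct_series = kf_core_temp(hr_series)
```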

  13. Stochastic precision analysis of 2D cardiac strain estimation in vivo

    International Nuclear Information System (INIS)

    Bunting, E A; Provost, J; Konofagou, E E

    2014-01-01

    Ultrasonic strain imaging has been applied to echocardiography and carries great potential to be used as a tool in the clinical setting. Two-dimensional (2D) strain estimation may be useful when studying the heart due to the complex, 3D deformation of the cardiac tissue. Increasing the framerate used for motion estimation, i.e. motion estimation rate (MER), has been shown to improve the precision of the strain estimation, although maintaining the spatial resolution necessary to view the entire heart structure in a single heartbeat remains challenging at high MERs. Two previously developed methods, the temporally unequispaced acquisition sequence (TUAS) and the diverging beam sequence (DBS), have been used in the past to successfully estimate in vivo axial strain at high MERs without compromising spatial resolution. In this study, a stochastic assessment of 2D strain estimation precision is performed in vivo for both sequences at varying MERs (65, 272, 544, 815 Hz for TUAS; 250, 500, 1000, 2000 Hz for DBS). 2D incremental strains were estimated during left ventricular contraction in five healthy volunteers using a normalized cross-correlation function and a least-squares strain estimator. Both sequences were shown capable of estimating 2D incremental strains in vivo. The conditional expected value of the elastographic signal-to-noise ratio (E(SNRe|ε)) was used to compare strain estimation precision of both sequences at multiple MERs over a wide range of clinical strain values. The results here indicate that axial strain estimation precision is much more dependent on MER than lateral strain estimation, while lateral estimation is more affected by strain magnitude. MER should be increased at least above 544 Hz to avoid suboptimal axial strain estimation. Radial and circumferential strain estimations were influenced by the axial and lateral strain in different ways. Furthermore, the TUAS and DBS were found to be of comparable precision at similar MERs. (paper)
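
The abstract pairs a normalized cross-correlation displacement estimator with a least-squares strain estimator. As a generic illustration (not the authors' implementation), a least-squares strain estimator fits a straight line to the displacement over a sliding axial kernel and takes the slope as the local strain:

```python
import numpy as np

# Generic least-squares strain estimator (illustrative, not the authors' code):
# displacement u is sampled along depth z; local strain at each depth is the
# slope of a straight line fitted to u over a sliding kernel of k samples.
def ls_strain(u, dz, k=15):
    half = k // 2
    z = np.arange(k) * dz
    strain = np.zeros_like(u)
    for i in range(half, len(u) - half):
        seg = u[i - half:i + half + 1]
        strain[i] = np.polyfit(z, seg, 1)[0]   # slope of the least-squares fit
    return strain

# usage: a linearly increasing displacement field corresponds to uniform strain
dz = 0.05e-3                                   # 50 micrometre sample spacing (assumed)
u = 0.01 * np.arange(400) * dz                 # 1% uniform strain, noise-free
eps = ls_strain(u, dz)                         # ~0.01 away from the edges
```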

  14. Validating estimates of problematic drug use in England

    Directory of Open Access Journals (Sweden)

    Heatlie Heath

    2007-10-01

Full Text Available Abstract Background UK Government expenditure on combatting drug abuse is based on estimates of illicit drug users, yet the validity of these estimates is unknown. This study aims to assess the face validity of problematic drug use (PDU) and injecting drug use (IDU) estimates for all English Drug Action Teams (DATs) in 2001. The estimates were derived from a statistical model using the Multiple Indicator Method (MIM). Methods Questionnaire study, in which the 149 English Drug Action Teams were asked to evaluate the MIM estimates for their DAT. Results The response rate was 60% and there were no indications of selection bias. Of responding DATs, 64% thought the PDU estimates were about right or did not dispute them, while 27% had estimates that were too low and 9% too high. The figures for the IDU estimates were 52% (about right), 44% (too low) and 3% (too high). Conclusion This is the first UK study to determine the validity of estimates of problematic and injecting drug misuse. The results of this paper highlight the need to consider criterion and face validity when evaluating estimates of the number of drug users.

  15. Quasi real-time estimation of the moment magnitude of large earthquake from static strain changes

    Science.gov (United States)

    Itaba, S.

    2016-12-01

The 2011 Tohoku-Oki (off the Pacific coast of Tohoku) earthquake, of moment magnitude 9.0, was accompanied by large static strain changes (of the order of 10^-7), as measured by borehole strainmeters operated by the Geological Survey of Japan in the Tokai, Kii Peninsula, and Shikoku regions. A fault model for the earthquake on the boundary between the Pacific and North American plates, based on these borehole strainmeter data, yielded a moment magnitude of 8.7. On the other hand, the prompt magnitude that the Japan Meteorological Agency (JMA) announced just after the earthquake, based on seismic waves, was 7.9. Such geodetic moment magnitudes, derived from static strain changes, can be estimated almost as rapidly as determinations using seismic waves. The validity of this method was also examined for other events. For this earthquake's largest aftershock, which occurred 29 minutes after the mainshock, the prompt report issued by JMA assigned a magnitude of 7.3, whereas the moment magnitude derived from borehole strain data is 7.6, which is much closer to the actual moment magnitude of 7.7. Several methods are now being proposed to determine the magnitude of a great earthquake earlier and thereby reduce earthquake disasters, including tsunami. Our simple method using static strain changes is a strong candidate for rapid estimation of the magnitude of large earthquakes, and is useful for improving the accuracy of Earthquake Early Warning.
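
For reference, the moment magnitude quoted here follows the standard seismological convention relating it to the seismic moment M0 (in newton metres) estimated from the fault model; this is background, not a formula given in the record:

```latex
% Standard moment-magnitude definition (IASPEI convention); M_0 in N m.
M_w = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right)
```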

  16. Clinical feasibility and validation of 3D principal strain analysis from cine MRI: comparison to 2D strain by MRI and 3D speckle tracking echocardiography.

    Science.gov (United States)

    Satriano, Alessandro; Heydari, Bobak; Narous, Mariam; Exner, Derek V; Mikami, Yoko; Attwood, Monica M; Tyberg, John V; Lydell, Carmen P; Howarth, Andrew G; Fine, Nowell M; White, James A

    2017-12-01

Two-dimensional (2D) strain analysis is constrained by geometry-dependent reference directions of deformation (i.e. radial, circumferential, and longitudinal) following the assumption of cylindrical chamber architecture. Three-dimensional (3D) principal strain analysis may overcome such limitations by referencing intrinsic (i.e. principal) directions of deformation. This study aimed to demonstrate clinical feasibility of 3D principal strain analysis from routine 2D cine MRI with validation to strain from 2D tagged cine analysis and 3D speckle tracking echocardiography. Thirty-one patients undergoing cardiac MRI were studied. 3D strain was measured from routine, multi-planar 2D cine SSFP images using custom software designed to apply 4D deformation fields to 3D cardiac models to derive principal strain. Comparisons of strain estimates versus those by 2D tagged cine, 2D non-tagged cine (feature tracking), and 3D speckle tracking echocardiography (STE) were performed. Mean age was 51 ± 14 (36% female). Mean LV ejection fraction was 66 ± 10% (range 37-80%). 3D principal strain analysis was feasible in all subjects and showed high inter- and intra-observer reproducibility (ICC range 0.83-0.97 and 0.83-0.98, respectively). 3D principal strain analysis is feasible using routine, multi-planar 2D cine MRI and shows high reproducibility, with strong correlations to 2D conventional strain analysis and 3D STE-based analysis. Given its independence from geometry-related directions of deformation this technique may offer unique benefit for the detection and prognostication of myocardial disease, and warrants expanded investigation.
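
Principal strains are the eigenvalues of the strain tensor, evaluated along directions intrinsic to the deformation rather than the radial, circumferential and longitudinal axes. A minimal sketch (illustrative only, not the authors' 4D deformation-field pipeline) for a single material point with deformation gradient F:

```python
import numpy as np

# Minimal sketch of principal strain extraction (illustrative only): given a
# local deformation gradient F, form the Green-Lagrange strain tensor and take
# its eigenvalues as the principal strains; eigenvectors give the directions.
def principal_strains(F):
    E = 0.5 * (F.T @ F - np.eye(3))            # Green-Lagrange strain tensor
    vals, vecs = np.linalg.eigh(E)             # eigenvalues in ascending order
    return vals[::-1], vecs[:, ::-1]           # principal strains, largest first

# usage: 10% stretch along x with 5% contraction along y and z
F = np.diag([1.10, 0.95, 0.95])
strains, directions = principal_strains(F)     # ~[0.105, -0.049, -0.049]
```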

  17. Validation of perceptual strain index to evaluate the thermal strain in experimental hot conditions

    Directory of Open Access Journals (Sweden)

    Habibollah Dehghan

    2015-01-01

Conclusions: The research findings showed that, when no other methods are available to evaluate heat stress, the perceptual strain index (PeSI) can be used to evaluate strain because of its favorable correlation with thermal strain.

  18. Estimating activity energy expenditure: how valid are physical activity questionnaires?

    Science.gov (United States)

    Neilson, Heather K; Robson, Paula J; Friedenreich, Christine M; Csizmadi, Ilona

    2008-02-01

    Activity energy expenditure (AEE) is the modifiable component of total energy expenditure (TEE) derived from all activities, both volitional and nonvolitional. Because AEE may affect health, there is interest in its estimation in free-living people. Physical activity questionnaires (PAQs) could be a feasible approach to AEE estimation in large populations, but it is unclear whether or not any PAQ is valid for this purpose. Our aim was to explore the validity of existing PAQs for estimating usual AEE in adults, using doubly labeled water (DLW) as a criterion measure. We reviewed 20 publications that described PAQ-to-DLW comparisons, summarized study design factors, and appraised criterion validity using mean differences (AEE(PAQ) - AEE(DLW), or TEE(PAQ) - TEE(DLW)), 95% limits of agreement, and correlation coefficients (AEE(PAQ) versus AEE(DLW) or TEE(PAQ) versus TEE(DLW)). Only 2 of 23 PAQs assessed most types of activity over the past year and indicated acceptable criterion validity, with mean differences (TEE(PAQ) - TEE(DLW)) of 10% and 2% and correlation coefficients of 0.62 and 0.63, respectively. At the group level, neither overreporting nor underreporting was more prevalent across studies. We speculate that, aside from reporting error, discrepancies between PAQ and DLW estimates may be partly attributable to 1) PAQs not including key activities related to AEE, 2) PAQs and DLW ascertaining different time periods, or 3) inaccurate assignment of metabolic equivalents to self-reported activities. Small sample sizes, use of correlation coefficients, and limited information on individual validity were problematic. Future research should address these issues to clarify the true validity of PAQs for estimating AEE.
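
The 95% limits of agreement cited above follow the usual Bland-Altman construction. A short sketch of how such limits are computed from paired questionnaire and doubly labeled water estimates (the numbers are synthetic, purely for illustration):

```python
import numpy as np

# Bland-Altman style agreement summary for paired estimates (illustrative):
# bias = mean difference, and the 95% limits of agreement are
# bias +/- 1.96 * SD of the differences. The arrays below are synthetic
# stand-ins for AEE from a questionnaire (PAQ) and from DLW, in kcal/day.
aee_paq = np.array([650.0, 820.0, 540.0, 700.0, 910.0])
aee_dlw = np.array([600.0, 880.0, 560.0, 650.0, 1000.0])

diff = aee_paq - aee_dlw
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias = {bias:.0f} kcal/day, 95% limits of agreement = {loa}")
```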

  19. Estimation of respiratory heat flows in prediction of heat strain among Taiwanese steel workers.

    Science.gov (United States)

    Chen, Wang-Yi; Juang, Yow-Jer; Hsieh, Jung-Yu; Tsai, Perng-Jy; Chen, Chen-Peng

    2017-01-01

International Organization for Standardization 7933 standard provides evaluation of required sweat rate (RSR) and predicted heat strain (PHS). This study examined and validated the approximations in these models estimating respiratory heat flows (RHFs) via convection (C_res) and evaporation (E_res) for application to Taiwanese foundry workers. The influence of change in RHF approximation on the validity of heat strain prediction in these models was also evaluated. The metabolic energy consumption and physiological quantities of these workers performing at different workloads under elevated wet-bulb globe temperature (30.3 ± 2.5 °C) were measured on-site and used in the calculation of RHFs and indices of heat strain. As the results show, the RSR model overestimated the C_res for Taiwanese workers by approximately 3% and underestimated the E_res by 8%. The C_res approximation in the PHS model closely predicted the convective RHF, while the E_res approximation over-predicted by 11%. Linear regressions provided better fit in C_res approximation (R^2 = 0.96) than in E_res approximation (R^2 ≤ 0.85) in both models. The predicted C_res deviated increasingly from the observed value when the WBGT reached 35 °C. The deviations of RHFs observed for the workers from those predicted using the RSR or PHS models did not significantly alter the heat loss via the skin, as the RHFs were in general of a level less than 5% of the metabolic heat consumption. Validation of these approximations considering thermo-physiological responses of local workers is necessary for application in scenarios of significant heat exposure.

  20. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  1. Validity of Edgeworth expansions for realized volatility estimators

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Veliyev, Bezirgen

    (2009). Second, we show that the validity of the Edgeworth expansions for realized volatility may not cover the optimal two-point distribution wild bootstrap proposed by Gonçalves and Meddahi (2009). Then, we propose a new optimal nonlattice distribution which ensures the second-order correctness...... of the bootstrap. Third, in the presence of microstructure noise, based on our Edgeworth expansions, we show that the new optimal choice proposed in the absence of noise is still valid in noisy data for the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). Finally, we show how...

  2. Design and Validation of a Cyclic Strain Bioreactor to Condition Spatially-Selective Scaffolds in Dual Strain Regimes

    Directory of Open Access Journals (Sweden)

    J. Matthew Goodhart

    2014-03-01

Full Text Available The objective of this study was to design and validate a unique bioreactor design for applying spatially selective, linear, cyclic strain to degradable and non-degradable polymeric fabric scaffolds. This system uses a novel three-clamp design to apply cyclic strain via a computer controlled linear actuator to a specified zone of a scaffold while isolating the remainder of the scaffold from strain. Image analysis of polyethylene terephthalate (PET) woven scaffolds subjected to a 3% mechanical stretch demonstrated that the stretched portion of the scaffold experienced 2.97% ± 0.13% strain (mean ± standard deviation) while the unstretched portion experienced 0.02% ± 0.18% strain. NIH-3T3 fibroblast cells were cultured on the PET scaffolds and half of each scaffold was stretched 5% at 0.5 Hz for one hour per day for 14 days in the bioreactor. Cells were checked for viability and proliferation at the end of the 14 day period and levels of glycosaminoglycan (GAG) and collagen (hydroxyproline) were measured as indicators of extracellular matrix production. Scaffolds in the bioreactor showed a seven-fold increase in cell number over scaffolds cultured statically in tissue culture plastic petri dishes (control). Bioreactor scaffolds showed a lower concentration of GAG deposition per cell as compared to the control scaffolds, largely due to the great increase in cell number. A 75% increase in hydroxyproline concentration per cell was seen in the bioreactor-stretched scaffolds as compared to the control scaffolds. Surprisingly, little difference was observed between the stretched and unstretched portions of the scaffolds in this study. This was largely attributed to the conditioned and shared media effect. Results indicate that the bioreactor system is capable of applying spatially-selective, linear, cyclic strain to cells growing on polymeric fabric scaffolds and evaluating the cellular and matrix responses to the applied strains.

  3. Account of internal friction when estimating recoverable creep strain

    International Nuclear Information System (INIS)

    Demidov, A.S.

    1986-01-01

It is supposed that the difference between empirical and calculated data on creep strain recovery for Kh18N10T steel under cyclic variations in stress is caused by the effect of internal friction. In the accepted model of creep, β-flow is considered reversible and γ-flow irreversible. Absorptivity is determined as the ratio of the difference between the expended work and the work of the strain recovery forces to the work expended in the cycle. A notion of the equivalent stress acting during the period of creep strain recovery is introduced. Results of calculations using the empirical formula into which absorptivity was introduced are compared with empirical data obtained for Kh18N10T steel at 750 deg C
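
Written out, the absorptivity defined in this abstract is the following ratio (the symbols are chosen here, not taken from the paper):

```latex
% Absorptivity as described in the abstract; W_exp = work expended in the cycle,
% W_rec = work of the strain-recovery forces (symbol names chosen for this note).
\psi \;=\; \frac{W_{\mathrm{exp}} - W_{\mathrm{rec}}}{W_{\mathrm{exp}}}
```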

  4. Appropriate models for estimating stresses and strains in asphalt layers

    CSIR Research Space (South Africa)

    Jooste, FJ

    1998-09-01

    Full Text Available The broad objective is to make recommendations for appropriate modelling procedures to be used in the structural design of asphalt layers. Findings of this investigation are intended to be used in refining and validating existing asphalt pavement...

  5. 3D Tendon Strain Estimation Using High-frequency Volumetric Ultrasound Images: A Feasibility Study.

    Science.gov (United States)

    Carvalho, Catarina; Slagmolen, Pieter; Bogaerts, Stijn; Scheys, Lennart; D'hooge, Jan; Peers, Koen; Maes, Frederik; Suetens, Paul

    2018-03-01

Estimation of strain in tendons for tendinopathy assessment is a hot topic within the sports medicine community. It is believed that, if accurately estimated, existing treatment and rehabilitation protocols can be improved and presymptomatic abnormalities can be detected earlier. State-of-the-art studies present inaccurate and highly variable strain estimates, leaving this problem without solution. Out-of-plane motion, present when acquiring two-dimensional (2D) ultrasound (US) images, is a known problem and may be responsible for such errors. This work investigates the benefit of high-frequency, three-dimensional (3D) US imaging to reduce errors in tendon strain estimation. Volumetric US images were acquired in silico, in vitro, and ex vivo using an innovative acquisition approach that combines the acquisition of 2D high-frequency US images with a mechanical guided system. An affine image registration method was used to estimate global strain. 3D strain estimates were then compared with ground-truth values and with 2D strain estimates. The obtained results for in silico data showed a mean absolute error (MAE) of 0.07%, 0.05%, and 0.27% for 3D estimates along the axial, lateral, and elevation directions, and a respective MAE of 0.21% and 0.29% for 2D strain estimates. Although 3D could outperform 2D, this does not occur in in vitro and ex vivo settings, likely due to 3D acquisition artifacts. Comparison against the state-of-the-art methods showed competitive results. The proposed work shows that 3D strain estimates are more accurate than 2D estimates but acquisition of appropriate 3D US images remains a challenge.

  6. Dislocation-mediated strain hardening in tungsten: Thermo-mechanical plasticity theory and experimental validation

    Science.gov (United States)

    Terentyev, Dmitry; Xiao, Xiazi; Dubinko, A.; Bakaeva, A.; Duan, Huiling

    2015-12-01

A self-consistent thermo-mechanical model to study the strain-hardening behavior of polycrystalline tungsten was developed and validated by a dedicated experimental route. Dislocation-dislocation multiplication and storage, as well as dislocation-grain boundary (GB) pinning, were the major mechanisms underlying the evolution of plastic deformation, thus providing a link between the strain hardening behavior and the material's microstructure. The microstructure of the polycrystalline tungsten samples has been thoroughly investigated by scanning and transmission electron microscopy. The model was applied to compute stress-strain loading curves of commercial tungsten grades, in the as-received and as-annealed states, in the temperature range of 500-1000 °C. Fitting the model to the independent experimental results obtained using a single crystal and as-received polycrystalline tungsten, the model demonstrated its capability to predict the deformation behavior of as-annealed samples in a wide range of temperature and applied strain. The relevance of the dislocation-mediated plasticity mechanisms used in the model has been validated using transmission electron microscopy examination of the samples deformed up to different amounts of strain. On the basis of the experimental validation, the limitations of the model are determined and discussed.

  7. Fiber Bragg Gratings, IT Techniques and Strain Gauge Validation for Strain Calculation on Aged Metal Specimens

    Directory of Open Access Journals (Sweden)

    Ander Montero

    2011-01-01

Full Text Available This paper studies the feasibility of calculating strains in aged F114 steel specimens with Fiber Bragg Grating (FBG) sensors and infrared thermography (IT) techniques. Two specimens have been conditioned under extreme temperature and relative humidity conditions, making comparative tests of stress before and after aging using different adhesives. Moreover, a comparison has been made with IT techniques and conventional methods for calculating stresses in F114 steel. Implementation of Structural Health Monitoring techniques on real aircraft during their life cycle requires a study of the behaviour of FBG sensors and their wiring under real conditions, before using them for a long time. To simulate aging, specimens were stored in a climate chamber at 70 °C and 90% RH for 60 days. This study is framed within the Structural Health Monitoring (SHM) and Non Destructive Evaluation (NDE) research lines, integrated into the avionics area maintained by the Aeronautical Technologies Centre (CTA) and the University of the Basque Country (UPV/EHU).
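
For background (the standard FBG relation, not a formula quoted in this record), strain is recovered from the shift of the Bragg wavelength, neglecting the temperature cross-sensitivity term; p_e is an effective photo-elastic coefficient, about 0.22 for silica fibre:

```latex
% Standard Fiber Bragg Grating strain relation (background, not from the record):
\frac{\Delta\lambda_B}{\lambda_B} \;=\; (1 - p_e)\,\varepsilon
\qquad\Longrightarrow\qquad
\varepsilon \;\approx\; \frac{1}{1 - p_e}\,\frac{\Delta\lambda_B}{\lambda_B}
```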

  8. Validation of limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring - is a separate validation group required?

    NARCIS (Netherlands)

    Proost, J. H.

Objective: Limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring are usually validated in a separate group of patients, according to published guidelines. The aim of this study is to evaluate the validation of LSM by comparing independent validation with cross-validation.

  9. DYNAMIC STRAIN MAPPING AND REAL-TIME DAMAGE STATE ESTIMATION UNDER BIAXIAL RANDOM FATIGUE LOADING

    Data.gov (United States)

National Aeronautics and Space Administration — Dynamic strain mapping and real-time damage state estimation under biaxial random fatigue loading. Subhasish Mohanty, Aditi Chattopadhyay, John N. Rajadas, and Clyde...

  10. Validation of equations for pleural effusion volume estimation by ultrasonography.

    Science.gov (United States)

    Hassan, Maged; Rizk, Rana; Essam, Hatem; Abouelnour, Ahmed

    2017-12-01

To validate the accuracy of previously published equations that estimate pleural effusion volume using ultrasonography. Only equations using simple measurements were tested. Three measurements were taken at the posterior axillary line for each case with effusion: lateral height of effusion (H), distance between collapsed lung and chest wall (C) and distance between lung and diaphragm (D). Cases whose effusion was aspirated to dryness were included and drained volume was recorded. Intra-class correlation coefficient (ICC) was used to determine the predictive accuracy of five equations against the actual volume of aspirated effusion. 46 cases with effusion were included. The most accurate equation in predicting effusion volume was (H + D) × 70 (ICC 0.83). The simplest and yet accurate equation was H × 100 (ICC 0.79). Pleural effusion height measured by ultrasonography gives a reasonable estimate of effusion volume. Incorporating distance between lung base and diaphragm into estimation improves accuracy from 79% with the first method to 83% with the latter.
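
The two equations singled out in the abstract are simple enough to apply directly; a small sketch follows (treating H and D as centimetres and the result as millilitres is this note's assumption, the units are not stated in the excerpt):

```python
# Pleural effusion volume from ultrasound measurements, using the two equations
# highlighted in the abstract. H = lateral height of the effusion and D = distance
# between lung base and diaphragm, both in cm; the returned volumes are in mL
# (the unit convention is this sketch's assumption, not stated in the excerpt).
def volume_h_plus_d(h_cm, d_cm):
    return (h_cm + d_cm) * 70.0     # most accurate equation reported (ICC 0.83)

def volume_h_only(h_cm):
    return h_cm * 100.0             # simplest equation reported (ICC 0.79)

print(volume_h_plus_d(6.0, 2.5))    # 595.0
print(volume_h_only(6.0))           # 600.0
```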

  11. Well-founded cost estimation validated by experience

    International Nuclear Information System (INIS)

    LaGuardia, T.S.

    2005-01-01

to build consistency into its cost estimates. A standardized list of decommissioning activities needs to be adopted internationally so estimates can be prepared on a consistent basis, and to facilitate tracking of actual costs against the estimate. The OECD/NEA Standardized List incorporates the consensus of international experts as to the elements of cost and activities that should be included in the estimate. A significant effort was made several years ago to promote universal adoption of this standard. Using the standardized list of activities as a template, a questionnaire was distributed to gather actual decommissioning costs (and other parameters) from international projects. The results of cost estimate contributions from many countries were analyzed and evaluated as to reactor types, decommissioning strategies, cost drivers, and waste disposal quantities. The results were reported in the literature. A standardized list of activities will only be valuable if the underlying cost elements and methodology are clearly identified in the estimate. While no one would expect perfect correlation of every element of cost in a large project estimate versus actual cost comparison, the variants should be visible so the basis for the difference can be examined and evaluated. For the nuclear power industry to grow to meet the increasing demand for electricity, the investors, regulators and the public must understand the total cost of the nuclear fuel cycle. The costs for decommissioning and the funding requirements to provide for safe closure and dismantling of these units are well recognized to represent a significant liability to the owner utilities and governmental agencies. Owners and government regulatory agencies need benchmarked decommissioning costs to test the validity of each proposed cost and funding request. The benchmarking process requires the oversight of decommissioning experts to evaluate contributed cost data in a meaningful manner. An international

  12. Estimation of lattice strain in nanocrystalline RuO2 by Williamson-Hall and size-strain plot methods

    Science.gov (United States)

    Sivakami, R.; Dhanuskodi, S.; Karvembu, R.

    2016-01-01

RuO2 nanoparticles (RuO2 NPs) have been successfully synthesized by the hydrothermal method. Structure and the particle size have been determined by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM) and transmission electron microscopy (TEM). UV-Vis spectra reveal that the optical band gap of RuO2 nanoparticles is red shifted from 3.95 to 3.55 eV. BET measurements show a high specific surface area (SSA) of 118-133 m2/g and pore diameter (10-25 nm) has been estimated by the Barrett-Joyner-Halenda (BJH) method. The crystallite size and lattice strain in the samples have been investigated by Williamson-Hall (W-H) analysis assuming uniform deformation, deformation stress and deformation energy density, and the size-strain plot method. All other relevant physical parameters including stress, strain and energy density have been calculated. The average crystallite size and the lattice strain evaluated from XRD measurements are in good agreement with the results of TEM.
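
The Williamson-Hall analysis mentioned here separates size and strain broadening of the diffraction peaks. A minimal sketch of the uniform-deformation form (the textbook relation beta*cos(theta) = K*lambda/D + 4*epsilon*sin(theta), not the authors' code; the wavelength, shape factor and peak data below are assumed for illustration):

```python
import numpy as np

# Minimal Williamson-Hall (uniform deformation model) sketch:
# beta*cos(theta) = K*lambda/D + 4*epsilon*sin(theta), so a straight-line fit of
# beta*cos(theta) against 4*sin(theta) gives strain (slope) and size (intercept).
# K = 0.9 and the Cu K-alpha wavelength are common choices assumed here.
def williamson_hall(two_theta_deg, beta_rad, wavelength=1.5406e-10, K=0.9):
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    x = 4.0 * np.sin(theta)
    y = np.asarray(beta_rad) * np.cos(theta)
    strain, intercept = np.polyfit(x, y, 1)       # slope = lattice strain
    size = K * wavelength / intercept             # crystallite size (m)
    return size, strain

# usage with made-up peak data (2-theta in degrees, FWHM in radians)
size, strain = williamson_hall([28.0, 35.1, 40.0, 54.3, 58.0],
                               [0.0062, 0.0065, 0.0068, 0.0075, 0.0078])
```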

  13. Estimation of lattice strain in nanocrystalline RuO2 by Williamson-Hall and size-strain plot methods.

    Science.gov (United States)

    Sivakami, R; Dhanuskodi, S; Karvembu, R

    2016-01-05

RuO2 nanoparticles (RuO2 NPs) have been successfully synthesized by the hydrothermal method. Structure and the particle size have been determined by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM) and transmission electron microscopy (TEM). UV-Vis spectra reveal that the optical band gap of RuO2 nanoparticles is red shifted from 3.95 to 3.55 eV. BET measurements show a high specific surface area (SSA) of 118-133 m2/g and pore diameter (10-25 nm) has been estimated by the Barrett-Joyner-Halenda (BJH) method. The crystallite size and lattice strain in the samples have been investigated by Williamson-Hall (W-H) analysis assuming uniform deformation, deformation stress and deformation energy density, and the size-strain plot method. All other relevant physical parameters including stress, strain and energy density have been calculated. The average crystallite size and the lattice strain evaluated from XRD measurements are in good agreement with the results of TEM. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling

    Directory of Open Access Journals (Sweden)

    Junqing Ma

    2013-05-01

Full Text Available Strain distributions are crucial criteria of cross-beams six-axis force/torque sensors. The conventional method for calculating the criteria is to utilize Finite Element Analysis (FEA) to get numerical solutions. This paper aims to obtain analytical solutions of strains under the effect of external force/torque in each dimension. Generic mechanical models for cross-beams six-axis force/torque sensors are proposed, in which deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beams. A detailed description of model assumptions, model idealizations, application scope and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and test results are found to be compatible with each other for a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate estimation algorithm with higher efficiency.

  15. Validation of Persian rapid estimate of adult literacy in dentistry.

    Science.gov (United States)

    Pakpour, Amir H; Lawson, Douglas M; Tadakamadla, Santosh K; Fridlund, Bengt

    2016-05-01

    The aim of the present study was to establish the psychometric properties of the Rapid Estimate of adult Literacy in Dentistry-99 (REALD-99) in the Persian language for use in an Iranian population (IREALD-99). A total of 421 participants with a mean age of 28 years (59% male) were included in the study. Participants included those who were 18 years or older and those residing in Quazvin (a city close to Tehran), Iran. A forward-backward translation process was used for the IREALD-99. The Test of Functional Health Literacy in Dentistry (TOFHLiD) was also administrated. The validity of the IREALD-99 was investigated by comparing the IREALD-99 across the categories of education and income levels. To further investigate, the correlation of IREALD-99 with TOFHLiD was computed. A principal component analysis (PCA) was performed on the data to assess unidimensionality and strong first factor. The Rasch mathematical model was used to evaluate the contribution of each item to the overall measure, and whether the data were invariant to differences in sex. Reliability was estimated with Cronbach's α and test-retest correlation. Cronbach's alpha for the IREALD-99 was 0.98, indicating strong internal consistency. The test-retest correlation was 0.97. IREALD-99 scores differed by education levels. IREALD-99 scores were positively related to TOFHLiD scores (rh = 0.72, P < 0.01). In addition, IREALD-99 showed positive correlation with self-rated oral health status (rh = 0.31, P < 0.01) as evidence of convergent validity. The PCA indicated a strong first component, five times the strength of the second component and nine times the third. The empirical data were a close fit with the Rasch mathematical model. There was not a significant difference in scores with respect to income level (P = 0.09), and only the very lowest income level was significantly different (P < 0.01). The IREALD-99 exhibited excellent reliability on repeated administrations, as well as internal

  16. Experimental validation of finite element analysis of human vertebral collapse under large compressive strains.

    Science.gov (United States)

    Hosseini, Hadi S; Clouthier, Allison L; Zysset, Philippe K

    2014-04-01

    Osteoporosis-related vertebral fractures represent a major health problem in elderly populations. Such fractures can often only be diagnosed after a substantial deformation history of the vertebral body. Therefore, it remains a challenge for clinicians to distinguish between stable and progressive potentially harmful fractures. Accordingly, novel criteria for selection of the appropriate conservative or surgical treatment are urgently needed. Computer tomography-based finite element analysis is an increasingly accepted method to predict the quasi-static vertebral strength and to follow up this small strain property longitudinally in time. A recent development in constitutive modeling allows us to simulate strain localization and densification in trabecular bone under large compressive strains without mesh dependence. The aim of this work was to validate this recently developed constitutive model of trabecular bone for the prediction of strain localization and densification in the human vertebral body subjected to large compressive deformation. A custom-made stepwise loading device mounted in a high resolution peripheral computer tomography system was used to describe the progressive collapse of 13 human vertebrae under axial compression. Continuum finite element analyses of the 13 compression tests were realized and the zones of high volumetric strain were compared with the experiments. A fair qualitative correspondence of the strain localization zone between the experiment and finite element analysis was achieved in 9 out of 13 tests and significant correlations of the volumetric strains were obtained throughout the range of applied axial compression. Interestingly, the stepwise propagating localization zones in trabecular bone converged to the buckling locations in the cortical shell. While the adopted continuum finite element approach still suffers from several limitations, these encouraging preliminary results towards the prediction of extended vertebral

  17. Validation of a physical activity questionnaire to measure the effect of mechanical strain on bone mass.

    Science.gov (United States)

    Kemper, Han C G; Bakker, I; Twisk, J W R; van Mechelen, W

    2002-05-01

Most of the questionnaires available to estimate the daily physical activity levels of humans are based on measuring the intensity of these activities as multiples of resting metabolic rate (METs). Metabolic intensity of physical activities is the most important component for evaluating effects on cardiopulmonary fitness. However, animal studies have indicated that for effects on bone mass the intensity in terms of energy expenditure (metabolic component) of physical activities is less important than the intensity of mechanical strain in terms of the forces by the skeletal muscles and/or the ground reaction forces. The physical activity questionnaire (PAQ) used in the Amsterdam Growth and Health Longitudinal Study (AGAHLS) was applied to investigate the long-term effects of habitual physical activity patterns during youth on health and fitness in later adulthood. The PAQ estimates both the metabolic components of physical activities (METPA) and the mechanical components of physical activities (MECHPA). Longitudinal measurements of METPA and MECHPA were made in a young population of males and females ranging in age from 13 to 32 years. This enabled evaluation of the differential effects of physical activities during adolescence (13-16 years), young adulthood (21-28 years), and the total period of 15 years (age 13-28 years) on bone mineral density (BMD) of the lumbar spine, as measured by dual-energy X-ray absorptiometry (DXA) in males (n = 139) and females (n = 163) at a mean age of 32 years. The PAQ used in the AGAHLS during adolescence (13-16 years) and young adulthood (21-28 years) has the ability to measure the physical activity patterns of both genders, which are important for the development of bone mass at the adult age. MECHPA is more important than METPA, with the highest coefficient being 0.33. The validity of the PAQ was established by comparing PAQ scores during four annual measurements in 200 boys and girls with two other objective measures of physical activity: movement

  18. Development of robust flexible OLED encapsulations using simulated estimations and experimental validations

    International Nuclear Information System (INIS)

    Lee, Chang-Chun; Shih, Yan-Shin; Wu, Chih-Sheng; Tsai, Chia-Hao; Yeh, Shu-Tang; Peng, Yi-Hao; Chen, Kuang-Jung

    2012-01-01

This work analyses the overall stress/strain characteristic of flexible encapsulations with organic light-emitting diode (OLED) devices. A robust methodology composed of a mechanical model of multi-thin film under bending loads and related stress simulations based on nonlinear finite element analysis (FEA) is proposed and validated against related experimental data. With various geometrical combinations of cover plate, stacked thin films and plastic substrate, the position of the neutral axis (NA) plate, which is regarded as a key design parameter to minimize stress impact for the concerned OLED devices, is acquired using the present methodology. The results point out that both the thickness and mechanical properties of the cover plate help in determining the NA location. In addition, several concave and convex radii are applied to examine the reliable mechanical tolerance and to provide an insight into the estimated reliability of foldable OLED encapsulations. (paper)
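
The neutral-axis location treated above as the key design parameter can be approximated, for a simple multilayer stack in pure bending, as the modulus-weighted centroid of the layers. A small sketch of that classical laminate estimate (not the nonlinear FEA model of the paper; the layer values are invented):

```python
# Neutral-axis position of a multilayer stack under bending, approximated as the
# modulus-weighted centroid of the layers (classical laminate estimate, not the
# nonlinear FEA of the paper). Layers are listed bottom-up as
# (thickness_um, youngs_modulus_GPa); the values below are invented.
def neutral_axis_position(layers):
    z_bottom, weighted_sum, weight = 0.0, 0.0, 0.0
    for t, E in layers:
        z_mid = z_bottom + t / 2.0          # mid-plane height of this layer
        weighted_sum += E * t * z_mid
        weight += E * t
        z_bottom += t
    return weighted_sum / weight            # neutral-axis height from the bottom

stack = [(125.0, 4.0),   # plastic substrate
         (2.0, 80.0),    # stacked thin films (lumped)
         (50.0, 3.0)]    # cover plate
print(neutral_axis_position(stack))         # micrometres from the bottom surface
```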

  19. Study of the validity of a job-exposure matrix for the job strain model factors: an update and a study of changes over time.

    Science.gov (United States)

    Niedhammer, Isabelle; Milner, Allison; LaMontagne, Anthony D; Chastang, Jean-François

    2018-03-08

The objectives of the study were to construct a job-exposure matrix (JEM) for psychosocial work factors of the job strain model, to evaluate its validity, and to compare the results over time. The study was based on nationally representative data of the French working population, with samples of 46,962 employees (2010 SUMER survey) and 24,486 employees (2003 SUMER survey). Psychosocial work factors included the job strain model factors (Job Content Questionnaire): psychological demands, decision latitude, social support, job strain and iso-strain. Job title was defined by three variables: occupation and economic activity coded using standard classifications, and company size. A JEM was constructed using a segmentation method (Classification and Regression Tree, CART) and cross-validation. The best quality JEM was found using occupation and company size for social support. For decision latitude and psychological demands, there was not much difference using occupation and company size with or without economic activity. The validity of the JEM estimates was higher for decision latitude, job strain and iso-strain, and lower for social support and psychological demands. Differential changes over time were observed for psychosocial work factors according to occupation, economic activity and company size. This study demonstrated that company size in addition to occupation may improve the validity of JEMs for psychosocial work factors. These matrices may be time-dependent and may need to be updated over time. More research is needed to assess the validity of JEMs given that these matrices may be able to provide exposure assessments to study a range of health outcomes.

  20. Validation of estimated glomerular filtration rate equations for Japanese children.

    Science.gov (United States)

    Gotoh, Yoshimitsu; Uemura, Osamu; Ishikura, Kenji; Sakai, Tomoyuki; Hamasaki, Yuko; Araki, Yoshinori; Hamda, Riku; Honda, Masataka

    2018-01-25

The gold standard for evaluation of kidney function is renal inulin clearance (Cin). However, the methodology for Cin is complicated and difficult, especially for younger children and/or patients with bladder dysfunction. Therefore, we developed a simple and easier method for obtaining the estimated glomerular filtration rate (eGFR) using equations and values for several biomarkers, i.e., serum creatinine (Cr), serum cystatin C (cystC), serum beta-2 microglobulin (β2MG), and creatinine clearance (Ccr). The purpose of the present study was to validate these equations with a new data set. To validate each equation, we used data of 140 patients with CKD with clinical need for Cin, using the measured GFR (mGFR). We compared the results for each eGFR equation with the mGFR using mean error (ME), root mean square error (RMSE), P30, and Bland-Altman analysis. The ME of the Cr-, cystC-, β2MG-, and Ccr-based eGFR was 15.8 ± 13.0, 17.2 ± 16.5, 15.4 ± 14.3, and 10.6 ± 13.0 ml/min/1.73 m2, respectively. The RMSE was 29.5, 23.8, 20.9, and 16.7, respectively. The P30 was 79.4, 71.1, 69.5, and 92.9%, respectively. The Bland-Altman bias analysis showed values of 4.0 ± 18.6, 5.3 ± 16.8, 12.7 ± 17.0, and 2.5 ± 17.2 ml/min/1.73 m2, respectively, for these parameters. The bias of each eGFR equation was not large. Therefore, each eGFR equation could be used.
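
The agreement metrics used here (ME, RMSE and P30) are straightforward to reproduce; a short sketch with synthetic mGFR/eGFR pairs (illustrative values only, not data from the study):

```python
import numpy as np

# Validation metrics used for eGFR equations (illustrative implementation):
# ME = mean error (eGFR - mGFR), RMSE = root mean square error, and P30 = the
# percentage of patients whose eGFR falls within +/-30% of the measured GFR.
def validation_metrics(egfr, mgfr):
    egfr, mgfr = np.asarray(egfr, float), np.asarray(mgfr, float)
    err = egfr - mgfr
    me = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    p30 = 100.0 * np.mean(np.abs(err) <= 0.30 * mgfr)
    return me, rmse, p30

# synthetic example (ml/min/1.73 m2), not data from the study
mgfr = [35.0, 60.0, 82.0, 100.0, 45.0]
egfr = [40.0, 55.0, 95.0, 104.0, 61.0]
print(validation_metrics(egfr, mgfr))
```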

  1. Cleavage strain in the Variscan fold belt, County Cork, Ireland, estimated from stretched arsenopyrite rosettes

    Science.gov (United States)

    Ford, M.; Ferguson, C.C.

    1985-01-01

In south-west Ireland, hydrothermally formed arsenopyrite crystals in a Devonian mudstone have responded to Variscan deformation by brittle extension fracture and fragment separation. The interfragment gaps and terminal extension zones of each crystal are infilled with fibrous quartz. Stretches within the cleavage plane have been calculated by the various methods available, most of which can be modified to incorporate terminal extension zones. The Strain Reversal Method is the most accurate currently available but still gives a minimum estimate of the overall strain. The more direct Hossain method, which gives only slightly lower estimates with this data, is more practical for field use. A strain ellipse can be estimated from each crystal rosette composed of three laths (assuming the original interlimb angles were all 60°) and, because actual rather than relative stretches are estimated, this provides a lower bound to the area increase in the plane of cleavage. Based on the average of our calculated strain ellipses this area increase is at least 114% and implies an average shortening across the cleavage of at least 53%. However, several lines of evidence suggest that the cleavage deformation was more intense and more oblate than that calculated, and we argue that a 300% area increase in the cleavage plane and 75% shortening across the cleavage are more realistic estimates of the true strain. Furthermore, the along-strike elongation indicated is at least 80%, which may be regionally significant. Estimates of orogenic contraction derived from balanced section construction should therefore take into account the possibility of a substantial strike elongation, and tectonic models that can accommodate such elongations need to be developed. © 1985.
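
The link between the quoted area increase in the cleavage plane and the shortening across cleavage follows from volume constancy; checking the numbers in the abstract (this note's arithmetic, not the authors'):

```latex
% Constant-volume check: if the cleavage-plane area grows by a factor A, the
% stretch normal to cleavage is 1/A and the shortening is 1 - 1/A.
A = 2.14 \;(114\%\ \text{area increase}) \;\Rightarrow\; 1 - \tfrac{1}{2.14} \approx 0.53\ (53\%\ \text{shortening}),
\qquad
A = 4 \;(300\%\ \text{area increase}) \;\Rightarrow\; 1 - \tfrac{1}{4} = 0.75\ (75\%\ \text{shortening}).
```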

  2. Development and validation of a strain-based Structural Health Monitoring system

    Science.gov (United States)

    Katsikeros, Ch. E.; Labeas, G. N.

    2009-02-01

An innovative Structural Health Monitoring (SHM) methodology, based on structural strain measurements, which are processed by a back-propagation feed-forward Artificial Neural Network (ANN), is proposed. The demonstration of the SHM methodology and the identification of its capabilities and drawbacks are performed by applying the method in the prediction of fatigue damage states of a typical aircraft cracked lap-joint structure. An ANN of suitable architecture is developed and trained by numerically generated strain data sets, which have been preprocessed by Fast Fourier Transformation (FFT) for the extraction of the Fourier Descriptors (FDs). The Finite Element (FE) substructuring technique is implemented in the stress and strain analysis of the lap-joint structure, due to its efficiency in the calculation of the numerous strain data, which are necessary for the ANN training. The trained network is successfully validated, as it is proven capable of accurately predicting crack positions and lengths of a lap-joint structure, which is damaged by fatigue cracks of unknown location and extent. The proposed methodology is applicable to the identification of more complex types of damage or to other critical structural locations, as its basic concept is generic.

  3. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    Science.gov (United States)

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special

  4. Observers for vehicle tyre/road forces estimation: experimental validation

    Science.gov (United States)

    Doumiati, M.; Victorino, A.; Lechner, D.; Baffet, G.; Charara, A.

    2010-11-01

The motion of a vehicle is governed by the forces generated between the tyres and the road. Knowledge of these vehicle dynamic variables is important for vehicle control systems that aim to enhance vehicle stability and passenger safety. This study introduces a new estimation process for tyre/road forces. It presents many benefits over the existing state-of-the-art works within the dynamic estimation framework. One of these major contributions consists of discussing in detail the vertical and lateral tyre forces at each tyre. The proposed method is based on the dynamic response of a vehicle instrumented with potentially integrated sensors. The estimation process is separated into two principal blocks. The role of the first block is to estimate vertical tyre forces, whereas in the second block two observers are proposed and compared for the estimation of lateral tyre/road forces. The different observers are based on a prediction/estimation Kalman filter. The performance of this concept is tested and compared with real experimental data using a laboratory car. Experimental results show that the proposed approach is a promising technique to provide accurate estimation. Thus, it can be considered as a practical low-cost solution for calculating vertical and lateral tyre/road forces.
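
    Both observers in this record are built around a prediction/estimation Kalman filter. The generic linear predict/update cycle is sketched below for orientation only; the vehicle and tyre/road force models used in the paper are more elaborate and partly nonlinear:

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            """One predict/update cycle of a linear Kalman filter."""
            x_pred = F @ x                                   # state prediction
            P_pred = F @ P @ F.T + Q
            S = H @ P_pred @ H.T + R                         # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)            # measurement update
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new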

  5. Validation and qualification of surface-applied fibre optic strain sensors using application-independent optical techniques

    International Nuclear Information System (INIS)

    Schukar, Vivien G; Kadoke, Daniel; Kusche, Nadine; Münzenberger, Sven; Gründer, Klaus-Peter; Habel, Wolfgang R

    2012-01-01

    Surface-applied fibre optic strain sensors were investigated using a unique validation facility equipped with application-independent optical reference systems. First, different adhesives for the sensor's application were analysed regarding their material properties. Measurements resulting from conventional measurement techniques, such as thermo-mechanical analysis and dynamic mechanical analysis, were compared with measurements resulting from digital image correlation, which has the advantage of being a non-contact technique. Second, fibre optic strain sensors were applied to test specimens with the selected adhesives. Their strain-transfer mechanism was analysed in comparison with conventional strain gauges. Relative movements between the applied sensor and the test specimen were visualized easily using optical reference methods, digital image correlation and electronic speckle pattern interferometry. Conventional strain gauges showed limited opportunities for an objective strain-transfer analysis because they are also affected by application conditions. (paper)

  6. Validity of Submaximal Cycle Ergometry for Estimating Aerobic Capacity

    National Research Council Canada - National Science Library

    Myhre, Loren

    1998-01-01

    ... that allows early selection of the most appropriate test work load. A computerized version makes it possible for non-trained personnel to safely administer this test for estimating aerobic capacity...

  7. Validation of Transverse Oscillation Vector Velocity Estimation In-Vivo

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten

    2007-01-01

    Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound (US) beam direction. This implies that a Doppler angle under examination close to 90deg results in unreliable information about the true blood direction and blood velocity. The novel...... the presented angle independent 2-D vector velocity method. The results give reason to believe that the TO method can be a useful alternative to conventional Doppler systems bringing forth new information to the US examination of blood flow....

  8. Finite strain transient creep of D16T alloy: identification and validation employing heterogeneous tests

    Science.gov (United States)

    Shutov, A. V.; Larichkin, A. Yu

    2017-10-01

    A cyclic creep damage model, previously proposed by the authors, is modified for a better description of the transient creep of D16T alloy observed in the finite strain range under rapidly changing stresses. The new model encompasses the concept of kinematic hardening, which allows us to account for the creep-induced anisotropy. The model kinematics is based on the nested multiplicative split of the deformation gradient, proposed by Lion. The damage evolution is accounted for by the classical Kachanov-Rabotnov approach. The material parameters are identified using experimental data on cyclic torsion of thick-walled samples with different holding times between load reversals. For the validation of the proposed material model, an additional experiment is analyzed. Although this additional test is not involved in the identification procedure, the proposed cyclic creep damage model describes it accurately.

  9. On the validity of time-dependent AUC estimators.

    Science.gov (United States)

    Schmid, Matthias; Kestler, Hans A; Potapov, Sergej

    2015-01-01

    Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  10. Estimate of body composition by Hume's equation: validation with DXA.

    Science.gov (United States)

    Carnevale, Vincenzo; Piscitelli, Pamela Angela; Minonne, Rita; Castriotta, Valeria; Cipriani, Cristiana; Guglielmi, Giuseppe; Scillitani, Alfredo; Romagnoli, Elisabetta

    2015-05-01

We investigated how Hume's equation, using the antipyrine space, could perform in estimating fat mass (FM) and lean body mass (LBM). In 100 (40 male and 60 female) subjects, we estimated FM and LBM by the equation and compared these values with those measured by a last generation DXA device. Measured and estimated values correlated strongly (r = 0.940 for FM and r = 0.913 for LBM), though the equation underestimated FM and overestimated LBM with respect to DXA. The mean difference for FM was 1.40 kg (limits of agreement of -6.54 and 8.37 kg). For LBM, the mean difference with respect to DXA was 1.36 kg (limits of agreement -8.26 and 6.52 kg). The root mean square error was 3.61 kg for FM and 3.56 kg for LBM. Our results show that in clinically stable subjects Hume's equation could reliably assess body composition, and the estimated FM and LBM approached those measured by a modern DXA device.

  11. Rapid estimation of the moment magnitude of large earthquake from static strain changes

    Science.gov (United States)

    Itaba, S.

    2014-12-01

The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred on March 11, 2011. Based on the seismic waves, the magnitude promptly reported by the Japan Meteorological Agency just after the earthquake was 7.9, considerably smaller than the actual value. On the other hand, using nine borehole strainmeters of the Geological Survey of Japan, AIST, we estimated a fault model with Mw 8.7 for the earthquake on the boundary between the Pacific and North American plates. This model can be estimated about seven minutes after the origin time, and five minutes after wave arrival. Several methods are now being suggested for grasping the magnitude of a great earthquake earlier, in order to reduce earthquake disasters including tsunami (e.g., Ohta et al., 2012). Our simple method using strain steps is one of the strong methods for rapid estimation of the magnitude of great earthquakes.
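
    Once a fault model has been estimated from the strain steps, the moment magnitude follows from the standard moment-magnitude relation. A sketch using the usual definitions; the rigidity, fault area and average slip below are illustrative values, not those of the 2011 event:

        import math

        def moment_magnitude(rigidity_pa, fault_area_m2, avg_slip_m):
            m0 = rigidity_pa * fault_area_m2 * avg_slip_m    # seismic moment, N*m
            return (2.0 / 3.0) * (math.log10(m0) - 9.1)      # Hanks-Kanamori convention

        print(round(moment_magnitude(40e9, 400e3 * 150e3, 15.0), 1))  # ~9.0 for these inputs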

  12. Use of micro-CT-based finite element analysis to accurately quantify peri-implant bone strains: a validation in rat tibiae.

    Science.gov (United States)

    Torcasio, Antonia; Zhang, Xiaolei; Van Oosterwyck, Hans; Duyck, Joke; van Lenthe, G Harry

    2012-05-01

Although research has been addressed at investigating the effect of specific loading regimes on bone response around the implant, a precise quantitative understanding of the local mechanical response close to the implant site is still lacking. This study was aimed at validating micro-CT-based finite element (μFE) models to assess tissue strains after implant placement in a rat tibia. Small implants were inserted at the medio-proximal site of 8 rat tibiae. The limbs were subjected to axial compression loading; strain close to the implant was measured by means of strain gauges. Specimen-specific μFE models were created and analyzed. For each specimen, 4 different models were created corresponding to different representations of the bone-implant interface: bone and implant were assumed fully osseointegrated (A); a low stiffness interface zone was assumed with thickness of 40 μm (B), 80 μm (C), and 160 μm (D). In all cases, measured and computational strains correlated highly (R² = 0.95, 0.92, 0.93, and 0.95 in A, B, C, and D, respectively). The averaged calculated strains were 1.69, 1.34, and 1.15 times higher than the measured strains for A, B, and C, respectively, and lower than the experimental strains for D (factor = 0.91). In conclusion, we demonstrated that specimen-specific FE analyses provide accurate estimates of peri-implant bone strains in the rat tibia loading model. Further investigations of the bone-implant interface are needed to quantify implant osseointegration.

  13. Estimation of strain from piezoelectric effect and domain switching in morphotropic PZT by combined analysis of macroscopic strain measurements and synchrotron X-ray data

    International Nuclear Information System (INIS)

    Kungl, Hans; Theissmann, Ralf; Knapp, Michael; Baehtz, Carsten; Fuess, Hartmut; Wagner, Susanne; Fett, Theo; Hoffmann, Michael J.

    2007-01-01

Morphotropic PZT ceramics are state-of-the-art materials for ferroelectric actuators. Essential performance parameters for these materials are strain and hysteresis. On a microscopic scale the strain provided by an electric field is due to two different mechanisms. The piezoelectric effect causes an elongation of the unit cells, whereas domain switching changes their crystallographic orientation by aligning the polarization axis towards the field direction. A method is outlined to estimate the contribution of the two mechanisms to total strain by combining macroscopic strain measurements and X-ray diffraction (XRD) data. Results from macroscopic measurements of remanent and unipolar strain with the corresponding data on texture, derived from in situ synchrotron radiation XRD patterns, are analyzed and evaluated by a semi-empirical approach. The method was applied to six morphotropic, LaSr-doped PZT materials of different Zr/Ti ratios. Results are discussed with respect to the differences between the materials.

  14. Internal strain estimation for quantification of human heel pad elastic modulus: A phantom study.

    Science.gov (United States)

    Holst, Karen; Liebgott, Hervé; Wilhjelm, Jens E; Nikolov, Svetoslav; Torp-Pedersen, Søren T; Delachartre, Philippe; Jensen, Jørgen A

    2013-02-01

Shock absorption is the most important function of the human heel pad. However, changes in heel pad elasticity, as seen in e.g. long-distance runners, diabetes patients, and victims of Falanga torture, affect this function, often in a painful manner. Assessment of heel pad elasticity is usually based on one or a few strain measurements obtained by an external load-deformation system. The aim of this study was to develop a technique for quantitative measurements of heel pad elastic modulus based on several internal strain measures from within the heel pad by use of ultrasound images. Nine heel phantoms were manufactured featuring a combination of three heel pad stiffnesses and three heel pad thicknesses to model the normal human variation. Each phantom was tested in an indentation system comprising a 7 MHz linear array ultrasound transducer, working as the indentor, and a connected load cell. Load-compression data and ultrasound B-mode images were simultaneously acquired in 19 compression steps of 0.1 mm each. The internal tissue displacement was for each step calculated by a phase-based cross-correlation technique and internal strain maps were derived from these displacement maps. Elastic moduli were found from the resulting stress-strain curves. The elastic moduli made it possible to distinguish eight of nine phantoms from each other according to the manufactured stiffness and showed very little dependence on the thickness. Mean elastic moduli for the three soft, the three medium, and the three hard phantoms were 89 kPa, 153 kPa, and 168 kPa, respectively. The combination of ultrasound images and force measurements provided an effective way of assessing the elastic properties of the heel pad due to the internal strain estimation. Copyright © 2012 Elsevier B.V. All rights reserved.
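
    The last step described above, turning a reconstructed stress-strain curve into an elastic modulus, reduces to a linear fit. A minimal sketch with synthetic data; the phase-based displacement tracking itself is not reproduced here:

        import numpy as np

        rng = np.random.default_rng(0)
        strain = np.linspace(0.0, 0.19, 20)                         # from 19 steps of 0.1 mm
        stress = 150e3 * strain + rng.normal(0, 300, strain.size)   # Pa; ~150 kPa "medium" phantom

        E = np.polyfit(strain, stress, 1)[0]                        # slope of the fit = elastic modulus
        print(f"estimated elastic modulus: {E / 1e3:.0f} kPa")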

  15. Strain rates estimated by geodetic observations in the Borborema Province, Brazil

    Science.gov (United States)

    Marotta, Giuliano Sant'Anna; França, George Sand; Monico, João Francisco Galera; Bezerra, Francisco Hilário R.; Fuck, Reinhardt Adolfo

    2015-03-01

The strain rates for the Borborema Province, located in northeastern Brazil, were estimated in this study. For this purpose, we used GNSS tracking stations with a minimum of two years of data. The data were processed using the software GIPSY, version 6.2, provided by the JPL of the California Institute of Technology. The PPP method was used to process the data using the non-fiducial approach. Satellite orbits and clocks were supplied by the JPL. Absolute phase center offsets and variations for both the receiver and the satellite antennas were applied, together with ambiguity resolution; corrections of the first and second order effects of the ionosphere and troposphere models adopting the VMF1 mapping function; 10° elevation mask; FES2004 oceanic load model and terrestrial tide WahrK1 PolTid FreqDepLove OctTid. From a multi-annual solution, involving at least 2 years of continuous data, the coordinates and velocities as well as their accuracies were estimated. The strain rates were calculated using the Delaunay triangulation and the Finite Element Method. The results show that the velocity direction is predominantly west and north, with maximum variation of 4.0 ± 1.5 mm/year and 4.1 ± 0.5 mm/year for the x and y components, respectively. The highest strain values of extension and contraction were 0.109552 × 10⁻⁶ ± 3.65 × 10⁻¹⁰ per year and −0.072838 × 10⁻⁶ ± 2.32 × 10⁻¹⁰ per year, respectively. In general, the results show that the highest strain and variation of velocity values are located close to the Potiguar Basin, a region that concentrates seismic activities of magnitudes of up to 5.2 mb. We conclude that the contraction direction of strain is consistent with the maximum horizontal stress derived from focal mechanism and breakout data. In addition, we conclude that the largest strain rates occur around the Potiguar Basin, an area already recognized as one of the major sites of seismicity in intraplate South America.
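
    Inside each Delaunay triangle the horizontal strain-rate tensor can be recovered from the three station velocities by assuming a uniform velocity gradient, which is the essence of the finite-element step described above. A minimal sketch under that assumption; coordinates and velocities are illustrative, not the Borborema data:

        import numpy as np

        def triangle_strain_rate(xy, vel):
            """xy: (3,2) station positions [m]; vel: (3,2) velocities [mm/yr].
            Returns the symmetric horizontal strain-rate tensor [1/yr]."""
            xy = np.asarray(xy, float)
            vel = np.asarray(vel, float) * 1e-3              # mm/yr -> m/yr
            # model v = v0 + L @ x at each station -> 6 unknowns (v0x, v0y, L11, L12, L21, L22)
            A, b = [], []
            for (x, y), (vx, vy) in zip(xy, vel):
                A.append([1, 0, x, y, 0, 0]); b.append(vx)
                A.append([0, 1, 0, 0, x, y]); b.append(vy)
            p = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
            L = np.array([[p[2], p[3]], [p[4], p[5]]])        # velocity gradient tensor
            return 0.5 * (L + L.T)                            # symmetric part = strain rate

        eps = triangle_strain_rate([[0, 0], [5e4, 0], [0, 5e4]],
                                   [[0.0, 0.0], [1.0, 0.2], [0.1, -0.5]])
        print(np.linalg.eigvalsh(eps))                        # principal strain rates, 1/yr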

  16. Importance of Statistical Evidence in Estimating Valid DEA Scores.

    Science.gov (United States)

    Barnum, Darold T; Johnson, Matthew; Gleason, John M

    2016-03-01

    Data Envelopment Analysis (DEA) allows healthcare scholars to measure productivity in a holistic manner. It combines a production unit's multiple outputs and multiple inputs into a single measure of its overall performance relative to other units in the sample being analyzed. It accomplishes this task by aggregating a unit's weighted outputs and dividing the output sum by the unit's aggregated weighted inputs, choosing output and input weights that maximize its output/input ratio when the same weights are applied to other units in the sample. Conventional DEA assumes that inputs and outputs are used in different proportions by the units in the sample. So, for the sample as a whole, inputs have been substituted for each other and outputs have been transformed into each other. Variables are assigned different weights based on their marginal rates of substitution and marginal rates of transformation. If in truth inputs have not been substituted nor outputs transformed, then there will be no marginal rates and therefore no valid basis for differential weights. This paper explains how to statistically test for the presence of substitutions among inputs and transformations among outputs. Then, it applies these tests to the input and output data from three healthcare DEA articles, in order to identify the effects on DEA scores when input substitutions and output transformations are absent in the sample data. It finds that DEA scores are badly biased when substitution and transformation are absent and conventional DEA models are used.

  17. Catalytic hydrolysis of ammonia borane: Intrinsic parameter estimation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Basu, S.; Gore, J.P. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Zheng, Y. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Varma, A.; Delgass, W.N. [School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States)

    2010-04-02

Ammonia borane (AB) hydrolysis is a potential process for on-board hydrogen generation. This paper presents isothermal hydrogen release rate measurements of dilute AB (1 wt%) hydrolysis in the presence of carbon supported ruthenium catalyst (Ru/C). The ranges of investigated catalyst particle sizes and temperature were 20-181 μm and 26-56 °C, respectively. The obtained rate data included both kinetic and diffusion-controlled regimes, where the latter was evaluated using the catalyst effectiveness approach. A Langmuir-Hinshelwood kinetic model was adopted to interpret the data, with intrinsic kinetic and diffusion parameters determined by a nonlinear fitting algorithm. The AB hydrolysis was found to have an activation energy 60.4 kJ mol⁻¹, pre-exponential factor 1.36 × 10¹⁰ mol (kg-cat)⁻¹ s⁻¹, adsorption energy −32.5 kJ mol⁻¹, and effective mass diffusion coefficient 2 × 10⁻¹⁰ m² s⁻¹. These parameters, obtained under dilute AB conditions, were validated by comparing measurements with simulations of AB consumption rates during the hydrolysis of concentrated AB solutions (5-20 wt%), and also with the axial temperature distribution in a 0.5 kW continuous-flow packed-bed reactor. (author)
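
    The intrinsic parameters quoted above define an Arrhenius rate constant inside a Langmuir-Hinshelwood rate expression. The sketch below evaluates such a rate; the functional form is the textbook one rather than the paper's exact formulation, and the adsorption constant is left as an assumed input because its pre-exponential factor is not given in the abstract:

        import math

        R = 8.314  # J/(mol K)

        def lh_rate(c_ab, T, K_ads):
            """Langmuir-Hinshelwood rate [mol AB / (kg-cat s)].
            c_ab: AB concentration [mol/m^3]; K_ads: adsorption constant [m^3/mol], assumed."""
            k = 1.36e10 * math.exp(-60.4e3 / (R * T))     # pre-exponential factor and Ea from the abstract
            return k * K_ads * c_ab / (1.0 + K_ads * c_ab)

        print(lh_rate(c_ab=300.0, T=300.0, K_ads=0.01))   # illustrative operating point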

  18. Development and validation of satellite based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2015-10-01

A satellite based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5% for classifying Clear (V ≥ 30 km), Moderate (10 km ≤ V < 30 km), and lower visibility categories. The retrievals complement surface measurements from the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network, and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.

  19. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    Science.gov (United States)

    Nair, S. P.; Righetti, R.

    2015-05-01

Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require the fitting of temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationships for tissues undergoing creep compression have a non-linear relationship. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to provide non-linear LSE parameter estimate reliability, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
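
    As described, RoN estimates the noise level from the residuals of a single nonlinear fit, resimulates many noisy realizations of the fitted curve, refits each one, and reports the spread of the refitted parameter as a precision measure. A minimal sketch of that loop for an exponential time-constant model, following the description above rather than the authors' exact implementation:

        import numpy as np
        from scipy.optimize import curve_fit

        def model(t, a, tau):
            return a * np.exp(-t / tau)

        def ron_precision(t, y, n_resim=200, seed=0):
            rng = np.random.default_rng(seed)
            p_hat, _ = curve_fit(model, t, y, p0=(y.max(), t.mean()))   # single-experiment fit
            sigma = np.std(y - model(t, *p_hat), ddof=2)                # noise level from residuals
            taus = []
            for _ in range(n_resim):                                    # resimulate noise and refit
                y_sim = model(t, *p_hat) + rng.normal(0, sigma, t.size)
                p_sim, _ = curve_fit(model, t, y_sim, p0=p_hat)
                taus.append(p_sim[1])
            return p_hat[1], float(np.std(taus))                        # tau estimate and its RoN spread

        t = np.linspace(0, 10, 50)
        y = model(t, 1.0, 2.5) + np.random.default_rng(1).normal(0, 0.03, t.size)
        print(ron_precision(t, y))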

  20. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    International Nuclear Information System (INIS)

    Kirchner, G.; Peterson, R.

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  1. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

Kirchner, G. (Univ. of Bremen, Germany); Peterson, R. (AECL, Chalk River, ON, Canada); and others

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  2. A Comparison of Quantum and Molecular Mechanical Methods to Estimate Strain Energy in Druglike Fragments.

    Science.gov (United States)

    Sellers, Benjamin D; James, Natalie C; Gobbi, Alberto

    2017-06-26

    Reducing internal strain energy in small molecules is critical for designing potent drugs. Quantum mechanical (QM) and molecular mechanical (MM) methods are often used to estimate these energies. In an effort to determine which methods offer an optimal balance in accuracy and performance, we have carried out torsion scan analyses on 62 fragments. We compared nine QM and four MM methods to reference energies calculated at a higher level of theory: CCSD(T)/CBS single point energies (coupled cluster with single, double, and perturbative triple excitations at the complete basis set limit) calculated on optimized geometries using MP2/6-311+G**. The results show that both the more recent MP2.X perturbation method as well as MP2/CBS perform quite well. In addition, combining a Hartree-Fock geometry optimization with a MP2/CBS single point energy calculation offers a fast and accurate compromise when dispersion is not a key energy component. Among MM methods, the OPLS3 force field accurately reproduces CCSD(T)/CBS torsion energies on more test cases than the MMFF94s or Amber12:EHT force fields, which struggle with aryl-amide and aryl-aryl torsions. Using experimental conformations from the Cambridge Structural Database, we highlight three example structures for which OPLS3 significantly overestimates the strain. The energies and conformations presented should enable scientists to estimate the expected error for the methods described and we hope will spur further research into QM and MM methods.

  3. Balancing sub- and supra-salt strain in salt-influenced rifts: Implications for extension estimates

    Science.gov (United States)

    Coleman, Alexander J.; Jackson, Christopher A.-L.; Duffy, Oliver B.

    2017-09-01

    The structural style of salt-influenced rifts may differ from those formed in predominantly brittle crust. Salt can decouple sub- and supra-salt strain, causing sub-salt faults to be geometrically decoupled from, but kinematically coupled to and responsible for, supra-salt forced folding. Salt-influenced rifts thus contain more folds than their brittle counterparts, an observation often ignored in extension estimates. Fundamental to determining whether sub- and supra-salt structures are kinematically coherent, and the relative contributions of thin- (i.e. gravity-driven) and thick-skinned (i.e. whole-plate stretching) deformation to accommodating rift-related strain, is our ability to measure extension at both structural levels. We here use published physical models of salt-influenced extension to show that line-length estimates yield more accurate values of sub- and supra-salt extension compared to fault-heave, before applying these methods to seismic data from the Halten Terrace, offshore Norway. We show that, given the abundance of ductile deformation in salt-influenced rifts, significant amounts of extension may be ignored, leading to the erroneous interpretations of thin-skinned, gravity-gliding. If a system is kinematically coherent, supra-salt structures can help predict the occurrence and kinematics of sub-salt faults that may be poorly imaged and otherwise poorly constrained.

  4. A Novel Strain-Based Method to Estimate Tire Conditions Using Fuzzy Logic for Intelligent Tires

    Directory of Open Access Journals (Sweden)

    Daniel Garcia-Pozuelo

    2017-02-01

The so-called intelligent tires are one of the most promising research fields for automotive engineers. These tires are equipped with sensors which provide information about vehicle dynamics. Up to now, the commercial intelligent tires only provide information about inflation pressure and their contribution to stability control systems is currently very limited. Nowadays one of the major problems for intelligent tire development is how to embed feasible and low cost sensors to obtain reliable information such as inflation pressure, vertical load or rolling speed. These parameters provide key information for vehicle dynamics characterization. In this paper, we propose a novel algorithm based on fuzzy logic to estimate the mentioned parameters by means of a single strain-based system. Experimental tests have been carried out in order to prove the suitability and durability of the proposed on-board strain sensor system, as well as its low cost advantages, and the accuracy of the obtained estimations by means of fuzzy logic.

  5. Vector method for strain estimation in phase-sensitive optical coherence elastography

    Science.gov (United States)

    Matveyev, A. L.; Matveev, L. A.; Sovetsky, A. A.; Gelikonov, G. V.; Moiseev, A. A.; Zaitsev, V. Y.

    2018-06-01

    A noise-tolerant approach to strain estimation in phase-sensitive optical coherence elastography, robust to decorrelation distortions, is discussed. The method is based on evaluation of interframe phase-variation gradient, but its main feature is that the phase is singled out at the very last step of the gradient estimation. All intermediate steps operate with complex-valued optical coherence tomography (OCT) signals represented as vectors in the complex plane (hence, we call this approach the ‘vector’ method). In comparison with such a popular method as least-square fitting of the phase-difference slope over a selected region (even in the improved variant with amplitude weighting for suppressing small-amplitude noisy pixels), the vector approach demonstrates superior tolerance to both additive noise in the receiving system and speckle-decorrelation caused by tissue straining. Another advantage of the vector approach is that it obviates the usual necessity of error-prone phase unwrapping. Here, special attention is paid to modifications of the vector method that make it especially suitable for processing deformations with significant lateral inhomogeneity, which often occur in real situations. The method’s advantages are demonstrated using both simulated and real OCT scans obtained during reshaping of a collagenous tissue sample irradiated by an IR laser beam producing complex spatially inhomogeneous deformations.
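
    The central trick described above is to keep the signal complex until the very end: form the interframe product, take its axial "gradient" as another complex product, average the vectors over a window, and only then extract the phase. A simplified numpy sketch of an axial strain estimate along those lines; the window size, wavelength and refractive index are illustrative assumptions, not the published implementation:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def axial_strain_vector(a1, a2, dz_m, wavelength_m=1.3e-6, n_tissue=1.38, win=8):
            """a1, a2: complex OCT frames (depth x lateral) before/after deformation."""
            b = a2 * np.conj(a1)                           # interframe phase, kept as a complex vector
            g = b[1:, :] * np.conj(b[:-1, :])              # axial phase-variation gradient, still complex
            g_avg = (uniform_filter(g.real, win)           # vector (complex) averaging over a window
                     + 1j * uniform_filter(g.imag, win))
            dphi_dz = np.angle(g_avg) / dz_m               # the phase is singled out only at this last step
            return dphi_dz * wavelength_m / (4.0 * np.pi * n_tissue)  # local axial strain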

  6. Estimation of lattice strain for zirconia nanoparticles based on Williamson- Hall analysis

    Energy Technology Data Exchange (ETDEWEB)

    Aly, Kamal A., E-mail: kamalaly2001@gmail.com [Physics Department, Faculty of Science & Arts, Khullais, University of Jeddah, Jeddah (Saudi Arabia); Physics Department, Faculty of Science, Al-Azhar University, Assiut Branch, Assiut (Egypt); Khalil, N.M. [Chemistry Department, Faculty of Science & Arts, Khullais, University of Jeddah, Jeddah (Saudi Arabia); Refractories, Ceramics and Building Materials Department, National Research Centre, 12311, Cairo (Egypt); Algamal, Yousif [Chemistry Department, Faculty of Science & Arts, Khullais, University of Jeddah, Jeddah (Saudi Arabia); Saleem, Qaid M.A. [Chemistry Department, Faculty of Science & Arts, Khullais, University of Jeddah, Jeddah (Saudi Arabia); Aden University, Shabwa (Yemen)

    2017-06-01

Nanoparticles of zirconia (ZrO₂) were prepared by the neutralization of zirconium oxychloride octahydrate (ZrOCl₂·8H₂O) (2 M) and ammonia solution (2 M) at pH 8. The ZrO₂ crystalline state was revealed by X-ray diffraction (XRD). The analysis of scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images reveals that the as-synthesized ZrO₂ particles fired at 800 °C are uniform and in the range of 30 nm. Increasing the temperature up to 1100 °C leads to an increase in particle size and alters the powder shape due to agglomeration arising from zirconia calcination as well as the increase in particle size. X-ray peak broadening analysis (XRDBA) was used in the estimation of the crystallite size. Williamson-Hall (W-H) analysis was applied successfully to determine the energy density, stress, and strain values via the uniform deformation model (UDM), uniform deformation stress model (UDSM) and uniform deformation energy density model (UDEDM). The root mean square strain was calculated. The different strain values obtained from these models predict the isotropic behavior of the zirconia. In addition, the W-H analysis results were discussed in terms of those obtained from Scherrer's relationship, SEM and TEM images. - Graphical abstract: XRD patterns for zirconia nano-particles at different calcination temperatures. - Highlights: • Nanoparticles of zirconia (ZrO₂) were synthesized. • The ZrO₂ crystalline state was revealed by XRD, SEM and TEM. • SEM and TEM images reveal that the ZrO₂ particles are uniform and relatively small. • Both blocky particles and the powder shape are affected by the firing temperature. • The crystallite sizes were estimated using X-ray peak broadening analysis (XRDBA).
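
    The uniform deformation model (UDM) step of a Williamson-Hall analysis is a straight-line fit of β·cosθ against 4·sinθ: the intercept gives the crystallite size and the slope the microstrain. A minimal sketch, assuming peak breadths in radians and the Cu Kα wavelength; the peak list is illustrative:

        import numpy as np

        def williamson_hall_udm(two_theta_deg, beta_rad, wavelength_nm=0.15406, K=0.9):
            """Returns (crystallite size in nm, microstrain) from the UDM line
            beta*cos(theta) = K*lambda/D + 4*strain*sin(theta)."""
            theta = np.radians(np.asarray(two_theta_deg, float) / 2.0)
            beta = np.asarray(beta_rad, float)
            x = 4.0 * np.sin(theta)
            y = beta * np.cos(theta)
            slope, intercept = np.polyfit(x, y, 1)
            return K * wavelength_nm / intercept, slope

        # illustrative peak list: (2-theta in degrees, integral breadth in radians)
        size, strain = williamson_hall_udm([30.2, 35.3, 50.4, 60.2],
                                           [0.0052, 0.0055, 0.0060, 0.0068])
        print(f"D ~ {size:.0f} nm, strain ~ {strain:.4f}")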

  7. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    International Nuclear Information System (INIS)

    Elert, M.

    1996-09-01

deterministic case, and the uncertainty bands did not always overlap. This suggests that there are considerable model uncertainties present, which were not considered in this study. Concerning possible constraints in the application domain of different models, the results of this exercise suggest that if only the evolution of the root zone concentration is to be predicted, all of the studied models give comparable results. However, if also the flux to the groundwater is to be predicted, then a considerably increased amount of detail is needed concerning the model and the parameterization. This applies to the hydrological as well as the transport modelling. The difference in model predictions and the magnitude of uncertainty was quite small for some of the end-points predicted, while for others it could span many orders of magnitude. Of special importance were end-points where delay in the soil was involved, e.g. release to the groundwater. In such cases the influence of radioactive decay gave rise to strongly non-linear effects. The work in the subgroup has provided many valuable insights on the effects of model simplifications, e.g. discretization in the model, averaging of the time varying input parameters and the assignment of uncertainties to parameters. The conclusions that have been drawn concerning these are primarily valid for the studied scenario. However, we believe that they are to a large extent also generally applicable. The subgroup has had many opportunities to study the pitfalls involved in model comparison. The intention was to provide a well defined scenario for the subgroup, but despite several iterations misunderstandings and ambiguities remained. The participants have been forced to scrutinize their models to try to explain differences in the predictions and most, if not all, of the participants have improved their models as a result of this

  8. Performance Analysis and Experimental Validation of the Direct Strain Imaging Method

    Science.gov (United States)

    Athanasios Iliopoulos; John G. Michopoulos; John C. Hermanson

    2013-01-01

    Direct Strain Imaging accomplishes full field measurement of the strain tensor on the surface of a deforming body, by utilizing arbitrarily oriented engineering strain measurements originating from digital imaging. In this paper an evaluation of the method’s performance with respect to its operating parameter space is presented along with a preliminary...

  9. Valid and efficient manual estimates of intracranial volume from magnetic resonance images

    International Nuclear Information System (INIS)

    Klasson, Niklas; Olsson, Erik; Rudemo, Mats; Eckerström, Carl; Malmgren, Helge; Wallin, Anders

    2015-01-01

Manual segmentations of the whole intracranial vault in high-resolution magnetic resonance images are often regarded as very time-consuming. Therefore it is common to only segment a few linearly spaced intracranial areas to estimate the whole volume. The purpose of the present study was to evaluate how the validity of intracranial volume estimates is affected by the chosen interpolation method, orientation of the intracranial areas and the linear spacing between them. Intracranial volumes were manually segmented on 62 participants from the Gothenburg MCI study using 1.5 T, T1-weighted magnetic resonance images. Estimates of the intracranial volumes were then derived using subsamples of linearly spaced coronal, sagittal or transversal intracranial areas from the same volumes. The subsamples of intracranial areas were interpolated into volume estimates by three different interpolation methods. The linear spacing between the intracranial areas ranged from 2 to 50 mm and the validity of the estimates was determined by comparison with the entire intracranial volumes. A progressive decrease in intra-class correlation and an increase in percentage error could be seen with increased linear spacing between intracranial areas. With small linear spacing (≤15 mm), orientation of the intracranial areas and interpolation method had negligible effects on the validity. With larger linear spacing, the best validity was achieved using cubic spline interpolation with either coronal or sagittal intracranial areas. Even at a linear spacing of 50 mm, cubic spline interpolation on either coronal or sagittal intracranial areas had a mean absolute agreement intra-class correlation with the entire intracranial volumes above 0.97. Cubic spline interpolation in combination with linearly spaced sagittal or coronal intracranial areas overall resulted in the most valid and robust estimates of intracranial volume. Using this method, valid ICV estimates could be obtained in less than five
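
    The interpolation step can be reproduced directly: sample the segmented intracranial areas at a fixed slice spacing, fit a cubic spline to area as a function of slice position, and integrate. A minimal sketch, assuming areas in mm² and positions in mm; the bell-shaped area profile below is illustrative only:

        import numpy as np
        from scipy.interpolate import CubicSpline

        def icv_from_sampled_areas(positions_mm, areas_mm2):
            """Estimate intracranial volume (ml) by integrating a cubic spline
            through linearly spaced intracranial areas."""
            cs = CubicSpline(positions_mm, areas_mm2)
            vol_mm3 = cs.integrate(positions_mm[0], positions_mm[-1])
            return vol_mm3 / 1000.0                           # mm^3 -> ml

        pos = np.arange(0, 166, 15)                           # coronal areas sampled every 15 mm
        areas = 14000 * np.exp(-((pos - 80) / 60.0) ** 2)     # illustrative area profile, mm^2
        print(f"estimated ICV: {icv_from_sampled_areas(pos, areas):.0f} ml")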

  10. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

Independent component analysis combined with various strategies for cross-validation, uncertainty estimates by jack-knifing and critical Hotelling's T² limits estimation, proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combined approach of methods used in this paper has not been previously applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near infrared hyperspectral images. The results showed that the approach used here can be successfully used for unsupervised classification. A comparison of validation approaches, especially leave-one-out cross-validation and regions of interest scheme validation, is also evaluated.

  11. Validation of Temperature Histories for Structural Steel Welds Using Estimated Heat-Affected-Zone Edges

    Science.gov (United States)

    2016-10-12

Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/6394--16-9690: Validation of Temperature Histories for Structural Steel Welds Using Estimated Heat-Affected-Zone Edges, S.G. Lambrakos.

  12. Composite Cure Process Modeling and Simulations using COMPRO(Registered Trademark) and Validation of Residual Strains using Fiber Optics Sensors

    Science.gov (United States)

    Sreekantamurthy, Thammaiah; Hudson, Tyler B.; Hou, Tan-Hung; Grimsley, Brian W.

    2016-01-01

procedures and residual strain predictions, and discusses pertinent experimental results from the validation studies.

  13. Validity and reliability of Nike + Fuelband for estimating physical activity energy expenditure.

    Science.gov (United States)

    Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A

    2015-01-01

The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants (n = 24) completed two, 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13 %; SWA = 18 ± 18 %). Test-retest reliability for PAEE indicated good stability for the Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike + Fuelband provided valid and reliable estimates of PAEE, similar to the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate- and vigorous-intensity physical activity.

  14. Occupation-specific screening for future sickness absence: Criterion validity of the trucker strain monitor (TSM)

    NARCIS (Netherlands)

    Croon, E.M.de; Blonk, R.W.; Sluiter, J.K.; Frings-Dresen, M.H.

    2005-01-01

    Background: Monitoring psychological job strain may help occupational physicians to take preventive action at the appropriate time. For this purpose, the 10-item trucker strain monitor (TSM) assessing work-related fatigue and sleeping problems in truck drivers was developed. Objectives: This study

  15. Growth, characterization and estimation of lattice strain and size in CdS nanoparticles: X-ray peak profile analysis

    Science.gov (United States)

    Solanki, Rekha Garg; Rajaram, Poolla; Bajpai, P. K.

    2018-05-01

This work is based on the growth, characterization and estimation of lattice strain and crystallite size in CdS nanoparticles by X-ray peak profile analysis. The CdS nanoparticles were synthesized by a non-aqueous solvothermal method and were characterized by powder X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman and UV-visible spectroscopy. XRD confirms that the CdS nanoparticles have the hexagonal structure. The Williamson-Hall (W-H) method was used to study the X-ray peak profile analysis. The strain-size plot (SSP) was used to study the individual contributions of crystallite size and lattice strain from the X-ray peaks. The physical parameters such as strain, stress and energy density values were calculated using various models, namely the isotropic strain model, anisotropic strain model and uniform deformation energy density model. The particle size was estimated from the TEM images to be in the range of 20-40 nm. The Raman spectrum shows the characteristic optical 1LO and 2LO vibrational modes of CdS. UV-visible absorption studies show that the band gap of the CdS nanoparticles is 2.48 eV. The results show that the crystallite size estimated from Scherrer's formula, W-H plots, SSP and the particle size calculated by TEM images are approximately similar.

  16. Robust Backlash Estimation for Industrial Drive-Train Systems—Theory and Validation

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2018-01-01

Backlash compensation is used in modern machine-tool controls to ensure high-accuracy positioning. When wear of a machine causes deadzone width to increase, high-accuracy control may be maintained if the deadzone is accurately estimated. Deadzone estimation is also an important parameter to indica......-of-the-art Siemens equipment. The experiments validate the theory and show that expected performance and robustness to parameter uncertainties are both achieved....

  17. Validation of radiation dose estimations in VRdose: comparing estimated radiation doses with observed radiation doses

    International Nuclear Information System (INIS)

    Nystad, Espen; Sebok, Angelia; Meyer, Geir

    2004-04-01

    The Halden Virtual Reality Centre has developed work-planning software that predicts the radiation exposure of workers in contaminated areas. To validate the accuracy of the predicted radiation dosages, it is necessary to compare predicted doses to actual dosages. During an experimental study conducted at the Halden Boiling Water Reactor (HBWR) hall, the radiation exposure was measured for all participants throughout the test session, ref. HWR-681 [3]. Data from this experimental study have also been used to model tasks in the work-planning software and gather data for predicted radiation exposure. Two different methods were used to predict radiation dosages; one method used all radiation data from all the floor levels in the HBWR (all-data method). The other used only data from the floor level where the task was conducted (isolated data method). The study showed that the all-data method gave predictions that were on average 2.3 times higher than the actual radiation dosages. The isolated-data method gave predictions on average 0.9 times the actual dosages. (Author)

  18. An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking

    Directory of Open Access Journals (Sweden)

    Sujay Saha

    2016-01-01

Most of the gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods which would impute missing data values accurately. There exist a number of imputation algorithms to estimate those missing values. This work starts with a microarray dataset containing multiple missing values. We first apply the modified version of the fuzzy theory based existing method LRFDVImpute to impute multiple missing values of time series gene expression data and then validate the result of imputation by a genetic algorithm (GA) based gene ranking methodology along with some regular statistical validation techniques, like the RMSE method. Gene ranking, as far as our knowledge, has not been used yet to validate the result of missing value estimation. Firstly, the proposed method has been tested on the very popular Spellman dataset and results show that error margins have been drastically reduced compared to some previous works, which indirectly validates the statistical significance of the proposed method. Then it has been applied on four other 2-class benchmark datasets, like the Colorectal Cancer tumours dataset (GDS4382), Breast Cancer dataset (GSE349-350), Prostate Cancer dataset, and DLBCL-FL (Leukaemia), for both missing value estimation and ranking the genes, and the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates the biological significance of the proposed method.

  19. Estimation of Staphylococcus aureus growth parameters from turbidity data: characterization of strain variation and comparison of methods.

    Science.gov (United States)

    Lindqvist, R

    2006-07-01

    Turbidity methods offer possibilities for generating data required for addressing microorganism variability in risk modeling given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time to detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time to detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested to apply a time to detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider implications of strain variability for predictive modeling and risk assessment.
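
    The time-to-detection approach referred to here relies on the fact that, for serially diluted cultures growing exponentially to a fixed turbidity threshold, detection time increases linearly with the natural log of the dilution factor, so the reciprocal of the fitted slope estimates the maximum specific growth rate. A minimal sketch under those assumptions, with illustrative data rather than values from the study:

        import numpy as np

        def mu_max_from_ttd(ln_dilution, detection_time_h):
            """Slope of detection time vs ln(dilution factor) equals 1/mu_max
            when growth is exponential and the detection threshold is fixed."""
            slope, _ = np.polyfit(ln_dilution, detection_time_h, 1)
            return 1.0 / slope                                # maximum specific growth rate, 1/h

        dilutions = np.array([1, 10, 100, 1000, 10000])       # serial 10-fold dilutions
        ttd = np.array([6.1, 10.8, 15.3, 20.1, 24.8])         # hours to reach the turbidity threshold
        print(f"mu_max ~ {mu_max_from_ttd(np.log(dilutions), ttd):.2f} 1/h")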

  20. Evaluation of wet bulb globe temperature index for estimation of heat strain in hot/humid conditions in the Persian Gulf.

    Science.gov (United States)

    Dehghan, Habibolah; Mortazavi, Seyed Bagher; Jafari, Mohammad J; Maracy, Mohammad R

    2012-12-01

    Heat exposure among construction workers in the Persian Gulf region is a serious health hazard. The aim of this study was to evaluate the performance of the wet bulb globe temperature (WBGT) index for estimation of heat strain in hot/humid conditions, using the Physiological Strain Index (PSI) as the gold standard. This cross-sectional study was carried out on 71 workers of two petrochemical companies in the south of Iran in the summer of 2010. The WBGT index, heart rate, and aural temperature were measured by a heat stress monitor (Casella Microtherm WBGT), a heart rate monitor (Polar RS100), and a personal heat strain monitor (Questemp II), respectively. The data were analyzed with descriptive statistics and Pearson correlation analysis. The mean (SD) of the WBGT values was 33.1 (2.7). The WBGT values exceeded the American Conference of Governmental Industrial Hygienists (ACGIH) standard (30°C) in 96% of work stations, whereas the PSI values were above 5.0 (moderate strain) in only 11% of work stations. The correlation between WBGT and PSI values was 0.61 (P = 0.001). When WBGT values were below and above 34°C, the mean PSI was 2.6 (low strain) and 5.2 (moderate strain), respectively. In the hot and humid summer weather of the Persian Gulf, because the WBGT values exceeded 30°C in 96% of cases while the correlation between WBGT and PSI was weak, the WBGT-based work/rest cycles are not suitable for heat stress management. Therefore, in Persian Gulf weather, heat stress evaluation based on physiological variables may have higher validity than the WBGT index.
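
    The PSI used as the gold standard above combines resting and exposure values of core temperature and heart rate into a 0-10 score. The sketch below implements the commonly cited Moran-style formulation; the 39.5°C and 180 bpm ceilings and the example readings are assumptions for illustration, with aural temperature standing in for core temperature as in this study.

    def physiological_strain_index(core_t0, core_t, hr0, hr, core_max=39.5, hr_max=180.0):
        # core_t0/hr0: resting core temperature (deg C) and heart rate (bpm);
        # core_t/hr: values measured during heat exposure. Result clamped to 0-10.
        psi = 5.0 * (core_t - core_t0) / (core_max - core_t0) \
            + 5.0 * (hr - hr0) / (hr_max - hr0)
        return max(0.0, min(10.0, psi))

    # Example: aural temperature rising from 36.9 to 37.8 deg C, heart rate 72 -> 128 bpm.
    print(physiological_strain_index(36.9, 37.8, 72, 128))   # ~4.3 on the 0-10 scale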

  1. Evaluation of wet bulb globe temperature index for estimation of heat strain in hot/humid conditions in the Persian Gulf

    Directory of Open Access Journals (Sweden)

    Habibolah Dehghan

    2012-01-01

    Full Text Available Background: Heat exposure among construction workers in the Persian Gulf region is a serious health hazard. The aim of this study was to evaluate the performance of the wet bulb globe temperature (WBGT) index for estimation of heat strain in hot/humid conditions, using the Physiological Strain Index (PSI) as the gold standard. Material and Methods: This cross-sectional study was carried out on 71 workers of two petrochemical companies in the south of Iran in the summer of 2010. The WBGT index, heart rate, and aural temperature were measured by a heat stress monitor (Casella Microtherm WBGT), a heart rate monitor (Polar RS100), and a personal heat strain monitor (Questemp II), respectively. The data were analyzed with descriptive statistics and Pearson correlation analysis. Results: The mean (SD) of the WBGT values was 33.1 (2.7). The WBGT values exceeded the American Conference of Governmental Industrial Hygienists (ACGIH) standard (30°C) in 96% of work stations, whereas the PSI values were above 5.0 (moderate strain) in only 11% of work stations. The correlation between WBGT and PSI values was 0.61 (P = 0.001). When WBGT values were below and above 34°C, the mean PSI was 2.6 (low strain) and 5.2 (moderate strain), respectively. Conclusion: In the hot and humid summer weather of the Persian Gulf, because the WBGT values exceeded 30°C in 96% of cases while the correlation between WBGT and PSI was weak, the WBGT-based work/rest cycles are not suitable for heat stress management. Therefore, in Persian Gulf weather, heat stress evaluation based on physiological variables may have higher validity than the WBGT index.

  2. Validity of 20-metre multi stage shuttle run test for estimation of ...

    African Journals Online (AJOL)

    Validity of the 20-metre multi-stage shuttle run test for estimation of maximum oxygen uptake in Indian male university students. P Chatterjee, AK Banerjee, P Debnath, P Bas, B Chatterjee. Abstract: No abstract available. South African Journal for Physical, Health Education, Recreation and Dance, Vol. 12(4), 2006: pp. 461-467.

  3. Rapid estimation of the moment magnitude of the 2011 off the Pacific coast of Tohoku earthquake from coseismic strain steps

    Science.gov (United States)

    Itaba, S.; Matsumoto, N.; Kitagawa, Y.; Koizumi, N.

    2012-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred at 14:46 Japan Standard Time (JST) on March 11, 2011. The coseismic strain steps caused by the fault slip of this earthquake were observed in the Tokai, Kii Peninsula and Shikoku regions by borehole strainmeters that were carefully installed by the Geological Survey of Japan, AIST. Using these strain steps, we estimated a fault model for the earthquake on the boundary between the Pacific and North American plates. Our model, which is estimated from only several minutes of strain data, is largely consistent with the final fault models estimated from GPS and seismic wave data. The moment magnitude can be estimated about 6 minutes after the origin time, and 4 minutes after wave arrival. According to the fault model, the moment magnitude of the earthquake is 8.7. On the other hand, the magnitude promptly announced by the Japan Meteorological Agency just after the earthquake, based on seismic waves, was 7.9. Coseismic strain steps are generally considered less reliable than seismic waves and GPS data. However, our results show that coseismic strain steps observed by carefully installed and monitored borehole strainmeters are reliable enough to determine the earthquake magnitude precisely and rapidly. Several methods are now being proposed to determine the magnitude of a great earthquake earlier and thereby reduce earthquake disasters, including tsunami. Our simple method using strain steps is one strong candidate for rapid estimation of the magnitude of great earthquakes.
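
    The core of the approach described above, fitting observed strain steps to a fault model and converting the resulting seismic moment into a moment magnitude, can be illustrated with a minimal least-squares sketch. The Green's functions G, the patch areas, the 30 GPa rigidity, and the two-station example values below are assumptions for illustration; a real inversion would use elastic dislocation Green's functions (e.g., Okada-type solutions) and constraints on the slip direction.

    import numpy as np

    def invert_slip_and_magnitude(G, strain_steps, patch_areas_m2, rigidity_pa=3.0e10):
        # G maps slip (m) on each fault patch to the strain step at each station.
        slip, *_ = np.linalg.lstsq(G, strain_steps, rcond=None)
        m0 = rigidity_pa * np.sum(patch_areas_m2 * slip)   # scalar seismic moment, N*m
        mw = (np.log10(m0) - 9.1) / 1.5                    # moment magnitude
        return slip, m0, mw

    # Single-patch toy example: two stations, Green's functions in strain per metre of slip.
    G = np.array([[2.0e-8], [5.0e-9]])
    obs = np.array([1.0e-7, 2.4e-8])                       # observed coseismic strain steps
    slip, m0, mw = invert_slip_and_magnitude(G, obs, patch_areas_m2=np.array([4.0e10]))
    print(slip, mw)                                        # ~5 m of slip, Mw ~8.5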

  4. Assessing Lifetime Stress Exposure Using the Stress and Adversity Inventory for Adults (Adult STRAIN): An Overview and Initial Validation

    Science.gov (United States)

    Slavich, George M.; Shields, Grant S.

    2018-01-01

    ABSTRACT Objective Numerous theories have proposed that acute and chronic stressors may exert a cumulative effect on life-span health by causing biological “wear and tear,” or allostatic load, which in turn promotes disease. Very few studies have directly tested such models, though, partly because of the challenges associated with efficiently assessing stress exposure over the entire life course. To address this issue, we developed the first online system for systematically assessing lifetime stress exposure, called the Stress and Adversity Inventory (STRAIN), and describe its initial validation here. Methods Adults recruited from the community (n = 205) were administered the STRAIN, Childhood Trauma Questionnaire—Short Form, and Perceived Stress Scale, as well as measures of socioeconomic status, personality, social desirability, negative affect, mental and physical health complaints, sleep quality, computer-assessed executive function, and doctor-diagnosed general health problems and autoimmune disorders. Results The STRAIN achieved high acceptability and was completed relatively quickly (mean = 18 minutes 39 seconds; interquartile range = 12–23 minutes). The structure of the lifetime stress data best fit two latent classes overall and five distinct trajectories over time. Concurrent associations with the Childhood Trauma Questionnaire—Short Form and Perceived Stress Scale were good (r values = .147–.552). Moreover, the STRAIN was not significantly related to personality traits or social desirability characteristics and, in adjusted analyses, emerged as the measure most strongly associated with all six of the health and cognitive outcomes assessed except current mental health complaints (β values = .16–.41; risk ratios = 1.02–1.04). Finally, test-retest reliability for the main stress exposure indices over 2–4 weeks was excellent (r values = .904–.919). Conclusions The STRAIN demonstrated good usability and acceptability; very good concurrent

  5. Assessing Lifetime Stress Exposure Using the Stress and Adversity Inventory for Adults (Adult STRAIN): An Overview and Initial Validation.

    Science.gov (United States)

    Slavich, George M; Shields, Grant S

    2018-01-01

    Numerous theories have proposed that acute and chronic stressors may exert a cumulative effect on life-span health by causing biological "wear and tear," or allostatic load, which in turn promotes disease. Very few studies have directly tested such models, though, partly because of the challenges associated with efficiently assessing stress exposure over the entire life course. To address this issue, we developed the first online system for systematically assessing lifetime stress exposure, called the Stress and Adversity Inventory (STRAIN), and describe its initial validation here. Adults recruited from the community (n = 205) were administered the STRAIN, Childhood Trauma Questionnaire-Short Form, and Perceived Stress Scale, as well as measures of socioeconomic status, personality, social desirability, negative affect, mental and physical health complaints, sleep quality, computer-assessed executive function, and doctor-diagnosed general health problems and autoimmune disorders. The STRAIN achieved high acceptability and was completed relatively quickly (mean = 18 minutes 39 seconds; interquartile range = 12-23 minutes). The structure of the lifetime stress data best fit two latent classes overall and five distinct trajectories over time. Concurrent associations with the Childhood Trauma Questionnaire-Short Form and Perceived Stress Scale were good (r values = .147-.552). Moreover, the STRAIN was not significantly related to personality traits or social desirability characteristics and, in adjusted analyses, emerged as the measure most strongly associated with all six of the health and cognitive outcomes assessed except current mental health complaints (β values = .16-.41; risk ratios = 1.02-1.04). Finally, test-retest reliability for the main stress exposure indices over 2-4 weeks was excellent (r values = .904-.919). The STRAIN demonstrated good usability and acceptability; very good concurrent, discriminant, and predictive validity; and excellent test

  6. Face and Convergent Validity of Persian Version of Rapid Office Strain Assessment (ROSA) Checklist

    Directory of Open Access Journals (Sweden)

    Afrouz Armal

    2016-01-01

    Full Text Available Objective: The aim of this work was the translation, cultural adaptation and validation of the Persian version of the Rapid Office Strain Assessment (ROSA) checklist. Material & Methods: This methodological study was conducted according to the IQOLA method. 100 office workers were selected in order to carry out a psychometric evaluation of the ROSA checklist by performing validity (face and convergent) analyses. The convergent validity was evaluated using the RULA checklist. Results: After major changes made to the ROSA checklist during the translation/cultural adaptation process, face validity of the Persian version was obtained. The Spearman correlation coefficient between the total scores of the ROSA and RULA checklists was significant (r=0.76, p<0.0001). Conclusion: The results indicated that the translated version of the ROSA checklist is acceptable in terms of face and convergent validity in the target society, and hence provides a useful instrument for assessing Iranian office workers.

  7. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    Science.gov (United States)

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

    Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
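
    To make the contrast between the Gaussian and percentile approaches above concrete, the sketch below computes both forms of a 95% normal range for a placeholder set of hematocrit values; the tolerance-interval method and the real trout data are not reproduced here.

    import numpy as np

    def normal_ranges_95(values):
        # Gaussian range (mean +/- 1.96 SD) versus nonparametric 2.5th-97.5th percentiles.
        values = np.asarray(values, dtype=float)
        gauss = (values.mean() - 1.96 * values.std(ddof=1),
                 values.mean() + 1.96 * values.std(ddof=1))
        percentile = tuple(np.percentile(values, [2.5, 97.5]))
        return {"gaussian": gauss, "percentile": percentile}

    # Hypothetical hematocrit values (%) for fish held under defined conditions.
    hct = np.random.default_rng(0).normal(loc=32.0, scale=3.0, size=120)
    print(normal_ranges_95(hct))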

  8. Validation of differential gene expression algorithms: Application comparing fold-change estimation to hypothesis testing

    Directory of Open Access Journals (Sweden)

    Bickel David R

    2010-01-01

    Full Text Available Abstract Background Sustained research on the problem of determining which genes are differentially expressed on the basis of microarray data has yielded a plethora of statistical algorithms, each justified by theory, simulation, or ad hoc validation and yet differing in practical results from equally justified algorithms. Recently, a concordance method that measures agreement among gene lists has been introduced to assess various aspects of differential gene expression detection. This method has the advantage of basing its assessment solely on the results of real data analyses, but as it requires examining gene lists of given sizes, it may be unstable. Results Two methodologies for assessing predictive error are described: a cross-validation method and a posterior predictive method. As a nonparametric method of estimating prediction error from observed expression levels, cross validation provides an empirical approach to assessing algorithms for detecting differential gene expression that is fully justified for large numbers of biological replicates. Because it leverages the knowledge that only a small portion of genes are differentially expressed, the posterior predictive method is expected to provide more reliable estimates of algorithm performance, allaying concerns about limited biological replication. In practice, the posterior predictive method can assess when its approximations are valid and when they are inaccurate. Under conditions in which its approximations are valid, it corroborates the results of cross validation. Both comparison methodologies are applicable to both single-channel and dual-channel microarrays. For the data sets considered, estimating prediction error by cross validation demonstrates that empirical Bayes methods based on hierarchical models tend to outperform algorithms based on selecting genes by their fold changes or by non-hierarchical model-selection criteria. (The latter two approaches have comparable
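
    As a minimal illustration of the cross-validation idea discussed above, the sketch below estimates the prediction error of a simple classifier on a synthetic two-class expression matrix. The data, the logistic-regression classifier, and the 5-fold split are placeholder choices, not the empirical Bayes or fold-change algorithms compared in the record.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Toy two-class expression matrix: 40 samples x 500 genes (placeholder data).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 500))
    y = np.repeat([0, 1], 20)
    X[y == 1, :10] += 1.0            # make the first 10 genes "differentially expressed"

    # Cross-validated prediction error as an empirical yardstick for a detection rule.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print("estimated prediction error:", 1.0 - scores.mean())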

  9. Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.

    Science.gov (United States)

    Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M

    2016-03-11

    Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate predictive equations of BIA to FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 males, Brazilian Army cadets, aged 17-24 years were included. The study used eight published predictive BIA equations, a specific equation in FFM estimation, and dual-energy X-ray absorptiometry (DXA) as a reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. Predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman. Predictive BIA equations explained 68% to 88% of the FFM variance. Specific BIA equations showed no significant differences in FFM, compared to DXA values. Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equations, developed in this study, demonstrated validity for this sample, although they should be used with caution in samples with a large range of FFM.

  10. Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets

    Directory of Open Access Journals (Sweden)

    Raquel D. Langer

    2016-03-01

    Full Text Available Background: Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate predictive equations of BIA to FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. Methods: A total of 396 males, Brazilian Army cadets, aged 17–24 years were included. The study used eight published predictive BIA equations, a specific equation in FFM estimation, and dual-energy X-ray absorptiometry (DXA) as a reference method. Student’s t-test (for paired samples), linear regression analysis, and the Bland–Altman method were used to test the validity of the BIA equations. Results: Predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland–Altman. Predictive BIA equations explained 68% to 88% of the FFM variance. Specific BIA equations showed no significant differences in FFM, compared to DXA values. Conclusion: Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equations, developed in this study, demonstrated validity for this sample, although they should be used with caution in samples with a large range of FFM.
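
    The Bland-Altman agreement analysis referred to in both records reduces to a mean bias and 1.96-SD limits of agreement on the paired differences. The sketch below shows that calculation; the paired BIA and DXA values are made-up numbers, not data from the study.

    import numpy as np

    def bland_altman(ffm_bia, ffm_dxa):
        # Mean bias and 95% limits of agreement between paired BIA and DXA estimates (kg).
        diff = np.asarray(ffm_bia, float) - np.asarray(ffm_dxa, float)
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)

    # Hypothetical paired fat-free mass estimates for a handful of cadets.
    bia = [62.1, 58.4, 66.0, 70.3, 61.7]
    dxa = [60.5, 59.2, 64.1, 68.8, 62.0]
    print(bland_altman(bia, dxa))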

  11. Validation of generic cost estimates for construction-related activities at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Simion, G.; Sciacca, F.; Claiborne, E.; Watlington, B.; Riordan, B.; McLaughlin, M.

    1988-05-01

    This report represents a validation study of the cost methodologies and quantitative factors derived in Labor Productivity Adjustment Factors and Generic Methodology for Estimating the Labor Cost Associated with the Removal of Hardware, Materials, and Structures From Nuclear Power Plants. This cost methodology was developed to support NRC analysts in determining generic estimates of removal, installation, and total labor costs for construction-related activities at nuclear generating stations. In addition to the validation discussion, this report reviews the generic cost analysis methodology employed. It also discusses each of the individual cost factors used in estimating the costs of physical modifications at nuclear power plants. The generic estimating approach presented uses the "greenfield" or new plant construction installation costs compiled in the Energy Economic Data Base (EEDB) as a baseline. These baseline costs are then adjusted to account for labor productivity, radiation fields, learning curve effects, and impacts on ancillary systems or components. For comparisons of estimated vs actual labor costs, approximately four dozen actual cost data points (as reported by 14 nuclear utilities) were obtained. Detailed background information was collected on each individual data point to give the best understanding possible so that the labor productivity factors, removal factors, etc., could judiciously be chosen. This study concludes that cost estimates that are typically within 40% of the actual values can be generated by prudently using the methodologies and cost factors investigated herein.

  12. Validation Tests of Fiber Optic Strain-Based Operational Shape and Load Measurements

    Science.gov (United States)

    Bakalyar, John A.; Jutte, Christine

    2012-01-01

    Aircraft design has been progressing toward reduced structural weight to improve fuel efficiency, increase performance, and reduce cost. Lightweight aircraft structures are more flexible than conventional designs and require new design considerations. Intelligent sensing allows for enhanced control and monitoring of aircraft, which enables increased structural efficiency. The NASA Dryden Flight Research Center (DFRC) has developed an instrumentation system and analysis techniques that combine to make distributed structural measurements practical for lightweight vehicles. Dryden's Fiber Optic Strain Sensing (FOSS) technology enables a multitude of lightweight, distributed surface strain measurements. The analysis techniques, referred to as the Displacement Transfer Functions (DTF) and Load Transfer Functions (LTF), use surface strain values to calculate structural deflections and operational loads. The combined system is useful for real-time monitoring of aeroelastic structures, along with many other applications. This paper describes how the capabilities of the measurement system were demonstrated using subscale test articles that represent simple aircraft structures. Empirical FOSS strain data were used within the DTF to calculate the displacement of the article and within the LTF to calculate bending moments due to loads acting on the article. The results of the tests, accuracy of the measurements, and a sensitivity analysis are presented.

  13. [3H] Thymidine incorporation to estimate growth rates of anaerobic bacterial strains

    International Nuclear Information System (INIS)

    Winding, A.

    1992-01-01

    The incorporation of [3H] thymidine by axenic cultures of anaerobic bacteria was investigated as a means to measure growth. The three fermentative strains and one of the methanogenic strains tested incorporated [3H] thymidine during growth. It is concluded that the [3H] thymidine incorporation method underestimates bacterial growth in anaerobic environments.

  14. Reliability, Construct Validity and Interpretability of the Brazilian version of the Rapid Upper Limb Assessment (RULA) and Strain Index (SI).

    Science.gov (United States)

    Valentim, Daniela Pereira; Sato, Tatiana de Oliveira; Comper, Maria Luiza Caíres; Silva, Anderson Martins da; Boas, Cristiana Villas; Padula, Rosimeire Simprini

    There are very few observational methods for analysis of biomechanical exposure available in Brazilian Portuguese. This study aimed to cross-culturally adapt and test the measurement properties of the Rapid Upper Limb Assessment (RULA) and Strain Index (SI). The cross-cultural adaptation and measurement properties tests were established according to Beaton et al. and the COSMIN guidelines, respectively. Several tasks that required static posture and/or repetitive motion of the upper limbs were evaluated (n>100). The intra-rater reliability ranged from poor to almost perfect for the RULA (k: 0.00-0.93) and from poor to excellent for the SI (ICC(2,1): 0.05-0.99). The inter-rater reliability was very poor for the RULA (k: -0.12 to 0.13) and ranged from very poor to moderate for the SI (ICC(2,1): 0.00-0.53). The agreement was good for the RULA (75-100% intra-raters, and 42.24-100% inter-raters) and for the SI (EPM: -1.03% to 1.97% intra-raters, and -0.17% to 1.51% inter-raters). The internal consistency was appropriate for the RULA (α=0.88) and low for the SI (α=0.65). Moderate construct validity was observed between the RULA and SI, in wrist/hand-wrist posture (rho: 0.61) and strength/intensity of exertion (rho: 0.39). The adapted versions of the RULA and SI presented semantic and cultural equivalence for Brazilian Portuguese. The RULA and SI had reliability estimates ranging from very poor to almost perfect. The internal consistency for the RULA was better than for the SI. The correlation between the methods was moderate only for muscle demand/movement repetition. Previous training is mandatory for the use of observational methods for biomechanical exposure assessment, although it does not guarantee good reproducibility of these measures. Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Published by Elsevier Editora Ltda. All rights reserved.

  15. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
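
    The key idea in the maximum-likelihood approach mentioned above is to optimise over a parameterisation that can only produce physical (positive semidefinite, unit-trace) density matrices. The single-qubit sketch below uses a Cholesky-style parameterisation together with Pauli-basis measurement frequencies; the frequencies and the Nelder-Mead optimiser are illustrative assumptions, not the protocol of the cited NMR experiment.

    import numpy as np
    from scipy.optimize import minimize

    # Pauli eigen-projectors used as the measurement operators for one qubit.
    I2 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    projectors = [(I2 + sign * s) / 2 for s in (sx, sy, sz) for sign in (+1, -1)]

    def rho_from_params(t):
        # Cholesky-like parameterisation: rho = T^dagger T / Tr, positive by construction.
        T = np.array([[t[0], 0], [t[2] + 1j * t[3], t[1]]], dtype=complex)
        rho = T.conj().T @ T
        return rho / np.trace(rho).real

    def neg_log_likelihood(t, freqs):
        rho = rho_from_params(t)
        probs = np.array([np.trace(rho @ P).real for P in projectors])
        return -np.sum(freqs * np.log(np.clip(probs, 1e-12, None)))

    # Hypothetical measured relative frequencies for the +x,-x,+y,-y,+z,-z outcomes.
    freqs = np.array([0.85, 0.15, 0.52, 0.48, 0.70, 0.30])
    res = minimize(neg_log_likelihood, x0=[1.0, 1.0, 0.0, 0.0], args=(freqs,), method="Nelder-Mead")
    rho_ml = rho_from_params(res.x)
    print(np.round(rho_ml, 3), np.linalg.eigvalsh(rho_ml))   # eigenvalues are non-negative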

  16. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    International Nuclear Information System (INIS)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K.

    2016-01-01

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)
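
    The modelling step described above is an ordinary linear regression of age on fibular shaft length, followed by error metrics on a held-out set. The sketch below shows that workflow with invented length/age pairs; it does not reproduce the study's data or its reported error figures.

    import numpy as np

    # Hypothetical development pairs: fibular shaft length (mm) and chronological age (days).
    length_mm = np.array([52, 58, 63, 70, 78, 85, 91, 98], dtype=float)
    age_days = np.array([14, 45, 80, 130, 185, 245, 300, 360], dtype=float)

    # Ordinary linear regression on the development set.
    coef = np.polyfit(length_mm, age_days, 1)
    predict = lambda x: np.polyval(coef, x)

    # Error metrics on a hypothetical testing set.
    test_len = np.array([55.0, 74.0, 95.0])
    test_age = np.array([30.0, 160.0, 330.0])
    err = predict(test_len) - test_age
    print("RMSE (days):", np.sqrt(np.mean(err ** 2)), "MAE (days):", np.mean(np.abs(err)))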

  17. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K. [Boston Children' s Hospital, Harvard Medical School, Department of Radiology, Boston, MA (United States)

    2016-03-15

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)

  18. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  19. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  20. A validated HPTLC method for estimation of moxifloxacin hydrochloride in tablets.

    Science.gov (United States)

    Dhillon, Vandana; Chaudhary, Alok Kumar

    2010-10-01

    A simple HPTLC method with high accuracy, precision and reproducibility was developed for the routine estimation of moxifloxacin hydrochloride in tablets available in the market and was validated for various parameters according to ICH guidelines. Moxifloxacin hydrochloride was estimated at 292 nm by densitometry using silica gel 60 F254 as the stationary phase and a premix of methylene chloride:methanol:strong ammonia solution and acetonitrile (10:10:5:10) as the mobile phase. The method was found to be linear over a range of 9-54 nanograms with a correlation coefficient >0.99. The regression equation was: AUC = 65.57 × (amount in nanograms) + 163 (r² = 0.9908).
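
    Given the reported calibration line AUC = 65.57 × (amount in ng) + 163, estimating an unknown amount from a measured peak area is a one-line inversion, sketched below; the example AUC value is invented and is only meaningful inside the calibrated 9-54 ng range.

    def amount_ng_from_auc(auc, slope=65.57, intercept=163.0):
        # Invert the reported calibration line AUC = slope * amount(ng) + intercept.
        return (auc - intercept) / slope

    # A sample spot whose densitometric peak area falls inside the calibrated range.
    print(amount_ng_from_auc(2500.0))   # ~35.6 ng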

  1. Development and validation of GFR-estimating equations using diabetes, transplant and weight

    DEFF Research Database (Denmark)

    Stevens, L.A.; Schmid, C.H.; Zhang, Y.L.

    2009-01-01

    BACKGROUND: We have reported a new equation (CKD-EPI equation) that reduces bias and improves accuracy for GFR estimation compared to the MDRD study equation while using the same four basic predictor variables: creatinine, age, sex and race. Here, we describe the development and validation of this equation as well as other equations that incorporate diabetes, transplant and weight as additional predictor variables. METHODS: Linear regression was used to relate log-measured GFR (mGFR) to sex, race, diabetes, transplant, weight, and various transformations of creatinine and age, with and without interactions. Equations were developed in a pooled database of 10 studies [2/3 (N = 5504) for development and 1/3 (N = 2750) for internal validation], and final model selection occurred in 16 additional studies [external validation (N = 3896)]. RESULTS: The mean mGFR was 68, 67 and 68 ml/min/1.73 m(2) ...

  2. Development, Validation, and Verification of a Self-Assessment Tool to Estimate Agnibala (Digestive Strength).

    Science.gov (United States)

    Singh, Aparna; Singh, Girish; Patwardhan, Kishor; Gehlot, Sangeeta

    2017-01-01

    According to Ayurveda, the traditional system of healthcare of Indian origin, Agni is the factor responsible for digestion and metabolism. Four functional states (Agnibala) of Agni have been recognized: regular, irregular, intense, and weak. The objective of the present study was to develop and validate a self-assessment tool to estimate Agnibala. The developed tool was evaluated for its reliability and validity by administering it to 300 healthy volunteers of either gender belonging to the 18- to 40-year age group. Besides confirming the statistical validity and reliability, the practical utility of the newly developed tool was also evaluated by recording serum lipid parameters of all the volunteers. The results show that the lipid parameters vary significantly according to the status of Agni. The tool, therefore, may be used to screen the normal population to look for possible susceptibility to certain health conditions. © The Author(s) 2016.

  3. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, Cetin; Williams, Brian; McClure, Patrick; Nelson, Ralph A.

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, the role of experiments was primary tool for design and understanding of nuclear system behavior while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computational based technology development, the experiments will still be needed but they will be performed at different scales to calibrate and validate models leading predictive simulations. Cost saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can be potentially used to address these new challenges in design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety related environments. To achieve this goal requires the additional steps of estimating the domain of validation and quantification of uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M and S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for

  4. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory; Williams, Brian [Los Alamos National Laboratory; Mc Clure, Patrick [Los Alamos National Laboratory; Nelson, Ralph A [IDAHO NATIONAL LAB

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, the role of experiments was primary tool for design and understanding of nuclear system behavior while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computational based technology development, the experiments will still be needed but they will be performed at different scales to calibrate and validate models leading predictive simulations. Cost saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can be potentially used to address these new challenges in design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety related environments. To achieve this goal requires the additional steps of estimating the domain of validation and quantification of uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M&S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost

  5. Finite element modelling of fibre Bragg grating strain sensors and experimental validation

    Science.gov (United States)

    Malik, Shoaib A.; Mahendran, Ramani S.; Harris, Dee; Paget, Mark; Pandita, Surya D.; Machavaram, Venkata R.; Collins, David; Burns, Jonathan M.; Wang, Liwei; Fernando, Gerard F.

    2009-03-01

    Fibre Bragg grating (FBG) sensors continue to be used extensively for monitoring strain and temperature in and on engineering materials and structures. Previous researchers have also developed analytical models to predict the load-transfer characteristics of FBG sensors as a function of applied strain. The general properties of the coating or adhesive that is used to surface-bond the FBG sensor to the substrate have also been modelled using finite element analysis. In this current paper, a technique was developed to surface-mount FBG sensors with a known volume and thickness of adhesive. The substrates used were aluminium dog-bone tensile test specimens. The FBG sensors were tensile tested in a series of ramp-hold sequences until failure. The reflected FBG spectra were recorded using a commercial instrument. Finite element analysis was performed to model the response of the surface-mounted FBG sensors. In the first instance, the effects of the mechanical properties of the adhesive and substrate were modelled. This was followed by modelling the volume of adhesive used to bond the FBG sensor to the substrate. Finally, the predicted values obtained via finite element modelling were correlated to the experimental results. In addition to the FBG sensors, the tensile test specimens were instrumented with surface-mounted electrical resistance strain gauges.
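
    Underlying any FBG strain measurement is the relation between Bragg wavelength shift and axial strain, approximately delta_lambda/lambda = (1 - p_e) * strain at constant temperature. The sketch below applies that relation; the 1550 nm base wavelength and the photo-elastic coefficient of 0.22 are typical silica-fibre values assumed for illustration, not parameters from the paper's finite element model.

    def fbg_strain_microstrain(wavelength_shift_nm, base_wavelength_nm=1550.0, p_e=0.22):
        # Axial strain (microstrain) from a Bragg wavelength shift, temperature held constant.
        strain = wavelength_shift_nm / (base_wavelength_nm * (1.0 - p_e))
        return strain * 1e6

    print(fbg_strain_microstrain(1.2))   # ~990 microstrain for a 1.2 nm shift at 1550 nm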

  6. Validation and Intercomparison of Ocean Color Algorithms for Estimating Particulate Organic Carbon in the Oceans

    Directory of Open Access Journals (Sweden)

    Hayley Evers-King

    2017-08-01

    Full Text Available Particulate Organic Carbon (POC) plays a vital role in the ocean carbon cycle. Though relatively small compared with other carbon pools, the POC pool is responsible for large fluxes and is linked to many important ocean biogeochemical processes. The satellite ocean-color signal is influenced by particle composition, size, and concentration and provides a way to observe variability in the POC pool at a range of temporal and spatial scales. To provide accurate estimates of POC concentration from satellite ocean color data requires algorithms that are well validated, with uncertainties characterized. Here, a number of algorithms to derive POC using different optical variables are applied to merged satellite ocean color data provided by the Ocean Color Climate Change Initiative (OC-CCI) and validated against the largest database of in situ POC measurements currently available. The results of this validation exercise indicate satisfactory levels of performance from several algorithms (highest performance was observed from the algorithms of Loisel et al., 2002; Stramski et al., 2008) and uncertainties that are within the requirements of the user community. Estimates of the standing stock of the POC can be made by applying these algorithms, and yield an estimated mixed-layer integrated global stock of POC between 0.77 and 1.3 Pg of carbon. Performance of the algorithms varies regionally, suggesting that blending of region-specific algorithms may provide the best way forward for generating global POC products.
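
    Several of the algorithms compared above are power-law functions of a blue-to-green reflectance ratio. The sketch below shows that functional form in the spirit of the Stramski et al. (2008) band-ratio approach; the coefficients and example reflectances are placeholder values, since the operational coefficients depend on the sensor bands and the calibration dataset.

    def poc_band_ratio(rrs_blue, rrs_green, a=203.2, b=-1.034):
        # Power-law band-ratio estimate of POC (mg m^-3) from remote-sensing reflectances.
        return a * (rrs_blue / rrs_green) ** b

    print(poc_band_ratio(0.004, 0.002))   # bluer water, lower POC
    print(poc_band_ratio(0.002, 0.004))   # greener water, higher POC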

  7. Validation of statistical models for estimating hospitalization associated with influenza and other respiratory viruses.

    Directory of Open Access Journals (Sweden)

    Lin Yang

    Full Text Available BACKGROUND: Reliable estimates of the disease burden associated with respiratory viruses are key to the deployment of preventive strategies such as vaccination and resource allocation. Such estimates are particularly needed in tropical and subtropical regions where some methods commonly used in temperate regions are not applicable. While a number of alternative approaches to assess the influenza associated disease burden have been recently reported, none of these models have been validated with virologically confirmed data. Even fewer methods have been developed for other common respiratory viruses such as respiratory syncytial virus (RSV), parainfluenza and adenovirus. METHODS AND FINDINGS: We recently conducted a prospective population-based study of virologically confirmed hospitalization for acute respiratory illnesses in persons <18 years residing in Hong Kong Island. Here we used this dataset to validate two commonly used models for estimation of influenza disease burden, namely the rate difference model and the Poisson regression model, and also explored the applicability of these models to estimate the disease burden of other respiratory viruses. The Poisson regression models with different link functions all yielded estimates well correlated with the virologically confirmed influenza associated hospitalization, especially in children older than two years. The disease burden estimates for RSV, parainfluenza and adenovirus were less reliable with wide confidence intervals. The rate difference model was not applicable to RSV, parainfluenza and adenovirus and grossly underestimated the true burden of influenza associated hospitalization. CONCLUSION: The Poisson regression model generally produced satisfactory estimates in calculating the disease burden of respiratory viruses in a subtropical region such as Hong Kong.
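
    A Poisson regression burden model of the kind validated above regresses weekly hospitalization counts on virus activity proxies and attributes to each virus the difference between the fitted counts and the counterfactual with that proxy set to zero. The sketch below illustrates this with simulated weekly data and statsmodels; the proxies, coefficients, and the default log link are illustrative assumptions, not the study's specification.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical weekly series: influenza and RSV positivity proxies and hospitalizations.
    rng = np.random.default_rng(0)
    weeks = 104
    df = pd.DataFrame({
        "flu": rng.uniform(0, 30, weeks),
        "rsv": rng.uniform(0, 20, weeks),
    })
    df["hosp"] = rng.poisson(20 + 0.8 * df["flu"] + 0.5 * df["rsv"])

    X = sm.add_constant(df[["flu", "rsv"]])
    model = sm.GLM(df["hosp"], X, family=sm.families.Poisson()).fit()

    # Influenza-attributable burden: fitted counts minus the prediction with flu set to zero.
    baseline = model.predict(sm.add_constant(df[["flu", "rsv"]].assign(flu=0.0)))
    print("influenza-associated hospitalizations:", (model.predict(X) - baseline).sum())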

  8. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.

  9. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
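
    The contrast drawn above between binomial and permutation testing can be made concrete with a small helper that rebuilds the null distribution of a cross-validated accuracy by shuffling the labels. The sketch below is generic scikit-learn code with a toy SVC classifier and random data; it is not the analysis pipeline of the cited studies.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def permutation_p_value(X, y, cv_accuracy_fn, n_permutations=200, seed=0):
        # Null distribution of the cross-validated accuracy under shuffled labels.
        rng = np.random.default_rng(seed)
        observed = cv_accuracy_fn(X, y)
        null = np.array([cv_accuracy_fn(X, rng.permutation(y)) for _ in range(n_permutations)])
        # +1 corrections keep the p-value away from exactly zero.
        return observed, (np.sum(null >= observed) + 1) / (n_permutations + 1)

    cv_acc = lambda X, y: cross_val_score(SVC(), X, y, cv=5).mean()
    rng = np.random.default_rng(1)
    X = rng.normal(size=(30, 10))
    y = np.repeat([0, 1], 15)
    print(permutation_p_value(X, y, cv_acc))   # random data: accuracy near chance, large p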

  10. RNAi validation of resistance genes and their interactions in the highly DDT-resistant 91-R strain of Drosophila melanogaster.

    Science.gov (United States)

    Gellatly, Kyle J; Yoon, Kyong Sup; Doherty, Jeffery J; Sun, Weilin; Pittendrigh, Barry R; Clark, J Marshall

    2015-06-01

    4,4'-dichlorodiphenyltrichloroethane (DDT) has been re-recommended by the World Health Organization for malaria mosquito control. Previous DDT use has resulted in resistance, and with continued use resistance will increase in terms of level and extent. Drosophila melanogaster is a model dipteran that has many available genetic tools, numerous studies done on insecticide resistance mechanisms, and is related to malaria mosquitoes allowing for extrapolation. The 91-R strain of D. melanogaster is highly resistant to DDT (>1500-fold), however, there is no mechanistic scheme that accounts for this level of resistance. Recently, reduced penetration, increased detoxification, and direct excretion have been identified as resistance mechanisms in the 91-R strain. Their interactions, however, remain unclear. Use of UAS-RNAi transgenic lines of D. melanogaster allowed for the targeted knockdown of genes putatively involved in DDT resistance and has validated the role of several cuticular proteins (Cyp4g1 and Lcp1), cytochrome P450 monooxygenases (Cyp6g1 and Cyp12d1), and ATP binding cassette transporters (Mdr50, Mdr65, and Mrp1) involved in DDT resistance. Further, increased sensitivity to DDT in the 91-R strain after intra-abdominal dsRNA injection for Mdr50, Mdr65, and Mrp1 was determined by a DDT contact bioassay, directly implicating these genes in DDT efflux and resistance. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Development and prospective validation of a model estimating risk of readmission in cancer patients.

    Science.gov (United States)

    Schmidt, Carl R; Hefner, Jennifer; McAlearney, Ann S; Graham, Lisa; Johnson, Kristen; Moffatt-Bruce, Susan; Huerta, Timothy; Pawlik, Timothy M; White, Susan

    2018-02-26

    Hospital readmissions among cancer patients are common. While several models estimating readmission risk exist, models specific for cancer patients are lacking. A logistic regression model estimating risk of unplanned 30-day readmission was developed using inpatient admission data from a 2-year period (n = 18 782) at a tertiary cancer hospital. Readmission risk estimates derived from the model were then calculated prospectively over a 10-month period (n = 8616 admissions) and compared with actual incidence of readmission. There were 2478 (13.2%) unplanned readmissions. Model factors associated with readmission included: emergency department visit within 30 days, >1 admission within 60 days, non-surgical admission, solid malignancy, gastrointestinal cancer, emergency admission, length of stay >5 days, abnormal sodium, hemoglobin, or white blood cell count. The c-statistic for the model was 0.70. During the 10-month prospective evaluation, estimates of readmission from the model were associated with higher actual readmission incidence from 20.7% for the highest risk category to 9.6% for the lowest. An unplanned readmission risk model developed specifically for cancer patients performs well when validated prospectively. The specificity of the model for cancer patients, EMR incorporation, and prospective validation justify use of the model in future studies designed to reduce and prevent readmissions. © 2018 Wiley Periodicals, Inc.
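
    The model described above is, at its core, a logistic regression whose discrimination is summarised by the c-statistic (area under the ROC curve). The sketch below shows that development/validation pattern on simulated admission-level indicator features; the features, coefficients, and readmission rates are invented and do not reproduce the published model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical binary features mirroring the kinds of factors reported
    # (recent ED visit, prior admissions, non-surgical admission, long stay, abnormal labs).
    rng = np.random.default_rng(0)
    n = 5000
    X = rng.integers(0, 2, size=(n, 5)).astype(float)
    logit = -2.5 + X @ np.array([0.8, 0.7, 0.5, 0.6, 0.4])
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_dev, y_dev)

    # The c-statistic is the AUC of the predicted readmission probabilities on held-out data.
    print("c-statistic:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))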

  12. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  13. Long-term monitoring of endangered Laysan ducks: Index validation and population estimates 1998–2012

    Science.gov (United States)

    Reynolds, Michelle H.; Courtot, Karen; Brinck, Kevin W.; Rehkemper, Cynthia; Hatfield, Jeffrey

    2015-01-01

    Monitoring endangered wildlife is essential to assessing management or recovery objectives and learning about population status. We tested assumptions of a population index for endangered Laysan duck (or teal; Anas laysanensis) monitored using mark–resight methods on Laysan Island, Hawai’i. We marked 723 Laysan ducks between 1998 and 2009 and identified seasonal surveys through 2012 that met accuracy and precision criteria for estimating population abundance. Our results provide a 15-y time series of seasonal population estimates at Laysan Island. We found differences in detection among seasons and how observed counts related to population estimates. The highest counts and the strongest relationship between count and population estimates occurred in autumn (September–November). The best autumn surveys yielded population abundance estimates that ranged from 674 (95% CI = 619–730) in 2003 to 339 (95% CI = 265–413) in 2012. A population decline of 42% was observed between 2010 and 2012 after consecutive storms and Japan’s Tōhoku earthquake-generated tsunami in 2011. Our results show positive correlations between the seasonal maximum counts and population estimates from the same date, and support the use of standardized bimonthly counts of unmarked birds as a valid index to monitor trends among years within a season at Laysan Island.
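
    The record above relies on mark–resight models to turn counts of marked and unmarked birds into abundance estimates. As a much simpler illustration of the underlying logic, the sketch below computes a Chapman-corrected Lincoln–Petersen estimate with a normal-approximation confidence interval; the counts are invented and the estimator is a closed-population simplification, not the model used in the study.

    def chapman_estimate(n_marked, n_resighted_total, n_marked_resighted):
        # Chapman-corrected Lincoln-Petersen abundance estimate with an approximate 95% CI.
        n1, n2, m2 = n_marked, n_resighted_total, n_marked_resighted
        n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
        var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
        half_width = 1.96 * var ** 0.5
        return n_hat, (n_hat - half_width, n_hat + half_width)

    # Hypothetical survey: 150 marked birds, 220 birds resighted, 60 of them marked.
    print(chapman_estimate(150, 220, 60))   # ~546 birds, CI roughly 457-636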

  14. Development and validation of a two-dimensional fast-response flood estimation model

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R [Los Alamos National Laboratory; Mcpherson, Timothy N [Los Alamos National Laboratory; Burian, Steven J [UNIV OF UTAK

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
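
    To give a flavour of the explicit finite-difference time marching described above, the sketch below solves a one-dimensional dam-break problem for the shallow water equations with a first-order Lax–Friedrichs scheme. It is a deliberately simplified stand-in for the report's two-dimensional upwind formulation: the grid size, CFL factor, initial depth step, and wall boundaries are all illustrative assumptions.

    import numpy as np

    def dam_break_1d(nx=200, length=100.0, t_end=5.0, g=9.81):
        dx = length / nx
        x = np.arange(nx) * dx
        h = np.where(x < length / 2, 2.0, 1.0)   # initial step in water depth (m)
        hu = np.zeros(nx)                        # initial momentum (still water)

        def lf_step(q, f, dt):
            # First-order Lax-Friedrichs update of one conserved variable.
            qn = q.copy()
            qn[1:-1] = 0.5 * (q[2:] + q[:-2]) - dt / (2 * dx) * (f[2:] - f[:-2])
            return qn

        t = 0.0
        while t < t_end:
            u = hu / h
            dt = 0.4 * dx / (np.abs(u) + np.sqrt(g * h)).max()   # CFL-limited time step
            f_h, f_hu = hu, hu * u + 0.5 * g * h ** 2            # physical fluxes
            h, hu = lf_step(h, f_h, dt), lf_step(hu, f_hu, dt)
            h[0], h[-1], hu[0], hu[-1] = h[1], h[-2], 0.0, 0.0   # crude wall boundaries
            t += dt
        return h

    print(np.round(dam_break_1d()[::40], 3))   # coarse sample of the final depth profile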

  15. Estimation of ligament strains and joint moments in the ankle during a supination sprain injury.

    Science.gov (United States)

    Wei, Feng; Fong, Daniel Tik-Pui; Chan, Kai-Ming; Haut, Roger C

    2015-01-01

    This study presents the ankle ligament strains and ankle joint moments during an accidental injury event diagnosed as a grade I anterior talofibular ligament (ATaFL) sprain. A male athlete accidentally sprained his ankle while performing a cutting motion in a laboratory setting. The kinematic data were input to a three-dimensional rigid-body foot model for simulation analyses. Maximum strains in 20 ligaments were evaluated in simulations that investigated various combinations of the reported ankle joint motions. Temporal strains in the ATaFL and the calcaneofibular ligament (CaFL) were then compared and the three-dimensional ankle joint moments were evaluated from the model. The ATaFL and CaFL were highly strained when the inversion motion was simulated (10% for ATaFL and 12% for CaFL). These ligament strains were increased significantly when either or both plantarflexion and internal rotation motions were added in a temporal fashion (up to 20% for ATaFL and 16% for CaFL). Interestingly, at the time strain peaked in the ATaFL, the plantarflexion angle was not large but apparently important. This computational simulation study suggested that an inversion moment of approximately 23 N m plus an internal rotation moment of approximately 11 N m and a small plantarflexion moment may have generated a strain of 15-20% in the ATaFL to produce a grade I ligament injury in the athlete's ankle. This injury simulation study exhibited the potentially important roles of plantarflexion and internal rotation, when combined with a large inversion motion, to produce a grade I ATaFL injury in the ankle of this athlete.

  16. Improved best estimate plus uncertainty methodology, including advanced validation concepts, to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, C.; Williams, B.; Hemez, F.; Atamturktur, S.H.; McClure, P.

    2011-01-01

    Research highlights: → The best estimate plus uncertainty methodology (BEPU) is one option in the licensing of nuclear reactors. → The challenges for extending the BEPU method for fuel qualification for an advanced reactor fuel are primarily driven by schedule, the need for data, and the sufficiency of the data. → In this paper we develop an extended BEPU methodology that can potentially be used to address these new challenges in the design and licensing of advanced nuclear reactors. → The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. → The methodology includes a formalism to quantify an adequate level of validation (predictive maturity) with respect to existing data, so that required new testing can be minimized, saving cost by demonstrating that further testing will not enhance the quality of the predictive tools. - Abstract: Many evolving nuclear energy technologies use advanced predictive multiscale, multiphysics modeling and simulation (M and S) capabilities to reduce the cost and schedule of design and licensing. Historically, the role of experiments has been as a primary tool for the design and understanding of nuclear system behavior, while M and S played the subordinate role of supporting experiments. In the new era of multiscale, multiphysics computational-based technology development, this role has been reversed. The experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations for design and licensing. Minimizing the required number of validation experiments produces cost and time savings. The use of multiscale, multiphysics models introduces challenges in validating these predictive tools - traditional methodologies will have to be modified to address these challenges. This paper gives the basic aspects of a methodology that can potentially be used to address these new challenges in

  17. Estimation of in-vivo neurotransmitter release by brain microdialysis: the issue of validity.

    Science.gov (United States)

    Di Chiara, G.; Tanda, G.; Carboni, E.

    1996-11-01

    Although microdialysis is commonly understood as a method of sampling low molecular weight compounds in the extracellular compartment of tissues, this definition appears insufficient to specifically describe brain microdialysis of neurotransmitters. In fact, transmitter overflow from the brain into dialysates is critically dependent upon the composition of the perfusing Ringer. Therefore, the dialysing Ringer not only recovers the transmitter from the extracellular brain fluid but is a main determinant of its in-vivo release. Two types of brain microdialysis are distinguished: quantitative micro-dialysis and conventional microdialysis. Quantitative microdialysis provides an estimate of neurotransmitter concentrations in the extracellular fluid in contact with the probe. However, this information might poorly reflect the kinetics of neurotransmitter release in vivo. Conventional microdialysis involves perfusion at a constant rate with a transmitter-free Ringer, resulting in the formation of a steep neurotransmitter concentration gradient extending from the Ringer into the extracellular fluid. This artificial gradient might be critical for the ability of conventional microdialysis to detect and resolve phasic changes in neurotransmitter release taking place in the implanted area. On the basis of these characteristics, conventional microdialysis of neurotransmitters can be conceptualized as a model of the in-vivo release of neurotransmitters in the brain. As such, the criteria of face-validity, construct-validity and predictive-validity should be applied to select the most appropriate experimental conditions for estimating neurotransmitter release in specific brain areas in relation to behaviour.

  18. Application of large strain analysis for estimation of behavior and stability of rock mass

    International Nuclear Information System (INIS)

    Nakagawa, Mitsuo; Jiang, Yujing; Esaki, Tetsuro.

    1997-01-01

    It is difficult to simulate large deformation phenomena with plastic flow after failure by using a general numerical approach, such as the FEM (finite element method), based on the infinitesimal strain theory. In order to investigate the behavior of tunnels excavated in soft rock mass, a new simulation technique which can represent large strain accurately is desired, and the code FLAC (Fast Lagrangian Analysis of Continua) adopted in this study is thought to be the best means for this purpose. In this paper, the basic principles and the application of the large strain analysis method to stability analysis and prediction of the deformational behavior of tunnels in soft rock are presented. First, the features of the large strain theory and the points on which it differs from the infinitesimal strain theory are summarized. Next, as examples, the reproduction of a uniaxial compression test for soft rock material and the stability analysis of a tunnel in soft rock are carried out so as to determine the capability of representing large deformational behavior. (author)

  19. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Full Text Available Abstract Background To estimate a classifier’s error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier’s generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier’s generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
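
    A minimal sketch of ordinary k-fold cross-validation for misclassification error, the approach the authors recommend over bootstrap CV, is given below using scikit-learn on synthetic data; the classifier and fold count are illustrative choices, not those of the study.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import StratifiedKFold

      # k-fold cross-validation estimate of misclassification error (synthetic data).
      X, y = make_classification(n_samples=300, n_features=20, random_state=0)
      errors = []
      for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
          model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
          errors.append(np.mean(model.predict(X[test_idx]) != y[test_idx]))

      print(f"10-fold CV misclassification error: {np.mean(errors):.3f} +/- {np.std(errors):.3f}")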

  20. A stepwise validation of a wearable system for estimating energy expenditure in field-based research

    International Nuclear Information System (INIS)

    Rumo, Martin; Mäder, Urs; Amft, Oliver; Tröster, Gerhard

    2011-01-01

    Regular physical activity (PA) is an important contributor to a healthy lifestyle. Currently, standard sensor-based methods to assess PA in field-based research rely on a single accelerometer mounted near the body's center of mass. This paper introduces a wearable system that estimates energy expenditure (EE) based on seven recognized activity types. The system was developed with data from 32 healthy subjects and consists of a chest mounted heart rate belt and two accelerometers attached to a thigh and dominant upper arm. The system was validated with 12 other subjects under restricted lab conditions and simulated free-living conditions against indirect calorimetry, as well as in subjects' habitual environments for 2 weeks against the doubly labeled water method. Our stepwise validation methodology gradually trades reference information from the lab against realistic data from the field. The average accuracy for EE estimation was 88% for restricted lab conditions, 55% for simulated free-living conditions and 87% and 91% for the estimation of average daily EE over the period of 1 and 2 weeks

  1. Validation of the Abbreviated Brucella AMOS PCR as a Rapid Screening Method for Differentiation of Brucella abortus Field Strain Isolates and the Vaccine Strains, 19 and RB51

    OpenAIRE

    Ewalt, Darla R.; Bricker, Betsy J.

    2000-01-01

    The Brucella AMOS PCR assay was previously developed to identify and differentiate specific Brucella species. In this study, an abbreviated Brucella AMOS PCR test was evaluated to determine its accuracy in differentiating Brucella abortus into three categories: field strains, vaccine strain 19 (S19), and vaccine strain RB51/parent strain 2308 (S2308). Two hundred thirty-one isolates were identified and tested by the conventional biochemical tests and Brucella AMOS PCR. This included 120 isola...

  2. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen

    Purpose: Design and validation of an MR-based method that allows the calculation of the water content in cartilage tissue. Methods and Materials: Cartilage tissue T1 map based water content MR sequences were used on a temperature-stable system at 37 degrees Celsius. The T1 map intensity signal was analyzed on 6 cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis a T1 intensity signal map software analyzer was used. Finally, the method was validated after measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained T1 map based water content sequences can provide information that, after being analyzed using T1-map analysis software, can be interpreted as the water contained inside a cartilage tissue. The amount of water estimated using this method was similar to the one obtained at the dry-freeze procedure...

  3. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    Science.gov (United States)

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge on downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes without the need of expensive dedicated LTE decoders. The method is validated both in laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  4. Low-cost extrapolation method for maximal lte radio base station exposure estimation: Test and validation

    International Nuclear Information System (INIS)

    Verloock, L.; Joseph, W.; Gati, A.; Varsier, N.; Flach, B.; Wiart, J.; Martens, L.

    2013-01-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge on down-link band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes without the need of expensive dedicated LTE decoders. The method is validated both in laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. (authors)

  5. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Eric; Gonder, Jeff; Jehlik, Forrest

    2017-01-01

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  6. Convergent validity of ActiGraph and Actical accelerometers for estimating physical activity in adults

    DEFF Research Database (Denmark)

    Duncan, Scott; Stewart, Tom; Bo Schneller, Mikkel

    2018-01-01

    PURPOSE: The aim of the present study was to examine the convergent validity of two commonly-used accelerometers for estimating time spent in various physical activity intensities in adults. METHODS: The sample comprised 37 adults (26 males) with a mean (SD) age of 37.6 (12.2) years from San Diego, USA. Participants wore ActiGraph GT3X+ and Actical accelerometers for three consecutive days. Percent agreement was used to compare time spent within four physical activity intensity categories under three counts per minute (CPM) threshold protocols: (1) using thresholds developed specifically ... The ActiGraph and Actical accelerometers provide significantly different estimates of time spent in various physical activity intensities. Regression and threshold adjustment were able to reduce these differences, although some level of non-agreement persisted. Researchers should be aware of the inherent limitations...

  7. Model-based PSF and MTF estimation and validation from skeletal clinical CT images.

    Science.gov (United States)

    Pakdel, Amirreza; Mainprize, James G; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M

    2014-01-01

    A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge about the scanner-specific parameters.
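
    Under the Gaussian PSF assumption used in this record, the FWHM and the MTF follow in closed form from the standard deviation of the Gaussian; the short sketch below evaluates those standard relations for an illustrative sigma (the value is hypothetical, not a scanner measurement).

      import math

      # Gaussian PSF: FWHM and MTF relations used when fitting a Gaussian blur model.
      # sigma below is an illustrative value (mm), not a measured scanner parameter.
      sigma = 0.35
      fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma        # ~2.355 * sigma
      mtf = lambda f: math.exp(-2.0 * math.pi ** 2 * sigma ** 2 * f ** 2)
      freq_at = lambda level: math.sqrt(-math.log(level)) / (math.sqrt(2.0) * math.pi * sigma)

      print(f"FWHM = {fwhm:.3f} mm")
      for level in (0.75, 0.50, 0.25, 0.10, 0.05):
          print(f"frequency at {level:.0%} MTF: {freq_at(level):.2f} cycles/mm")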

  8. Model-based PSF and MTF estimation and validation from skeletal clinical CT images

    International Nuclear Information System (INIS)

    Pakdel, Amirreza; Mainprize, James G.; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M.

    2014-01-01

    Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge about the

  9. Model-based PSF and MTF estimation and validation from skeletal clinical CT images

    Energy Technology Data Exchange (ETDEWEB)

    Pakdel, Amirreza [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Mainprize, James G.; Robert, Normand [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada); Fialkov, Jeffery [Division of Plastic Surgery, Sunnybrook Health Sciences Center, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Whyne, Cari M., E-mail: cari.whyne@sunnybrook.ca [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada)

    2014-01-15

    Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge

  10. Type-specific human papillomavirus biological features: validated model-based estimates.

    Directory of Open Access Journals (Sweden)

    Iacopo Baussano

    Full Text Available Infection with high-risk (hr) human papillomavirus (HPV) is considered the necessary cause of cervical cancer. Vaccination against HPV16 and 18 types, which are responsible for about 75% of cervical cancer worldwide, is expected to have a major global impact on cervical cancer occurrence. Valid estimates of the parameters that regulate the natural history of hrHPV infections are crucial to draw reliable projections of the impact of vaccination. We devised a mathematical model to estimate the probability of infection transmission, the rate of clearance, and the patterns of immune response following the clearance of infection of 13 hrHPV types. To test the validity of our estimates, we fitted the same transmission model to two large independent datasets from Italy and Sweden and assessed the consistency of the findings. The two populations, both unvaccinated, differed substantially by sexual behaviour, age distribution, and study setting (screening for cervical cancer or Chlamydia trachomatis infection). Estimated transmission probability of hrHPV types (80% for HPV16, 73%-82% for HPV18, and above 50% for most other types); clearance rates decreasing as a function of time since infection; and partial protection against re-infection with the same hrHPV type (approximately 20% for HPV16 and 50% for the other types) were similar in the two countries. The model could accurately predict the HPV16 prevalence observed in Italy among women who were not infected three years before. In conclusion, our models inform on biological parameters that cannot at the moment be measured directly from any empirical data but are essential to forecast the impact of HPV vaccination programmes.

  11. Validity and reliability of central blood pressure estimated by upper arm oscillometric cuff pressure.

    Science.gov (United States)

    Climie, Rachel E D; Schultz, Martin G; Nikolic, Sonja B; Ahuja, Kiran D K; Fell, James W; Sharman, James E

    2012-04-01

    Noninvasive central blood pressure (BP) independently predicts mortality, but current methods are operator-dependent, requiring skill to obtain quality recordings. The aims of this study were first, to determine the validity of an automatic, upper arm oscillometric cuff method for estimating central BP (O(CBP)) by comparison with the noninvasive reference standard of radial tonometry (T(CBP)). Second, we determined the intratest and intertest reliability of O(CBP). To assess validity, central BP was estimated by O(CBP) (Pulsecor R6.5B monitor) and compared with T(CBP) (SphygmoCor) in 47 participants free from cardiovascular disease (aged 57 ± 9 years) in supine, seated, and standing positions. Brachial mean arterial pressure (MAP) and diastolic BP (DBP) from the O(CBP) device were used for calibration in both devices. Duplicate measures were recorded in each position on the same day to assess intratest reliability, and participants returned within 10 ± 7 days for repeat measurements to assess intertest reliability. There was a strong intraclass correlation (ICC = 0.987) and a small mean difference (1.2 ± 2.2 mm Hg) for central systolic BP (SBP) determined by O(CBP) compared with T(CBP). Ninety-six percent of all comparisons (n = 495 acceptable recordings) were within 5 mm Hg. With respect to reliability, there were strong correlations but higher limits of agreement for the intratest (ICC = 0.975, mean difference 0.6 ± 4.5 mm Hg) and intertest (ICC = 0.895, mean difference 4.3 ± 8.0 mm Hg) comparisons. Estimation of central SBP using cuff oscillometry is comparable to radial tonometry and has good reproducibility. As a noninvasive, relatively operator-independent method, O(CBP) may be as useful as T(CBP) for estimating central BP in clinical practice.
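
    The agreement figures quoted above (a mean difference and its standard deviation) are the kind of quantities a Bland–Altman agreement analysis reports; a generic sketch of that computation on paired device readings is shown below, with entirely hypothetical values.

      import numpy as np

      # Bland-Altman style agreement between two paired BP measurements
      # (hypothetical readings, mm Hg).
      cuff = np.array([118.0, 125.0, 132.0, 110.0, 140.0, 127.0])
      tono = np.array([117.0, 123.0, 133.0, 109.0, 138.0, 126.0])

      diff = cuff - tono
      bias = diff.mean()
      sd = diff.std(ddof=1)
      loa = (bias - 1.96 * sd, bias + 1.96 * sd)

      print(f"mean difference {bias:.1f} mm Hg, SD {sd:.1f}, "
            f"95% limits of agreement {loa[0]:.1f} to {loa[1]:.1f}")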

  12. Full-range stress–strain behaviour of contemporary pipeline steels: Part II. Estimation of model parameters

    International Nuclear Information System (INIS)

    Hertelé, Stijn; De Waele, Wim; Denys, Rudi; Verstraete, Matthias

    2012-01-01

    Contemporary pipeline steels with a yield-to-tensile ratio above 0.80 often show two stages of strain hardening, which cannot be simultaneously described by the standardized Ramberg–Osgood model. A companion paper (Part I) showed that the recently developed UGent model provides more accurate descriptions than the Ramberg–Osgood model, as it succeeds in describing both strain hardening stages. However, it may be challenging to obtain an optimal model fit in absence of full stress–strain data. This paper discusses how to find suited parameter values for the UGent model, given a set of measurable tensile test characteristics. The proposed methodology shows good results for an extensive set of investigated experimental stress–strain curves. Next to some common tensile test characteristics, the 1.0% proof stress is needed. The authors therefore encourage the acquisition of this stress during tensile tests. - Highlights: ► An analytical procedure estimates UGent model parameters. ► The procedure requires a set of tensile test characteristics. ► The UGent model performs better than the Ramberg–Osgood model. ► Apart from common characteristics, the 1.0% proof stress is required. ► The authors encourage the acquisition of this 1.0% proof stress.
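
    For context, one common form of the Ramberg–Osgood relation that the paper says cannot capture two-stage hardening expresses total strain as an elastic part plus a power-law plastic part; the sketch below evaluates that single-stage relation with illustrative material constants (it is not the UGent model, whose parameter estimation is the subject of the paper).

      # Single-stage Ramberg-Osgood stress-strain relation (illustrative constants,
      # not the UGent model of the paper): eps = sig/E + 0.002*(sig/sig_02)**n
      E = 207_000.0      # Young's modulus, MPa (assumed)
      sig_02 = 480.0     # 0.2% proof stress, MPa (assumed)
      n = 15.0           # strain-hardening exponent (assumed)

      def ramberg_osgood_strain(sigma_mpa: float) -> float:
          return sigma_mpa / E + 0.002 * (sigma_mpa / sig_02) ** n

      for sigma in (300.0, 480.0, 550.0):
          print(f"sigma = {sigma:5.0f} MPa -> total strain = {ramberg_osgood_strain(sigma):.4%}")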

  13. Validated reverse transcription droplet digital PCR serves as a higher order method for absolute quantification of Potato virus Y strains.

    Science.gov (United States)

    Mehle, Nataša; Dobnik, David; Ravnikar, Maja; Pompe Novak, Maruša

    2018-05-03

    RNA viruses have a great potential for high genetic variability and rapid evolution that is generated by mutation and recombination under selection pressure. This is also the case of Potato virus Y (PVY), which comprises a high diversity of different recombinant and non-recombinant strains. Consequently, it is hard to develop reverse transcription real-time quantitative PCR (RT-qPCR) assays with the same amplification efficiencies for all PVY strains, which would enable their balanced quantification; this is especially needed in mixed infections and other studies of pathogenesis. To achieve this, we initially transferred the PVY universal RT-qPCR assay to a reverse transcription droplet digital PCR (RT-ddPCR) format. RT-ddPCR is an absolute quantification method, where a calibration curve is not needed, and it is less prone to inhibitors. The RT-ddPCR developed and validated in this study achieved a dynamic range of quantification over five orders of magnitude, and in terms of its sensitivity, it was comparable to, or even better than, RT-qPCR. RT-ddPCR showed lower measurement variability. We have shown that RT-ddPCR can be used as a reference tool for the evaluation of different RT-qPCR assays. In addition, it can be used for quantification of RNA based on in-house reference materials that can then be used as calibrators in diagnostic laboratories.
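
    Absolute quantification in droplet digital PCR rests on Poisson statistics: from the fraction of negative partitions one recovers the mean copies per partition and hence the concentration. A generic sketch of that calculation is below; the partition counts and droplet volume are assumed illustrative values, not data from the study.

      import math

      # Poisson-based absolute quantification in droplet digital PCR.
      # Partition counts and droplet volume below are assumed illustrative values.
      total_partitions = 15000
      negative_partitions = 9000
      droplet_volume_ul = 0.00085          # ~0.85 nL per droplet (assumed)

      lam = -math.log(negative_partitions / total_partitions)   # mean copies/partition
      copies_per_ul = lam / droplet_volume_ul

      print(f"{lam:.3f} copies per partition -> {copies_per_ul:,.0f} copies/uL of reaction")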

  14. Biochemical Validation of the Glyoxylate Cycle in the Cyanobacterium Chlorogloeopsis fritschii Strain PCC 9212.

    Science.gov (United States)

    Zhang, Shuyi; Bryant, Donald A

    2015-05-29

    Cyanobacteria are important photoautotrophic bacteria with extensive but variable metabolic capacities. The existence of the glyoxylate cycle, a variant of the TCA cycle, is still poorly documented in cyanobacteria. Previous studies reported the activities of isocitrate lyase and malate synthase, the key enzymes of the glyoxylate cycle in some cyanobacteria, but other studies concluded that these enzymes are missing. In this study the genes encoding isocitrate lyase and malate synthase from Chlorogloeopsis fritschii PCC 9212 were identified, and the recombinant enzymes were biochemically characterized. Consistent with the presence of the enzymes of the glyoxylate cycle, C. fritschii could assimilate acetate under both light and dark growth conditions. Transcript abundances for isocitrate lyase and malate synthase increased, and C. fritschii grew faster, when the growth medium was supplemented with acetate. Adding acetate to the growth medium also increased the yield of poly-3-hydroxybutyrate. When the genes encoding isocitrate lyase and malate synthase were expressed in Synechococcus sp. PCC 7002, the acetate assimilation capacity of the resulting strain was greater than that of wild type. Database searches showed that the genes for the glyoxylate cycle exist in only a few other cyanobacteria, all of which are able to fix nitrogen. This study demonstrates that the glyoxylate cycle exists in a few cyanobacteria, and that this pathway plays an important role in the assimilation of acetate for growth in one of those organisms. The glyoxylate cycle might play a role in coordinating carbon and nitrogen metabolism under conditions of nitrogen fixation. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  15. Development and Statistical Validation of Spectrophotometric Methods for the Estimation of Nabumetone in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    A. R. Rote

    2010-01-01

    Full Text Available Three new simple, economical spectrophotometric methods were developed and validated for the estimation of nabumetone in bulk and tablet dosage form. The first method includes determination of nabumetone at its absorption maximum of 330 nm, the second method uses the area under the curve for analysis of nabumetone in the wavelength range of 326-334 nm, and the third method uses the first-order derivative spectrum with a scaling factor of 4. Beer's law was obeyed in the concentration range of 10-30 μg/mL for all three methods. The correlation coefficients were found to be 0.9997, 0.9998 and 0.9998 by absorption maximum, area under the curve and first-order derivative spectra, respectively. Results of analysis were validated statistically and by performing recovery studies. The mean percent recoveries were found satisfactory for all three methods. The developed methods were also compared statistically using one-way ANOVA. The proposed methods have been successfully applied for the estimation of nabumetone in bulk and pharmaceutical tablet dosage form.
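
    The spectrophotometric methods above all rest on Beer's law, i.e., absorbance varying linearly with concentration over the 10-30 μg/mL range; a generic least-squares calibration sketch is shown below with hypothetical absorbance readings (not the published data).

      import numpy as np

      # Beer's law calibration: fit A = m*c + b on standards, then predict an unknown.
      # Absorbance values are hypothetical, not the published data.
      conc = np.array([10.0, 15.0, 20.0, 25.0, 30.0])         # ug/mL standards
      absorb = np.array([0.21, 0.31, 0.42, 0.52, 0.63])        # absorbance at 330 nm (assumed)

      slope, intercept = np.polyfit(conc, absorb, 1)
      r = np.corrcoef(conc, absorb)[0, 1]

      unknown_abs = 0.47
      print(f"slope={slope:.4f}, intercept={intercept:.4f}, r={r:.4f}")
      print(f"estimated concentration: {(unknown_abs - intercept) / slope:.1f} ug/mL")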

  16. Internal strain estimation for quantification of human heel pad elastic modulus: A phantom study

    DEFF Research Database (Denmark)

    Holst, Karen; Liebgott, Hervé; Wilhjelm, Jens E.

    2013-01-01

    Shock absorption is the most important function of the human heel pad. However, changes in heel pad elasticity, as seen in e.g. long-distance runners, diabetes patients, and victims of Falanga torture are affecting this function, often in a painful manner. Assessment of heel pad elasticity is usually based on one or a few strain measurements obtained by an external load-deformation system. The aim of this study was to develop a technique for quantitative measurements of heel pad elastic modulus based on several internal strain measures from within the heel pad by use of ultrasound images. Nine heel phantoms were manufactured featuring a combination of three heel pad stiffnesses and three heel pad thicknesses to model the normal human variation. Each phantom was tested in an indentation system comprising a 7MHz linear array ultrasound transducer, working as the indentor, and a connected load...

  17. Validation of walk score for estimating neighborhood walkability: an analysis of four US metropolitan areas.

    Science.gov (United States)

    Duncan, Dustin T; Aldstadt, Jared; Whalen, John; Melly, Steven J; Gortmaker, Steven L

    2011-11-01

    Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score(®) for assessing neighborhood walkability based on GIS (objective) indicators of neighborhood walkability with addresses from four US metropolitan areas with several street network buffer distances (i.e., 400-, 800-, and 1,600-meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5-11 years and their families participating in YMCA-administered, after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participants' residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators, such as density of retail destinations and intersection density. Correlations generally became stronger with a larger spatial scale, and there were some geographic differences. Walk Score(®) is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score(®) is a valid measure of estimating certain aspects of neighborhood walkability, particularly at the 1600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure of estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales.
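
    The validity analysis above hinges on Spearman correlations between Walk Score and GIS-derived walkability indicators; a minimal sketch of that comparison on made-up data is shown below (the study also computed correlations accounting for spatial autocorrelation, which is omitted here).

      import numpy as np
      from scipy.stats import spearmanr

      # Spearman correlation between Walk Score and a GIS walkability indicator
      # (synthetic data for illustration only).
      rng = np.random.default_rng(0)
      walk_score = rng.uniform(0, 100, size=200)
      intersection_density = 0.8 * walk_score + rng.normal(0, 15, size=200)

      rho, p_value = spearmanr(walk_score, intersection_density)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")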

  18. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Science.gov (United States)

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological

  19. Validation of the Maslach Burnout Inventory-Human Services Survey for Estimating Burnout in Dental Students.

    Science.gov (United States)

    Montiel-Company, José María; Subirats-Roig, Cristian; Flores-Martí, Pau; Bellot-Arcís, Carlos; Almerich-Silla, José Manuel

    2016-11-01

    The aim of this study was to examine the validity and reliability of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) as a tool for assessing the prevalence and level of burnout in dental students in Spanish universities. The survey was adapted from English to Spanish. A sample of 533 dental students from 15 Spanish universities and a control group of 188 medical students self-administered the survey online, using the Google Drive service. The test-retest reliability or reproducibility showed an Intraclass Correlation Coefficient of 0.95. The internal consistency of the survey was 0.922. Testing the construct validity showed two components with an eigenvalue greater than 1.5, which explained 51.2% of the total variance. Factor I (36.6% of the variance) comprised the items that estimated emotional exhaustion and depersonalization. Factor II (14.6% of the variance) contained the items that estimated personal accomplishment. The cut-off point for the existence of burnout achieved a sensitivity of 92.2%, a specificity of 92.1%, and an area under the curve of 0.96. Comparison of the total dental students sample and the control group of medical students showed significantly higher burnout levels for the dental students (50.3% vs. 40.4%). In this study, the MBI-HSS was found to be viable, valid, and reliable for measuring burnout in dental students. Since the study also found that the dental students suffered from high levels of this syndrome, these results suggest the need for preventive burnout control programs.
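
    The cut-off evaluation reported above (sensitivity, specificity, and area under the ROC curve) can be reproduced generically as sketched below on synthetic scores; the threshold and data are illustrative, not the MBI-HSS results.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      # Sensitivity, specificity and AUC for a score cut-off (synthetic data).
      rng = np.random.default_rng(1)
      burnout = np.concatenate([np.ones(100), np.zeros(150)])           # 1 = burnout
      score = np.concatenate([rng.normal(60, 10, 100), rng.normal(40, 10, 150)])

      cutoff = 50.0
      pred = score >= cutoff
      sensitivity = np.mean(pred[burnout == 1])
      specificity = np.mean(~pred[burnout == 0])
      auc = roc_auc_score(burnout, score)

      print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, AUC={auc:.2f}")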

  20. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Directory of Open Access Journals (Sweden)

    Suzana Papile Maciel Carvalho

    2013-07-01

    Full Text Available Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. OBJECTIVE: This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. MATERIAL AND METHODS: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. RESULTS: The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. CONCLUSION: It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South

  1. The Air Force Mobile Forward Surgical Team (MFST): Using the Estimating Supplies Program to Validate Clinical Requirement

    National Research Council Canada - National Science Library

    Nix, Ralph E; Onofrio, Kathleen; Konoske, Paula J; Galarneau, Mike R; Hill, Martin

    2004-01-01

    .... The primary objective of the study was to provide the Air Force with the ability to validate clinical requirements of the MFST assemblage, with the goal of using NHRC's Estimating Supplies Program (ESP...

  2. Reproducibility and relative validity of a food frequency questionnaire to estimate intake of dietary phylloquinone and menaquinones.

    NARCIS (Netherlands)

    Zwakenberg, S R; Engelen, A I P; Dalmeijer, G W; Booth, S L; Vermeer, C; Drijvers, J J M M; Ocke, M C; Feskens, E J M; van der Schouw, Y T; Beulens, J W J

    2017-01-01

    This study aims to investigate the reproducibility and relative validity of the Dutch food frequency questionnaire (FFQ), to estimate intake of dietary phylloquinone and menaquinones compared with 24-h dietary recalls (24HDRs) and plasma markers of vitamin K status.

  3. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium city sizes. For this type of cities, urban increments are largely underestimating city impacts. Although results are in better agreement for NO2, similar issues are met. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues in terms of interpretation when these increments are used to define strategic options in terms of air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.
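
    In the incremental approach described above the city impact is approximated by the urban increment, i.e., the urban-background concentration minus the rural-background concentration. The toy numbers below, which are entirely hypothetical, show how the two additional components identified in the record (city emissions reaching the rural site, and unequal no-city background levels at the two locations) make the increment underestimate the true impact when the assumptions fail.

      # Toy decomposition of the urban increment (all values hypothetical, ug/m3).
      city_contrib_at_urban = 8.0      # true city impact at the urban background site
      city_contrib_at_rural = 1.5      # violation of assumption 1 (rural site not city-free)
      background_no_city_urban = 10.0
      background_no_city_rural = 11.0  # violation of assumption 2 (unequal backgrounds)

      urban_obs = background_no_city_urban + city_contrib_at_urban
      rural_obs = background_no_city_rural + city_contrib_at_rural

      increment = urban_obs - rural_obs
      true_impact = city_contrib_at_urban

      # The increment misses the city signal at the rural site and the
      # background difference between the two locations.
      print(f"urban increment: {increment:.1f}  vs  true city impact: {true_impact:.1f}")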

  4. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test

    NARCIS (Netherlands)

    Stuiver, Martijn M.; Kampshoff, Caroline S.; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J. M.; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M.

    2017-01-01

    Objective: To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (Vo2(peak)) and peak power output (W-peak).

  5. Estimating incidence from prevalence in generalised HIV epidemics: methods and validation.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2008-04-01

    Full Text Available HIV surveillance of generalised epidemics in Africa primarily relies on prevalence at antenatal clinics, but estimates of incidence in the general population would be more useful. Repeated cross-sectional measures of HIV prevalence are now becoming available for general populations in many countries, and we aim to develop and validate methods that use these data to estimate HIV incidence. Two methods were developed that decompose observed changes in prevalence between two serosurveys into the contributions of new infections and mortality. Method 1 uses cohort mortality rates, and method 2 uses information on survival after infection. The performance of these two methods was assessed using simulated data from a mathematical model and actual data from three community-based cohort studies in Africa. Comparison with simulated data indicated that these methods can accurately estimate incidence rates and changes in incidence in a variety of epidemic conditions. Method 1 is simple to implement but relies on locally appropriate mortality data, whilst method 2 can make use of the same survival distribution in a wide range of scenarios. The estimates from both methods are within the 95% confidence intervals of almost all actual measurements of HIV incidence in adults and young people, and the patterns of incidence over age are correctly captured. It is possible to estimate incidence from cross-sectional prevalence data with sufficient accuracy to monitor the HIV epidemic. Although these methods will theoretically work in any context, we have been able to test them only in southern and eastern Africa, where HIV epidemics are mature and generalised. The choice of method will depend on the local availability of HIV mortality data.
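
    The methods above decompose the change in prevalence between two surveys into new infections and deaths among the infected. A heavily simplified closed-cohort sketch of that bookkeeping is given below with hypothetical survey numbers; the published methods, which use cohort mortality rates or post-infection survival distributions, are considerably more elaborate.

      # Simplified closed-cohort illustration: infer new infections from the change
      # in the number infected plus deaths among the infected (hypothetical values).
      n_people = 5000                 # cohort size (assumed closed, no migration)
      prev_survey1 = 0.18             # HIV prevalence at first survey
      prev_survey2 = 0.20             # prevalence at second survey, T years later
      deaths_infected = 110           # deaths among HIV-positive people between surveys
      years_between = 3.0

      infected1 = prev_survey1 * n_people
      infected2 = prev_survey2 * n_people
      new_infections = infected2 - infected1 + deaths_infected

      susceptible_py = (n_people - 0.5 * (infected1 + infected2)) * years_between
      incidence = new_infections / susceptible_py
      print(f"~{new_infections:.0f} new infections -> incidence ~ {incidence * 100:.2f} per 100 person-years")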

  6. An examination of healthy aging across a conceptual continuum: prevalence estimates, demographic patterns, and validity.

    Science.gov (United States)

    McLaughlin, Sara J; Jette, Alan M; Connell, Cathleen M

    2012-06-01

    Although the notion of healthy aging has gained wide acceptance in gerontology, measuring the phenomenon is challenging. Guided by a prominent conceptualization of healthy aging, we examined how shifting from a more to less stringent definition of healthy aging influences prevalence estimates, demographic patterns, and validity. Data are from adults aged 65 years and older who participated in the Health and Retirement Study. We examined four operational definitions of healthy aging. For each, we calculated prevalence estimates and examined the odds of healthy aging by age, education, gender, and race-ethnicity in 2006. We also examined the association between healthy aging and both self-rated health and death. Across definitions, the prevalence of healthy aging ranged from 3.3% to 35.5%. For all definitions, those classified as experiencing healthy aging had lower odds of fair or poor self-rated health and death over an 8-year period. The odds of being classified as "healthy" were lower among those of advanced age, those with less education, and women than for their corresponding counterparts across all definitions. Moving across the conceptual continuum--from a more to less rigid definition of healthy aging--markedly increases the measured prevalence of healthy aging. Importantly, results suggest that all examined definitions identified a subgroup of older adults who had substantially lower odds of reporting fair or poor health and dying over an 8-year period, providing evidence of the validity of our definitions. Conceptualizations that emphasize symptomatic disease and functional health may be particularly useful for public health purposes.

  7. Validation of equations and proposed reference values to estimate fat mass in Chilean university students.

    Science.gov (United States)

    Gómez Campos, Rossana; Pacheco Carrillo, Jaime; Almonacid Fierro, Alejandro; Urra Albornoz, Camilo; Cossío-Bolaños, Marco

    2018-03-01

    (i) To propose regression equations based on anthropometric measures to estimate fat mass (FM) using dual energy X-ray absorptiometry (DXA) as the reference method, and (ii) to establish population reference standards for equation-derived FM. A cross-sectional study on 6,713 university students (3,354 males and 3,359 females) from Chile aged 17.0 to 27.0 years. Anthropometric measures (weight, height, waist circumference) were taken in all participants. Whole body DXA was performed in 683 subjects. A total of 478 subjects were selected to develop regression equations, and 205 for their cross-validation. Data from 6,030 participants were used to develop reference standards for FM. Equations were generated using stepwise multiple regression analysis. Percentiles were developed using the LMS method. Equations for men were: (i) FM = -35,997.486 + 232.285*Weight + 432.216*CC (R² = 0.73, SEE = 4.1); (ii) FM = -37,671.303 + 309.539*Weight + 66,028.109*ICE (R² = 0.76, SEE = 3.8), while equations for women were: (iii) FM = -13,216.917 + 461.302*Weight + 91.898*CC (R² = 0.70, SEE = 4.6), and (iv) FM = -14,144.220 + 464.061*Weight + 16,189.297*ICE (R² = 0.70, SEE = 4.6). Percentiles proposed included p10, p50, p85, and p95. The developed equations provide valid and accurate estimation of FM in both sexes. The values obtained using the equations may be analyzed from percentiles that allow for categorizing body fat levels by age and sex. Copyright © 2017 SEEN y SED. Publicado por Elsevier España, S.L.U. All rights reserved.
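
    The reported regression equations can be applied directly; the sketch below implements equations (i) for men and (iii) for women as printed above, with units assumed (FM in grams, weight in kg, CC the waist circumference in cm) and made-up input values.

      # Fat-mass regression equations (i) for men and (iii) for women as reported above.
      # Units assumed: FM in grams, Weight in kg, CC = waist circumference in cm.
      def fm_men(weight_kg: float, waist_cm: float) -> float:
          return -35_997.486 + 232.285 * weight_kg + 432.216 * waist_cm

      def fm_women(weight_kg: float, waist_cm: float) -> float:
          return -13_216.917 + 461.302 * weight_kg + 91.898 * waist_cm

      # Hypothetical example subjects
      print(f"male 75 kg, 85 cm waist:   FM ~ {fm_men(75, 85) / 1000:.1f} kg")
      print(f"female 60 kg, 75 cm waist: FM ~ {fm_women(60, 75) / 1000:.1f} kg")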

  8. Lactate minimum in a ramp protocol and its validity to estimate the maximal lactate steady state

    Directory of Open Access Journals (Sweden)

    Emerson Pardono

    2009-01-01

    Full Text Available http://dx.doi.org/10.5007/1980-0037.2009v11n2p174   The objectives of this study were to evaluate the validity of the lactate minimum (LM) using a ramp protocol for the determination of LM intensity (LMI), and to estimate the exercise intensity corresponding to maximal blood lactate steady state (MLSS). In addition, the possibility of determining aerobic and anaerobic fitness was investigated. Fourteen male cyclists of regional level performed one LM protocol on a cycle ergometer (Excalibur–Lode) consisting of an incremental test at an initial workload of 75 Watts, with increments of 1 Watt every 6 seconds. Hyperlactatemia was induced by a 30-second Wingate anaerobic test (WAT) (Monark–834E) at a workload corresponding to 8.57% of the volunteer’s body weight. Peak power (11.5±2 Watts/kg), mean power output (9.8±1.7 Watts/kg), fatigue index (33.7±2.3%) and lactate 7 min after WAT (10.5±2.3 mmol/L) were determined. The incremental test identified LMI (207.8±17.7 Watts) and its respective blood lactate concentration (2.9±0.7 mmol/L), heart rate (153.6±10.6 bpm), and also maximal aerobic power (305.2±31.0 Watts). MLSS intensity was identified by 2 to 4 constant exercise tests (207.8±17.7 Watts), with no difference compared to LMI and good agreement between the two parameters. The LM test using a ramp protocol seems to be a valid method for the identification of LMI and estimation of MLSS intensity in regional cyclists. In addition, both anaerobic and aerobic fitness parameters were identified during a single session.

  9. Estimating filtration coefficients for straining from percolation and random walk theories

    DEFF Research Database (Denmark)

    Yuan, Hao; Shapiro, Alexander; You, Zhenjiang

    2012-01-01

    In this paper, laboratory challenge tests are carried out under unfavorable attachment conditions, so that size exclusion or straining is the only particle capture mechanism. The experimental results show that far above the percolation threshold the filtration coefficients are not proportional to the flux through the smaller pores, in contrast with the classical size exclusion theory or the model of parallel tubes with mixing chambers, where the filtration coefficients are proportional to the flux through smaller pores, and the predicted penetration depths are much lower. A special capture mechanism is proposed, which makes it possible to explain the experimentally observed power law dependencies of filtration coefficients and large penetration depths of particles. Such a capture mechanism is realized in a 2D pore network model with periodical boundaries with the random walk of particles on the percolation lattice. Geometries of infinite and finite clusters...

  10. Validity of anthropometric procedures to estimate body density and body fat percent in military men

    Directory of Open Access Journals (Sweden)

    Ciro Romélio Rodriguez-Añez

    1999-12-01

    Full Text Available The objective of this study was to verify the validity of the Katch e McArdle’s equation (1973,which uses the circumferences of the arm, forearm and abdominal to estimate the body density and the procedure of Cohen (1986 which uses the circumferences of the neck and abdominal to estimate the body fat percent (%F in military men. Therefore data from 50 military men, with mean age of 20.26 ± 2.04 years serving in Santa Maria, RS, was collected. The circumferences were measured according with Katch e McArdle (1973 and Cohen (1986 procedures. The body density measured (Dm obtained under water weighting was used as criteria and its mean value was 1.0706 ± 0.0100 g/ml. The residual lung volume was estimated using the Goldman’s e Becklake’s equation (1959. The %F was obtained with the Siri’s equation (1961 and its mean value was 12.70 ± 4.71%. The validation criterion suggested by Lohman (1992 was followed. The analysis of the results indicated that the procedure developed by Cohen (1986 has concurrent validity to estimate %F in military men or in other samples with similar characteristics with standard error of estimate of 3.45%. . RESUMO Através deste estudo objetivou-se verificar a validade: da equação de Katch e McArdle (1973 que envolve os perímetros do braço, antebraço e abdômen, para estimar a densidade corporal; e, o procedimento de Cohen (1986 que envolve os perímetros do pescoço e abdômen, para estimar o % de gordura (%G; para militares. Para tanto, coletou-se os dados de 50 militares masculinos, com idade média de 20,26 ± 2,04 anos, lotados na cidade de Santa Maria, RS. Mensurou-se os perímetros conforme procedimentos de Katch e McArdle (1973 e Cohen (1986. Utilizou-se a densidade corporal mensurada (Dm através da pesagem hidrostática como critério de validação, cujo valor médio foi de 1,0706 ± 0,0100 g/ml. Estimou-se o volume residual pela equação de Goldman e Becklake (1959. O %G derivado da Dm estimou

  11. Validation of the Chinese version of the Modified Caregivers Strain Index among Hong Kong caregivers: an initiative of medical social workers.

    Science.gov (United States)

    Chan, Wallace Chi Ho; Chan, Christopher L F; Suen, Margaret

    2013-11-01

    Family caregivers may often experience caregiving stress and burden. To systematically assess this issue, medical social workers may need to use a brief and valid measurement in their practice. In the Hong Kong Chinese context, one additional challenge is to examine whether a measurement developed in the West is valid for Hong Kong Chinese caregivers. Thus, medical social workers in Hong Kong initiated this research study to validate the Chinese version of the Modified Caregiver Strain Index (C-M-CSI). A total of 223 Chinese caregivers of patients with various chronic illnesses were recruited for this validation study. The C-M-CSI demonstrated good reliability (Cronbach's alpha coefficient = .91), concurrent validity with the Chinese version of the Caregiver Burden Inventory, and discriminant validity with the Chinese version of the Meaning in Life Questionnaire. Factor analysis yielded a single factor, as in the original M-CSI, which explained 49 percent of the variance. Construct validity was shown by differentiating spousal and nonspousal caregivers, as well as caregivers of patients with and without behavioral problems. The C-M-CSI is recommended as a brief and valid measurement that can be used by medical social workers in assessing the caregiving strain of Chinese caregivers of patients in Hong Kong.
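
    The reliability figure quoted above is Cronbach's alpha. A minimal sketch of its computation, assuming the conventional 13-item M-CSI layout and using randomly generated placeholder scores rather than the study's data:

      import numpy as np

      def cronbach_alpha(scores):
          """scores: respondents x items array of item scores."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_variances = scores.var(axis=0, ddof=1)
          total_variance = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

      rng = np.random.default_rng(0)
      fake_scores = rng.integers(0, 3, size=(223, 13))   # 223 caregivers x 13 items, placeholder data
      print(cronbach_alpha(fake_scores))                 # near 0 for random data; .91 was reported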

  12. Estimation of dynamic rotor loads for the rotor systems research aircraft: Methodology development and validation

    Science.gov (United States)

    Duval, R. W.; Bahrami, M.

    1985-01-01

    The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition it is desirable to identify the parameters directly from the flight data. A Maximum Likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time varying parameters of the load cell model in flight.

  13. Secretin-stimulated ultrasound estimation of pancreatic secretion in cystic fibrosis validated by magnetic resonance imaging

    International Nuclear Information System (INIS)

    Engjom, Trond; Dimcevski, Georg; Tjora, Erling; Wathle, Gaute; Erchinger, Friedemann; Laerum, Birger N.; Gilja, Odd H.; Haldorsen, Ingfrid Salvesen

    2018-01-01

    Secretin-stimulated magnetic resonance imaging (s-MRI) is the best validated radiological modality for assessing pancreatic secretion. The purpose of this study was to compare volume output measures from secretin-stimulated transabdominal ultrasonography (s-US) with s-MRI for the diagnosis of exocrine pancreatic failure in cystic fibrosis (CF). We performed transabdominal ultrasonography and MRI before and at timed intervals during the 15 minutes after secretin stimulation in 21 CF patients and 13 healthy controls. To clearly identify the subjects with reduced exocrine pancreatic function, we classified CF patients as pancreas-sufficient or -insufficient by a secretin-stimulated endoscopic short test and faecal elastase. Pancreas-insufficient CF patients had reduced pancreatic secretions compared to pancreas-sufficient subjects based on both imaging modalities (p < 0.001). Volume output estimates assessed by s-US correlated with those of s-MRI (r = 0.56-0.62; p < 0.001). Both s-US (AUC: 0.88) and s-MRI (AUC: 0.99) demonstrated good diagnostic accuracy for exocrine pancreatic failure. Pancreatic volume output estimated by s-US corresponds well to exocrine pancreatic function in CF patients and yields results comparable to those of s-MRI. s-US provides a simple and feasible tool in the assessment of pancreatic secretion. (orig.)

  14. Regional GRACE-based estimates of water mass variations over Australia: validation and interpretation

    Science.gov (United States)

    Seoane, L.; Ramillien, G.; Frappart, F.; Leblanc, M.

    2013-04-01

    Time series of regional 2°-by-2° GRACE solutions have been computed from 2003 to 2011 with a 10 day resolution by using an energy integral method over Australia [112° E 156° E; 44° S 10° S]. This approach uses the dynamical orbit analysis of GRACE Level 1 measurements, and especially accurate along-track K-Band Range Rate (KBRR) residuals (1 μm s-1 level of error), to estimate the total water mass over continental regions. The advantages of regional solutions are a significant reduction of GRACE aliasing errors (i.e. north-south stripes), providing a more accurate estimation of water mass balance for hydrological applications. In this paper, the validation of these regional solutions over Australia is presented, as well as their ability to describe water mass change in response to climate forcing such as El Niño. Principal component analysis of GRACE-derived total water storage maps shows spatial and temporal patterns that are consistent with independent datasets (e.g. rainfall, climate index and in-situ observations). Regional TWS shows higher spatial correlations with in-situ water table measurements over the Murray-Darling drainage basin (80-90%), and offers a better localization of hydrological structures than classical GRACE global solutions (i.e. Level 2 GRGS products and 400 km ICA solutions obtained as a linear combination of GFZ, CSR and JPL GRACE solutions).

  15. Estimation of skull table thickness with clinical CT and validation with microCT.

    Science.gov (United States)

    Lillie, Elizabeth M; Urban, Jillian E; Weaver, Ashley A; Powers, Alexander K; Stitzel, Joel D

    2015-01-01

    Brain injuries resulting from motor vehicle crashes (MVC) are extremely common, yet the details of the mechanism of injury have yet to be well characterized. Skull deformation is believed to be a contributing factor to some types of traumatic brain injury (TBI). Understanding biomechanical contributors to skull deformation would provide further insight into the mechanism of head injury resulting from blunt trauma. In particular, skull thickness is thought to be a very important factor governing deformation of the skull and its propensity for fracture. Current computed tomography (CT) technology is limited in its ability to accurately measure cortical thickness using standard techniques. A method to evaluate cortical thickness using cortical density measured from CT data has been developed previously. This effort validates this technique for measurement of skull table thickness in clinical head CT scans using two postmortem human specimens. Bone samples were harvested from the skulls of two cadavers and scanned with microCT to evaluate the accuracy of the estimated cortical thickness measured from clinical CT. Clinical scans were collected at 0.488 and 0.625 mm in-plane resolution with 0.625 mm slice thickness. The overall cortical thickness error was determined to be 0.078 ± 0.58 mm for cortical samples thinner than 4 mm. It was determined that 91.3% of these differences fell within the scanner resolution. Color maps of clinical CT thickness estimations are comparable to color maps of microCT thickness measurements, indicating good quantitative agreement. These data confirm that the cortical density algorithm successfully estimates skull table thickness from clinical CT scans. The application of this technique to clinical CT scans enables evaluation of cortical thickness in population-based studies. © 2014 Anatomical Society.

  16. Validity of bioelectrical impedance analysis in estimation of fat-free mass in colorectal cancer patients.

    Science.gov (United States)

    Ræder, Hanna; Kværner, Ane Sørlie; Henriksen, Christine; Florholmen, Geir; Henriksen, Hege Berg; Bøhn, Siv Kjølsrud; Paur, Ingvild; Smeland, Sigbjørn; Blomhoff, Rune

    2018-02-01

    Bioelectrical impedance analysis (BIA) is an accessible and cheap method to measure fat-free mass (FFM). However, BIA estimates are subject to uncertainty in patient populations with altered body composition and hydration. The aim of the current study was to validate a whole-body and a segmental BIA device against dual-energy X-ray absorptiometry (DXA) in colorectal cancer (CRC) patients, and to investigate the ability of different empiric equations for BIA to predict DXA FFM (FFM_DXA). Forty-three non-metastatic CRC patients (aged 50-80 years) were enrolled in this study. Whole-body and segmental BIA FFM estimates (FFM_whole-body BIA, FFM_segmental BIA) were calculated using 14 empiric equations, including the equations from the manufacturers, before comparison to FFM_DXA estimates. Strong linear relationships were observed between FFM_BIA and FFM_DXA estimates for all equations (R² = 0.94-0.98 for both devices). However, there were large discrepancies in FFM estimates depending on the equations used, with mean differences in the ranges -6.5-6.8 kg and -11.0-3.4 kg for whole-body and segmental BIA, respectively. For whole-body BIA, 77% of BIA-derived FFM estimates were significantly different from FFM_DXA, whereas for segmental BIA, 85% were significantly different. For whole-body BIA, the Schols* equation gave the highest agreement with FFM_DXA, with a mean difference ± SD of -0.16 ± 1.94 kg (p = 0.582). The manufacturer's equation gave a small overestimation of FFM of 1.46 ± 2.16 kg … FFM_DXA (0.17 ± 1.83 kg, p = 0.546). Using the manufacturer's equation, no difference in FFM estimates was observed (-0.34 ± 2.06 kg, p = 0.292); however, a clear proportional bias was detected (r = 0.69) … FFM compared to DXA using the optimal equation. In a population of non-metastatic CRC patients, mostly consisting of Caucasian adults and with a wide range of body composition measures, both the whole-body BIA and segmental BIA device …
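
    The agreement figures quoted above (mean difference ± SD of paired FFM estimates, plus a proportional-bias check) are Bland-Altman style statistics. A sketch with placeholder values rather than the study's measurements; the 95% limits of agreement are added for completeness:

      import numpy as np

      def agreement(ffm_bia, ffm_dxa):
          """Mean difference (bias), SD of differences, and 95% limits of agreement."""
          diff = np.asarray(ffm_bia, dtype=float) - np.asarray(ffm_dxa, dtype=float)
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

      bias, sd, loa = agreement([52.1, 47.9, 60.3, 55.0], [51.8, 49.0, 59.6, 55.4])
      print(f"bias {bias:+.2f} kg, SD {sd:.2f} kg, 95% LoA {loa[0]:.2f} to {loa[1]:.2f} kg")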

  17. Development and Cross-Validation of Equation for Estimating Percent Body Fat of Korean Adults According to Body Mass Index

    Directory of Open Access Journals (Sweden)

    Hoyong Sung

    2017-06-01

    Full Text Available Background: Using BMI as an independent variable is the easiest way to estimate percent body fat. Thus far, few studies have investigated the development and cross-validation of an equation for estimating the percent body fat of Korean adults according to the BMI. The goals of this study were the development and cross-validation of an equation for estimating the percent fat of representative Korean adults using the BMI. Methods: Samples were obtained from the Korea National Health and Nutrition Examination Survey between 2008 and 2011. The samples from 2008-2009 and 2010-2011 were labeled as the validation group (n=10,624) and the cross-validation group (n=8,291), respectively. The percent fat was measured using dual-energy X-ray absorptiometry, and the body mass index, gender, and age were included as independent variables to estimate the measured percent fat. The coefficient of determination (R²), standard error of estimation (SEE), and total error (TE) were calculated to examine the accuracy of the developed equation. Results: The cross-validated R² was 0.731 for Model 1 and 0.735 for Model 2. The SEE was 3.978 for Model 1 and 3.951 for Model 2. The equations developed in this study are more accurate for estimating percent fat of the cross-validation group than those previously published by other researchers. Conclusion: The newly developed equations are comparatively accurate for the estimation of the percent fat of Korean adults.
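
    The accuracy statistics named above can be computed directly from measured and predicted percent fat. A sketch under common conventions (SEE as the residual standard error of the regression, TE as the root mean squared difference); the degrees-of-freedom choice and the three-predictor assumption (BMI, gender, age) follow the record's description but may differ in detail from the authors' calculation, and the data points are invented:

      import numpy as np

      def standard_error_of_estimate(measured, predicted, n_predictors=3):
          resid = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
          return np.sqrt((resid ** 2).sum() / (resid.size - n_predictors - 1))

      def total_error(measured, predicted):
          resid = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
          return np.sqrt((resid ** 2).mean())

      measured_pf  = [28.1, 31.4, 22.0, 35.7, 26.3]   # invented %fat values
      predicted_pf = [27.0, 33.0, 23.5, 34.0, 27.1]
      print(standard_error_of_estimate(measured_pf, predicted_pf),
            total_error(measured_pf, predicted_pf))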

  18. Validation of computer code TRAFIC used for estimation of charcoal heatup in containment ventilation systems

    International Nuclear Information System (INIS)

    Yadav, D.H.; Datta, D.; Malhotra, P.K.; Ghadge, S.G.; Bajaj, S.S.

    2005-01-01

    Full text of publication follows: Standard Indian PHWRs are provided with a Primary Containment Filtration and Pump-Back System (PCFPB) incorporating charcoal filters in the ventilation circuit to remove radioactive iodine that may be released from reactor core into the containment during LOCA+ECCS failure which is a Design Basis Accident for containment of radioactive release. This system is provided with two identical air circulation loops, each having 2 full capacity fans (1 operating and 1 standby) for a bank of four combined charcoal and High Efficiency Particulate Activity (HEPA) filters, in addition to other filters. While the filtration circuit is designed to operate under forced flow conditions, it is of interest to understand the performance of the charcoal filters, in the event of failure of the fans after operating for some time, i.e., when radio-iodine inventory is at its peak value. It is of interest to check whether the buoyancy driven natural circulation occurring in the filtration circuit is sufficient enough to keep the temperature in the charcoal under safe limits. A computer code TRAFIC (Transient Analysis of Filters in Containment) was developed using conservative one dimensional model to analyze the system. Suitable parametric studies were carried out to understand the problem and to identify the safety of existing system. TRAFIC Code has two important components. The first one estimates the heat generation in charcoal filter based on 'Source Term'; while the other one performs thermal-hydraulic computations. In an attempt validate the Code, experimental studies have been carried out. For this purpose, an experimental set up comprising of scaled down model of filtration circuit with heating coils embedded in charcoal for simulating the heating effect due to radio iodine has been constructed. The present work of validation consists of utilizing the results obtained from experiments conducted for different heat loads, elevations and adsorbent

  19. Estimating mortality from external causes using data from retrospective surveys: A validation study in Niakhar (Senegal

    Directory of Open Access Journals (Sweden)

    Gilles Pison

    2018-03-01

    Full Text Available Background: In low- and middle-income countries (LMICs), data on causes of death are often inaccurate or incomplete. In this paper, we test whether adding a few questions about injuries and accidents to mortality questionnaires used in representative household surveys would yield accurate estimates of the extent of mortality due to external causes (accidents, homicides, or suicides). Methods: We conduct a validation study in Niakhar (Senegal), during which we compare reported survey data to high-quality prospective records of deaths collected by a health and demographic surveillance system (HDSS). Results: Survey respondents more frequently list the deaths of their adult siblings who die of external causes than the deaths of those who die from other causes. The specificity of survey data is high, but sensitivity is low. Among reported deaths, less than 60% of the deaths classified as due to external causes by the HDSS are also classified as such by survey respondents. Survey respondents report deaths due to road-traffic accidents better than deaths from suicides and homicides. Conclusions: Asking questions about deaths resulting from injuries and accidents during surveys might help measure mortality from external causes in LMICs, but the resulting data display systematic bias in a rural population of Senegal. Future studies should (1) investigate whether similar biases also apply in other settings and (2) test new methods to further improve the accuracy of survey data on mortality from external causes. Contribution: This study helps strengthen the monitoring of sustainable development targets in LMICs by validating a simple approach for the measurement of mortality from external causes.
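
    The sensitivity and specificity discussed above come from a 2x2 cross-classification of survey reports against the HDSS cause-of-death reference. A minimal sketch with invented counts, not the Niakhar data:

      def sensitivity_specificity(tp, fn, tn, fp):
          sensitivity = tp / (tp + fn)   # external-cause deaths correctly reported as such
          specificity = tn / (tn + fp)   # other deaths correctly not reported as external
          return sensitivity, specificity

      print(sensitivity_specificity(tp=30, fn=22, tn=400, fp=30))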

  20. Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study

    OpenAIRE

    Hippisley-Cox, Julia; Coupland, Carol

    2017-01-01

    Objective: To develop and externally validate risk prediction equations to estimate absolute and conditional survival in patients with colorectal cancer. Design: Cohort study. Setting: General practices in England providing data for the QResearch database linked to the national cancer registry. Participants: 44 145 patients aged 15-99 with colorectal cancer from 947 practices to derive the equations. The equations were validated in 15 214 patients with colorectal cancer ...

  1. Validation of a spectrophotometer-based method for estimating daily sperm production and deferent duct transit.

    Science.gov (United States)

    Froman, D P; Rhoads, D D

    2012-10-01

    The objectives of the present work were 3-fold. First, a new method for estimating daily sperm production was validated. This method, in turn, was used to evaluate testis output as well as deferent duct throughput. Next, this analytical approach was evaluated in 2 experiments. The first experiment compared left and right reproductive tracts within roosters. The second experiment compared reproductive tract throughput in roosters from low and high sperm mobility lines. Standard curves were constructed from which unknown concentrations of sperm cells and sperm nuclei could be predicted from observed absorbance. In each case, the independent variable was based upon hemacytometer counts, and absorbance was a linear function of concentration. Reproductive tracts were excised, semen recovered from each duct, and the extragonadal sperm reserve determined by multiplying volume by sperm cell concentration. Testicular sperm nuclei were procured by homogenization of a whole testis, overlaying a 20-mL volume of homogenate upon 15% (wt/vol) Accudenz (Accurate Chemical and Scientific Corporation, Westbury, NY), and then washing nuclei by centrifugation through the Accudenz layer. Daily sperm production was determined by dividing the predicted number of sperm nuclei within the homogenate by 4.5 d (i.e., the time sperm with elongated nuclei spend within the testis). Sperm transit through the deferent duct was estimated by dividing the extragonadal reserve by daily sperm production. Neither the efficiency of sperm production (sperm per gram of testicular parenchyma per day) nor deferent duct transit differed between left and right reproductive tracts (P > 0.05). Whereas efficiency of sperm production did not differ (P > 0.05) between low and high sperm mobility lines, deferent duct transit differed between lines (P < 0.001). On average, this process required 2.2 and 1.0 d for low and high lines, respectively. In summary, we developed and then tested a method for quantifying male
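
    The two derived quantities in this record reduce to simple ratios: daily sperm production divides the predicted number of testicular sperm nuclei by the 4.5-day residence time, and deferent duct transit divides the extragonadal reserve by daily sperm production. A sketch of that arithmetic with invented counts:

      def daily_sperm_production(testis_nuclei, residence_days=4.5):
          """Sperm produced per day, given the predicted nucleus count in the homogenate."""
          return testis_nuclei / residence_days

      def duct_transit_days(extragonadal_reserve, dsp):
          """Days for sperm to traverse the deferent duct."""
          return extragonadal_reserve / dsp

      dsp = daily_sperm_production(9.0e9)          # assumed nucleus count
      print(dsp, duct_transit_days(2.0e9, dsp))    # -> 2.0e9 per day, 1.0 days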

  2. Validity of Two New Brief Instruments to Estimate Vegetable Intake in Adults

    Directory of Open Access Journals (Sweden)

    Janine Wright

    2015-08-01

    Full Text Available Cost effective population-based monitoring tools are needed for nutritional surveillance and interventions. The aim was to evaluate the relative validity of two new brief instruments (three-item: VEG3; five-item: VEG5) for estimating usual total vegetable intake in comparison to a 7-day dietary record (7DDR). Sixty-four Australian adult volunteers aged 30 to 69 years participated (30 males, mean age ± SD 56.3 ± 9.2 years, and 34 females, mean age ± SD 55.3 ± 10.0 years). Pearson correlations between the 7DDR and VEG3 and VEG5 were modest, at 0.50 and 0.56, respectively. VEG3 significantly (p < 0.001) underestimated mean vegetable intake compared to 7DDR measures (2.9 ± 1.3 vs. 3.6 ± 1.6 serves/day, respectively), whereas mean vegetable intake assessed by VEG5 did not differ from 7DDR measures (3.3 ± 1.5 vs. 3.6 ± 1.6 serves/day). VEG5 was also able to correctly identify 95%, 88% and 75% of those subjects not consuming five, four and three serves/day of vegetables according to their 7DDR classification. VEG5, but not VEG3, can estimate usual total vegetable intake of population groups and had superior performance to VEG3 in identifying those not meeting different levels of vegetable intake. VEG5, a brief instrument, shows measurement characteristics useful for population-based monitoring and intervention targeting.

  3. Validity and practicability of smartphone-based photographic food records for estimating energy and nutrient intake.

    Science.gov (United States)

    Kong, Kaimeng; Zhang, Lulu; Huang, Lisu; Tao, Yexuan

    2017-05-01

    Image-assisted dietary assessment methods are frequently used to record individual eating habits. This study tested the validity of a smartphone-based photographic food recording approach by comparing the results obtained with those of a weighed food record. We also assessed the practicality of the method by using it to measure the energy and nutrient intake of college students. The experiment was implemented in two phases, each lasting 2 weeks. In the first phase, a labelled menu and a photograph database were constructed. The energy and nutrient content of 31 randomly selected dishes in three different portion sizes were then estimated by the photograph-based method and compared with a weighed food record. In the second phase, we combined the smartphone-based photographic method with the WeChat smartphone application and applied this to 120 randomly selected participants to record their energy and nutrient intake. The Pearson correlation coefficients for energy, protein, fat, and carbohydrate content between the weighed and the photographic food record were 0.997, 0.936, 0.996, and 0.999, respectively. Bland-Altman plots showed good agreement between the two methods. The estimated protein, fat, and carbohydrate intake by participants was in accordance with values in the Chinese Residents' Nutrition and Chronic Disease report (2015). Participants expressed satisfaction with the new approach and the compliance rate was 97.5%. The smartphone-based photographic dietary assessment method combined with the WeChat instant messaging application was effective and practical for use by young people.

  4. Assessing the external validity of model-based estimates of the incidence of heart attack in England: a modelling study

    Directory of Open Access Journals (Sweden)

    Peter Scarborough

    2016-11-01

    Full Text Available Abstract Background The DisMod II model is designed to estimate epidemiological parameters on diseases where measured data are incomplete and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, total mortality rates and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95% confidence intervals were derived around estimates from the external dataset and DisMod II estimates based on sampling variance and reported uncertainty in prevalence estimates respectively. Results Estimates of the incidence rate for the whole population were higher in the DisMod II results than the external dataset (+54% for men and +26% for women). Age-specific results showed that the DisMod II results over-estimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion By comparison with AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence that are of similar magnitude to measured estimates. The model should be used with caution when estimating age-specific incidence rates.

  5. Validation of a Robust Neural Real-Time Voltage Estimator for Active Distribution Grids on Field Data

    DEFF Research Database (Denmark)

    Pertl, Michael; Douglass, Philip James; Heussen, Kai

    2018-01-01

    The installation of measurements in distribution grids enables the development of data driven methods for the power system. However, these methods have to be validated in order to understand the limitations and capabilities for their use. This paper presents a systematic validation of a neural network approach for voltage estimation in active distribution grids by means of measured data from two feeders of a real low voltage distribution grid. The approach enables a real-time voltage estimation at locations in the distribution grid, where otherwise only non-real-time measurements are available.

  6. Validation of the iPhone app using the force platform to estimate vertical jump height.

    Science.gov (United States)

    Carlos-Vivas, Jorge; Martin-Martinez, Juan P; Hernandez-Mocholi, Miguel A; Perez-Gomez, Jorge

    2018-03-01

    Vertical jump performance has been evaluated with several devices: force platforms, contact mats, Vertec, accelerometers, infrared cameras and high-velocity cameras; however, the force platform is considered the gold standard for measuring vertical jump height. The purpose of this study was to validate an iPhone app called My Jump, which measures vertical jump height, by comparing it with other methods that use the force platform to estimate vertical jump height, namely, vertical velocity at take-off and time in the air. A total of 40 sport sciences students (age 21.4±1.9 years) completed five countermovement jumps (CMJs) over a force platform. Thus, 200 CMJ heights were evaluated from the vertical velocity at take-off and the time in the air using the force platform, and from the time in the air with the My Jump mobile application. The heights obtained were compared using the intraclass correlation coefficient (ICC). Agreement between the app and the force platform using the time in the air was perfect (ICC = 1.000). My Jump is therefore an appropriate method to evaluate vertical jump performance; however, vertical jump height is slightly overestimated compared with that of the force platform.
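
    The two force-platform estimates named above follow from elementary projectile relations: h = g·t²/8 from the time in the air (the quantity My Jump also uses) and h = v²/(2g) from the vertical take-off velocity. A sketch with illustrative inputs:

      G = 9.81  # m/s^2

      def height_from_flight_time(t_air_s):
          """Jump height from total flight time, assuming equal rise and fall times."""
          return G * t_air_s ** 2 / 8.0

      def height_from_takeoff_velocity(v_ms):
          """Jump height from vertical take-off velocity."""
          return v_ms ** 2 / (2.0 * G)

      print(height_from_flight_time(0.50))        # ~0.31 m for a 0.50 s flight
      print(height_from_takeoff_velocity(2.48))   # ~0.31 m for a 2.48 m/s take-off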

  7. Development and validation of RP-HPLC method for estimation of eplerenone in spiked human plasma

    Directory of Open Access Journals (Sweden)

    Paraag Gide

    2012-10-01

    Full Text Available A rapid and simple high performance liquid chromatography (HPLC) method with UV detection (241 nm) was developed and validated for estimation of eplerenone from spiked human plasma. The analyte and the internal standard (valdecoxib) were extracted with a mixture of dichloromethane and diethyl ether. The chromatographic separation was performed on a HiQSil C-18HS column (250 mm×4.6 mm, 5 μm) with a mobile phase consisting of acetonitrile:water (50:50, v/v) at a flow rate of 1 mL/min. The calibration curve was linear in the range 100–3200 ng/mL and the heteroscedasticity was minimized by using weighted least squares regression with weighting factor 1/X. Keywords: Eplerenone, Liquid–liquid extraction, Weighted regression, HPLC–UV
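
    The calibration described above is a straight-line fit of detector response versus concentration with 1/X weighting to temper heteroscedasticity across the 100–3200 ng/mL range. A sketch with invented calibration points; note that NumPy's polyfit applies its weights to the residuals, so a 1/X weighted least squares enters as sqrt(1/X):

      import numpy as np

      conc = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)   # ng/mL
      resp = np.array([0.11, 0.21, 0.44, 0.83, 1.70, 3.35])            # analyte/IS peak ratio (invented)

      # weighted least squares with weights 1/X -> pass sqrt(1/X) to polyfit
      slope, intercept = np.polyfit(conc, resp, deg=1, w=np.sqrt(1.0 / conc))
      print(slope, intercept)   # back-calculated conc of an unknown = (ratio - intercept) / slope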

  8. Validation of equations using anthropometric and bioelectrical impedance for estimating body composition of the elderly

    Directory of Open Access Journals (Sweden)

    Cassiano Ricardo Rech

    2006-08-01

    Full Text Available The increase of the elderly population has enhanced the need for studying aging-related issues. In this context, the analysis of morphological alterations occurring with age has been discussed thoroughly. Evidence indicates that there is little information on valid methods for estimating the body composition of senior citizens in Brazil. Therefore, the objective of this study was to cross-validate equations using either anthropometric or bioelectrical impedance (BIA) data for estimation of body fat (%BF) and of fat-free mass (FFM) in a sample of older individuals from Florianópolis-SC, having dual energy x-ray absorptiometry (DEXA) as the criterion measurement. The group was composed of 180 subjects (60 men and 120 women) who participated in four community groups for the elderly and were systematically randomly selected by a telephone interview, with age ranging from 60 to 81 years. The variables stature, body mass, body circumferences, skinfold thickness, reactance and resistance were measured in the morning at the Sports Center of the Federal University of Santa Catarina. The DEXA evaluation was performed in the afternoon at the Diagnosis Center through Image in Florianópolis-SC. Twenty anthropometric and 8 BIA equations were analyzed for cross-validation. For those equations that estimate body density, the equation of Siri (1961) and the equation adapted by Deurenberg et al. (1989) were used for conversion into %BF. The analyses were performed with the statistical package SPSS, version 11.5, establishing the level of significance at 5%. The cross-validation criteria suggested by Lohman (1992) and the graphic dispersion analyses in relation to the mean, as proposed by Bland and Altman (1986), were used. The group presented values for the body mass index (BMI) between 18.4 kg.m-2 and 39.3 kg.m-2. The mean %BF was 23.1% (sd=5.8) for men and 37.3% (sd=6.9) in women, varying from 6% to 51.4%. There were no differences among the estimates of the equations

  9. A short 18 items food frequency questionnaire biochemically validated to estimate zinc status in humans.

    Science.gov (United States)

    Trame, Sarah; Wessels, Inga; Haase, Hajo; Rink, Lothar

    2018-02-21

    Inadequate dietary zinc intake is widespread in the world's population. Despite the clinical significance of zinc deficiency, there is no established method or biomarker to reliably evaluate zinc status. The aim of our study was to develop a biochemically validated questionnaire as a clinically useful tool that can predict the risk of an individual being zinc deficient. Blood and urine samples were collected from 71 subjects aged 18-55 years. Zinc concentrations in serum and urine were determined by atomic absorption spectrometry. A food frequency questionnaire (FFQ) including 38 items, representing consumption during the last 6 months, was filled out to obtain nutrient diet scores. The latter were calculated by multiplying the particular frequency of consumption, the nutrient intake of the respective portion size, and the extent of the consumed quantity. Results from the FFQ were compared with nutrient intake information gathered in 24-h dietary recalls. A hemogram was performed and cytokine concentrations were obtained using enzyme-linked immunosorbent assays. Reducing the items of the primary FFQ from 38 to 18 did not result in a significant difference between the two calculated scores. Zinc diet scores showed highly significant correlations with serum zinc (r = 0.37; p < 0.01) and urine zinc concentrations (r = 0.34; p < 0.01). Serum zinc concentrations and zinc diet scores showed a significant positive correlation with animal protein intake (r = 0.37; p < 0.01 and r = 0.54; p < 0.0001, respectively). Higher zinc diet scores were found in omnivores compared to vegetarians (213.5 vs. 111.9; p < 0.0001). The 18-item FFQ seems to be a sufficient tool to provide a good estimation of zinc status. Moreover, shortening of the questionnaire to 18 items without a loss of predictive efficiency enables a facilitated and resource-saving routine use. A validation of the questionnaire in other cohorts could enable the progression towards clinical

  10. Validating alternative methodologies to estimate the hydrological regime of temporary streams when flow data are unavailable

    Science.gov (United States)

    Llorens, Pilar; Gallart, Francesc; Latron, Jérôme; Cid, Núria; Rieradevall, Maria; Prat, Narcís

    2016-04-01

    Aquatic life in temporary streams is strongly conditioned by the temporal variability of the hydrological conditions that control the occurrence and connectivity of diverse mesohabitats. In this context, the software TREHS (Temporary Rivers' Ecological and Hydrological Status) has been developed, in the framework of the LIFE Trivers project, to help managers adequately implement the Water Framework Directive in this type of water body. TREHS, using the methodology described in Gallart et al. (2012), defines six temporal 'aquatic states', based on the hydrological conditions representing different mesohabitats, for a given reach at a particular moment. Nevertheless, hydrological data for assessing the regime of temporary streams are often non-existent or scarce. The scarcity of flow data frequently makes it impossible to characterize the hydrological regimes of temporary streams and, as a consequence, to select the correct periods and methods to determine their ecological status. Because of its qualitative nature, the TREHS approach allows the use of alternative methodologies to assess the regime of temporary streams in the absence of observed flow data. However, to adapt TREHS to such qualitative data, both the temporal scheme (from monthly to seasonal) and the number of aquatic states (from 6 to 3) have been modified. Two alternative, complementary methodologies were tested within the TREHS framework to assess the regime of temporary streams: interviews and aerial photographs. All the gauging stations (13) belonging to the Catalan Internal Catchments (NE Spain) with recurrent zero-flow periods were selected to validate both methodologies. On the one hand, non-structured interviews were carried out with inhabitants of villages and small towns near the gauging stations. Flow permanence metrics for input into TREHS were drawn from the notes taken during the interviews. On the other hand, the historical series of available aerial photographs (typically 10

  11. Prevalence Estimation and Validation of New Instruments in Psychiatric Research: An Application of Latent Class Analysis and Sensitivity Analysis

    Science.gov (United States)

    Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.

    2009-01-01

    Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…

  12. The validity and reproducibility of food-frequency questionnaire–based total antioxidant capacity estimates in Swedish women

    Science.gov (United States)

    Total antioxidant capacity (TAC) provides an assessment of antioxidant activity and synergistic interactions of redox molecules in foods and plasma. We investigated the validity and reproducibility of food frequency questionnaire (FFQ)–based TAC estimates assessed by oxygen radical absorbance capaci...

  13. A comparative study and validation of state estimation algorithms for Li-ion batteries in battery management systems

    International Nuclear Information System (INIS)

    Klee Barillas, Joaquín; Li, Jiahao; Günther, Clemens; Danzer, Michael A.

    2015-01-01

    Highlights: • Description of state observers for estimating the battery’s SOC. • Implementation of four estimation algorithms in a BMS. • Reliability and performance study of BMS regarding the estimation algorithms. • Analysis of the robustness and code properties of the estimation approaches. • Guide to evaluate estimation algorithms to improve the BMS performance. - Abstract: To increase lifetime, safety, and energy usage, battery management systems (BMS) for Li-ion batteries have to be capable of estimating the state of charge (SOC) of the battery cells with a very low estimation error. Accurate SOC estimation and real-time reliability are critical issues for a BMS. In general, an increasing complexity of the estimation methods leads to higher accuracy. On the other hand, it also leads to a higher computational load and may exceed the BMS limitations or increase its costs. An approach to evaluate and verify estimation algorithms is presented as a requisite prior to the release of the battery system. The approach consists of an analysis concerning the SOC estimation accuracy, the code properties, complexity, the computation time, and the memory usage. Furthermore, a study of estimation methods is proposed for their evaluation and validation with respect to convergence behavior, parameter sensitivity, initialization error, and performance. In this work, the introduced analysis is demonstrated with four of the most frequently published model-based estimation algorithms, including the Luenberger observer, sliding-mode observer, extended Kalman filter and sigma-point Kalman filter. Experiments under dynamic current conditions are used to verify the real-time functionality of the BMS. The results show that a simple estimation method like the sliding-mode observer can compete with the Kalman-based methods, requiring less computation time and memory. Depending on the battery system’s application, the estimation algorithm has to be selected to fulfill the
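
    The simplest family of model-based observers compared above can be illustrated with a one-state model: coulomb counting corrected by a Luenberger-style voltage feedback term. The linear OCV curve, cell parameters, gain and inputs below are invented for illustration and are not taken from the paper:

      def ocv(soc):                       # assumed linear open-circuit voltage curve
          return 3.0 + 1.2 * soc          # volts, for soc in [0, 1]

      def soc_observer_step(soc_hat, current_a, v_measured, dt_s,
                            capacity_as=2.0 * 3600, r_ohm=0.05, gain=0.05):
          """One update: coulomb counting plus output-error correction."""
          v_predicted = ocv(soc_hat) - r_ohm * current_a
          soc_hat += -current_a * dt_s / capacity_as       # coulomb counting (discharge positive)
          soc_hat += gain * (v_measured - v_predicted)     # Luenberger-style feedback
          return min(max(soc_hat, 0.0), 1.0)

      soc = 0.5                                            # deliberately wrong initial guess
      for _ in range(100):                                 # toy constant-current, constant-voltage input
          soc = soc_observer_step(soc, current_a=1.0, v_measured=3.9, dt_s=1.0)
      print(round(soc, 3))                                 # converges toward the voltage-consistent SOC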

  14. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    Science.gov (United States)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the

  15. A deep learning approach to estimate chemically-treated collagenous tissue nonlinear anisotropic stress-strain responses from microscopy images.

    Science.gov (United States)

    Liang, Liang; Liu, Minliang; Sun, Wei

    2017-11-01

    Biological collagenous tissues comprised of networks of collagen fibers are suitable for a broad spectrum of medical applications owing to their attractive mechanical properties. In this study, we developed a noninvasive approach to estimate collagenous tissue elastic properties directly from microscopy images using Machine Learning (ML) techniques. Glutaraldehyde-treated bovine pericardium (GLBP) tissue, widely used in the fabrication of bioprosthetic heart valves and vascular patches, was chosen to develop a representative application. A Deep Learning model was designed and trained to process second harmonic generation (SHG) images of collagen networks in GLBP tissue samples, and directly predict the tissue elastic mechanical properties. The trained model is capable of identifying the overall tissue stiffness with a classification accuracy of 84%, and predicting the nonlinear anisotropic stress-strain curves with average regression errors of 0.021 and 0.031. Thus, this study demonstrates the feasibility and great potential of using the Deep Learning approach for fast and noninvasive assessment of collagenous tissue elastic properties from microstructural images. In this study, we developed, to our best knowledge, the first Deep Learning-based approach to estimate the elastic properties of collagenous tissues directly from noninvasive second harmonic generation images. The success of this study holds promise for the use of Machine Learning techniques to noninvasively and efficiently estimate the mechanical properties of many structure-based biological materials, and it also enables many potential applications such as serving as a quality control tool to select tissue for the manufacturing of medical devices (e.g. bioprosthetic heart valves). Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
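
    A minimal sketch (PyTorch assumed; this is not the authors' architecture) of the kind of image-to-property regression described above: a small convolutional network mapping a single-channel SHG image to a handful of material-model parameters. Layer sizes, image size and the four-parameter output are invented for illustration; training would require paired SHG images and mechanically tested stress-strain data, as collected in the study.

      import torch
      import torch.nn as nn

      class TissuePropertyCNN(nn.Module):
          def __init__(self, n_params: int = 4):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                  nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                  nn.AdaptiveAvgPool2d((8, 8)),
              )
              self.regressor = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
                  nn.Linear(128, n_params),   # e.g. coefficients of an assumed constitutive model
              )

          def forward(self, x):
              return self.regressor(self.features(x))

      model = TissuePropertyCNN()
      dummy_shg = torch.randn(2, 1, 256, 256)   # batch of 2 placeholder SHG images
      print(model(dummy_shg).shape)             # -> torch.Size([2, 4])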

  16. Mixed-strain housing for female C57BL/6, DBA/2, and BALB/c mice: validating a split-plot design that promotes refinement and reduction.

    Science.gov (United States)

    Walker, Michael; Fureix, Carole; Palme, Rupert; Newman, Jonathan A; Ahloy Dallaire, Jamie; Mason, Georgia

    2016-01-27

    Inefficient experimental designs are common in animal-based biomedical research, wasting resources and potentially leading to unreplicable results. Here we illustrate the intrinsic statistical power of split-plot designs, wherein three or more sub-units (e.g. individual subjects) differing in a variable of interest (e.g. genotype) share an experimental unit (e.g. a cage or litter) to which a treatment is applied (e.g. a drug, diet, or cage manipulation). We also empirically validate one example of such a design, mixing different mouse strains -- C57BL/6, DBA/2, and BALB/c -- within cages varying in degree of enrichment. As well as boosting statistical power, no other manipulations are needed for individual identification if co-housed strains are differentially pigmented, so also sparing mice from stressful marking procedures. The validation involved housing 240 females from weaning to 5 months of age in single- or mixed-strain trios, in cages allocated to enriched or standard treatments. Mice were screened for a range of 26 commonly-measured behavioural, physiological and haematological variables. Living in mixed-strain trios did not compromise mouse welfare (assessed via corticosterone metabolite output, stereotypic behaviour, signs of aggression, and other variables). It also did not alter the direction or magnitude of any strain- or enrichment-typical difference across the 26 measured variables, or increase variance in the data: indeed variance was significantly decreased by mixed-strain housing. Furthermore, using Monte Carlo simulations to quantify the statistical power benefits of this approach over a conventional design demonstrated that for our effect sizes, the split-plot design would require significantly fewer mice (under half in most cases) to achieve a power of 80%. Mixed-strain housing allows several strains to be tested at once, and potentially refines traditional marking practices for research mice. Furthermore, it dramatically illustrates the
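
    The power argument above can be illustrated with a small Monte Carlo: when strains share a cage, the cage effect cancels out of within-cage strain comparisons, so fewer cages are needed for the same power. Effect size, variance components, cage numbers and the simple paired/unpaired t-test comparison below are invented for illustration and are not the study's simulation:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def simulated_power(n_cages=20, strain_effect=0.8, cage_sd=1.0, noise_sd=1.0,
                          mixed=True, n_sims=2000, alpha=0.05):
          hits = 0
          for _ in range(n_sims):
              cage = rng.normal(0.0, cage_sd, n_cages)
              strain_a = cage + rng.normal(0.0, noise_sd, n_cages)
              if mixed:   # both strains housed in every cage -> paired test, cage effect cancels
                  strain_b = cage + strain_effect + rng.normal(0.0, noise_sd, n_cages)
                  p = stats.ttest_rel(strain_a, strain_b).pvalue
              else:       # one strain per cage -> unpaired comparison across fresh cages
                  strain_b = (rng.normal(0.0, cage_sd, n_cages) + strain_effect
                              + rng.normal(0.0, noise_sd, n_cages))
                  p = stats.ttest_ind(strain_a, strain_b).pvalue
              hits += p < alpha
          return hits / n_sims

      print("mixed-strain (split-plot) power:", simulated_power(mixed=True))
      print("single-strain-per-cage power:   ", simulated_power(mixed=False))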

  17. Mixed-strain housing for female C57BL/6, DBA/2, and BALB/c mice: validating a split-plot design that promotes refinement and reduction

    Directory of Open Access Journals (Sweden)

    Michael Walker

    2016-01-01

    Full Text Available Abstract Background Inefficient experimental designs are common in animal-based biomedical research, wasting resources and potentially leading to unreplicable results. Here we illustrate the intrinsic statistical power of split-plot designs, wherein three or more sub-units (e.g. individual subjects) differing in a variable of interest (e.g. genotype) share an experimental unit (e.g. a cage or litter) to which a treatment is applied (e.g. a drug, diet, or cage manipulation). We also empirically validate one example of such a design, mixing different mouse strains -- C57BL/6, DBA/2, and BALB/c -- within cages varying in degree of enrichment. As well as boosting statistical power, no other manipulations are needed for individual identification if co-housed strains are differentially pigmented, so also sparing mice from stressful marking procedures. Methods The validation involved housing 240 females from weaning to 5 months of age in single- or mixed-strain trios, in cages allocated to enriched or standard treatments. Mice were screened for a range of 26 commonly-measured behavioural, physiological and haematological variables. Results Living in mixed-strain trios did not compromise mouse welfare (assessed via corticosterone metabolite output, stereotypic behaviour, signs of aggression, and other variables). It also did not alter the direction or magnitude of any strain- or enrichment-typical difference across the 26 measured variables, or increase variance in the data: indeed variance was significantly decreased by mixed-strain housing. Furthermore, using Monte Carlo simulations to quantify the statistical power benefits of this approach over a conventional design demonstrated that for our effect sizes, the split-plot design would require significantly fewer mice (under half in most cases) to achieve a power of 80%. Conclusions Mixed-strain housing allows several strains to be tested at once, and potentially refines traditional marking practices

  18. Validation and uncertainty estimation of fast neutron activation analysis method for Cu, Fe, Al, Si elements in sediment samples

    International Nuclear Information System (INIS)

    Sunardi; Samin Prihatin

    2010-01-01

    Validation and uncertainty estimation of the Fast Neutron Activation Analysis (FNAA) method for Cu, Fe, Al and Si elements in sediment samples has been conducted. The aim of the research is to confirm whether the FNAA method still conforms to the ISO/IEC 17025-2005 standard. The research covered verification, performance, validation of FNAA and uncertainty estimation. The SRM 8704 standard and sediments were weighed to certain masses and irradiated with 14 MeV fast neutrons, then counted using gamma spectrometry. The validation results for the Cu, Fe, Al and Si elements showed that the accuracy was in the range of 95.89-98.68%, while the precision was in the range 1.13-2.29%. The uncertainty estimates for Cu, Fe, Al and Si were 2.67, 1.46, 1.71 and 1.20%, respectively. From these data, it can be concluded that the FNAA method is still reliable and valid for analysis of element contents in samples, because the accuracy is above 95% and the precision is under 5%, while the uncertainties are relatively small and suitable for the 95% level of confidence, where the maximum acceptable uncertainty is 5%. (author)

  19. Validity of parent-reported weight and height of preschool children measured at home or estimated without home measurement: a validation study

    Directory of Open Access Journals (Sweden)

    Cox Bianca

    2011-07-01

    Full Text Available Abstract Background Parental reports are often used in large-scale surveys to assess children's body mass index (BMI). Therefore, it is important to know to what extent these parental reports are valid and whether it makes a difference if the parents measured their children's weight and height at home or whether they simply estimated these values. The aim of this study is to compare the validity of parent-reported height, weight and BMI values of preschool children (3-7 y-old), when measured at home or estimated by parents without actual measurement. Methods The subjects were 297 Belgian preschool children (52.9% male). Participation rate was 73%. A questionnaire including questions about height and weight of the children was completed by the parents. Nurses measured height and weight following standardised procedures. International age- and sex-specific BMI cut-off values were employed to determine categories of weight status and obesity. Results On the group level, no important differences in accuracy of reported height, weight and BMI were identified between parent-measured or estimated values. However, for all 3 parameters, the correlations between parental reports and nurse measurements were higher in the group of children whose body dimensions were measured by the parents. Sensitivity for underweight and overweight/obesity were respectively 73% and 47% when parents measured their child's height and weight, and 55% and 47% when parents estimated values without measurement. Specificity for underweight and overweight/obesity were respectively 82% and 97% when parents measured the children, and 75% and 93% with parent estimations. Conclusions Diagnostic measures were more accurate when parents measured their child's weight and height at home than when those dimensions were based on parental judgements. When parent-reported data on an individual level is used, the accuracy could be improved by encouraging the parents to measure weight and height

  20. Validation of the CHIRPS Satellite Rainfall Estimates over Eastern of Africa

    Science.gov (United States)

    Dinku, T.; Funk, C. C.; Tadesse, T.; Ceccato, P.

    2017-12-01

    Long and temporally consistent rainfall time series are essential in climate analyses and applications. Rainfall data from station observations are inadequate over many parts of the world due to sparse or non-existent observation networks, or limited reporting of gauge observations. As a result, satellite rainfall estimates have been used as an alternative or as a supplement to station observations. However, many satellite-based rainfall products with long time series suffer from coarse spatial and temporal resolutions and inhomogeneities caused by variations in satellite inputs. There are some satellite rainfall products with reasonably consistent time series, but they are often limited to specific geographic areas. The Climate Hazards Group Infrared Precipitation (CHIRP) and CHIRP combined with station observations (CHIRPS) are recently produced satellite-based rainfall products with relatively high spatial and temporal resolutions and quasi-global coverage. In this study, CHIRP and CHIRPS were evaluated over East Africa at daily, dekadal (10-day) and monthly time scales. The evaluation was done by comparing the satellite products with rain gauge data from about 1200 stations, an unprecedented number of validation stations for this region. The results provide a unique region-wide understanding of how satellite products perform over different climatic/geographic regions (lowlands, mountainous regions, and coastal areas). The CHIRP and CHIRPS products were also compared with two similar satellite rainfall products: the African Rainfall Climatology version 2 (ARC2) and the latest release of the Tropical Applications of Meteorology using Satellite data (TAMSAT). The results show that both CHIRP and CHIRPS products are significantly better than ARC2, with higher skill and low or no bias. These products were also found to be slightly better than the latest version of the TAMSAT product. A comparison was also done between the latest release of the TAMSAT product

  1. Lifetime estimates of a fusion reactor first wall by linear damage summation and strain range partitioning methods

    International Nuclear Information System (INIS)

    Liu, K.C.; Grossbeck, M.L.

    1979-01-01

    A generalized model of a first wall made of 20% cold-worked steel was examined for neutron wall loadings ranging from 2 to 5 MW/m². A spectrum of simplified on-off duty cycles was assumed with a 95% burn time. Independent evaluations of cyclic lifetimes were based on two methods: the method of linear damage summation currently being employed for use in ASME high-temperature design Code Case N-47, and that of strain range partitioning being studied for inclusion in the design code. An important point is that the latter method can incorporate a known decrease in ductility for materials subject to irradiation as a parameter, so low-cycle fatigue behavior can be estimated for irradiated material. Lifetimes predicted by the two methods agree reasonably well despite their diversity in concept. Lack of high-cycle fatigue data for the material tested at temperatures within the range of our interest precludes conclusions on the accuracy of the predicted results, but such data are forthcoming. The analysis includes stress relaxation due to thermal and irradiation-induced creep. Reduced ductility values from irradiations that simulate the environment of the first wall of a fusion reactor were used to estimate the lifetime of the first wall under irradiation. These results indicate that 20% cold-worked type 316 stainless steel could be used as a first-wall material meeting an 8 to 10 MW-year/m² lifetime goal for a neutron wall loading of about 2 MW/m² and a maximum temperature of about 500°C
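
    The first of the two lifetime methods above, linear damage summation (Miner's rule), adds the fractional damage accumulated per duty cycle from each mechanism and takes failure when the total reaches 1; strain range partitioning instead assigns separate life relations to the partitioned creep/plasticity strain-range components before summing. A sketch of the summation step only, with invented per-cycle damage fractions:

      def cycles_to_failure(damage_fractions_per_cycle):
          """Miner's rule: failure when the summed damage fractions reach 1."""
          damage_per_cycle = sum(damage_fractions_per_cycle)   # sum of n_i / N_i terms per cycle
          return 1.0 / damage_per_cycle

      # e.g. assumed fatigue damage of 2e-5 and creep damage of 1e-5 per burn cycle
      print(cycles_to_failure([2e-5, 1e-5]))   # -> ~33,333 duty cycles to reach D = 1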

  2. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    Science.gov (United States)

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. In addition, to assess the accuracy of each method in estimating

  3. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time

    OpenAIRE

    Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.

    2011-01-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server...

  4. Validation by theoretical approach to the experimental estimation of efficiency for gamma spectrometry of gas in 100 ml standard flask

    International Nuclear Information System (INIS)

    Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.

    1996-01-01

    Estimation of gaseous activity forms an important component of air monitoring at Madras Atomic Power Station (MAPS). The gases of importance are argon-41, an air activation product, and the fission product noble gas xenon-133. For estimating the concentration, the experimental method is used in which a grab sample is collected in a 100 ml volumetric standard flask. The activity of the gas is then computed by gamma spectrometry using a predetermined efficiency estimated experimentally. An attempt is made using a theoretical approach to validate the experimental method of efficiency estimation. Two analytical models, named the relative flux model and the absolute activity model, were developed independently of each other. Attention is focussed on the efficiencies for Ar-41 and Xe-133. Results show that the present method of sampling and analysis using a 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs

  5. Validity test of the IPD-Work consortium approach for creating comparable job strain groups between Job Content Questionnaire and Demand-Control Questionnaire

    Directory of Open Access Journals (Sweden)

    Bongkyoo Choi

    2015-04-01

    Objectives: This study aims to test the validity of the IPD-Work Consortium approach for creating comparable job strain groups between the Job Content Questionnaire (JCQ) and the Demand-Control Questionnaire (DCQ). Material and Methods: A random population sample (N = 682) of all middle-aged Malmö males and females was given a questionnaire with the 14-item JCQ and 11-item DCQ for the job control and job demands. The JCQ job control and job demands scores were calculated in 3 different ways: using the 14-item JCQ standard scale formulas (method 1); dropping 3 job control items and using the 11-item JCQ standard scale formulas with additional scale weights (method 2); and the approach of the IPD Group (method 3), dropping 3 job control items, but using the simple 11-item summation-based scale formulas. The high job strain was defined as a combination of high demands and low control. Results: Between the 2 questionnaires, false negatives for the high job strain were much greater than false positives (37–49% vs. 7–13%). When the method 3 was applied, the sensitivity of the JCQ for the high job strain against the DCQ was lowest (0.51 vs. 0.60–0.63 when the methods 1 and 2 were applied), although the specificity was highest (0.93 vs. 0.87–0.89 when the methods 1 and 2 were applied). The prevalence of the high job strain with the JCQ (the method 3 was applied) was considerably lower (4–7%) than with the JCQ (the methods 1 and 2 were applied) and the DCQ. The number of congruent cases for the high job strain between the 2 questionnaires was smallest when the method 3 was applied. Conclusions: The IPD-Work Consortium approach showed 2 major weaknesses to be used for epidemiological studies on the high job strain and health outcomes as compared to the standard JCQ methods: the greater misclassification of the high job strain and lower prevalence of the high job strain.

  6. Validity test of the IPD-Work consortium approach for creating comparable job strain groups between Job Content Questionnaire and Demand-Control Questionnaire.

    Science.gov (United States)

    Choi, Bongkyoo; Ko, Sangbaek; Ostergren, Per-Olof

    2015-01-01

    This study aims to test the validity of the IPD-Work Consortium approach for creating comparable job strain groups between the Job Content Questionnaire (JCQ) and the Demand-Control Questionnaire (DCQ). A random population sample (N = 682) of all middle-aged Malmö males and females was given a questionnaire with the 14-item JCQ and 11-item DCQ for the job control and job demands. The JCQ job control and job demands scores were calculated in 3 different ways: using the 14-item JCQ standard scale formulas (method 1); dropping 3 job control items and using the 11-item JCQ standard scale formulas with additional scale weights (method 2); and the approach of the IPD Group (method 3), dropping 3 job control items, but using the simple 11-item summation-based scale formulas. The high job strain was defined as a combination of high demands and low control. Between the 2 questionnaires, false negatives for the high job strain were much greater than false positives (37-49% vs. 7-13%). When the method 3 was applied, the sensitivity of the JCQ for the high job strain against the DCQ was lowest (0.51 vs. 0.60-0.63 when the methods 1 and 2 were applied), although the specificity was highest (0.93 vs. 0.87-0.89 when the methods 1 and 2 were applied). The prevalence of the high job strain with the JCQ (the method 3 was applied) was considerably lower (4-7%) than with the JCQ (the methods 1 and 2 were applied) and the DCQ. The number of congruent cases for the high job strain between the 2 questionnaires was smallest when the method 3 was applied. The IPD-Work Consortium approach showed 2 major weaknesses to be used for epidemiological studies on the high job strain and health outcomes as compared to the standard JCQ methods: the greater misclassification of the high job strain and lower prevalence of the high job strain. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  7. Development and validation of a CFD based methodology to estimate the pressure loss of flow through perforated plates

    International Nuclear Information System (INIS)

    Barros Filho, Jose A.; Navarro, Moyses A.; Santos, Andre A.C. dos; Jordao, E.

    2011-01-01

    In spite of the recent great development of Computational Fluid Dynamics (CFD), there are still open questions about how to assess its accuracy. This work presents the validation of a CFD methodology devised to estimate the pressure drop of water flow through perforated plates similar to the ones used in some reactor core components. This was accomplished by comparing the results of CFD simulations against experimental data of 5 perforated plates with different geometric characteristics. The proposed methodology correlates the experimental data within a range of ± 7.5%. The validation procedure recommended by the ASME Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer (V&V 20) is also evaluated. The conclusion is that it is not adequate for this specific use. (author)

  8. Relative validation of a food frequency questionnaire to estimate food intake in an adult population.

    Science.gov (United States)

    Steinemann, Nina; Grize, Leticia; Ziesemer, Katrin; Kauf, Peter; Probst-Hensch, Nicole; Brombach, Christine

    2017-01-01

    Background: Scientifically valid descriptions of dietary intake at population level are crucial for investigating diet effects on health and disease. Food frequency questionnaires (FFQs) are the most common dietary tools used in large epidemiological studies. Objective: To examine the relative validity of a newly developed FFQ to be used as a dietary assessment tool in epidemiological studies. Design: Validity was evaluated by comparing the FFQ and a 4-day weighed food record (4-d FR) at nutrient and food group levels; Spearman's correlations, Bland-Altman analysis and Wilcoxon rank sum tests were used. Fifty-six participants completed a paper format FFQ and a 4-d FR within 4 weeks. Results: Corrected correlations between the two instruments ranged from 0.27 (carbohydrates) to 0.55 (protein), and at food group level from 0.09 (soup) to 0.92 (alcohol). Nine out of 25 food groups showed correlations > 0.5, indicating moderate validity. More than half the food groups were overestimated in the FFQ, especially vegetables (82.8%) and fruits (56.3%). Water, tea and coffee were underestimated (-14.0%). Conclusions: The FFQ showed moderate relative validity for protein and the food groups fruits, egg, meat, sausage, nuts, salty snacks and beverages. This study supports the use of the FFQ as an acceptable tool for assessing nutrition as a health determinant in large epidemiological studies.
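
    A rough sketch of the comparison statistics named above (Spearman correlation, Bland-Altman limits of agreement, Wilcoxon test), applied to made-up FFQ and weighed-record intakes for a single nutrient; the data and the 56-participant sample size are illustrative only.

        import numpy as np
        from scipy.stats import spearmanr, wilcoxon

        # Hedged sketch: relative-validity statistics for one nutrient. Intakes are simulated.
        rng = np.random.default_rng(0)
        record = rng.normal(80, 15, size=56)             # 4-d weighed record, g/day
        ffq = record * 1.1 + rng.normal(0, 12, size=56)  # FFQ tends to overestimate

        rho, p_rho = spearmanr(ffq, record)
        diff = ffq - record
        bias, half_width = diff.mean(), 1.96 * diff.std(ddof=1)  # Bland-Altman
        _, p_wilcoxon = wilcoxon(ffq, record)

        print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
        print(f"Bland-Altman bias = {bias:.1f} g/day, 95% LoA = "
              f"({bias - half_width:.1f}, {bias + half_width:.1f})")
        print(f"Wilcoxon signed-rank p = {p_wilcoxon:.3f}")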

  9. Validation of an efficient visual method for estimating leaf area index ...

    African Journals Online (AJOL)

    This study aimed to evaluate the accuracy and applicability of a visual method for estimating LAI in clonal Eucalyptus grandis × E. urophylla plantations and to compare it with hemispherical photography, ceptometer and LAI-2000® estimates. Destructive sampling for direct determination of the actual LAI was performed in ...

  10. Estimating and validating ground-based timber harvesting production through computer simulation

    Science.gov (United States)

    Jingxin Wang; Chris B. LeDoux

    2003-01-01

    Estimating ground-based timber harvesting systems production with an object oriented methodology was investigated. The estimation model developed generates stands of trees, simulates chain saw, drive-to-tree feller-buncher, swing-to-tree single-grip harvester felling, and grapple skidder and forwarder extraction activities, and analyzes costs and productivity. It also...

  11. CANDU radiotoxicity inventories estimation: A calculated experiment cross-check for data verification and validation

    International Nuclear Information System (INIS)

    Pavelescu, Alexandru Octavian; Cepraga, Dan Gabriel

    2007-01-01

    This paper is related to the Clearance Potential Index, Ingestion and Inhalation Hazard Factors of the nuclear spent fuel and radioactive wastes. This study required a complex activity that consisted of various phases, such as the acquisition, setting up, validation and application of procedures, codes and libraries. The paper reflects the validation phase of this study. Its objective was to compare the measured inventories of selected actinide and fission product radionuclides in an element from a Pickering CANDU reactor with inventories predicted using a recent version of ORIGEN-ARP from SCALE 5 coupled with the time-dependent cross-section library, CANDU 28.lib, produced by the sequence SAS2H of SCALE 4.4a. In this way, the procedures, codes and libraries for the characterization of radioactive material in terms of radioactive inventories, clearance, and biological hazard factors are being qualified and validated, in support of the safety management of the radioactive wastes. (authors)

  12. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time.

    Science.gov (United States)

    Martin, Corby K; Correa, John B; Han, Hongmei; Allen, H Raymond; Rood, Jennifer C; Champagne, Catherine M; Gunturk, Bahadir K; Bray, George A

    2012-04-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1's objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake (EI) with the Remote Food Photography Method (RFPM) over 6 days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, EI estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n = 24) or Customized Prompts (n = 16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating EI when Standard (mean ± s.d. = -895 ± 770 kcal/day, P < 0.0001), but not Customized Prompts (-270 ± 748 kcal/day, P = 0.22) were used. Error (EI from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM's ability to accurately estimate EI in free-living adults (N = 50) over 6 days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living EI (-152 ± 694 kcal/day, P = 0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake.

  13. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship

    Science.gov (United States)

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate Bench press 1 repetition maximum (1RM) from the force - velocity relationship. Twenty seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force - velocity test using a Bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab’s software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p<0.001) but largely different (Bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1 RM. It was concluded that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training induced adaptations, but also that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points Some commercial devices allow to estimate 1 RM from the force-velocity relationship. These estimations are valid. However, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding to the usefulness of this approach in the monitoring of training induced adaptations. PMID:24149641
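
    The estimation principle described above can be illustrated with the widely used load-velocity variant: regress mean lifting velocity on load and extrapolate to a minimal velocity threshold. The loads, velocities and the 0.17 m/s threshold below are illustrative assumptions, not the Musclelab algorithm.

        import numpy as np

        # Hedged sketch: 1 RM from a load-velocity profile (linear fit, extrapolation
        # to an assumed minimal velocity threshold). All numbers are made up.
        loads_kg = np.array([20.0, 30.0, 40.0, 50.0])        # submaximal test loads
        mean_velocity = np.array([1.10, 0.88, 0.65, 0.42])   # m/s at each load

        slope, intercept = np.polyfit(loads_kg, mean_velocity, 1)  # v = slope*load + intercept
        mvt = 0.17                                                 # assumed 1 RM velocity (m/s)
        estimated_1rm = (mvt - intercept) / slope
        print(f"Estimated bench press 1 RM: {estimated_1rm:.1f} kg")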

  14. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time

    Science.gov (United States)

    Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.

    2014-01-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, energy intake estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n=24) or Customized Prompts (n=16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating energy intake when Standard (mean±SD = −895±770 kcal/day, p<.0001), but not Customized Prompts (−270±748 kcal/day, p=.22) were used. Error (energy intake from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM’s ability to accurately estimate energy intake in free-living adults (N=50) over six days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living energy intake (−152±694 kcal/day, p=0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake. PMID:22134199

  15. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Science.gov (United States)

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln - Peterson mark - recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark - recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark - recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
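
    For reference, the two estimators compared above have simple closed forms. The sketch below uses the Chapman-modified Lincoln-Petersen formula and the standard two-pass removal formula with made-up catch data; it is not the study's analysis, which also modelled sampling efficiency.

        # Hedged sketch: textbook abundance estimators with illustrative catch data.

        def lincoln_petersen(marked, captured, recaptured):
            """Chapman-modified Lincoln-Petersen mark-recapture estimate."""
            return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

        def two_pass_removal(catch1, catch2):
            """Two-pass removal estimate; assumes equal capture probability per pass."""
            if catch1 <= catch2:
                raise ValueError("removal estimator needs a declining catch")
            return catch1 * catch1 / (catch1 - catch2)

        print(round(lincoln_petersen(marked=45, captured=50, recaptured=20)))  # ~111
        print(round(two_pass_removal(catch1=60, catch2=25)))                   # ~103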

  16. Performance evaluation of methods for two-dimensional displacement and strain estimation using ultrasound radio frequency data

    NARCIS (Netherlands)

    Lopata, R.G.P.; Nillesen, M.M.; Hansen, H.H.G.; Gerrits, I.H.; Thijssen, J.M.; Korte, de C.L.

    2009-01-01

    In elastography, several methods for 2-D strain imaging have been introduced, based on both radio frequency (RF) data and speckle tracking. Although the precision and lesion detectability of axial strain imaging in terms of elastographic signal-to-noise ratio (SNRe) and elastographic contrast-to-noise

  17. Validation of operant social motivation paradigms using BTBR T+tf/J and C57BL/6J inbred mouse strains.

    Science.gov (United States)

    Martin, Loren; Sample, Hannah; Gregg, Michael; Wood, Caleb

    2014-09-01

    As purported causal factors are identified for autism spectrum disorder (ASD), new assays are needed to better phenotype animal models designed to explore these factors. With recent evidence suggesting that deficits in social motivation are at the core of ASD behavior, the development of quantitative measures of social motivation is particularly important. The goal of our study was to develop and validate novel assays to quantitatively measure social motivation in mice. In order to test the validity of our paradigms, we compared the BTBR strain, with documented social deficits, to the prosocial C57BL/6J strain. Two novel conditioning paradigms were developed that allowed the test mouse to control access to a social partner. In the social motivation task, the test mice lever pressed for a social reward. The reward contingency was set on a progressive ratio of reinforcement and the number of lever presses achieved in the final trial of a testing session (breakpoint) was used as an index of social motivation. In the valence comparison task, motivation for a food reward was compared to a social reward. We also explored activity, social affiliation, and preference for social novelty through a series of tasks using an ANY-Maze video-tracking system in an open-field arena. BTBR mice had significantly lower breakpoints in the social motivation paradigm than C57BL/6J mice. However, the valence comparison task revealed that BTBR mice also made significantly fewer lever presses for a food reward. The results of the conditioning paradigms suggest that the BTBR strain has an overall deficit in motivated behavior. Furthermore, the results of the open-field observations may suggest that social differences in the BTBR strain are anxiety induced.

  18. Validation of operant social motivation paradigms using BTBR T+tf/J and C57BL/6J inbred mouse strains

    Science.gov (United States)

    Martin, Loren; Sample, Hannah; Gregg, Michael; Wood, Caleb

    2014-01-01

    Background As purported causal factors are identified for autism spectrum disorder (ASD), new assays are needed to better phenotype animal models designed to explore these factors. With recent evidence suggesting that deficits in social motivation are at the core of ASD behavior, the development of quantitative measures of social motivation is particularly important. The goal of our study was to develop and validate novel assays to quantitatively measure social motivation in mice. Methods In order to test the validity of our paradigms, we compared the BTBR strain, with documented social deficits, to the prosocial C57BL/6J strain. Two novel conditioning paradigms were developed that allowed the test mouse to control access to a social partner. In the social motivation task, the test mice lever pressed for a social reward. The reward contingency was set on a progressive ratio of reinforcement and the number of lever presses achieved in the final trial of a testing session (breakpoint) was used as an index of social motivation. In the valence comparison task, motivation for a food reward was compared to a social reward. We also explored activity, social affiliation, and preference for social novelty through a series of tasks using an ANY-Maze video-tracking system in an open-field arena. Results BTBR mice had significantly lower breakpoints in the social motivation paradigm than C57BL/6J mice. However, the valence comparison task revealed that BTBR mice also made significantly fewer lever presses for a food reward. Conclusions The results of the conditioning paradigms suggest that the BTBR strain has an overall deficit in motivated behavior. Furthermore, the results of the open-field observations may suggest that social differences in the BTBR strain are anxiety induced. PMID:25328850

  19. In Vivo Validation of a Blood Vector Velocity Estimator with MR Angiography

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten

    2009-01-01

    Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound beam direction. This implies that a Doppler angle under examination close to 90° results in unreliable information about the true blood direction and blood velocity. The novel method...... indicate that reliable vector velocity estimates can be obtained in vivo using the presented angle-independent 2-D vector velocity method. The TO method can be a useful alternative to conventional Doppler systems by avoiding the angle artifact, thus giving quantitative velocity information....

  20. Validation of SMAP Root Zone Soil Moisture Estimates with Improved Cosmic-Ray Neutron Probe Observations

    Science.gov (United States)

    Babaeian, E.; Tuller, M.; Sadeghi, M.; Franz, T.; Jones, S. B.

    2017-12-01

    Soil Moisture Active Passive (SMAP) soil moisture products are commonly validated based on point-scale reference measurements, despite the exorbitant spatial scale disparity. The difference between the measurement depth of point-scale sensors and the penetration depth of SMAP further complicates evaluation efforts. Cosmic-ray neutron probes (CRNP) with an approximately 500-m radius footprint provide an appealing alternative for SMAP validation. This study is focused on the validation of SMAP level-4 root zone soil moisture products with 9-km spatial resolution based on CRNP observations at twenty U.S. reference sites with climatic conditions ranging from semiarid to humid. The CRNP measurements are often biased by additional hydrogen sources such as surface water, atmospheric vapor, or mineral lattice water, which sometimes yield unrealistic moisture values in excess of the soil water storage capacity. These effects were removed during CRNP data analysis. Comparison of SMAP data with corrected CRNP observations revealed a very high correlation for most of the investigated sites, which opens new avenues for validation of current and future satellite soil moisture products.

  1. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    Science.gov (United States)

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...

  2. Validation of traffic-related air pollution exposure estimates for long-term studies

    NARCIS (Netherlands)

    Van Roosbroeck, S.

    2007-01-01

    This thesis describes a series of studies that investigate the validity of using outdoor concentrations and/or traffic-related indicator exposure variables as a measure for exposure assessment in epidemiological studies on the long-term effect of traffic-related air pollution. A pilot study was

  3. Reliability and validity of food portion size estimation from images using manual flexible digital virtual meshes

    Science.gov (United States)

    The eButton takes frontal images at 4 second intervals throughout the day. A three-dimensional (3D) manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional (2D) images. This paper reports a test of the interrater reliability and validity of use...

  4. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Directory of Open Access Journals (Sweden)

    Vesna Režić Dereani

    2010-09-01

    The aim of this research is to describe quality control procedures, procedures for validation and measurement uncertainty (MU) determination as an important element of quality assurance in the food microbiology laboratory for qualitative and quantitative types of analysis. Accreditation according to the standard ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests, has recently been widely introduced in food microbiology laboratories in Croatia. In addition to the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are measurement uncertainty (MU) procedures and the establishment of validation experiment designs. These procedures are not yet standardized even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analysis are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions of both procedures are given.

  5. A comparative study of soft sensor design for lipid estimation of microalgal photobioreactor system with experimental validation.

    Science.gov (United States)

    Yoo, Sung Jin; Jung, Dong Hwi; Kim, Jung Hun; Lee, Jong Min

    2015-03-01

    This study examines the applicability of various nonlinear estimators for online estimation of the lipid concentration in microalgae cultivation system. Lipid is a useful bio-product that has many applications including biofuels and bioactives. However, the improvement of lipid productivity using real-time monitoring and control with experimental validation is limited because measurement of lipid in microalgae is a difficult and time-consuming task. In this study, estimation of lipid concentration from other measurable sources such as biomass or glucose sensor was studied. Extended Kalman filter (EKF), unscented Kalman filter (UKF), and particle filter (PF) were compared in various cases for their applicability to photobioreactor systems. Furthermore, simulation studies to identify appropriate types of sensors for estimating lipid were also performed. Based on the case studies, the most effective case was validated with experimental data and found that UKF and PF with time-varying system noise covariance is effective for microalgal photobioreactor system. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Evaluation of wet bulb globe temperature index for estimation of heat strain in hot/humid conditions in the Persian Gulf

    OpenAIRE

    Habibolah Dehghan; Seyed Bagher Mortazavi; Mohammad J Jafari; Mohammad R Maracy

    2012-01-01

    Background: Heat exposure among construction workers in the Persian Gulf region is a serious hazard for health. The aim of this study was to evaluate the performance of wet bulb globe temperature (WBGT) Index for estimation of heat strain in hot/humid conditions by the use of Physiological Strain Index (PSI) as the gold standard. Material and Methods : This cross-sectional study was carried out on 71 workers of two Petrochemical Companies in South of Iran in 2010 summer. The WBGT index, heart...

  7. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    Science.gov (United States)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates (ε̇). The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns similar viscoelastic constants as the original epsilon dot method based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
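
    As a rough illustration of this kind of single-curve fitting, the sketch below assumes a single Maxwell arm plus an equilibrium spring and defines the apparent modulus as the secant modulus at a fixed strain; the resulting closed form is an assumption for illustration, not the paper's multi-arm expression.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hedged sketch: fit an apparent-modulus strain-rate spectrum to recover lumped
        # constants. Single Maxwell arm + equilibrium spring; E_app is the secant modulus
        # sigma/eps evaluated at a fixed strain EPS0 under a constant strain rate.
        EPS0 = 0.1  # strain at which the apparent modulus is evaluated (assumption)

        def e_app(eps_dot, e_inf, e1, tau1):
            x = tau1 * eps_dot / EPS0
            return e_inf + e1 * x * (1.0 - np.exp(-1.0 / x))

        # Simulated "experimental" apparent moduli (kPa) at four compressive strain rates
        rates = np.array([1e-3, 1e-2, 1e-1, 1.0])  # 1/s
        noise = 1.0 + 0.02 * np.random.default_rng(1).standard_normal(rates.size)
        e_meas = e_app(rates, 5.0, 12.0, 0.5) * noise

        popt, _ = curve_fit(e_app, rates, e_meas, p0=[4.0, 10.0, 0.3])
        print("E_inf, E_1, tau_1 =", np.round(popt, 2))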

  8. Estimating population cause-specific mortality fractions from in-hospital mortality: validation of a new method.

    Directory of Open Access Journals (Sweden)

    Christopher J L Murray

    2007-11-01

    Cause-of-death data for many developing countries are not available. Information on deaths in hospital by cause is available in many low- and middle-income countries but is not a representative sample of deaths in the population. We propose a method to estimate population cause-specific mortality fractions (CSMFs) using data already collected in many middle-income and some low-income developing nations, yet rarely used: in-hospital death records. For a given cause of death, a community's hospital deaths are equal to total community deaths multiplied by the proportion of deaths occurring in hospital. If we can estimate the proportion dying in hospital, we can estimate the proportion dying in the population using deaths in hospital. We propose to estimate the proportion of deaths for an age, sex, and cause group that die in hospital from the subset of the population where vital registration systems function or from another population. We evaluated our method using nearly complete vital registration (VR) data from Mexico 1998-2005, which records whether a death occurred in a hospital. In this validation test, we used 45 disease categories. We validated our method in two ways: nationally and between communities. First, we investigated how the method's accuracy changes as we decrease the amount of Mexican VR used to estimate the proportion of each age, sex, and cause group dying in hospital. Decreasing VR data used for this first step from 100% to 9% produces only a 12% maximum relative error between estimated and true CSMFs. Even if Mexico collected full VR information only in its capital city with 9% of its population, our estimation method would produce an average relative error in CSMFs across the 45 causes of just over 10%. Second, we used VR data for the capital zone (Distrito Federal and Estado de Mexico) and estimated CSMFs for the three lowest-development states. Our estimation method gave an average relative error of 20%, 23%, and 31% for
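
    The core identity described above can be written out directly: community deaths by cause equal hospital deaths divided by the proportion of that cause's deaths occurring in hospital, and the CSMFs follow by renormalisation. The numbers below are illustrative, not the Mexican VR data.

        # Hedged sketch: CSMFs from in-hospital deaths and assumed in-hospital proportions.
        hospital_deaths = {"ischaemic heart disease": 120, "road injuries": 30, "diarrhoeal disease": 15}
        prop_in_hospital = {"ischaemic heart disease": 0.60, "road injuries": 0.25, "diarrhoeal disease": 0.30}

        community_deaths = {c: hospital_deaths[c] / prop_in_hospital[c] for c in hospital_deaths}
        total = sum(community_deaths.values())
        csmf = {c: d / total for c, d in community_deaths.items()}

        for cause, fraction in csmf.items():
            print(f"{cause}: {fraction:.2%}")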

  9. Optical Tracking Data Validation and Orbit Estimation for Sparse Observations of Satellites by the OWL-Net.

    Science.gov (United States)

    Choi, Jin; Jo, Jung Hyun; Yim, Hong-Suh; Choi, Eun-Jung; Cho, Sungki; Park, Jang-Hyun

    2018-06-07

    An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining Korean low Earth orbit (LEO) satellites' orbital ephemeris. The OWL-Net consists of five optical tracking stations. Brightness signals of reflected sunlight of the targets were detected by a charged coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, maximum 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated with precise orbital ephemeris such as Consolidated Prediction File (CPF) data and precise orbit determination result with onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, an average observation span for a single arc of 11 LEO observation targets was about 5 min, while an average optical observation separation time was 5 h. We estimated the position and velocity with an atmospheric drag coefficient of LEO observation targets using a sequential-batch orbit estimation technique after multi-arc batch orbit estimation. Post-fit residuals for the multi-arc batch orbit estimation and sequential-batch orbit estimation were analyzed for the optical measurements and reference orbit (CPF and GPS data). The post-fit residuals with reference show few tens-of-meters errors for in-track direction for multi-arc batch and sequential-batch orbit estimation results.

  10. Optical Tracking Data Validation and Orbit Estimation for Sparse Observations of Satellites by the OWL-Net

    Directory of Open Access Journals (Sweden)

    Jin Choi

    2018-06-01

    An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining Korean low Earth orbit (LEO) satellites’ orbital ephemeris. The OWL-Net consists of five optical tracking stations. Brightness signals of reflected sunlight of the targets were detected by a charged coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, maximum 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated with precise orbital ephemeris such as Consolidated Prediction File (CPF) data and precise orbit determination result with onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, an average observation span for a single arc of 11 LEO observation targets was about 5 min, while an average optical observation separation time was 5 h. We estimated the position and velocity with an atmospheric drag coefficient of LEO observation targets using a sequential-batch orbit estimation technique after multi-arc batch orbit estimation. Post-fit residuals for the multi-arc batch orbit estimation and sequential-batch orbit estimation were analyzed for the optical measurements and reference orbit (CPF and GPS data). The post-fit residuals with respect to the reference show errors of a few tens of meters in the in-track direction for the multi-arc batch and sequential-batch orbit estimation results.

  11. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship.

    Science.gov (United States)

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate bench press 1 repetition maximum (1 RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab's software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p < 0.001) but largely different (bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1 RM. It was concluded that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but also that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow estimation of 1 RM from the force-velocity relationship. These estimations are valid. However, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before conclusions can be drawn about the usefulness of this approach in the monitoring of training-induced adaptations.

  12. Relative Validity and Reproducibility of a Food-Frequency Questionnaire for Estimating Food Intakes among Flemish Preschoolers

    Directory of Open Access Journals (Sweden)

    Inge Huybrechts

    2009-01-01

    The aims of this study were to assess the relative validity and reproducibility of a semi-quantitative food-frequency questionnaire (FFQ) applied in a large region-wide survey among 2.5-6.5 year-old children for estimating food group intakes. Parents/guardians were used as a proxy. Estimated diet records (3 d) were used as the reference method and reproducibility was measured by repeated FFQ administrations five weeks apart. In total 650 children were included in the validity analyses and 124 in the reproducibility analyses. Comparing median FFQ1 to FFQ2 intakes, almost all evaluated food groups showed median differences within a range of ± 15%. However, for median vegetables, fruit and cheese intake, FFQ1 was > 20% higher than FFQ2. For most foods a moderate correlation (0.5-0.7) was obtained between FFQ1 and FFQ2. For cheese, sugared drinks and fruit juice intakes correlations were even > 0.7. For median differences between the 3-d EDR and the FFQ, six food groups (potatoes & grains; vegetables; fruit; cheese; meat, game, poultry and fish; and sugared drinks) gave a difference > 20%. The largest corrected correlations (>0.6) were found for the intake of potatoes and grains, fruit, milk products, cheese, sugared drinks, and fruit juice, while the lowest correlations (<0.4) were found for bread and meat products. The proportion of subjects classified within one quartile (in the same/adjacent category) by FFQ and EDR ranged from 67% (for meat products) to 88% (for fruit juice). Extreme misclassification into the opposite quartiles was for all food groups < 10%. The results indicate that our newly developed FFQ gives reproducible estimates of food group intake. Overall, moderate levels of relative validity were observed for estimates of food group intake.

  13. Development and test validation of a computational scheme for high-fidelity fluence estimations of the Swiss BWRs

    International Nuclear Information System (INIS)

    Vasiliev, A.; Wieselquist, W.; Ferroukhi, H.; Canepa, S.; Heldt, J.; Ledergerber, G.

    2011-01-01

    One of the current objectives within reactor analysis related projects at the Paul Scherrer Institut is the establishment of a comprehensive computational methodology for fast neutron fluence (FNF) estimations of reactor pressure vessels (RPV) and internals for both PWRs and BWRs. In the recent past, such an integral calculational methodology based on the CASMO-4/SIMULATE- 3/MCNPX system of codes was developed for PWRs and validated against RPV scraping tests. Based on the very satisfactory validation results, the methodology was recently applied for predictive FNF evaluations of a Swiss PWR to support the national nuclear safety inspectorate in the framework of life-time estimations. Today, focus is at PSI given to develop a corresponding advanced methodology for high-fidelity FNF estimations of BWR reactors. In this paper, the preliminary steps undertaken in that direction are presented. To start, the concepts of the PWR computational scheme and its transfer/adaptation to BWR are outlined. Then, the modelling of a Swiss BWR characterized by very heterogeneous core designs is presented along with preliminary sensitivity studies carried out to assess the sufficient level of details required for the complex core region. Finally, a first validation test case is presented on the basis of two dosimeter monitors irradiated during two recent cycles of the given BWR reactor. The achieved computational results show a satisfactory agreement with measured dosimeter data and illustrate thereby the feasibility of applying the PSI FNF computational scheme also for BWRs. Further sensitivity/optimization studies are nevertheless necessary in order to consolidate the scheme and to ensure increasing continuously, the fidelity and reliability of the BWR FNF estimations. (author)

  14. MR-based Water Content Estimation in Cartilage: Design and Validation of a Method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sofie; Ringgaard, Steffen

    2012-01-01

    Objective: Design and validation of an MR-based method that allows the calculation of the water content in cartilage tissue. Material and Methods: We modified and adapted to cartilage tissue the T1 map based water content MR sequences commonly used in the neurology field. Using a 37 degree Celsius stable...... was customized and programmed. Finally, we validated the method by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained data were analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique removes all the water that a tissue...... contains) and we measured the water they contained. Results: We could reproduce the 37 degree Celsius system twice and could perform the measurements in a similar way. We found that the MR T1 map based water content sequences can provide information that, after being analyzed with special software, can

  15. PEANO, a toolbox for real-time process signal validation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Fantoni, Paolo F.; Figedy, Stefan; Racz, Attila

    1998-02-01

    PEANO (Process Evaluation and Analysis by Neural Operators), a toolbox for real time process signal validation and condition monitoring has been developed. This system analyses the signals, which are e.g. the readings of process monitoring sensors, computes their expected values and alerts if real values are deviated from the expected ones more than limits allow. The reliability level of the current analysis is also produced. The system is based on neuro-fuzzy techniques. Artificial Neural Networks and Fuzzy Logic models can be combined to exploit learning and generalisation capability of the first technique with the approximate reasoning embedded in the second approach. Real-time process signal validation is an application field where the use of this technique can improve the diagnosis of faulty sensors and the identification of outliers in a robust and reliable way. This study implements a fuzzy and possibilistic clustering algorithm to classify the operating region where the validation process has to be performed. The possibilistic approach (rather than probabilistic) allows a ''don't know'' classification that results in a fast detection of unforeseen plant conditions or outliers. Specialised Artificial Neural Networks are used for the validation process, one for each fuzzy cluster in which the operating map has been divided. There are two main advantages in using this technique: the accuracy and generalisation capability is increased compared to the case of a single network working in the entire operating region, and the ability to identify abnormal conditions, where the system is not capable to operate with a satisfactory accuracy, is improved. This model has been tested in a simulated environment on a French PWR, to monitor safety-related reactor variables over the entire power-flow operating map. (author)

  16. PEANO, a toolbox for real-time process signal validation and estimation

    International Nuclear Information System (INIS)

    Fantoni, Paolo F.; Figedy, Stefan; Racz, Attila

    1998-02-01

    PEANO (Process Evaluation and Analysis by Neural Operators), a toolbox for real time process signal validation and condition monitoring has been developed. This system analyses the signals, which are e.g. the readings of process monitoring sensors, computes their expected values and alerts if real values are deviated from the expected ones more than limits allow. The reliability level of the current analysis is also produced. The system is based on neuro-fuzzy techniques. Artificial Neural Networks and Fuzzy Logic models can be combined to exploit learning and generalisation capability of the first technique with the approximate reasoning embedded in the second approach. Real-time process signal validation is an application field where the use of this technique can improve the diagnosis of faulty sensors and the identification of outliers in a robust and reliable way. This study implements a fuzzy and possibilistic clustering algorithm to classify the operating region where the validation process has to be performed. The possibilistic approach (rather than probabilistic) allows a ''don't know'' classification that results in a fast detection of unforeseen plant conditions or outliers. Specialised Artificial Neural Networks are used for the validation process, one for each fuzzy cluster in which the operating map has been divided. There are two main advantages in using this technique: the accuracy and generalisation capability is increased compared to the case of a single network working in the entire operating region, and the ability to identify abnormal conditions, where the system is not capable to operate with a satisfactory accuracy, is improved. This model has been tested in a simulated environment on a French PWR, to monitor safety-related reactor variables over the entire power-flow operating map. (author)
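
    A rough sketch of the idea described in the two PEANO records above: partition the operating map, train one simple estimator per region, and refuse to validate ("don't know") when the current plant state lies far from every known region. Plain k-means and linear regression below are stand-ins for the fuzzy/possibilistic clustering and neural networks actually used; all data are simulated.

        import numpy as np

        # Hedged sketch: region-wise signal validation with a "don't know" outcome.
        rng = np.random.default_rng(0)

        def kmeans(x, k, iters=50):
            centers = x[rng.choice(len(x), k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(((x[:, None, :] - centers) ** 2).sum(-1), axis=1)
                centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            return centers, labels

        # Training data: plant state (e.g. power, flow) and the sensor reading to validate
        state = rng.uniform(0.0, 1.0, size=(400, 2))
        sensor = 3.0 * state[:, 0] + 1.5 * state[:, 1] + rng.normal(0.0, 0.02, 400)

        k = 4
        centers, labels = kmeans(state, k)
        models = [np.linalg.lstsq(np.c_[state[labels == j], np.ones((labels == j).sum())],
                                  sensor[labels == j], rcond=None)[0] for j in range(k)]
        radius = np.array([np.linalg.norm(state[labels == j] - centers[j], axis=1).max()
                           for j in range(k)])

        def validate(x, reading, tol=0.1):
            d = np.linalg.norm(centers - x, axis=1)
            j = int(d.argmin())
            if d[j] > 1.5 * radius[j]:                    # unforeseen plant condition
                return "don't know"
            expected = models[j] @ np.r_[x, 1.0]
            return "valid" if abs(reading - expected) <= tol else "suspect sensor"

        print(validate(np.array([0.5, 0.5]), 2.30))   # close to expected ~2.25
        print(validate(np.array([0.5, 0.5]), 3.00))   # deviates from expectation
        print(validate(np.array([5.0, 5.0]), 2.00))   # outside the operating map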

  17. Relative validity of an FFQ to estimate daily food and nutrient intakes for Chilean adults.

    Science.gov (United States)

    Dehghan, Mahshid; Martinez, Solange; Zhang, Xiaohe; Seron, Pamela; Lanas, Fernando; Islam, Shofiqul; Merchant, Anwar T

    2013-10-01

    FFQ are commonly used to rank individuals by their food and nutrient intakes in large epidemiological studies. The purpose of the present study was to develop and validate an FFQ to rank individuals participating in an ongoing Prospective Urban and Rural Epidemiological (PURE) study in Chile. An FFQ and four 24 h dietary recalls were completed over 1 year. Pearson correlation coefficients, energy-adjusted and de-attenuated correlations and weighted kappa were computed between the dietary recalls and the FFQ. The level of agreement between the two dietary assessment methods was evaluated by Bland-Altman analysis. Temuco, Chile. Overall, 166 women and men enrolled in the present study. One hundred men and women participated in FFQ development and sixty-six individuals participated in FFQ validation. The FFQ consisted of 109 food items. For nutrients, the crude correlation coefficients between the dietary recalls and FFQ varied from 0.14 (protein) to 0.44 (fat). Energy adjustment and de-attenuation improved correlation coefficients and almost all correlation coefficients exceeded 0.40. Similar correlation coefficients were observed for food groups; the highest de-attenuated energy adjusted correlation coefficient was found for margarine and butter (0.75) and the lowest for potatoes (0.12). The FFQ showed moderate to high agreement for most nutrients and food groups, and can be used to rank individuals based on energy, nutrient and food intakes. The validation study was conducted in a unique setting and indicated that the tool is valid for use by adults in Chile.
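
    The de-attenuation step mentioned above is usually the Rosner-Willett correction, which scales the observed correlation up according to the within- to between-person variance ratio of the reference recalls. The variance ratio and number of recalls below are illustrative assumptions, not values from this study.

        import math

        # Hedged sketch: de-attenuating an observed FFQ vs. 24-h recall correlation.
        def deattenuate(r_observed, var_within, var_between, n_recalls):
            lam = var_within / var_between          # within/between-person variance ratio
            return r_observed * math.sqrt(1.0 + lam / n_recalls)

        r_observed = 0.30   # e.g. crude correlation for protein
        print(round(deattenuate(r_observed, var_within=1.8, var_between=1.0, n_recalls=4), 2))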

  18. Engineering C-integral estimates for generalised creep behaviour and finite element validation

    International Nuclear Information System (INIS)

    Kim, Yun-Jae; Kim, Jin-Su; Huh, Nam-Su; Kim, Young-Jin

    2002-01-01

    This paper proposes an engineering method to estimate the creep C-integral for realistic creep laws to assess defective components operating at elevated temperatures. The proposed estimation method is mainly for the steady-state C * -integral, but a suggestion is also given for estimating the transient C(t)-integral. The reference stress approach is the basis of the proposed equation, but an enhancement in terms of accuracy is made through the definition of the reference stress. The proposed estimation equations are compared with extensive elastic-creep FE results employing various creep-deformation constitutive laws for six different geometries, including two-dimensional, axi-symmetric and three-dimensional geometries. Overall good agreement between the proposed method and the FE results provides confidence in the use of the proposed method for defect assessment of components at elevated temperatures. Moreover, it is shown that for surface cracks the proposed method can be used to estimate C * at any location along the crack front

  19. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    Science.gov (United States)

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    The aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.

  20. Validation of a food quantification picture book and portion sizes estimation applying perception and memory methods.

    Science.gov (United States)

    Szenczi-Cseh, J; Horváth, Zs; Ambrus, Á

    2017-12-01

    We tested the applicability of the EPIC-SOFT food picture series used in the context of a Hungarian food consumption survey gathering data for exposure assessment, and investigated errors in food portion estimation resulting from visual perception and conceptualisation-memory. Sixty-two participants in three age groups estimated the portion sizes of foods. The results were considered acceptable if the relative difference between the average estimated and actual weight obtained through the perception method was ≤25% and the relative standard deviation of the individual weight estimates was below a set limit; food items meeting these criteria were rated acceptable. Small portion sizes tended to be overestimated and large ones tended to be underestimated. Portions of boiled potato and creamed spinach were all over- and underestimated, respectively. Recalling the portion sizes resulted in overestimation, with larger differences (up to 60.7%).

  1. In-vivo validation of fast spectral velocity estimation techniques – preliminary results

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2008-01-01

    Spectral Doppler is a common way to estimate blood velocities in medical ultrasound (US). The standard way of estimating spectrograms is by using Welch's method (WM). WM is dependent on a long observation window (OW) (about 100 transmissions) to produce spectrograms with sufficient spectral...... resolution and contrast. Two adaptive filterbank methods have been suggested to circumvent this problem: the Blood spectral Power Capon method (BPC) and the Blood Amplitude and Phase Estimation method (BAPES). Previously, simulations and flow rig experiments have indicated that the two adaptive methods can...... was scanned using the experimental ultrasound scanner RASMUS and a B-K Medical 5 MHz linear array transducer with an angle of insonation not exceeding 60deg. All 280 spectrograms were then randomised and presented to a radiologist blinded for method and OW for visual evaluation: useful or not useful. WMbw...

  2. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations.

    Science.gov (United States)

    Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris

    2013-01-23

    Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of layout. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current application. However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs and multi-level buildings. Based on these results, we discuss possible ways forward for the method's development.
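
    The estimator itself is a simple product, as sketched below: counted residential structures multiplied by an assumed persons-per-structure figure, with the occupancy range carried through as an uncertainty band. The numbers are illustrative, not from the study sites.

        # Hedged sketch: displaced-population estimate from a structure count and
        # reported occupancy figures (all values illustrative).
        structure_count = 1850                     # manual count on the satellite image
        occupancy_low, occupancy_high = 4.2, 5.6   # persons per structure, from reports

        point_estimate = structure_count * (occupancy_low + occupancy_high) / 2.0
        low = structure_count * occupancy_low
        high = structure_count * occupancy_high
        print(f"Population estimate: {point_estimate:.0f} (range {low:.0f}-{high:.0f})")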

  3. Validation and scale dependencies of the triangle method for the evaporative fraction estimation over heterogeneous areas

    DEFF Research Database (Denmark)

    de Tomás, Alberto; Nieto, Héctor; Guzinski, Radoslaw

    2014-01-01

    Remote sensing has proved to be a consistent tool for monitoring water fluxes at regional scales. The triangle method, in particular, estimates the evaporative fraction (EF), defined as the ratio of latent heat flux (LE) to available energy, based on the relationship between satellite observations...... of land surface temperature and a vegetation index. Among other methodologies, this approach has been commonly used as an approximation to estimate LE, mainly over large semi-arid areas with uniform landscape features. In this study, an interpretation of the triangular space has been applied over...

  4. Validity of Standing Posture Eight-electrode Bioelectrical Impedance to Estimate Body Composition in Taiwanese Elderly

    Directory of Open Access Journals (Sweden)

    Ling-Chun Lee

    2014-09-01

    Conclusion: The results of this study showed that the impedance index and LST in the whole body, upper limbs, and lower limbs derived from DXA findings were highly correlated. The LST and BF% estimated by BIA8 in whole body and various body segments were highly correlated with the corresponding DXA results; however, BC-418 overestimates the participants' appendicular LST and underestimates whole body BF%. Therefore, caution is needed when interpreting the results of appendicular LST and whole body BF% estimated for elderly adults.

  5. [Prognostic estimation in critical patients. Validation of a new and very simple system of prognostic estimation of survival in an intensive care unit].

    Science.gov (United States)

    Abizanda, R; Padron, A; Vidal, B; Mas, S; Belenguer, A; Madero, J; Heras, A

    2006-04-01

    To validate a new system for the prognostic estimation of survival in critical patients (EPEC) seen in a multidisciplinary intensive care unit (ICU). Prospective analysis of a patient cohort seen in the ICU of a multidisciplinary Intensive Medicine Service of a reference teaching hospital with 19 beds. Four hundred eighty-four patients admitted consecutively over 6 months in 2003. Data collection of a basic minimum data set that includes patient identification data (gender, age), reason for admission and origin, and prognostic estimation of survival by EPEC, MPM II 0 and SAPS II (the latter two considered as the gold standard). Mortality was evaluated at hospital discharge. EPEC validation was done by analysing its discriminating capacity (ROC curve), the calibration of its prognostic capacity (Hosmer-Lemeshow C test) and the resolution of 2 × 2 contingency tables around different probability values (20, 50, 70 and the mean value of the prognostic estimation). The standardized mortality rate (SMR) for each of the methods was calculated. Linear regression of the EPEC against MPM II 0 and SAPS II was established and concordance analyses (Bland-Altman test) of the prediction of mortality by the three systems were done. In spite of an apparently good linear correlation, similar accuracy of prediction and discrimination capacity, EPEC is not well calibrated (no likelihood of death greater than 50%) and the concordance analyses show that more than 10% of the pairs were outside the 95% confidence interval. In spite of its ease of application and calculation and of incorporating the delay of admission to the ICU as a variable, EPEC does not offer any predictive advantage over MPM II 0 or SAPS II, and its predictions fit reality less well.
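
    Two of the statistics named in this record, the standardized mortality rate (observed deaths divided by the sum of predicted probabilities) and a rank-based ROC area for discrimination, can be sketched as follows. The cohort below is simulated, not the EPEC data.

```python
# Standardized mortality rate (SMR) and a Mann-Whitney estimate of the ROC
# area, computed on a synthetic cohort of predicted risks and outcomes.
import numpy as np

def smr(deaths: np.ndarray, predicted_prob: np.ndarray) -> float:
    """Observed deaths divided by expected deaths."""
    return deaths.sum() / predicted_prob.sum()

def roc_auc(deaths: np.ndarray, predicted_prob: np.ndarray) -> float:
    """Probability that a random death has a higher predicted risk than a
    random survivor (ties count 0.5)."""
    pos = predicted_prob[deaths == 1]
    neg = predicted_prob[deaths == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.9, 484)               # predicted probabilities of death
y = (rng.uniform(size=484) < p).astype(int)   # simulated hospital outcomes
print(f"SMR = {smr(y, p):.2f}, AUC = {roc_auc(y, p):.2f}")
```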

  6. Relative validity of fruit and vegetable intake estimated by the food frequency questionnaire used in the Danish National Birth Cohort

    DEFF Research Database (Denmark)

    Mikkelsen, Tina B.; Olsen, Sjurdur F.; Rasmussen, Salka E.

    2007-01-01

    Objective: To validate the fruit and vegetable intake estimated from the Food Frequency Questionnaire (FFQ) used in the Danish National Birth Cohort (DNBC). Subjects and setting: The DNBC is a cohort of 101,042 pregnant women in Denmark, who received a FFQ by mail in gestation week 25. A validation study with 88 participants was made. A seven-day weighed food diary (FD) and three different biomarkers were employed as comparison methods. Results: Significant correlations between FFQ and FD-based estimates were found for fruit (r=0.66); vegetables (r=0.32); juice (r=0.52); fruit and vegetables (F&V) (r=0.57); and fruit, vegetables, and juice (F&V&J) (r=0.62). Sensitivities of correct classification by FFQ into the two lowest and the two highest quintiles of F&V&J intake were 58-67% and 50-74%, respectively, and specificities were 71-79% and 65-83%, respectively. F&V&J intake estimated from...

  7. Sulphur levels in saliva as an estimation of sulphur status in cattle: a validation study

    NARCIS (Netherlands)

    Dermauw, V.; Froidmont, E.; Dijkstra, J.; Boever, de J.L.; Vyverman, W.; Debeer, A.E.; Janssens, G.P.J.

    2012-01-01

    Effective assessment of sulphur (S) status in cattle is important for optimal health, yet remains difficult. Rumen fluid S concentrations are preferred, but difficult to sample under practical conditions. This study aimed to evaluate salivary S concentration as estimator of S status in cattle.

  8. Validating diagnoses from hospital discharge registers change risk estimates for acute coronary syndrome

    DEFF Research Database (Denmark)

    Joensen, Albert Marni; Schmidt, E.B.; Dethlefsen, Claus

    2007-01-01

    We examined whether validation of acute coronary syndrome (ACS) diagnoses identified in a hospital discharge register changed the relative risk estimates of well-established risk factors for ACS. Methods: All first-time ACS diagnoses (n=1138) in the Danish National Patient Registry were identified among male participants in the Danish...

  9. Validation of the Ejike-Ijeh equations for the estimation of body fat ...

    African Journals Online (AJOL)

    The Ejike-Ijeh equations for the estimation of body fat percentage makes it possible for the body fat content of individuals and populations to be determined without the use of costly equipment. However, because the equations were derived using data from a young-adult (18-29 years old) Nigerian population, it is important ...

  10. Development and validation of a method to estimate body weight in ...

    African Journals Online (AJOL)

    Mid-arm circumference (MAC) has previously been used as a surrogate indicator of habitus, and the objective of this study was to determine whether MAC cut-off values could be used to predict habitus scores (HSs) to create an objective and standardised weight estimation methodology, the PAWPER XL-MAC method.

  11. Simultaneous Validation of Seven Physical Activity Questionnaires Used in Japanese Cohorts for Estimating Energy Expenditure: A Doubly Labeled Water Study.

    Science.gov (United States)

    Sasai, Hiroyuki; Nakata, Yoshio; Murakami, Haruka; Kawakami, Ryoko; Nakae, Satoshi; Tanaka, Shigeho; Ishikawa-Takata, Kazuko; Yamada, Yosuke; Miyachi, Motohiko

    2018-04-28

    Physical activity questionnaires (PAQs) used in large-scale Japanese cohorts have rarely been simultaneously validated against the gold standard doubly labeled water (DLW) method. This study examined the validity of seven PAQs used in Japan for estimating energy expenditure against the DLW method. Twenty healthy Japanese adults (9 men; mean age, 32.4 [standard deviation {SD}, 9.4] years, mainly researchers and students) participated in this study. Fifteen-day daily total energy expenditure (TEE) and basal metabolic rate (BMR) were measured using the DLW method and a metabolic chamber, respectively. Activity energy expenditure (AEE) was calculated as TEE - BMR - 0.1 × TEE. Seven PAQs were self-administered to estimate TEE and AEE. The mean measured values of TEE and AEE were 2,294 (SD, 318) kcal/day and 721 (SD, 161) kcal/day, respectively. All of the PAQs indicated moderate-to-strong correlations with the DLW method in TEE (rho = 0.57-0.84). Two PAQs (Japan Public Health Center Study [JPHC]-PAQ Short and JPHC-PAQ Long) showed significant equivalence in TEE and moderate intra-class correlation coefficients (ICC). None of the PAQs showed significantly equivalent AEE estimates, with differences ranging from -547 to 77 kcal/day. Correlations and ICCs in AEE were mostly weak or fair (rho = 0.02-0.54, and ICC = 0.00-0.44). Only JPHC-PAQ Short provided significant and fair agreement with the DLW method. TEE estimated by the PAQs showed moderate or strong correlations with the results of DLW. Two PAQs showed equivalent TEE and moderate agreement. None of the PAQs showed equivalent AEE estimation to the gold standard, with weak-to-fair correlations and agreements. Further studies with larger sample sizes are needed to confirm these findings.
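
    The bookkeeping for activity energy expenditure quoted above (AEE = TEE - BMR - 0.1 × TEE, where the last term allows for diet-induced thermogenesis) and a Spearman correlation against the DLW reference can be sketched as follows; all values are simulated.

```python
# AEE derived from measured TEE and BMR, plus a Spearman correlation between
# questionnaire-based and DLW-based AEE. Data below are simulated.
import numpy as np
from scipy.stats import spearmanr

def activity_energy_expenditure(tee_kcal: np.ndarray, bmr_kcal: np.ndarray) -> np.ndarray:
    """AEE = TEE - BMR - 0.1*TEE (0.1*TEE approximates diet-induced thermogenesis)."""
    return tee_kcal - bmr_kcal - 0.1 * tee_kcal

rng = np.random.default_rng(1)
tee_dlw = rng.normal(2294, 318, 20)              # measured TEE, kcal/day
bmr = rng.normal(1450, 150, 20)                  # measured BMR, kcal/day
aee_dlw = activity_energy_expenditure(tee_dlw, bmr)
aee_paq = aee_dlw + rng.normal(-300, 250, 20)    # questionnaire estimate with bias and noise

rho, _ = spearmanr(aee_paq, aee_dlw)
print(f"mean difference = {np.mean(aee_paq - aee_dlw):+.0f} kcal/day, rho = {rho:.2f}")
```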

  12. A new validation technique for estimations of body segment inertia tensors: Principal axes of inertia do matter.

    Science.gov (United States)

    Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J

    2016-12-08

    The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of simulations to changes in magnitude of principal moments of inertia within ±10% and to changes in orientation of principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors up to 10% in magnitude of principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when lumped with errors of 10° in principal axes of inertia orientation. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. Principal axes of inertia orientation should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
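
    The criterion used above reduces to a per-frame angle between experimental and simulated angular velocity vectors, summarised by its root mean square. A short sketch with synthetic vectors:

```python
# Deviation angle between experimental and simulated angular-velocity vectors,
# summarised as a root-mean-squared deviation angle. Vectors are synthetic.
import numpy as np

def deviation_angles_deg(omega_exp: np.ndarray, omega_sim: np.ndarray) -> np.ndarray:
    """Angle (degrees) between paired 3-D vectors, row by row."""
    cosang = np.sum(omega_exp * omega_sim, axis=1) / (
        np.linalg.norm(omega_exp, axis=1) * np.linalg.norm(omega_sim, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x ** 2)))

rng = np.random.default_rng(2)
omega_exp = rng.normal(size=(200, 3))                          # "ground truth" angular velocity
omega_sim = omega_exp + rng.normal(scale=0.05, size=(200, 3))  # simulation with small errors
print(f"RMS deviation angle = {rms(deviation_angles_deg(omega_exp, omega_sim)):.1f} deg")
```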

  13. An intercomparison and validation of satellite-based surface radiative energy flux estimates over the Arctic

    Science.gov (United States)

    Riihelä, Aku; Key, Jeffrey R.; Meirink, Jan Fokke; Kuipers Munneke, Peter; Palo, Timo; Karlsson, Karl-Göran

    2017-05-01

    Accurate determination of radiative energy fluxes over the Arctic is of crucial importance for understanding atmosphere-surface interactions, melt and refreezing cycles of the snow and ice cover, and the role of the Arctic in the global energy budget. Satellite-based estimates can provide comprehensive spatiotemporal coverage, but the accuracy and comparability of the existing data sets must be ascertained to facilitate their use. Here we compare radiative flux estimates from Clouds and the Earth's Radiant Energy System (CERES) Synoptic 1-degree (SYN1deg)/Energy Balanced and Filled, Global Energy and Water Cycle Experiment (GEWEX) surface energy budget, and our own experimental FluxNet / Satellite Application Facility on Climate Monitoring cLoud, Albedo and RAdiation (CLARA) data against in situ observations over Arctic sea ice and the Greenland Ice Sheet during summer of 2007. In general, CERES SYN1deg flux estimates agree best with in situ measurements, although with two particular limitations: (1) over sea ice the upwelling shortwave flux in CERES SYN1deg appears to be underestimated because of an underestimated surface albedo and (2) the CERES SYN1deg upwelling longwave flux over sea ice saturates during midsummer. The Advanced Very High Resolution Radiometer-based GEWEX and FluxNet-CLARA flux estimates generally show a larger range in retrieval errors relative to CERES, with contrasting tendencies relative to each other. The largest source of retrieval error in the FluxNet-CLARA downwelling shortwave flux is shown to be an overestimated cloud optical thickness. The results illustrate that satellite-based flux estimates over the Arctic are not yet homogeneous and that further efforts are necessary to investigate the differences in the surface and cloud properties which lead to disagreements in flux retrievals.

  14. Are traditional body fat equations and anthropometry valid to estimate body fat in children and adolescents living with HIV?

    Science.gov (United States)

    Lima, Luiz Rodrigo Augustemak de; Martins, Priscila Custódio; Junior, Carlos Alencar Souza Alves; Castro, João Antônio Chula de; Silva, Diego Augusto Santos; Petroski, Edio Luiz

    The aim of this study was to assess the validity of traditional anthropometric equations and to develop predictive equations of total body and trunk fat for children and adolescents living with HIV based on anthropometric measurements. Forty-eight children and adolescents of both sexes (24 boys) aged 7-17 years, living in Santa Catarina, Brazil, participated in the study. Dual-energy X-ray absorptiometry was used as the reference method to evaluate total body and trunk fat. Height, body weight, circumferences and triceps, subscapular, abdominal and calf skinfolds were measured. The traditional equations of Lohman and Slaughter were used to estimate body fat. Multiple regression models were fitted to predict total body fat (Model 1) and trunk fat (Model 2) using a backward selection procedure. Model 1 had an R² = 0.85 and a standard error of the estimate of 1.43. Model 2 had an R² = 0.80 and a standard error of the estimate of 0.49. The traditional equations of Lohman and Slaughter showed poor performance in estimating body fat in children and adolescents living with HIV. The prediction models using anthropometry provided reliable estimates and can be used by clinicians and healthcare professionals to monitor total body and trunk fat in children and adolescents living with HIV. Copyright © 2017 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.

  15. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

    Several methods exist to estimate E and T. The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS are validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit schemes is -0.03 mm/day. The paired t-test of the mean difference (p-value = 0.042 and t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R² value of 0.82.
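
    The implicit-versus-explicit comparison reported above amounts to a mean difference plus a paired t-test; a minimal sketch follows, with simulated daily fluxes standing in for the SETS output.

```python
# Mean difference and paired t-test between two computational schemes.
# The daily vapour fluxes below are simulated, not SETS output.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
explicit = rng.normal(3.0, 0.8, 365)                 # mm/day, explicit scheme
implicit = explicit + rng.normal(-0.03, 0.15, 365)   # mm/day, implicit scheme

t_stat, p_value = ttest_rel(implicit, explicit)
print(f"mean difference = {np.mean(implicit - explicit):+.2f} mm/day, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```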

  16. Validation of an elastic registration technique to estimate anatomical lung modification in Non-Small-Cell Lung Cancer Tomotherapy

    International Nuclear Information System (INIS)

    Faggiano, Elena; Cattaneo, Giovanni M; Ciavarro, Cristina; Dell'Oca, Italo; Persano, Diego; Calandrino, Riccardo; Rizzo, Giovanna

    2011-01-01

    The study of lung parenchyma anatomical modification is useful to estimate dose discrepancies during the radiation treatment of Non-Small-Cell Lung Cancer (NSCLC) patients. We propose and validate a method, based on free-form deformation and mutual information, to elastically register planning kVCT with daily MVCT images, to estimate lung parenchyma modification during Tomotherapy. We analyzed 15 registrations between the planning kVCT and 3 MVCT images for each of the 5 NSCLC patients. Image registration accuracy was evaluated by visual inspection and, quantitatively, by Correlation Coefficients (CC) and Target Registration Errors (TRE). Finally, a lung volume correspondence analysis was performed to specifically evaluate registration accuracy in lungs. Results showed that elastic registration was always satisfactory, both qualitatively and quantitatively: TRE after elastic registration (average value of 3.6 mm) remained comparable and often smaller than voxel resolution. Lung volume variations were well estimated by elastic registration (average volume and centroid errors of 1.78% and 0.87 mm, respectively). Our results demonstrate that this method is able to estimate lung deformations in thorax MVCT, with an accuracy within 3.6 mm comparable or smaller than the voxel dimension of the kVCT and MVCT images. It could be used to estimate lung parenchyma dose variations in thoracic Tomotherapy

  17. Global temperature estimates in the troposphere and stratosphere: a validation study of COSMIC/FORMOSAT-3 measurements

    Directory of Open Access Journals (Sweden)

    P. Kishore

    2009-02-01

    Full Text Available This paper mainly focuses on the validation of temperature estimates derived with the newly launched Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC)/Formosa Satellite 3 (FORMOSAT-3) system. The analysis is based on the radio occultation (RO) data samples collected during the first year of observation from April 2006 to April 2007. For the validation, we have used the operational stratospheric analyses including the National Centers for Environmental Prediction - Reanalysis (NCEP), the Japanese 25-year Reanalysis (JRA-25), and the United Kingdom Met Office (MetO) data sets. Comparisons done in different formats reveal good agreement between the COSMIC and reanalysis outputs. Spatially, the largest deviations are noted in the polar latitudes, and height-wise, the tropical tropopause region shows the maximum differences (2–4 K). We found that among the three reanalysis data sets the NCEP data sets show the best agreement with the COSMIC measurements.

  18. Validation of choice and determination of geotechnology parameters with regard to stress–strain state of rocks

    Science.gov (United States)

    Freidin, AM; Neverov, SA; Neverov, AA; Konurin, AI

    2018-03-01

    The paper illustrates the efficiency and reliability of using types of rock mass stress state, conditioned by the geological and structural features of rocks, in the design, selection and validation of geotechnology parameters. The authors present a calculation of stresses in rock mass under sublevel stoping depending on the type of the geosphere and on the depth of the ore body occurrence.

  19. The Validity of Value-Added Estimates from Low-Stakes Testing Contexts: The Impact of Change in Test-Taking Motivation and Test Consequences

    Science.gov (United States)

    Finney, Sara J.; Sundre, Donna L.; Swain, Matthew S.; Williams, Laura M.

    2016-01-01

    Accountability mandates often prompt assessment of student learning gains (e.g., value-added estimates) via achievement tests. The validity of these estimates has been questioned when performance on tests is low stakes for students. To assess the effects of motivation on value-added estimates, we assigned students to one of three test consequence…

  20. Evaluation of Different Estimation Methods for Accuracy and Precision in Biological Assay Validation.

    Science.gov (United States)

    Yu, Binbing; Yang, Harry

    2017-01-01

    Biological assays (bioassays) are procedures to estimate the potency of a substance by studying its effects on living organisms, tissues, and cells. Bioassays are essential tools for gaining insight into biologic systems and processes including, for example, the development of new drugs and monitoring environmental pollutants. Two of the most important parameters of bioassay performance are relative accuracy (bias) and precision. Although general strategies and formulas are provided in USP, a comprehensive understanding of the definitions of bias and precision remains elusive. Additionally, whether there is a beneficial use of data transformation in estimating intermediate precision remains unclear. Finally, there are various statistical estimation methods available that often pose a dilemma for the analyst who must choose the most appropriate method. To address these issues, we provide both a rigorous definition of bias and precision as well as three alternative methods for calculating relative standard deviation (RSD). All methods perform similarly when the RSD ≤10%. However, the USP estimates result in larger bias and root-mean-square error (RMSE) compared to the three proposed methods when the actual variation is large. Therefore, the USP method should not be used for routine analysis. For data with moderate skewness and deviation from normality, the estimates based on the original scale perform well. The original scale method is preferred, and the method based on log-transformation may be used for noticeably skewed data. LAY ABSTRACT: Biological assays, or bioassays, are essential in the development and manufacture of biopharmaceutical products for potency testing and quality monitoring. Two important parameters of assay performance are relative accuracy (bias) and precision. The definitions of bias and precision in USP 〈1033〉 are elusive and confusing. Another complicating issue is whether log-transformation should be used for calculating the
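
    Two generic ways of summarising precision as a relative standard deviation, the ordinary sample RSD on the original scale and a log-scale ("geometric") RSD, can be sketched as follows. This is standard statistics for illustration only, not the worked procedure of USP 〈1033〉 or of the paper, and the potency values are invented.

```python
# Original-scale RSD versus a log-scale ("geometric") RSD, sqrt(exp(s_ln^2)-1),
# the latter often used for right-skewed potency data. Values are illustrative.
import numpy as np

def rsd_original_scale(x: np.ndarray) -> float:
    return np.std(x, ddof=1) / np.mean(x)

def rsd_log_scale(x: np.ndarray) -> float:
    s_ln = np.std(np.log(x), ddof=1)
    return np.sqrt(np.exp(s_ln ** 2) - 1.0)

potencies = np.array([98.2, 101.5, 99.8, 103.1, 97.6, 100.9])  # relative potencies (%)
print(f"RSD (original scale) = {rsd_original_scale(potencies):.2%}, "
      f"RSD (log scale) = {rsd_log_scale(potencies):.2%}")
```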

  1. Development, validation and field evaluation of a quantitative real-time PCR able to differentiate between field Mycoplasma synoviae and the MS-H-live vaccine strain.

    Science.gov (United States)

    Dijkman, R; Feberwee, A; Landman, W J M

    2017-08-01

    A quantitative PCR (qPCR) able to differentiate between field Mycoplasma synoviae and the MS-H vaccine strain was developed, validated and evaluated. It was developed using nucleotide differences in the obg gene. Analytical specificity and sensitivity, assessed using DNA from 194 M. synoviae field samples, three different batches of MS-H vaccine and from 43 samples representing four other avian Mycoplasma species, proved to be 100%. The detection limit for field M. synoviae and the MS-H vaccine strain was 10²-10³ and 10² colony-forming unit PCR equivalents/g trachea mucus, respectively. The qPCR was able to detect both field M. synoviae and the MS-H vaccine strain in ratios of 1:100, determined using both spiked and field samples. One hundred and twenty samples from M. synoviae-infected non-vaccinated birds, 110 samples from M. synoviae-vaccinated birds from a bird experiment and 224 samples from M. synoviae-negative (serology and PCR) birds were used to determine the relative sensitivity and specificity using a previously described M. synoviae PCR as reference. The relative sensitivity and specificity for field M. synoviae were 95.0% and 99.6%, respectively, and 94.6% and 100% for the MS-H-live vaccine, respectively. Field validation and confirmation by multilocus sequence typing revealed that the qPCR correctly distinguished between MS-H and field M. synoviae. Evaluation of the differentiating M. synoviae qPCR in three commercial flocks suggested transmission of the MS-H-live vaccine from vaccinated to non-vaccinated flocks at the same farm. Furthermore, it showed evidence for colonization with field M. synoviae in MS-H-vaccinated flocks.
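
    Relative sensitivity and specificity against a reference PCR reduce to simple ratios of a 2 × 2 table; a minimal sketch follows, with placeholder counts rather than the study's raw data.

```python
# Relative sensitivity = TP / (TP + FN); relative specificity = TN / (TN + FP),
# computed against a reference test. Counts are placeholders.
def relative_sensitivity(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

def relative_specificity(true_neg: int, false_pos: int) -> float:
    return true_neg / (true_neg + false_pos)

if __name__ == "__main__":
    print(f"sensitivity = {relative_sensitivity(114, 6):.1%}, "
          f"specificity = {relative_specificity(223, 1):.1%}")
```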

  2. Gradient HPLC method development and validation for Simultaneous estimation of Rosiglitazone and Gliclazide.

    Directory of Open Access Journals (Sweden)

    Uttam Singh Baghel

    2012-10-01

    Full Text Available Objective: The aim of the present work was to develop a gradient RP-HPLC method for the simultaneous analysis of rosiglitazone and gliclazide in a tablet dosage form. Method: The chromatographic system was optimized using a Hypersil C18 (250 mm x 4.6 mm, 5 μm) column with potassium dihydrogen phosphate (pH 7.0) and acetonitrile in the ratio of 60:40 as the mobile phase, at a flow rate of 1.0 ml/min. Detection was carried out at 225 nm by a SPD-20A Prominence UV/Vis detector. Result: Rosiglitazone and gliclazide were eluted with retention times of 17.36 and 7.06 min, respectively. The Beer-Lambert law was obeyed over the concentration ranges of 5 to 70 μg/ml and 2 to 12 μg/ml for rosiglitazone and gliclazide, respectively. Conclusion: The high recovery and low coefficients of variation confirm the suitability of the method for the simultaneous analysis of both drugs in a tablet dosage form. Statistical analysis proves that the method is sensitive and significant for the analysis of rosiglitazone and gliclazide in pure and pharmaceutical dosage forms without any interference from the excipients. The method was validated in accordance with ICH guidelines. Validation revealed the method is specific, rapid, accurate, precise, reliable, and reproducible.

  3. Validation of energy intake estimated from a food frequency questionnaire: a doubly labelled water study.

    Science.gov (United States)

    Andersen, L Frost; Tomten, H; Haggarty, P; Løvø, A; Hustvedt, B-E

    2003-02-01

    The validation of dietary assessment methods is critical in the evaluation of the relation between dietary intake and health. The aim of this study was to assess the validity of a food frequency questionnaire by comparing energy intake with energy expenditure measured with the doubly labelled water method. Total energy expenditure was measured with the doubly labelled water (DLW) method during a 10 day period. Furthermore, the subjects filled in the food frequency questionnaire about 18-35 days after the DLW phase of the study was completed. Twenty-one healthy, non-pregnant females volunteered to participate in the study; only 17 subjects completed the study. The group energy intake was on average 10% lower than the energy expenditure, but the difference was not statistically significant. However, there was a wide range in reporting accuracy: seven subjects were identified as acceptable reporters, eight as under-reporters and two were identified as over-reporters. The width of the 95% confidence limits of agreement in a Bland and Altman plot for energy intake and energy expenditure varied from -5 to 3 MJ. The data showed that there was substantial variability in the accuracy of the food frequency questionnaire at the individual level. Furthermore, the results showed that the questionnaire was more accurate for groups than individuals.
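
    The agreement analysis referred to above (a Bland-Altman plot) reduces to a bias and 95% limits of agreement between reported energy intake and DLW energy expenditure; the sketch below uses simulated values.

```python
# Bland-Altman bias and 95% limits of agreement between questionnaire-based
# energy intake and DLW energy expenditure. Values are simulated.
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Return (bias, lower limit, upper limit) for paired measurements a and b."""
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

rng = np.random.default_rng(4)
tee_dlw = rng.normal(9.5, 1.2, 17)               # MJ/day from doubly labelled water
ei_ffq = 0.9 * tee_dlw + rng.normal(0, 1.0, 17)  # questionnaire intake, ~10% under-report

bias, low, high = bland_altman(ei_ffq, tee_dlw)
print(f"bias = {bias:+.1f} MJ/day, 95% limits of agreement = [{low:+.1f}, {high:+.1f}] MJ/day")
```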

  4. CSNI Integral Test Facility Matrices for Validation of Best-Estimate Thermal-Hydraulic Computer Codes

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Internationally agreed Integral Test Facility (ITF) matrices for validation of realistic thermal hydraulic system computer codes were established. ITF development is mainly for Pressurised Water Reactors (PWRs) and Boiling Water Reactors (BWRs). A separate activity was for Russian Pressurised Water-cooled and Water-moderated Energy Reactors (WWER). Firstly, the main physical phenomena that occur during considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. In this paper some specific examples from the ITF matrices will also be provided. The matrices will be a guide for code validation, will be a basis for comparisons of code predictions performed with different system codes, and will contribute to the quantification of the uncertainty range of code model predictions. In addition to this objective, the construction of such a matrix is an attempt to record information which has been generated around the world over the last years, so that it is more accessible to present and future workers in that field than would otherwise be the case.

  5. A Validated RP-HPLC Method for Simultaneous Estimation of Atenolol and Indapamide in Pharmaceutical Formulations

    Directory of Open Access Journals (Sweden)

    G. Tulja Rani

    2011-01-01

    Full Text Available A simple, fast, precise, selective and accurate RP-HPLC method was developed and validated for the simultaneous determination of atenolol and indapamide from bulk and formulations. Chromatographic separation was achieved isocratically on a Waters C18 column (250 × 4.6 mm, 5 µm particle size) using a mobile phase of methanol and water (adjusted to pH 2.7 with 1% orthophosphoric acid) in the ratio of 80:20. The flow rate was 1 mL/min and the effluent was detected at 230 nm. The retention times of atenolol and indapamide were 1.766 min and 3.407 min, respectively. Linearity was observed in the concentration ranges of 12.5-150 µg/mL for atenolol and 0.625-7.5 µg/mL for indapamide. Percent recoveries obtained for the two drugs were 99.74-100.06% and 98.65-99.98%, respectively. The method was validated according to the ICH guidelines with respect to specificity, linearity, accuracy, precision and robustness. The method developed can be used for the routine analysis of atenolol and indapamide in their combined dosage form.

  6. Validating novel air pollution sensors to improve exposure estimates for epidemiological analyses and citizen science.

    Science.gov (United States)

    Jerrett, Michael; Donaire-Gonzalez, David; Popoola, Olalekan; Jones, Roderic; Cohen, Ronald C; Almanza, Estela; de Nazelle, Audrey; Mead, Iq; Carrasco-Turigas, Glòria; Cole-Hunter, Tom; Triguero-Mas, Margarita; Seto, Edmund; Nieuwenhuijsen, Mark

    2017-10-01

    Low cost, personal air pollution sensors may reduce exposure measurement errors in epidemiological investigations and contribute to citizen science initiatives. Here we assess the validity of a low cost personal air pollution sensor. Study participants were drawn from two ongoing epidemiological projects in Barcelona, Spain. Participants repeatedly wore the pollution sensor - which measured carbon monoxide (CO), nitric oxide (NO), and nitrogen dioxide (NO2). We also compared personal sensor measurements to those from more expensive instruments. Our personal sensors had moderate to high correlations with government monitors with averaging times of 1-h and 30-min epochs (r ~ 0.38-0.8) for NO and CO, but had low to moderate correlations with NO2 (~0.04-0.67). Correlations between the personal sensors and more expensive research instruments were higher than with the government monitors. The sensors were able to detect high and low air pollution levels in agreement with expectations (e.g., high levels on or near busy roadways and lower levels in background residential areas and parks). Our findings suggest that the low cost, personal sensors have potential to reduce exposure measurement error in epidemiological studies and provide valid data for citizen science studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Parameters estimation for reactive transport: A way to test the validity of a reactive model

    Science.gov (United States)

    Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme

    The chemical parameters used in reactive transport models are not known accurately due to the complexity and the heterogeneous conditions of a real domain. We will present an efficient algorithm in order to estimate the chemical parameters using Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical model describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm will be used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.
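
    A generic Monte-Carlo parameter search of the kind described above can be sketched as follows: draw candidate parameters at random, run a forward model and keep the set with the smallest misfit to the observations. The forward model below is a deliberately simple placeholder, not the TBT surface-complexation model of the paper, and all parameter ranges are assumptions.

```python
# Monte-Carlo (random-search) parameter estimation: sample candidate parameter
# sets, evaluate a forward model, keep the best-fitting set by RMSE.
import numpy as np

def forward_model(ph: np.ndarray, log_k: float, capacity: float) -> np.ndarray:
    """Toy sorbed-fraction model as a smooth function of pH (placeholder only)."""
    return capacity / (1.0 + 10 ** (log_k - ph))

def rmse(obs: np.ndarray, sim: np.ndarray) -> float:
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

rng = np.random.default_rng(5)
ph_obs = np.array([4.0, 5.0, 6.0, 6.5, 7.0, 7.5, 8.0])
data = forward_model(ph_obs, log_k=6.2, capacity=0.8) + rng.normal(0, 0.02, ph_obs.size)

# Random search over 5000 candidate (log_k, capacity) pairs.
candidates = zip(rng.uniform(4, 9, 5000), rng.uniform(0.1, 1.0, 5000))
best = min(candidates, key=lambda p: rmse(data, forward_model(ph_obs, *p)))
print(f"best-fit log_k = {best[0]:.2f}, capacity = {best[1]:.2f}")
```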

  8. Theoretical estimation and validation of radiation field in alkaline hydrolysis plant

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Sanjay; Krishnamohanan, T.; Gopalakrishnan, R.K., E-mail: singhs@barc.gov.in [Radiation Safety Systems Division, Bhabha Atomic Research Centre, Mumbai (India); Anand, S. [Health Physics Division, Bhabha Atomic Research Centre, Mumbai (India); Pancholi, K. C. [Waste Management Division, Bhabha Atomic Research Centre, Mumbai (India)

    2014-07-01

    Spent organic solvent (30% TBP + 70% n-dodecane) from the reprocessing facility is treated at ETP in the Alkaline Hydrolysis Plant (AHP) and Organic Waste Incineration (ORWIN) Facility. In AHP-ORWIN, there are three horizontal cylindrical tanks with 2.0 m³ operating capacity used for waste storage and transfer. The three tanks are the Aqueous Waste Tank (AWT), the Waste Receiving Tank (WRT) and the Dodecane Waste Tank (DWT). These tanks are housed in a shielded room in this facility. The Monte Carlo N-Particle (MCNP) radiation transport code was used to estimate ambient radiation field levels when the storage tanks hold waste volumes of the desired specific activity levels. In this paper the theoretically estimated radiation field values are compared with the actual measured dose.

  9. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship

    OpenAIRE

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate Bench press 1 repetition maximum (1RM) from the force - velocity relationship. Twenty seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force - velocity test using a Bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15...

  10. Towards valid 'serious non-fatal injury' indicators for international comparisons based on probability of admission estimates

    DEFF Research Database (Denmark)

    Cryer, Colin; Miller, Ted R; Lyons, Ronan A

    2017-01-01

    in regions of Canada, Denmark, Greece, Spain and the USA. International Classification of Diseases (ICD)-9 or ICD-10 4-digit/character injury diagnosis-specific ED attendance and inpatient admission counts were provided, based on a common protocol. Diagnosis-specific and region-specific PrAs with 95% CIs...... diagnoses with high estimated PrAs. These diagnoses can be used as the basis for more valid international comparisons of life-threatening injury, based on hospital discharge data, for countries with well-developed healthcare and data collection systems....

  11. Lightning stroke distance estimation from single station observation and validation with WWLLN data

    Directory of Open Access Journals (Sweden)

    V. Ramachandran

    2007-07-01

    Full Text Available A simple technique to estimate the distance of the lightning strikes d with a single VLF electromagnetic wave receiver at a single station is described. The technique is based on the recording of oscillatory waveforms of the electric fields of sferics. Even though the process of estimating d using the waveform is a rather classical one, a novel and simple procedure for finding d is proposed in this paper. The procedure adopted provides two independent estimates of the distance of the stroke. The accuracy of measurements has been improved by employing high speed (333 ns sampling rate) signal processing techniques. GPS time is used as the reference time, which enables us to compare the calculated distances of the lightning strikes, by both methods, with those calculated from the data obtained by the World-Wide Lightning Location Network (WWLLN), which uses a multi-station technique. The estimated distances of the lightning strikes (77, whose times correlated) ranged from ~3000–16 250 km. The deviation of d compared with that calculated with the multi-station lightning location system was ~4.7% for part of the strokes, while for all the strokes it was ~8.8%. One of the lightnings which was recorded by WWLLN, whose field pattern was recorded and the spectrogram of the sferic was also recorded at the site, is analyzed in detail. The deviations in d calculated from the field pattern and from the arrival time of the sferic were 3.2% and 1.5%, respectively, compared to d calculated from the WWLLN location. FFT analysis of the waveform showed that only a narrow band of frequencies is received at the site, which is confirmed by the intensity of the corresponding sferic in the spectrogram.

  12. Validation of real-time zenith tropospheric delay estimation with TOMION software within WAGNSS networks

    OpenAIRE

    Graffigna, Victoria

    2017-01-01

    The TOmographic Model of the IONospheric electron content (TOMION) software implements a simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) emulating real time and its performance is evaluated through a comparative analysis with a built-in GIPSY estima...

  13. Validity of rapid estimation of erythrocyte volume in the diagnosis of polycytemia vera

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, S.; Roedbro, P.

    1989-01-01

    In the diagnosis of polycytemia vera, estimation of erythrocyte volume (EV) from plasma volume (PV) and venous hematocrit (Hct_v) is usually thought unadvisable, because the ratio of whole body hematocrit to venous hematocrit (f ratio) is higher in patients with splenomegaly than in normal subjects, and varies considerably between individuals. We determined the mean f ratio in 232 consecutive patients suspected of polycytemia vera (mean f = 0.967; SD 0.048) and used it with each patient's PV and Hct_v to calculate an estimated normalised EV_n. With measured EV as a reference value, EV_n was investigated as a diagnostic test. By means of two cut-off levels the EV_n values could be divided into EV_n elevated, EV_n not elevated (both with high predictive values), and an EV_n borderline group. The size of the borderline EV_n group ranged from 5% to 46% depending on the position of the cut-off levels, i.e. on the efficiency demanded from the diagnostic test. EV can safely and rapidly be estimated from PV and Hct_v, if the mean f ratio is determined from the relevant population, and if the results in an easily definable borderline range of EV_n values are supplemented by direct EV determination.
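
    A hedged sketch of the estimation described above: the whole-body hematocrit is approximated as the mean f ratio times the venous hematocrit, and the erythrocyte volume then follows from the plasma volume under the assumption that EV / (EV + PV) equals the whole-body hematocrit. The mean f ratio (0.967) is taken from the abstract; the input values are examples only.

```python
# Estimated erythrocyte volume from plasma volume and venous hematocrit,
# assuming whole-body hematocrit = f * Hct_v and EV/(EV+PV) = whole-body Hct.
def estimated_erythrocyte_volume(pv_ml: float, hct_venous: float, f_ratio: float = 0.967) -> float:
    hct_body = f_ratio * hct_venous
    return pv_ml * hct_body / (1.0 - hct_body)

if __name__ == "__main__":
    # Example inputs (illustrative, not patient data).
    print(f"EV ~= {estimated_erythrocyte_volume(pv_ml=3100, hct_venous=0.52):.0f} mL")
```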

  14. De-individualized psychophysiological strain assessment during a flight simulation test—Validation of a space methodology

    Science.gov (United States)

    Johannes, Bernd; Salnitski, Vyacheslav; Soll, Henning; Rauch, Melina; Hoermann, Hans-Juergen

    For the evaluation of an operator's skill reliability, indicators of work quality as well as of psychophysiological states during the work have to be considered. The methodology and measurement equipment presented here were developed and tested in numerous terrestrial and space experiments using a simulation of a spacecraft docking on a space station. However, in this study the method was applied to a comparable terrestrial task—the flight simulator test (FST) used in the DLR selection procedure for ab initio pilot applicants for passenger airlines. This provided a large amount of data for a statistical verification of the space methodology. For the evaluation of the strain level of applicants during the FST, psychophysiological measurements were used to construct a "psychophysiological arousal vector" (PAV) which is sensitive to various individual reaction patterns of the autonomic nervous system to mental load. Its changes and increases will be interpreted as "strain". In the first evaluation study, 614 subjects were analyzed. The subjects first underwent a calibration procedure for the assessment of their autonomic outlet type (AOT) and on the following day they performed the FST, which included three tasks and was evaluated by instructors applying well-established and standardized rating scales. This new method will possibly promote a wide range of other future applications in aviation and space psychology.

  15. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    Science.gov (United States)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve, which were divided into three categories: simple approximations, artificial neural network-based approaches and continuum damage mechanics models, were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, with their applicability having already been verified for a broad range of materials.
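
    One member of the "simple approximations" family discussed above is Manson's universal-slopes relation, which predicts the strain-life curve from monotonic tensile properties only. The sketch below states it in terms of strain amplitude (half the total strain range); the material constants are illustrative placeholders, not the direct-quenched steel of the study.

```python
# Manson's universal-slopes approximation of the strain-life curve:
# total strain range  = 3.5*(Su/E)*N^-0.12 + ef^0.6 * N^-0.6,
# written here as strain amplitude (range / 2). Constants are illustrative.
import numpy as np

def universal_slopes_strain_amplitude(n_cycles: np.ndarray,
                                      sigma_u_mpa: float,
                                      e_modulus_mpa: float,
                                      true_fracture_strain: float) -> np.ndarray:
    """Total strain amplitude (elastic + plastic) versus cycles to failure."""
    elastic = 1.75 * (sigma_u_mpa / e_modulus_mpa) * n_cycles ** (-0.12)
    plastic = 0.5 * true_fracture_strain ** 0.6 * n_cycles ** (-0.6)
    return elastic + plastic

cycles = np.logspace(2, 6, 5)
amplitudes = universal_slopes_strain_amplitude(cycles, sigma_u_mpa=1000.0,
                                               e_modulus_mpa=210e3,
                                               true_fracture_strain=0.8)
for n, a in zip(cycles, amplitudes):
    print(f"N = {n:8.0f}  strain amplitude ~= {a:.4f}")
```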

  16. Development and systematic validation of qPCR assays for rapid and reliable differentiation of Xylella fastidiosa strains causing citrus variegated chlorosis.

    Science.gov (United States)

    Li, Wenbin; Teixeira, Diva C; Hartung, John S; Huang, Qi; Duan, Yongping; Zhou, Lijuan; Chen, Jianchi; Lin, Hong; Lopes, Silvio; Ayres, A Juliano; Levy, Laurene

    2013-01-01

    The xylem-limited, Gram-negative, fastidious plant bacterium Xylella fastidiosa is the causal agent of citrus variegated chlorosis (CVC), a destructive disease affecting approximately half of the citrus plantations in the State of São Paulo, Brazil. The disease was recently found in Central America and is threatening the multi-billion U.S. citrus industry. Many strains of X. fastidiosa are pathogens or endophytes in various plants growing in the U.S., and some strains cross-infect several host plants. In this study, a TaqMan-based assay targeting the 16S rDNA signature region was developed for the identification of X. fastidiosa at the species level. Another TaqMan-based assay was developed for the specific identification of the CVC strains. Both new assays have been systematically validated in comparison with the primer/probe sets from four previously published assays on one platform and under similar PCR conditions, and shown to be superior. The species-specific assay detected all X. fastidiosa strains and did not amplify any other citrus pathogen or endophyte tested. The CVC-specific assay detected all CVC strains but did not amplify any non-CVC X. fastidiosa nor any other citrus pathogen or endophyte evaluated. Both sets were multiplexed with a reliable internal control assay targeting host plant DNA, and their diagnostic specificity and sensitivity remained unchanged. This internal control provides quality assurance for DNA extraction, performance of PCR reagents, platforms and operators. The limit of detection for both assays was equivalent to 2 to 10 cells of X. fastidiosa per reaction for field citrus samples. Petioles and midribs of symptomatic leaves of sweet orange harbored the highest populations of X. fastidiosa, providing the best materials for detection of the pathogen. The new species-specific assay will be invaluable for molecular identification of X. fastidiosa at the species level, and the CVC-specific assay will be very powerful for the

  17. Cosmic Ray Neutron Sensing: Use, Calibration and Validation for Soil Moisture Estimation

    International Nuclear Information System (INIS)

    2017-03-01

    Nuclear and related techniques can help develop climate-smart agricultural practices by optimizing water use efficiency. The measurement of soil water content is essential to improve the use of this resource in agriculture. However, most sensors monitor small areas (less than 1m in radius), hence a large number of sensors are needed to obtain soil water content across a large area. This can be both costly and labour intensive and so larger scale measuring devices are needed as an alternative to traditional point-based soil moisture sensing techniques. The cosmic ray neutron sensor (CRNS) is such a device that monitors soil water content in a non-invasive and continuous way. This publication provides background information about this novel technique, and explains in detail the calibration and validation process.

  18. Validation of the Visible Occlusal Plaque Index (VOPI) in estimating caries lesion activity

    DEFF Research Database (Denmark)

    Carvalho, J.C.; Mestrinho, H D; Oliveira, L S

    2017-01-01

    RESULTS: Construct validity was assumed based on qualitative assessment, as no plaque (score 0) and thin plaque (score 1) reflected the theoretical knowledge that a regular disorganization of the dental biofilm either maintains the caries process at sub-clinical levels or inactivates it clinically. The VOPI...... of the VOPI was evidenced with multivariable analysis (GEE), by its ability to discriminate between the groups of adolescents with different oral hygiene status; a negative association between adolescents with thick and heavy plaque and those with sound occlusal surfaces was found (OR=0.3, p... of oral hygiene and caries lesion activity. The VOPI is recommended to standardize and categorize information on the occlusal biofilm, and is thus suitable for direct application in research and clinical settings....

  19. Development and Validation of UV Spectrophotometric Method For Estimation of Dolutegravir Sodium in Tablet Dosage Form

    International Nuclear Information System (INIS)

    Balasaheb, B.G.

    2015-01-01

    A simple, rapid, precise and accurate spectrophotometric method has been developed for the quantitative analysis of Dolutegravir sodium in tablet formulations. The initial stock solution of Dolutegravir sodium was prepared in methanol and subsequent dilution was done in water. The standard solution of Dolutegravir sodium in water showed maximum absorption at a wavelength of 259.80 nm. The drug obeyed the Beer-Lambert law in the concentration range of 5-40 μg/mL, with a coefficient of correlation (R²) of 0.9992. The method was validated as per the ICH guidelines. The developed method can be adopted in routine analysis of Dolutegravir sodium in bulk or tablet dosage form and it involves relatively low cost solvents and no complex extraction techniques. (author)
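
    The linearity check implied above (Beer-Lambert behaviour over the working range) is a straight-line fit of absorbance against concentration; a minimal sketch follows, with invented absorbance readings.

```python
# Least-squares calibration line (Beer-Lambert linearity) and back-calculation
# of an unknown concentration. Absorbance readings are invented.
import numpy as np

conc = np.array([5, 10, 20, 30, 40], dtype=float)          # ug/mL
absorbance = np.array([0.121, 0.239, 0.482, 0.719, 0.961])  # AU

slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]
print(f"A = {slope:.4f}*C + {intercept:.4f},  R^2 = {r**2:.4f}")

# Predict the concentration of an unknown sample from its absorbance.
a_unknown = 0.55
print(f"estimated concentration = {(a_unknown - intercept) / slope:.1f} ug/mL")
```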

  20. Development and Validation of Liquid Chromatographic Method for Estimation of Naringin in Nanoformulation

    Directory of Open Access Journals (Sweden)

    Kranti P. Musmade

    2014-01-01

    Full Text Available A simple, precise, accurate, rapid, and sensitive reverse phase high performance liquid chromatography (RP-HPLC) method with UV detection has been developed and validated for quantification of naringin (NAR) in a novel pharmaceutical formulation. NAR is a polyphenolic flavonoid present in most citrus plants and has a variety of pharmacological activities. Method optimization was carried out by considering various parameters such as the effect of pH and column. The analyte was separated by employing a C18 (250.0 × 4.6 mm, 5 μm) column at ambient temperature under isocratic conditions using phosphate buffer pH 3.5: acetonitrile (75:25% v/v) as the mobile phase pumped at a flow rate of 1.0 mL/min. UV detection was carried out at 282 nm. The developed method was validated according to ICH guideline Q2(R1). The method was found to be precise and accurate on statistical evaluation, with a linearity range of 0.1 to 20.0 μg/mL for NAR. The intra- and interday precision studies showed good reproducibility with coefficients of variation (CV) less than 1.0%. The mean recovery of NAR was found to be 99.33 ± 0.16%. The proposed method was found to be highly accurate, sensitive, and robust. The proposed liquid chromatographic method was successfully employed for the routine analysis of said compound in the developed novel nanopharmaceuticals. The presence of excipients did not show any interference on the determination of NAR, indicating method specificity.

  1. Development and Validation of a Prediction Model to Estimate Individual Risk of Pancreatic Cancer.

    Science.gov (United States)

    Yu, Ami; Woo, Sang Myung; Joo, Jungnam; Yang, Hye-Ryung; Lee, Woo Jin; Park, Sang-Jae; Nam, Byung-Ho

    2016-01-01

    There is no reliable screening tool to identify people with high risk of developing pancreatic cancer even though pancreatic cancer represents the fifth-leading cause of cancer-related death in Korea. The goal of this study was to develop an individualized risk prediction model that can be used to screen for asymptomatic pancreatic cancer in Korean men and women. Gender-specific risk prediction models for pancreatic cancer were developed using the Cox proportional hazards model based on an 8-year follow-up of a cohort study of 1,289,933 men and 557,701 women in Korea who had biennial examinations in 1996-1997. The performance of the models was evaluated with respect to their discrimination and calibration ability based on the C-statistic and Hosmer-Lemeshow type χ2 statistic. A total of 1,634 (0.13%) men and 561 (0.10%) women were newly diagnosed with pancreatic cancer. Age, height, BMI, fasting glucose, urine glucose, smoking, and age at smoking initiation were included in the risk prediction model for men. Height, BMI, fasting glucose, urine glucose, smoking, and drinking habit were included in the risk prediction model for women. Smoking was the most significant risk factor for developing pancreatic cancer in both men and women. The risk prediction model exhibited good discrimination and calibration ability, and in external validation it had excellent prediction ability. Gender-specific risk prediction models for pancreatic cancer were developed and validated for the first time. The prediction models will be a useful tool for detecting high-risk individuals who may benefit from increased surveillance for pancreatic cancer.

  2. Strain tuneable whispering gallery mode resonators in the estimation of the elasto-optic parameters of soft materials

    Science.gov (United States)

    Pissadakis, Stavros; Milenko, Karolina; Aluculesei, Alina; Fytas, George

    2016-04-01

    In this manuscript we present the fabrication and characterization of a novel polymer whispering gallery mode (WGM) spherical micro-resonator formed around the waist of an optical fiber taper. The fiber taper with the attached spheroid works as a cord fixed at both ends, enabling strain application to the resonator body. Controllable elastic elongation of the encapsulated fiber taper causes a change in the shape of the spheroid, which modifies the diameter and directional refractive index of the cavity. These changes influence the wavelength position of the WGM resonances with a linear blue shift of up to 0.6 nm, corresponding to strains of up to 700 με. The strain-induced WGM shift with respect to resonator diameter and annealing process is presented and analyzed.

  3. VALIDITY OF A COMMERCIAL LINEAR ENCODER TO ESTIMATE BENCH PRESS 1 RM FROM THE FORCE-VELOCITY RELATIONSHIP

    Directory of Open Access Journals (Sweden)

    Laurent Bosquet

    2010-09-01

    Full Text Available The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate bench press 1 repetition maximum (1RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1RM test and a force-velocity test using a bench press lifting task in a random order. Mean 1RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1RM estimated by the Musclelab software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1RM were very highly correlated (r = 0.93, p<0.001) but largely different (bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1RM. It was concluded that 1RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level.
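
    A hedged sketch of a load-velocity extrapolation to 1RM follows. The linear load-velocity fit and the minimal-velocity threshold used here (0.17 m/s) are common assumptions in the literature, not necessarily the Musclelab software's own algorithm, and the loads and velocities are invented.

```python
# 1RM estimation by fitting a linear load-velocity relationship to submaximal
# sets and extrapolating to an assumed minimal velocity at 1RM.
import numpy as np

def estimate_1rm(loads_kg: np.ndarray, mean_velocities: np.ndarray,
                 v_at_1rm: float = 0.17) -> float:
    slope, intercept = np.polyfit(mean_velocities, loads_kg, 1)  # load = a*v + b
    return slope * v_at_1rm + intercept

loads = np.array([30.0, 40.0, 50.0, 60.0])        # submaximal bench-press loads (kg)
velocities = np.array([1.05, 0.85, 0.62, 0.40])   # mean concentric velocities (m/s)
print(f"estimated 1RM ~= {estimate_1rm(loads, velocities):.1f} kg")
```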

  4. Empirical models validation to estimate global solar irradiance on a horizontal plan in Ouargla, Algeria

    Science.gov (United States)

    Gougui, Abdelmoumen; Djafour, Ahmed; Khelfaoui, Narimane; Boutelli, Halima

    2018-05-01

    In this paper a comparison between three models for predicting the total solar flux falling on a horizontal surface has been carried out. The Capderou, Perrin & Brichambaut and Hottel models, used to estimate the global solar radiation, are identified and evaluated in the MATLAB environment. The recorded data have been obtained from a small weather station installed at the LAGE laboratory of Ouargla University, Algeria. Solar radiation data have been recorded using four sample days, every 15th day of the month (March, April, May and October). The Root Mean Square Error (RMSE), Correlation Coefficient (CC) and Mean Absolute Percentage Error (MAPE) have also been calculated so as to test the reliability of the proposed models. Comparisons between the measured and the calculated values have been made. The results obtained in this study show that the Perrin & Brichambaut and Capderou models are more effective for estimating the total solar intensity on a horizontal surface for clear sky over Ouargla city (latitude 31° 95' N, longitude 5° 24' E, and altitude 0.141 km above mean sea level); these models are derived from meteorological parameters, the geographical location and the number of days since the first of January. The Perrin & Brichambaut and Capderou models give the best agreement, with a CC of 0.985-0.999 and 0.932-0.995, respectively, while the Hottel model gives a CC of 0.617-0.942.
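
    The three agreement metrics used above (RMSE, MAPE and the correlation coefficient) can be sketched as follows, applied to simulated measured versus modelled irradiance values.

```python
# RMSE, MAPE and correlation coefficient between measured and modelled
# global irradiance. Values are simulated for illustration.
import numpy as np

def rmse(obs, mod):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(mod)) ** 2)))

def mape(obs, mod):
    obs, mod = np.asarray(obs), np.asarray(mod)
    return float(np.mean(np.abs((obs - mod) / obs)) * 100)

def cc(obs, mod):
    return float(np.corrcoef(obs, mod)[0, 1])

rng = np.random.default_rng(6)
measured = np.clip(rng.normal(600, 250, 48), 50, None)  # W/m^2, illustrative
modelled = measured * rng.normal(1.0, 0.08, 48)         # model with ~8% scatter
print(f"RMSE = {rmse(measured, modelled):.1f} W/m^2, "
      f"MAPE = {mape(measured, modelled):.1f}%, CC = {cc(measured, modelled):.3f}")
```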

  5. Development and Validation of RP-HPLC Method for Simultaneous Estimation of Aspirin and Esomeprazole Magnesium in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    Dipali Patel

    2013-01-01

    Full Text Available A simple, specific, precise, and accurate reversed-phase HPLC method was developed and validated for the simultaneous estimation of aspirin and esomeprazole magnesium in tablet dosage forms. The separation was achieved on a HyperChrom ODS-BP C18 column (200 mm × 4.6 mm; 5.0 μm) using acetonitrile: methanol: 0.05 M phosphate buffer at pH 3 adjusted with orthophosphoric acid (25:25:50, v/v) as eluent, at a flow rate of 1 mL/min. Detection was carried out at a wavelength of 230 nm. The retention times of aspirin and esomeprazole magnesium were 4.29 min and 6.09 min, respectively. Linearity was established over the concentration ranges of 10–70 μg/mL and 10–30 μg/mL with correlation coefficients (r²) of 0.9986 and 0.9973 for aspirin and esomeprazole magnesium, respectively. The mean recoveries were found to be in the ranges of 99.80–100.57% and 99.70–100.83% for aspirin and esomeprazole magnesium, respectively. The proposed method has been validated as per ICH guidelines and successfully applied to the estimation of aspirin and esomeprazole magnesium in their combined tablet dosage form.

  6. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Science.gov (United States)

    Junkes, Monica C; Fraiz, Fabian C; Sardenberg, Fernanda; Lee, Jessica Y; Paiva, Saul M; Ferreira, Fernanda M

    2015-01-01

    The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.

  7. Model validation and error estimation of tsunami runup using high resolution data in Sadeng Port, Gunungkidul, Yogyakarta

    Science.gov (United States)

    Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo

    2017-07-01

    A tsunami model using high resolution geometric data is indispensable in tsunami mitigation efforts, especially in tsunami-prone areas, and is one of the factors that affect the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami originating from the seismic gap. This paper discusses validation and error estimation of a tsunami model created using high resolution geometric data in Sadeng Port. The tsunami model is validated against the wave height of the 2006 Pangandaran tsunami recorded by the tide gauge of Sadeng. The tsunami model will then be used for numerical tsunami modeling involving earthquake-tsunami parameters derived from the seismic gap. The validation results using a Student's t-test show that the modeled tsunami height and the observations at the tide gauge of Sadeng are statistically equal at the 95% confidence level; the RMSE and NRMSE are 0.428 m and 22.12%, while the difference in tsunami wave travel time is 12 minutes.

  8. Development and validation of analytical method for the estimation of nateglinide in rabbit plasma

    Directory of Open Access Journals (Sweden)

    Nihar Ranjan Pani

    2012-12-01

    Full Text Available Nateglinide has been widely used in the treatment of type-2 diabetes as an insulin secretagogue. A reliable, rapid, simple and sensitive reversed-phase high performance liquid chromatography (RP-HPLC) method was developed and validated for the determination of nateglinide in rabbit plasma. The method was developed on a Hypersil BDS C-18 column (250 mm × 4.6 mm, 5 μm) using a mobile phase of 10 mM phosphate buffer (pH 2.5) and acetonitrile (35:65, v/v). The eluate was monitored with a UV–vis detector at 210 nm at a flow rate of 1 mL/min. The calibration curve was linear over the concentration range of 25–2000 ng/mL. The retention times of nateglinide and the internal standard (gliclazide) were 9.608 min and 11.821 min, respectively. The developed RP-HPLC method can be successfully applied to the determination of quantitative pharmacokinetic parameters of nateglinide in a rabbit model. Keywords: HPLC, Nateglinide, Rabbit plasma, Pharmacokinetics

  9. HPLC method development and validation for the estimation of axitinibe in rabbit plasma

    Directory of Open Access Journals (Sweden)

    Achanta Suneetha

    2017-10-01

    Full Text Available ABSTRACT A rapid, sensitive, and accurate high performance liquid chromatography method for the determination of axitinibe (AN) in rabbit plasma is developed using crizotinibe as an internal standard (IS). Axitinibe is a tyrosine kinase inhibitor, used in the treatment of advanced kidney cancer, which works by slowing or stopping the growth of cancer cells. The chromatographic separation was performed on a Waters 2695 system with a Kromosil (150 mm × 4.6 mm, 5 µm) column using a mobile phase containing buffer (pH 4.6) and acetonitrile in the ratio of 65:35 v/v at a flow rate of 1 mL/min. The analyte and internal standard were extracted using liquid-liquid extraction with acetonitrile. The elution was detected by a photo diode array detector at 320 nm. The total chromatographic run time is 10.0 min, with retention times for axitinibe and the IS of 5.685 and 3.606 min, respectively. The method was validated over a dynamic linear range of 0.002-0.2 µg/mL for axitinibe with a correlation coefficient (r2) of 0.999.

  10. Development and validation of satellite-based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2016-02-01

    A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5 % for classifying clear (V ≥ 30 km), moderate (10 km ≤ V United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.

  11. Validation of Walk Score® for Estimating Neighborhood Walkability: An Analysis of Four US Metropolitan Areas

    Science.gov (United States)

    Duncan, Dustin T.; Aldstadt, Jared; Whalen, John; Melly, Steven J.; Gortmaker, Steven L.

    2011-01-01

    Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability based on GIS (objective) indicators of neighborhood walkability with addresses from four US metropolitan areas with several street network buffer distances (i.e., 400-, 800-, and 1,600-meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5–11 years and their families participating in YMCA-administered, after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participant’s residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators such as density of retail destinations and intersection density (p walkability. Correlations generally became stronger with a larger spatial scale, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure of estimating certain aspects of neighborhood walkability, particularly at the 1600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure of estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales. PMID:22163200

  12. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    Science.gov (United States)

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of whole-body and body segments in children including overweight individuals. The FFM and impedance (Z) values of arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)2/Z (BI index) for each of the whole-body and 3 segments to develop the prediction equations of the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and whole body was significantly correlated to the measured FFM (R2 = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values and systematic errors, with an exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of each of whole-body and body segments in children including overweight individuals, although the application for estimating arm FFM in overweight individuals requires a certain modification.
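
    The prediction-equation step described above is an ordinary least-squares fit of DXA-measured FFM against the BI index, i.e. segment length squared divided by impedance. A minimal sketch with hypothetical values for one segment:

        import numpy as np

        # Hypothetical model-development data for one body segment.
        length_cm = np.array([55.0, 60.0, 62.0, 58.0, 65.0, 70.0])      # segment length
        impedance_ohm = np.array([310.0, 290.0, 275.0, 300.0, 260.0, 240.0])
        ffm_dxa_kg = np.array([2.1, 2.5, 2.7, 2.3, 3.0, 3.4])           # DXA-measured FFM

        bi_index = length_cm ** 2 / impedance_ohm                        # (length)^2 / Z

        # Simple linear regression: FFM = a * BI index + b
        a, b = np.polyfit(bi_index, ffm_dxa_kg, 1)
        predicted = a * bi_index + b
        see = np.sqrt(np.sum((ffm_dxa_kg - predicted) ** 2) / (len(ffm_dxa_kg) - 2))

        print(f"FFM ≈ {a:.3f} * BI index + {b:.3f}  (SEE = {see:.2f} kg)")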

  13. Estimation and Validation of Land Surface Temperatures from Chinese Second-Generation Polar-Orbit FY-3A VIRR Data

    Directory of Open Access Journals (Sweden)

    Bo-Hui Tang

    2015-03-01

    Full Text Available This work estimated and validated the land surface temperature (LST from thermal-infrared Channels 4 (10.8 µm and 5 (12.0 µm of the Visible and Infrared Radiometer (VIRR onboard the second-generation Chinese polar-orbiting FengYun-3A (FY-3A meteorological satellite. The LST, mean emissivity and atmospheric water vapor content (WVC were divided into several tractable sub-ranges with little overlap to improve the fitting accuracy. The experimental results showed that the root mean square errors (RMSEs were proportional to the viewing zenith angles (VZAs and WVC. The RMSEs were below 1.0 K for VZA sub-ranges less than 30° or for VZA sub-ranges less than 60° and WVC less than 3.5 g/cm2, provided that the land surface emissivities were known. A preliminary validation using independently simulated data showed that the estimated LSTs were quite consistent with the actual inputs, with a maximum RMSE below 1 K for all VZAs. An inter-comparison using the Moderate Resolution Imaging Spectroradiometer (MODIS-derived LST product MOD11_L2 showed that the minimum RMSE was 1.68 K for grass, and the maximum RMSE was 3.59 K for barren or sparsely vegetated surfaces. In situ measurements at the Hailar field site in northeastern China from October, 2013, to September, 2014, were used to validate the proposed method. The result showed that the RMSE between the LSTs calculated from the ground measurements and derived from the VIRR data was 1.82 K.

  14. Development and validation of a Kalman filter-based model for vehicle slip angle estimation

    Science.gov (United States)

    Gadola, M.; Chindamo, D.; Romano, M.; Padula, F.

    2014-01-01

    It is well known that vehicle slip angle is one of the most difficult parameters to measure on a vehicle during testing or racing activities. Moreover, the appropriate sensor is very expensive and it is often difficult to fit to a car, especially on race cars. We propose here a strategy to eliminate the need for this sensor by using a mathematical tool which gives a good estimation of the vehicle slip angle. A single-track car model, coupled with an extended Kalman filter, was used in order to achieve the result. Moreover, a tuning procedure is proposed that takes into consideration both nonlinear and saturation characteristics typical of vehicle lateral dynamics. The effectiveness of the proposed algorithm has been proven by both simulation results and real-world data.
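
    As a rough illustration of the estimator structure (state prediction with a single-track model followed by a measurement update from yaw rate and lateral acceleration), a simplified, purely linear Kalman filter sketch is given below. The paper's actual implementation is an extended Kalman filter with a tuning procedure for nonlinear and saturating tyre behaviour; all numerical parameters here are hypothetical.

        import numpy as np

        # Simplified linear single-track (bicycle) model at constant speed.
        # All vehicle parameters below are hypothetical.
        m, Iz = 1500.0, 2500.0        # mass [kg], yaw inertia [kg m^2]
        lf, lr = 1.2, 1.4             # CoG-to-axle distances [m]
        Cf, Cr = 80000.0, 90000.0     # front/rear cornering stiffness [N/rad]
        v = 20.0                      # longitudinal speed [m/s]
        dt = 0.01                     # time step [s]

        # State x = [beta (slip angle), r (yaw rate)]; input u = steering angle delta.
        A = np.array([[-(Cf + Cr) / (m * v), (Cr * lr - Cf * lf) / (m * v**2) - 1.0],
                      [(Cr * lr - Cf * lf) / Iz, -(Cf * lf**2 + Cr * lr**2) / (Iz * v)]])
        B = np.array([[Cf / (m * v)], [Cf * lf / Iz]])

        # Measurements z = [yaw rate, lateral acceleration].
        H = np.array([[0.0, 1.0],
                      [-(Cf + Cr) / m, (Cr * lr - Cf * lf) / (m * v)]])
        D = np.array([[0.0], [Cf / m]])

        Q = np.diag([1e-6, 1e-5])     # process noise
        R = np.diag([1e-4, 1e-2])     # measurement noise
        x = np.zeros((2, 1))
        P = np.eye(2) * 1e-3

        def kf_step(x, P, delta, z):
            """One predict/update cycle of the (linear) Kalman filter."""
            F = np.eye(2) + A * dt                      # discretised dynamics
            x_pred = F @ x + B * delta * dt
            P_pred = F @ P @ F.T + Q
            y = z - (H @ x_pred + D * delta)            # innovation
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
            return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

        # Example: one step with a small steering input and synthetic sensor readings.
        x, P = kf_step(x, P, delta=0.02, z=np.array([[0.05], [1.0]]))
        print("estimated slip angle [rad]:", float(x[0, 0]))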

  15. Validation of estimating food intake in gray wolves by 22Na turnover

    Science.gov (United States)

    DelGiudice, G.D.; Duquette, L.S.; Seal, U.S.; Mech, L.D.

    1991-01-01

    We studied sodium-22 (22Na) turnover as a means of estimating food intake in 6 captive, adult gray wolves (Canis lupus) (2 F, 4 M) over a 31-day feeding period. Wolves were fed white-tailed deer (Odocoileus virginianus) meat only. The mean mass-specific exchangeable Na pool was 44.8 ± 0.7 mEq/kg; there was no difference between males and females. Total exchangeable Na was related (r2 = 0.85) to food consumption (g/kg/day) in wolves over a 32-day period. Sampling blood and weighing wolves every 1-4 days permitted identification of several potential sources of error, including changes in size of exchangeable Na pools, exchange of 22Na with gastrointestinal and bone Na, and rapid loss of the isotope by urinary excretion.

  16. A novel method for coil efficiency estimation: Validation with a 13C birdcage

    DEFF Research Database (Denmark)

    Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina

    2012-01-01

    Coil efficiency, defined as the B1 magnetic field induced at a given point per square root of supplied power P, is an important parameter that characterizes both the transmit and receive performance of the radiofrequency (RF) coil. Maximizing coil efficiency will also maximize the signal-to-noise ratio. In this work, we propose a novel method for RF coil efficiency estimation based on the use of a perturbing loop. The proposed method consists of loading the coil with a known resistor by inductive coupling and measuring the quality factor with and without the load. We tested the method by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench...
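
    The loaded/unloaded Q measurement can be turned into an estimate of the coil loss resistance with the usual series-equivalent relations. The sketch below is only an illustration under stated assumptions: it assumes the inductively coupled resistor appears as a known reflected series resistance R_REF, and that for a fixed coil geometry the efficiency B1/sqrt(P) scales as 1/sqrt(coil resistance); the authors' actual formulas may differ, and all numbers are hypothetical.

        import math

        # Hypothetical bench measurements; R_REF is the series resistance reflected into
        # the coil by the known inductively coupled resistor (assumed known from the setup).
        R_REF = 2.0           # reflected resistance of the known load [ohm] (assumed)
        Q_UNLOADED = 180.0    # quality factor without the perturbing loop
        Q_LOADED = 95.0       # quality factor with the perturbing loop

        # Series-equivalent relations: Q0 = wL / r_coil and QL = wL / (r_coil + R_REF)
        # give r_coil = R_REF * QL / (Q0 - QL).
        r_coil = R_REF * Q_LOADED / (Q_UNLOADED - Q_LOADED)

        # For a fixed coil geometry B1 is proportional to the current and P = r_coil * I^2,
        # so the efficiency B1 / sqrt(P) scales as 1 / sqrt(r_coil).
        rel_efficiency = 1.0 / math.sqrt(r_coil)

        print(f"coil loss resistance ≈ {r_coil:.2f} ohm")
        print(f"relative efficiency ∝ {rel_efficiency:.2f} (arbitrary units)")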

  17. Lattice strain estimation for CoAl2O4 nano particles using Williamson-Hall analysis

    Energy Technology Data Exchange (ETDEWEB)

    Aly, Kamal A., E-mail: kamalaly2001@gmail.com [Physics Department, Faculty of Science & Arts, Khulais, University of Jeddah, Jiddah (Saudi Arabia); Physics Department, Faculty of Science, Al-Azhar University, Assiut Branch, Assiut (Egypt); Khalil, N.M. [Chemistry Department, Faculty of Science & Arts, Khulais, University of Jeddah, Jiddah (Saudi Arabia); Refractories, Ceramics and Building Materials Department, National Research Centre, 12311 Cairo (Egypt); Algamal, Yousif [Chemistry Department, Faculty of Science & Arts, Khulais, University of Jeddah, Jiddah (Saudi Arabia); Saleem, Qaid M.A. [Chemistry Department, Faculty of Science & Arts, Khulais, University of Jeddah, Jiddah (Saudi Arabia); Chemistry Department, Faculty of Education, Aden University, Sabwa (Yemen)

    2016-08-15

    CoAl2O4 nanoparticles were prepared via a coprecipitation technique by mixing a 1:1 M ratio of cobalt nitrate and aluminium nitrate solutions at pH 10. The CoAl2O4 crystalline phase was confirmed by X-ray diffraction. Scanning electron microscopy (SEM) results reveal that the particles of CoAl2O4 fired at 900 °C were relatively small (21 nm) and uniform. Increasing the temperature to 1200 °C gives rise to blocky particles and changes in the powder shape because of agglomeration arising from the calcination of CoAl2O4; furthermore, the particle size increases with increasing calcination temperature. The crystallite sizes were evaluated using the X-ray peak broadening analysis suggested by Williamson-Hall (W-H). This analysis was successfully applied to estimate the lattice strain and to calculate mechanical stress and energy density values using three different models, namely the uniform deformation model (UDM), the uniform deformation stress model (UDSM) and the uniform deformation energy density model (UDEDM). Also, the root mean square strain was determined. These models gave different strain values, which suggested an isotropic nature of the nanoparticles. Besides, the results obtained from the W-H analysis are in good agreement with those deduced from SEM analysis and Scherrer's formula. - Highlights: • CoAl2O4 nanoparticles were prepared via a coprecipitation technique. • CoAl2O4 nanoparticles were characterized by SEM and XRD. • The lattice size and strain were investigated according to W-H analysis. • The lattice size was investigated by W-H analysis, SEM and Scherrer's method. • The root mean square strain was determined.
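
    In the uniform deformation model, the W-H relation β·cosθ = Kλ/D + 4ε·sinθ is fitted linearly: the intercept gives the crystallite size D and the slope the lattice strain ε. A minimal sketch with hypothetical peak data (not the study's diffractogram):

        import numpy as np

        # Hypothetical XRD peak data: 2-theta positions [deg] and FWHM [deg]
        # after instrumental correction.
        two_theta_deg = np.array([31.3, 36.8, 44.8, 55.6, 59.3, 65.2])
        fwhm_deg = np.array([0.42, 0.45, 0.48, 0.52, 0.55, 0.58])

        K = 0.9               # shape factor
        lam = 1.5406e-10      # Cu K-alpha wavelength [m]

        theta = np.deg2rad(two_theta_deg / 2.0)
        beta = np.deg2rad(fwhm_deg)            # peak breadth in radians

        # Uniform deformation model: beta*cos(theta) = K*lambda/D + 4*strain*sin(theta)
        x = 4.0 * np.sin(theta)
        y = beta * np.cos(theta)
        slope, intercept = np.polyfit(x, y, 1)

        D = K * lam / intercept                 # crystallite size [m]
        strain = slope                          # lattice strain (dimensionless)

        print(f"crystallite size ≈ {D * 1e9:.1f} nm, lattice strain ≈ {strain:.2e}")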

  18. Validity of energy intake estimated by digital photography plus recall in overweight and obese young adults.

    Science.gov (United States)

    Ptomey, Lauren T; Willis, Erik A; Honas, Jeffery J; Mayo, Matthew S; Washburn, Richard A; Herrmann, Stephen D; Sullivan, Debra K; Donnelly, Joseph E

    2015-09-01

    Recent reports have questioned the adequacy of self-report measures of dietary intake as the basis for scientific conclusions regarding the associations of dietary intake and health, and reports have recommended the development and evaluation of better methods for the assessment of dietary intake in free-living individuals. We developed a procedure that used pre- and post-meal digital photographs in combination with dietary recalls (DP+R) to assess energy intake during ad libitum eating in a cafeteria setting. To compare mean daily energy intake of overweight and obese young adults assessed by a DP+R method with mean total daily energy expenditure assessed by doubly labeled water (TDEE(DLW)). Energy intake was assessed using the DP+R method in 91 overweight and obese young adults (age = 22.9±3.2 years, body mass index [BMI; calculated as kg/m2]=31.2±5.6, female=49%) over 7 days of ad libitum eating in a university cafeteria. Foods consumed outside the cafeteria (ie, snacks, non-cafeteria meals) were assessed using multiple-pass recall procedures, using food models and standardized, neutral probing questions. TDEE(DLW) was assessed in all participants over the 14-day period. The mean energy intakes estimated by DP+R and TDEE(DLW) were not significantly different (DP+R=2912±661 kcal/d; TDEE(DLW)=2849±748 kcal/d, P=0.42). The DP+R method overestimated TDEE(DLW) by 63±750 kcal/d (6.8±28%). Results suggest that the DP+R method provides estimates of energy intake comparable to those obtained by TDEE(DLW). Copyright © 2015 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  19. Validity of a self-administered food frequency questionnaire (FFQ) and its generalizability to the estimation of dietary folate intake in Japan

    Directory of Open Access Journals (Sweden)

    Iso Hiroyasu

    2005-10-01

    Full Text Available Abstract Background In an epidemiological study, it is essential to test the validity of the food frequency questionnaire (FFQ) for its ability to estimate dietary intake. The objectives of our study were to 1) validate a FFQ for estimating folate intake, and to identify the foods that contribute to inter-individual variation of folate intake in the Japanese population. Methods Validity of the FFQ was evaluated using 28-day weighed dietary records (DRs) as gold standard in the two groups independently. In the group for which the FFQ was developed, validity was evaluated by Spearman's correlation coefficients (CCs), and linear regression analysis was used to identify foods with large inter-individual variation. The cumulative mean intake of these foods was compared with total intake estimated by the DR. The external validity of the FFQ and intake from foods on the same list were evaluated in the other group to verify generalizability. Subjects were a subsample from the Japan Public Health Center-based prospective Study who volunteered to participate in the FFQ validation study. Results CCs for the internal validity of the FFQ were 0.49 for men and 0.29 for women, while CCs for external validity were 0.33 for men and 0.42 for women. CCs for cumulative folate intake from 33 foods selected by regression analysis were also applicable to an external population. Conclusion Our FFQ was valid for and generalizable to the estimation of folate intake. Foods identified as predictors of inter-individual variation in folate intake were also generalizable in Japanese populations. The FFQ with 138 foods was valid for the estimation of folate intake, while that with 33 foods might be useful for estimating inter-individual variation and ranking of individual folate intake.

  20. Validity of a self-administered food frequency questionnaire (FFQ) and its generalizability to the estimation of dietary folate intake in Japan

    Science.gov (United States)

    Ishihara, Junko; Yamamoto, Seiichiro; Iso, Hiroyasu; Inoue, Manami; Tsugane, Shoichiro

    2005-01-01

    Background In an epidemiological study, it is essential to test the validity of the food frequency questionnaire (FFQ) for its ability to estimate dietary intake. The objectives of our study were to 1) validate a FFQ for estimating folate intake, and to identify the foods that contribute to inter-individual variation of folate intake in the Japanese population. Methods Validity of the FFQ was evaluated using 28-day weighed dietary records (DRs) as gold standard in the two groups independently. In the group for which the FFQ was developed, validity was evaluated by Spearman's correlation coefficients (CCs), and linear regression analysis was used to identify foods with large inter-individual variation. The cumulative mean intake of these foods was compared with total intake estimated by the DR. The external validity of the FFQ and intake from foods on the same list were evaluated in the other group to verify generalizability. Subjects were a subsample from the Japan Public Health Center-based prospective Study who volunteered to participate in the FFQ validation study. Results CCs for the internal validity of the FFQ were 0.49 for men and 0.29 for women, while CCs for external validity were 0.33 for men and 0.42 for women. CCs for cumulative folate intake from 33 foods selected by regression analysis were also applicable to an external population. Conclusion Our FFQ was valid for and generalizable to the estimation of folate intake. Foods identified as predictors of inter-individual variation in folate intake were also generalizable in Japanese populations. The FFQ with 138 foods was valid for the estimation of folate intake, while that with 33 foods might be useful for estimating inter-individual variation and ranking of individual folate intake. PMID:16202175

  1. Development and validation of a method to estimate the potential wind erosion risk in Germany

    Science.gov (United States)

    Funk, Roger; Deumlich, Detlef; Völker, Lidia

    2017-04-01

    The introduction of the Cross Compliance (CC) regulations for soil protection resulted in the demand for a nationwide classification of the wind erosion risk on agricultural areas in Germany. A spatially highly resolved method was needed, based on uniform data sets and validation principles, which provides a fair and equivalent procedure for all affected farmers. A GIS procedure was developed which derives the site-specific wind erosion risk from the main influencing factors: soil texture, wind velocity, wind direction and landscape structure, following the German standard DIN 19706. The procedure enables different approaches in the Federal States and comparable classification results. Here, we present the approach of the Federal State of Brandenburg. In the first step a complete soil data map was composed at a grid size of 10 x 10 m. Data were taken from 1.) the Soil quality Appraisal (scale 1:10.000), 2.) the Medium-scale Soil Mapping (MMK, 1:25.000), 3.) extrapolation of the MMK, 4.) new Soil quality Appraisal (new areas after coal-mining). Based on the texture and carbon content, the wind erosion susceptibility was divided into 6 classes. This map was combined with data on the annual average wind velocity, resulting in an increase of the risk classes for wind velocities > 5 ms-1 and a decrease for lower wind velocities. The landscape structure is taken into account by allocating a height to each landscape element, corresponding to the described features in the digital "Biotope and Land Use Map". The "hill shade" procedure of ArcGIS was used to set virtual shadows behind the landscape elements for eight directions. The relative frequency of wind from each direction was used as a weighting factor and multiplied with the numerical values of the shadowed cells. Depending on the distance to the landscape element, the shadowing effect was combined with the risk classes. The results show that the wind erosion risk is obviously reduced by integrating landscape structures into the risk assessment. After the renewed

  2. Are people living next to mobile phone base stations more strained? Relationship of health concerns, self-estimated distance to base station, and psychological parameters.

    Science.gov (United States)

    Augner, Christoph; Hacker, Gerhard W

    2009-12-01

    Coeval with the expansion of mobile phone technology and the associated obvious presence of mobile phone base stations, some people living close to these masts reported symptoms they attributed to electromagnetic fields (EMF). Public and scientific discussions arose with regard to whether these symptoms were due to EMF or were nocebo effects. The aim of this study was to find out if people who believe that they live close to base stations show psychological or psychobiological differences that would indicate more strain or stress. Furthermore, we wanted to detect the relevant connections linking self-estimated distance between home and the nearest mobile phone base station (DBS), daily use of mobile phone (MPU), EMF-health concerns, electromagnetic hypersensitivity, and psychological strain parameters. Fifty-seven participants completed standardized and non-standardized questionnaires that focused on the relevant parameters. In addition, saliva samples were used as an indicator of psychobiological strain, determined by the concentrations of alpha-amylase, cortisol, immunoglobulin A (IgA), and substance P. Self-declared base station neighbors (DBS base station neighbors are more strained than others. EMF-related health concerns cannot explain these findings. Further research should identify whether actual EMF exposure or other factors are responsible for these results.

  3. Development and validation of a noncontact spectroscopic device for hemoglobin estimation at point-of-care

    Science.gov (United States)

    Sarkar, Probir Kumar; Pal, Sanchari; Polley, Nabarun; Aich, Rajarshi; Adhikari, Aniruddha; Halder, Animesh; Chakrabarti, Subhananda; Chakrabarti, Prantar; Pal, Samir Kumar

    2017-05-01

    Anemia severely and adversely affects human health and socioeconomic development. Measuring hemoglobin with the minimal involvement of human and financial resources has always been challenging. We describe a translational spectroscopic technique for noncontact hemoglobin measurement at low-resource point-of-care settings in human subjects, independent of their skin color, age, and sex, by measuring the optical spectrum of the blood flowing in the vascular bed of the bulbar conjunctiva. We developed software on the LabVIEW platform for automatic data acquisition and interpretation by nonexperts. The device is calibrated by comparing the differential absorbance of light of wavelength 576 and 600 nm with the clinical hemoglobin level of the subject. Our proposed method is consistent with the results obtained using the current gold standard, the automated hematology analyzer. The proposed noncontact optical device for hemoglobin estimation is highly efficient, inexpensive, feasible, and extremely useful in low-resource point-of-care settings. The device output correlates with the different degrees of anemia with absolute and trending accuracy similar to those of widely used invasive methods. Moreover, the device can instantaneously transmit the generated report to a medical expert through e-mail, text messaging, or mobile apps.
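
    The calibration step described above amounts to regressing clinical hemoglobin values against the differential absorbance at 576 and 600 nm. A minimal sketch with hypothetical calibration data (not the device's actual calibration set):

        import numpy as np

        # Hypothetical calibration data: differential absorbance A(576 nm) - A(600 nm)
        # from conjunctival spectra, and the corresponding clinical hemoglobin [g/dL].
        delta_absorbance = np.array([0.12, 0.18, 0.22, 0.27, 0.31, 0.36])
        hb_clinical = np.array([7.5, 9.8, 11.2, 12.9, 14.1, 15.8])

        # Linear calibration: Hb = a * dA + b
        a, b = np.polyfit(delta_absorbance, hb_clinical, 1)

        def estimate_hb(delta_a):
            """Estimate hemoglobin [g/dL] from a new differential absorbance reading."""
            return a * delta_a + b

        print(f"Hb ≈ {a:.1f} * dA + {b:.1f}")
        print(f"example: dA = 0.25 -> Hb ≈ {estimate_hb(0.25):.1f} g/dL")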

  4. Parametric validations of analytical lifetime estimates for radiation belt electron diffusion by whistler waves

    Directory of Open Access Journals (Sweden)

    A. V. Artemyev

    2013-04-01

    Full Text Available The lifetimes of electrons trapped in Earth's radiation belts can be calculated from quasi-linear pitch-angle diffusion by whistler-mode waves, provided that their frequency spectrum is broad enough and/or their average amplitude is not too large. Extensive comparisons between improved analytical lifetime estimates and full numerical calculations have been performed in a broad parameter range representative of a large part of the magnetosphere from L ~ 2 to 6. The effects of observed very oblique whistler waves are taken into account in both numerical and analytical calculations. Analytical lifetimes (and pitch-angle diffusion coefficients) are found to be in good agreement with full numerical calculations based on CRRES and Cluster hiss and lightning-generated wave measurements inside the plasmasphere and Cluster lower-band chorus wave measurements in the outer belt for electron energies ranging from 100 keV to 5 MeV. Comparisons with lifetimes recently obtained from electron flux measurements on SAMPEX, SCATHA, SAC-C and DEMETER also show reasonable agreement.

  5. Validation of a Dietary History Questionnaire against a 7-D Weighed Record for Estimating Nutrient Intake among Rural Elderly Malays.

    Science.gov (United States)

    Shahar, S; Earland, J; Abdulrahman, S

    2000-03-01

    Energy and nutrient intake estimated using a pre-coded dietary history questionnaire (DHQ) was compared with results obtained from a 7-d weighed intake record (WI) in a group of 37 elderly Malays residing in rural areas of Mersing District, Johor, Malaysia, to determine the validity of the DHQ. The DHQ consists of a pre-coded dietary history with a qualitative food frequency questionnaire, which was developed to obtain information on food intake and usual dietary habits. The 7-d WI requires subjects to weigh each food immediately before eating and to weigh any leftovers. The medians of intake from the two methods were rather similar and varied by less than 30% for every nutrient, except for vitamin C (114%). For most of the nutrients, analysis of group means using the Wilcoxon matched pairs signed rank sum test showed no significant difference between the estimation of intake from the DHQ and from the WI, with the exceptions of vitamin C and niacin. The DHQ significantly overestimated the intake of vitamin C compared to the WI. In a population with a high prevalence of illiteracy, a specially designed DHQ can provide estimations very similar to those obtained from a 7-d WI.

  6. Assessment of heat transfer correlations for supercritical water in the frame of best-estimate code validation

    International Nuclear Information System (INIS)

    Jaeger, Wadim; Espinoza, Victor H. Sanchez; Schneider, Niko; Hurtado, Antonio

    2009-01-01

    Within the frame of the Generation IV International Forum, six innovative reactor concepts are the subject of comprehensive investigations. In some projects supercritical water will be considered as coolant, moderator (as for the High Performance Light Water Reactor) or secondary working fluid (one possible option for Liquid Metal-cooled Fast Reactors). Supercritical water is characterized by a pronounced change of the thermo-physical properties when crossing the pseudo-critical line, which goes hand in hand with a change in the heat transfer (HT) behavior. Hence, it is essential to estimate, in a proper way, the heat-transfer coefficient and subsequently the wall temperature. The scope of this paper is to present and discuss the activities at the Institute for Reactor Safety (IRS) related to the implementation of correlations for wall-to-fluid HT at supercritical conditions in Best-Estimate codes like TRACE, as well as its validation. It is important to validate TRACE before applying it to safety analyses of the HPLWR or of other reactor systems. In the past 3 decades various experiments have been performed all over the world to reveal the peculiarities of wall-to-fluid HT at supercritical conditions. Several different heat transfer phenomena such as HT enhancement (due to higher Prandtl numbers in the vicinity of the pseudo-critical point) or HT deterioration (due to strong property variations) were observed. Since TRACE is a component-based system code with a finite volume method, the resolution capabilities are limited and not all physical phenomena can be modeled properly. But Best-Estimate system codes are nowadays the preferred option for safety-related investigations of full plants or other integral systems. Thus, increasing the confidence in such codes is of high priority. In this paper, the post-test analysis of experiments with supercritical parameters will be presented. For that reason various correlations for the HT, which consider the characteristics

  7. Arterial stiffness estimation in healthy subjects: a validation of oscillometric (Arteriograph) and tonometric (SphygmoCor) techniques.

    Science.gov (United States)

    Ring, Margareta; Eriksson, Maria Jolanta; Zierath, Juleen Rae; Caidahl, Kenneth

    2014-11-01

    Arterial stiffness is an important cardiovascular risk marker, which can be measured noninvasively with different techniques. To validate such techniques in healthy subjects, we compared the recently introduced oscillometric Arteriograph (AG) technique with the tonometric SphygmoCor (SC) method and their associations with carotid ultrasound measures and traditional risk indicators. Sixty-three healthy subjects aged 20-69 (mean 48 ± 15) years were included. We measured aortic pulse wave velocity (PWVao) and augmentation index (AIx) by AG and SC, and with SC also the PWVao standardized to 80% of the direct distance between carotid and femoral sites (St-PWVaoSC). The carotid strain, stiffness index and intima-media thickness (cIMTmean) were evaluated by ultrasound. PWVaoAG (8.00 ± 2.16 m s-1) was higher (Pstiffness indices by AG and SC correlate with vascular risk markers in healthy subjects. AIxao results by AG and SC are closely interrelated, but higher values are obtained by AG. In the lower range, PWVao values by AG and SC are similar, but differ for higher values. Our results imply the necessity to apply one and the same technique for repeated studies.

  8. Accuracy and feasibility of estimated tumour volumetry in primary gastric gastrointestinal stromal tumours: validation using semiautomated technique in 127 patients.

    Science.gov (United States)

    Tirumani, Sree Harsha; Shinagare, Atul B; O'Neill, Ailbhe C; Nishino, Mizuki; Rosenthal, Michael H; Ramaiya, Nikhil H

    2016-01-01

    To validate estimated tumour volumetry in primary gastric gastrointestinal stromal tumours (GISTs) using semiautomated volumetry. In this IRB-approved retrospective study, we measured the three longest diameters in x, y, z axes on CTs of primary gastric GISTs in 127 consecutive patients (52 women, 75 men, mean age 61 years) at our institute between 2000 and 2013. Segmented volumes (Vsegmented) were obtained using commercial software by two radiologists. Estimate volumes (V1-V6) were obtained using formulae for spheres and ellipsoids. Intra- and interobserver agreement of Vsegmented and agreement of V1-6 with Vsegmented were analysed with concordance correlation coefficients (CCC) and Bland-Altman plots. Median Vsegmented and V1-V6 were 75.9, 124.9, 111.6, 94.0, 94.4, 61.7 and 80.3 cm(3), respectively. There was strong intra- and interobserver agreement for Vsegmented. Agreement with Vsegmented was highest for V6 (scalene ellipsoid, x ≠ y ≠ z), with CCC of 0.96 [95 % CI 0.95-0.97]. Mean relative difference was smallest for V6 (0.6 %), while it was -19.1 % for V5, +14.5 % for V4, +17.9 % for V3, +32.6 % for V2 and +47 % for V1. Ellipsoidal approximations of volume using three measured axes may be used to closely estimate Vsegmented when semiautomated techniques are unavailable. Estimation of tumour volume in primary GIST using mathematical formulae is feasible. Gastric GISTs are rarely spherical. Segmented volumes are highly concordant with three axis-based scalene ellipsoid volumes. Ellipsoid volume can be used as an alternative for automated tumour volumetry.
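
    The best-performing estimate (V6) treats the lesion as a scalene ellipsoid, V = (π/6)·x·y·z, computed from the three measured diameters; a spherical formula from a single diameter is shown alongside for comparison. The sketch below is illustrative only, the diameters are hypothetical, and the exact definitions of the paper's V1-V5 variants are not reproduced here.

        import math

        def ellipsoid_volume(x_cm, y_cm, z_cm):
            """Scalene-ellipsoid volume estimate: V = pi/6 * x * y * z."""
            return math.pi / 6.0 * x_cm * y_cm * z_cm

        def sphere_volume(d_cm):
            """Spherical estimate from a single diameter, for comparison."""
            return math.pi / 6.0 * d_cm ** 3

        # Hypothetical tumour diameters measured along x, y, z [cm].
        x, y, z = 6.2, 5.1, 4.0
        print(f"scalene ellipsoid volume ≈ {ellipsoid_volume(x, y, z):.1f} cm^3")
        print(f"sphere volume from longest axis ≈ {sphere_volume(x):.1f} cm^3")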

  9. Validation of a protocol for the estimation of three-dimensional body center of mass kinematics in sport.

    Science.gov (United States)

    Mapelli, Andrea; Zago, Matteo; Fusini, Laura; Galante, Domenico; Colombo, Andrea; Sforza, Chiarella

    2014-01-01

    Since it is strictly related to balance and stability control, body center of mass (CoM) kinematics is a relevant quantity in sport surveys. Many methods have been proposed to estimate CoM displacement. Among them, the segmental method appears to be suitable for investigating CoM kinematics in sport: the human body is assumed to be a system of rigid bodies, hence the whole-body CoM is calculated as the weighted average of the CoM of each segment. The number of landmarks represents a crucial choice in the protocol design process: one has to find the proper compromise between accuracy and invasiveness. In this study, using a motion analysis system, a protocol based upon the segmental method is validated, adopting an anatomical model comprising 14 landmarks. Two sets of experiments were conducted. Firstly, our protocol was compared to the ground reaction force method (GRF), regarded as a standard in CoM estimation. In the second experiment, we investigated the aerial phase typical of many disciplines, comparing our protocol with: (1) an absolute reference, the parabolic regression of the vertical CoM trajectory during the time of flight; (2) two common approaches to estimate CoM kinematics in gait, known as the sacrum and reconstructed pelvis methods. Recognized accuracy indexes proved that the results obtained were comparable to the GRF; what is more, during the aerial phases our protocol proved to be significantly more accurate than the two other methods. The protocol assessed can therefore be adopted as a reliable tool for CoM kinematics estimation in further sport research. Copyright © 2013 Elsevier B.V. All rights reserved.
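
    The segmental method itself reduces to a mass-weighted average of the segment CoM positions at each frame. A minimal sketch follows; the segment mass fractions and marker-derived positions are hypothetical, not the anthropometric model used in the study.

        import numpy as np

        # Hypothetical per-segment data: relative mass fractions and 3D CoM positions [m]
        # reconstructed from the marker set at one time frame.
        mass_fractions = np.array([0.497, 0.142, 0.142, 0.0465, 0.0465, 0.081, 0.045])
        segment_com = np.array([
            [0.02, 0.00, 1.05],   # trunk + pelvis
            [0.05, 0.10, 0.65],   # right thigh
            [0.05, -0.10, 0.65],  # left thigh
            [0.06, 0.11, 0.25],   # right shank
            [0.06, -0.11, 0.25],  # left shank
            [0.00, 0.00, 1.55],   # head + neck
            [0.03, 0.00, 1.20],   # arms (lumped)
        ])

        # Whole-body CoM = sum_i m_i * r_i / sum_i m_i
        whole_body_com = (mass_fractions[:, None] * segment_com).sum(axis=0) / mass_fractions.sum()
        print("whole-body CoM [m]:", np.round(whole_body_com, 3))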

  10. Are cannabis prevalence estimates comparable across countries and regions? A cross-cultural validation using search engine query data.

    Science.gov (United States)

    Steppan, Martin; Kraus, Ludwig; Piontek, Daniela; Siciliano, Valeria

    2013-01-01

    Prevalence estimation of cannabis use is usually based on self-report data. Although there is evidence on the reliability of this data source, its cross-cultural validity is still a major concern. External objective criteria are needed for this purpose. In this study, cannabis-related search engine query data are used as an external criterion. Data on cannabis use were taken from the 2007 European School Survey Project on Alcohol and Other Drugs (ESPAD). Provincial data came from three Italian nation-wide studies using the same methodology (2006-2008; ESPAD-Italia). Information on cannabis-related search engine query data was based on Google search volume indices (GSI). (1) Reliability analysis was conducted for GSI. (2) Latent measurement models of "true" cannabis prevalence were tested using perceived availability, web-based cannabis searches and self-reported prevalence as indicators. (3) Structure models were set up to test the influences of response tendencies and geographical position (latitude, longitude). In order to test the stability of the models, analyses were conducted on country level (Europe, US) and on provincial level in Italy. Cannabis-related GSI were found to be highly reliable and constant over time. The overall measurement model was highly significant in both data sets. On country level, no significant effects of response bias indicators and geographical position on perceived availability, web-based cannabis searches and self-reported prevalence were found. On provincial level, latitude had a significant positive effect on availability indicating that perceived availability of cannabis in northern Italy was higher than expected from the other indicators. Although GSI showed weaker associations with cannabis use than perceived availability, the findings underline the external validity and usefulness of search engine query data as external criteria. The findings suggest an acceptable relative comparability of national (provincial) prevalence

  11. Simultaneous Estimation and Validation of Atorvastatin Calcium and Aspirin in Combined Capsule Dosage Form by RP HPLC Method

    Directory of Open Access Journals (Sweden)

    B. V. Suma

    2012-01-01

    Full Text Available A new simple, specific, precise and accurate reverse phase liquid chromatography method has been developed for the estimation of atorvastatin calcium (AST) and aspirin (ASP) simultaneously in a combined capsule dosage form. The chromatographic separation was achieved on a 5-micron C18 column (250 × 4.6 mm) using a mobile phase consisting of a mixture of acetonitrile:ammonium acetate buffer 0.02 M (68:32), pH 4.5. The flow rate was maintained at 0.8 ml/min. The detection of the constituents was done using a UV detector at 245 nm for AST and ASP. The retention times of AST and ASP were found to be 4.5915 ± 0.0031 min and 3.282 ± 0.0024 min, respectively. The developed method was validated for accuracy, linearity, precision, limit of detection (LOD), limit of quantification (LOQ) and robustness as per the ICH guidelines.

  12. Validation of the method for the simultaneous estimation of bioactive marker gallic acid and quercetin in Abutilon indicum by HPTLC

    Directory of Open Access Journals (Sweden)

    Md. Sarfaraj Hussain

    2012-05-01

    Full Text Available Objective: To establish and validate a method for the simultaneous estimation of the two biomarker compounds gallic acid (GA) and quercetin (QE) from a methanolic extract of Abutilon indicum (AI). Methods: Chromatography was performed on aluminium foil-backed silica gel 60 F254 HPTLC plates with the mobile phase toluene-ethyl acetate-formic acid (5:4:1, v/v/v). Ultraviolet detection was performed densitometrically at the maximum absorbance wavelength, 270 nm. The method was validated for precision, recovery, robustness, specificity, and detection and quantification limits, in accordance with ICH guidelines. Results: The system was found to give compact spots for GA and QE (Rf values of 0.31 and 0.50, respectively). The limits of detection (23 and 41 ng band-1), limits of quantification (69 and 123 ng band-1), recovery (99.4-99.9% and 98.7-99.4%), and precision (≤ 1.98 and 1.97) were satisfactory for gallic acid and quercetin, respectively. The linearity ranges for GA and QE were 100-1000 ng band-1 (r2 = 0.9991) and 150-900 ng band-1 (r2 = 0.9956), and the contents were estimated as 0.69% ± 0.01% and 0.57% ± 0.01% w/w, respectively. Conclusions: This simple, precise and accurate method gave good resolution from other constituents present in the extract. The method has been successfully applied in the analysis and routine quality control of herbal material and formulations containing AI.

  13. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry – BREALD-30

    Science.gov (United States)

    Junkes, Monica C.; Fraiz, Fabian C.; Sardenberg, Fernanda; Lee, Jessica Y.; Paiva, Saul M.; Ferreira, Fernanda M.

    2015-01-01

    Objective The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. Methods After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. Results The BREALD-30 demonstrated good internal reliability. Cronbach’s alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent’s perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent’s perception regarding his/her child's oral health remained significant in the multivariate analysis. Conclusion The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil. PMID:26158724

  14. Developing and Validating Path-Dependent Uncertainty Estimates for use with the Regional Seismic Travel Time (RSTT) Model

    Science.gov (United States)

    Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.

    2016-12-01

    The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models, which have the benefit of being very fast to use in standard location algorithms but do not account for path-dependent variations in error or for structural inadequacy of the RSTT model (e.g., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist. A simple 1D error model does not accurately model areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded into a statistical random effects model that captures distance, path and model error effects. An initial method developed is a two-dimensional path-distributed method using residuals. The goals for any RSTT uncertainty method are for it to be readily useful for the standard RSTT user as well as to improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and validation of the error model for Sn, Pg, and Lg phases.

  15. Design and validation of new genotypic tools for easy and reliable estimation of HIV tropism before using CCR5 antagonists.

    Science.gov (United States)

    Poveda, Eva; Seclén, Eduardo; González, María del Mar; García, Federico; Chueca, Natalia; Aguilera, Antonio; Rodríguez, Jose Javier; González-Lahoz, Juan; Soriano, Vincent

    2009-05-01

    Genotypic tools may allow easier and less expensive estimation of HIV tropism before prescription of CCR5 antagonists compared with the Trofile assay (Monogram Biosciences, South San Francisco, CA, USA). Paired genotypic and Trofile results were compared in plasma samples derived from the maraviroc expanded access programme (EAP) in Europe. A new genotypic approach was built to improve the sensitivity to detect X4 variants based on an optimization of the webPSSM algorithm. Then, the new tool was validated in specimens from patients included in the ALLEGRO trial, a multicentre study conducted in Spain to assess the prevalence of R5 variants in treatment-experienced HIV patients. A total of 266 specimens from the maraviroc EAP were tested. Overall geno/pheno concordance was above 72%. A high specificity was generally seen for the detection of X4 variants using genotypic tools (ranging from 58% to 95%), while sensitivity was low (ranging from 31% to 76%). The PSSM score was then optimized to enhance the sensitivity to detect X4 variants changing the original threshold for R5 categorization. The new PSSM algorithms, PSSM(X4R5-8) and PSSM(SINSI-6.4), considered as X4 all V3 scoring values above -8 or -6.4, respectively, increasing the sensitivity to detect X4 variants up to 80%. The new algorithms were then validated in 148 specimens derived from patients included in the ALLEGRO trial. The sensitivity/specificity to detect X4 variants was 93%/69% for PSSM(X4R5-8) and 93%/70% for PSSM(SINSI-6.4). PSSM(X4R5-8) and PSSM(SINSI-6.4) may confidently assist therapeutic decisions for using CCR5 antagonists in HIV patients, providing an easier and rapid estimation of tropism in clinical samples.

  16. Development and Validation of RP-HPLC Method for Simultaneous Estimation of Ramipril, Aspirin and Atorvastatin in Pharmaceutical Preparations

    Directory of Open Access Journals (Sweden)

    Rajesh Sharma

    2012-01-01

    Full Text Available A simple, sensitive, accurate and rapid reverse phase high performance liquid chromatographic method is developed for the simultaneous estimation of ramipril, aspirin and atorvastatin in pharmaceutical preparations. Chromatography was performed on a 25 cm × 4.6 mm i.d., 5 µm particle, C18 column. A mixture of (A) acetonitrile:methanol (65:35) and (B) 10 mM sodium dihydrogen phosphate monohydrate (NaH2PO4.H2O) buffer, combined as A:B (60:40, v/v) and adjusted to pH 3.0 with o-phosphoric acid (5% v/v), was used as the mobile phase at a flow rate of 1.5 ml min-1. UV detection was performed at 230 nm. Total run time was less than 12 min; retention times for ramipril, aspirin and atorvastatin were 3.620, 4.920 and 11.710 min, respectively. The method was validated for accuracy, precision, linearity, specificity and sensitivity in accordance with ICH guidelines. Validation revealed that the method is specific, rapid, accurate, precise, reliable, and reproducible. Calibration plots were linear over the concentration ranges 5-50 µg mL-1 for ramipril, 5-100 µg mL-1 for aspirin and 2-20 µg mL-1 for atorvastatin. Limits of detection were 0.014, 0.10 and 0.0095 ng mL-1, and limits of quantification were 0.043, 0.329 and 0.029 ng mL-1 for ramipril, aspirin and atorvastatin, respectively. The high recovery and low coefficients of variation confirm the suitability of the method for simultaneous analysis of all three drugs in the dosage forms. The validated method was successfully used for quantitative analysis of marketed pharmaceutical preparations.
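
    For ICH-style validation, LOD and LOQ are commonly derived from the calibration curve as 3.3·σ/S and 10·σ/S, where σ is the residual standard deviation and S the slope. A minimal sketch with hypothetical calibration points (not the study's data):

        import numpy as np

        # Hypothetical calibration data: concentration [µg/mL] vs. peak area (arbitrary units).
        conc = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])
        area = np.array([152.0, 305.0, 598.0, 901.0, 1198.0, 1503.0])

        slope, intercept = np.polyfit(conc, area, 1)
        residuals = area - (slope * conc + intercept)
        sigma = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))  # residual standard deviation

        lod = 3.3 * sigma / slope   # ICH limit of detection
        loq = 10.0 * sigma / slope  # ICH limit of quantification

        print(f"slope = {slope:.2f}, LOD ≈ {lod:.3f} µg/mL, LOQ ≈ {loq:.3f} µg/mL")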

  17. Primary Sclerosing Cholangitis Risk Estimate Tool (PREsTo) Predicts Outcomes in PSC: A Derivation & Validation Study Using Machine Learning.

    Science.gov (United States)

    Eaton, John E; Vesterhus, Mette; McCauley, Bryan M; Atkinson, Elizabeth J; Schlicht, Erik M; Juran, Brian D; Gossard, Andrea A; LaRusso, Nicholas F; Gores, Gregory J; Karlsen, Tom H; Lazaridis, Konstantinos N

    2018-05-09

    Improved methods are needed to risk stratify and predict outcomes in patients with primary sclerosing cholangitis (PSC). Therefore, we sought to derive and validate a new prediction model and compare its performance to existing surrogate markers. The model was derived using 509 subjects from a multicenter North American cohort and validated in an international multicenter cohort (n=278). Gradient boosting, a machine learning technique, was used to create the model. The endpoint was hepatic decompensation (ascites, variceal hemorrhage or encephalopathy). Subjects with advanced PSC or cholangiocarcinoma at baseline were excluded. The PSC risk estimate tool (PREsTo) consists of 9 variables: bilirubin, albumin, serum alkaline phosphatase (SAP) times the upper limit of normal (ULN), platelets, AST, hemoglobin, sodium, patient age and the number of years since PSC was diagnosed. Validation in an independent cohort confirms PREsTo accurately predicts decompensation (C statistic 0.90, 95% confidence interval (CI) 0.84-0.95) and performed well compared to the MELD score (C statistic 0.72, 95% CI 0.57-0.84), the Mayo PSC risk score (C statistic 0.85, 95% CI 0.77-0.92) and SAP (C statistic 0.65, 95% CI 0.55-0.73). PREsTo continued to be accurate among individuals with lower bilirubin values (C statistic 0.90, 95% CI 0.82-0.96) and when the score was re-applied at a later course in the disease (C statistic 0.82, 95% CI 0.64-0.95). PREsTo accurately predicts hepatic decompensation in PSC and exceeds the performance of other widely available, noninvasive prognostic scoring systems. This article is protected by copyright. All rights reserved. © 2018 by the American Association for the Study of Liver Diseases.
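
    A minimal sketch of the general workflow (gradient boosting on clinical covariates, performance summarized with a concordance statistic) is given below, using scikit-learn on synthetic data. This is not the authors' pipeline: the study endpoint is time-to-event, so the real model is a survival-type boosting model, whereas this sketch uses a binary stand-in where the C statistic reduces to the ROC AUC; all values are synthetic.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic stand-in for the 9 clinical covariates (bilirubin, albumin, SAP x ULN,
        # platelets, AST, hemoglobin, sodium, age, years since diagnosis).
        n = 800
        X = rng.normal(size=(n, 9))
        # Synthetic outcome: decompensation driven mostly by the first covariates.
        logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.35, random_state=0)

        model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
        model.fit(X_dev, y_dev)

        # For a binary endpoint the C statistic equals the area under the ROC curve.
        c_statistic = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"validation C statistic ≈ {c_statistic:.2f}")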

  18. Global Validation of MODIS Atmospheric Profile-Derived Near-Surface Air Temperature and Dew Point Estimates

    Science.gov (United States)

    Famiglietti, C.; Fisher, J.; Halverson, G. H.

    2017-12-01

    This study validates a method of remote sensing near-surface meteorology that vertically interpolates MODIS atmospheric profiles to surface pressure level. The extraction of air temperature and dew point observations at a two-meter reference height from 2001 to 2014 yields global moderate- to fine-resolution near-surface temperature distributions that are compared to geographically and temporally corresponding measurements from 114 ground meteorological stations distributed worldwide. This analysis is the first robust, large-scale validation of the MODIS-derived near-surface air temperature and dew point estimates, both of which serve as key inputs in models of energy, water, and carbon exchange between the land surface and the atmosphere. Results show strong linear correlations between remotely sensed and in-situ near-surface air temperature measurements (R2 = 0.89), as well as between dew point observations (R2 = 0.77). Performance is relatively uniform across climate zones. The extension of mean climate-wise percent errors to the entire remote sensing dataset allows for the determination of MODIS air temperature and dew point uncertainties on a global scale.

  19. Validation of myocardial blood flow estimation with nitrogen-13 ammonia PET by the argon inert gas technique in humans

    International Nuclear Information System (INIS)

    Kotzerke, J.; Glatting, G.; Neumaier, B.; Reske, S.N.; Hoff, J. van den; Hoeher, M.; Woehrle, J. n

    2001-01-01

    We simultaneously determined global myocardial blood flow (MBF) by the argon inert gas technique and by nitrogen-13 ammonia positron emission tomography (PET) to validate PET-derived MBF values in humans. A total of 19 patients were investigated at rest (n=19) and during adenosine-induced hyperaemia (n=16). Regional coronary artery stenoses were ruled out by angiography. The argon inert gas method uses the difference of arterial and coronary sinus argon concentrations during inhalation of a mixture of 75% argon and 25% oxygen to estimate global MBF. It can be considered as valid as the microspheres technique, which, however, cannot be applied in humans. Dynamic PET was performed after injection of 0.8±0.2 GBq 13N-ammonia and MBF was calculated applying a two-tissue compartment model. MBF values derived from the argon method at rest and during the hyperaemic state were 1.03±0.24 ml min-1 g-1 and 2.64±1.02 ml min-1 g-1, respectively. MBF values derived from ammonia PET at rest and during hyperaemia were 0.95±0.23 ml min-1 g-1 and 2.44±0.81 ml min-1 g-1, respectively. The correlation between the two methods was close (y=0.92x+0.14, r=0.96), supporting the validity of global MBF estimation with 13N-ammonia PET. (orig.)

  20. PIG's Speed Estimated with Pressure Transducers and Hall Effect Sensor: An Industrial Application of Sensors to Validate a Testing Laboratory.

    Science.gov (United States)

    Lima, Gustavo F; Freitas, Victor C G; Araújo, Renan P; Maitelli, André L; Salazar, Andrés O

    2017-09-15

    The pipeline inspection using a device called Pipeline Inspection Gauge (PIG) is safe and reliable when the PIG is at low speeds during inspection. We built a Testing Laboratory, containing a testing loop and supervisory system, to study speed control techniques for PIGs. The objective of this work is to present and validate the Testing Laboratory, which will allow development of a speed controller for PIGs and solve an existing problem in the oil industry. The experimental methodology used throughout the project is also presented. We installed pressure transducers on the pipeline outer walls to detect the PIG's movement and, with data from the supervisory system, calculated an average speed of 0.43 m/s. At the same time, the electronic board inside the PIG received data from the odometer and calculated an average speed of 0.45 m/s. We found an error of 4.44%, which is experimentally acceptable. The results showed that it is possible to successfully build a Testing Laboratory to detect the PIG's passage and estimate its speed. The validation of the Testing Laboratory using data from the odometer and its auxiliary electronics was very successful. Lastly, we hope to develop more research in the oil industry area using this Testing Laboratory.
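
    A minimal sketch (hypothetical sensor spacing, passage times and odometer constants; not the authors' acquisition code) of how an average PIG speed can be estimated from the passage times at two pressure transducers and compared against an odometer-derived value:

        # Hypothetical values for illustration only.
        sensor_spacing_m = 10.0                    # distance between two transducers
        t_pass_1_s, t_pass_2_s = 12.4, 35.6        # detected passage times (s)

        v_transducers = sensor_spacing_m / (t_pass_2_s - t_pass_1_s)

        wheel_circumference_m = 0.25               # odometer wheel circumference
        pulses, elapsed_s = 418, 232.0             # odometer pulse count and duration
        v_odometer = pulses * wheel_circumference_m / elapsed_s

        rel_error = abs(v_transducers - v_odometer) / v_odometer * 100
        print(f"transducers: {v_transducers:.2f} m/s, odometer: {v_odometer:.2f} m/s, "
              f"error: {rel_error:.2f}%")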

  1. MODIS Observation of Aerosols over Southern Africa During SAFARI 2000: Data, Validation, and Estimation of Aerosol Radiative Forcing

    Science.gov (United States)

    Ichoku, Charles; Kaufman, Yoram; Remer, Lorraine; Chu, D. Allen; Mattoo, Shana; Tanre, Didier; Levy, Robert; Li, Rong-Rong; Kleidman, Richard; Lau, William K. M. (Technical Monitor)

    2001-01-01

    Aerosol properties, including optical thickness and size parameters, are retrieved operationally from the MODIS sensor onboard the Terra satellite launched on 18 December 1999. The predominant aerosol type over the Southern African region is smoke, which is generated from biomass burning on land and transported over the southern Atlantic Ocean. The SAFARI-2000 period experienced smoke aerosol emissions from the regular biomass burning activities as well as from the prescribed burns administered under the auspices of the experiment. The MODIS Aerosol Science Team (MAST) formulates and implements strategies for the retrieval of aerosol products from MODIS, as well as for validating and analyzing them in order to estimate aerosol effects in the radiative forcing of climate as accurately as possible. These activities are carried out not only from a global perspective, but also with a focus on specific regions identified as having interesting characteristics, such as the biomass burning phenomenon in southern Africa and the associated smoke aerosol, particulate, and trace gas emissions. Indeed, the SAFARI-2000 aerosol measurements from the ground and from aircraft, along with MODIS, provide excellent data sources for a more intensive validation and a closer study of the aerosol characteristics over Southern Africa. The SAFARI-2000 ground-based measurements of aerosol optical thickness (AOT) from both the automatic Aerosol Robotic Network (AERONET) and handheld Sun photometers have been used to validate MODIS retrievals, based on a sophisticated spatio-temporal technique. The average global monthly distribution of aerosol from MODIS has been combined with other data to calculate the southern African aerosol daily averaged (24 hr) radiative forcing over the ocean for September 2000. It is estimated that, on average, for cloud-free conditions over an area of 9 million square km, this predominantly smoke aerosol exerts a forcing of -30 W/square m, close to the terrestrial

  2. Development and Validation of Spectrophotometric Methods for Simultaneous Estimation of Valsartan and Hydrochlorothiazide in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    Monika L. Jadhav

    2014-01-01

    Two UV-spectrophotometric methods have been developed and validated for simultaneous estimation of valsartan and hydrochlorothiazide in a tablet dosage form. The first method employed solving of simultaneous equations based on the measurement of absorbance at two wavelengths, 249.4 nm and 272.6 nm, the λmax of valsartan and hydrochlorothiazide, respectively. The second method was the absorbance ratio method, which involves formation of a Q-absorbance equation at 258.4 nm (isoabsorptive point) and also at 272.6 nm (λmax of hydrochlorothiazide). The methods were found to be linear in the range of 5–30 µg/mL for valsartan and 4–24 μg/mL for hydrochlorothiazide using 0.1 N NaOH as solvent. The mean percentage recovery was found to be 100.20% and 100.19% for the simultaneous equation method and 98.56% and 97.96% for the absorbance ratio method, for valsartan and hydrochlorothiazide, respectively, at three different levels of standard additions. The precision (intraday, interday) of the methods was found to be within limits (RSD<2%). It can be concluded from the results obtained in the present investigation that the two methods for simultaneous estimation of valsartan and hydrochlorothiazide in tablet dosage form are simple, rapid, accurate, precise and economical, and can be used successfully in the quality control of pharmaceutical formulations and other routine laboratory analysis.
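
    The simultaneous-equation (Vierordt) step amounts to solving a 2x2 linear system relating the mixture's absorbances at the two wavelengths to the two analyte concentrations through their absorptivities. A minimal sketch with hypothetical absorptivity and absorbance values (not the paper's measured constants):

        import numpy as np

        # Rows: wavelengths 249.4 nm and 272.6 nm; columns: valsartan, hydrochlorothiazide.
        # Absorptivities (absorbance per ug/mL per cm) -- hypothetical illustrative values.
        E = np.array([[0.045, 0.020],
                      [0.012, 0.058]])
        A = np.array([0.52, 0.61])          # measured absorbances of the mixture

        c_val, c_hctz = np.linalg.solve(E, A)
        print(f"valsartan: {c_val:.2f} ug/mL, hydrochlorothiazide: {c_hctz:.2f} ug/mL")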

  3. Development and Validation of a Calculator for Estimating the Probability of Urinary Tract Infection in Young Febrile Children.

    Science.gov (United States)

    Shaikh, Nader; Hoberman, Alejandro; Hum, Stephanie W; Alberty, Anastasia; Muniz, Gysella; Kurs-Lasky, Marcia; Landsittel, Douglas; Shope, Timothy

    2018-06-01

    Accurately estimating the probability of urinary tract infection (UTI) in febrile preverbal children is necessary to appropriately target testing and treatment. To develop and test a calculator (UTICalc) that can first estimate the probability of UTI based on clinical variables and then update that probability based on laboratory results. Review of electronic medical records of febrile children aged 2 to 23 months who were brought to the emergency department of Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania. An independent training database comprising 1686 patients brought to the emergency department between January 1, 2007, and April 30, 2013, and a validation database of 384 patients were created. Five multivariable logistic regression models for predicting risk of UTI were trained and tested. The clinical model included only clinical variables; the remaining models incorporated laboratory results. Data analysis was performed between June 18, 2013, and January 12, 2018. Documented temperature of 38°C or higher in children aged 2 months to less than 2 years. With the use of culture-confirmed UTI as the main outcome, cutoffs for high and low UTI risk were identified for each model. The resultant models were incorporated into a calculation tool, UTICalc, which was used to evaluate medical records. A total of 2070 children were included in the study. The training database comprised 1686 children, of whom 1216 (72.1%) were female and 1167 (69.2%) white. The validation database comprised 384 children, of whom 291 (75.8%) were female and 200 (52.1%) white. Compared with the American Academy of Pediatrics algorithm, the clinical model in UTICalc reduced testing by 8.1% (95% CI, 4.2%-12.0%) and decreased the number of UTIs that were missed from 3 cases to none. Compared with empirically treating all children with a leukocyte esterase test result of 1+ or higher, the dipstick model in UTICalc would have reduced the number of treatment delays by 10.6% (95% CI
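
    A minimal sketch (hypothetical coefficients and predictor names; UTICalc's actual models are not reproduced here) of the two-stage idea: a clinical logistic model gives a pre-test probability of UTI, and a second model incorporating dipstick results updates it:

        import math

        def logistic(intercept, coefs, x):
            """Predicted probability from a logistic regression model."""
            z = intercept + sum(c * x[name] for name, c in coefs.items())
            return 1.0 / (1.0 + math.exp(-z))

        # Hypothetical coefficients for illustration only.
        clinical = {"age_lt_12mo": 0.6, "female_or_uncircumcised": 1.1,
                    "temp_ge_39": 0.5, "no_other_source": 0.9}
        dipstick = {"leukocyte_esterase_pos": 2.3, "nitrite_pos": 1.8}

        child = {"age_lt_12mo": 1, "female_or_uncircumcised": 1, "temp_ge_39": 1,
                 "no_other_source": 1, "leukocyte_esterase_pos": 1, "nitrite_pos": 0}

        p_pre = logistic(-4.0, clinical, child)                    # clinical model
        p_post = logistic(-5.0, {**clinical, **dipstick}, child)   # clinical + laboratory model
        print(f"pre-test UTI probability:  {p_pre:.1%}")
        print(f"post-test UTI probability: {p_post:.1%}")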

  4. Estimation of nonpaternity in the Mexican population of Nuevo Leon: a validation study with blood group markers.

    Science.gov (United States)

    Cerda-Flores, R M; Barton, S A; Marty-Gonzalez, L F; Rivas, F; Chakraborty, R

    1999-07-01

    A method for estimating the general rate of nonpaternity in a population was validated using phenotype data on seven blood groups (A1A2BO, MNSs, Rh, Duffy, Lutheran, Kidd, and P) on 396 mother, child, and legal father trios from Nuevo León, Mexico. In all, 32 legal fathers were excluded as the possible father based on genetic exclusions at one or more loci (combined average exclusion probability of 0.694 for specific mother-child phenotype pairs). The maximum likelihood estimate of the general nonpaternity rate in the population was 0.118 +/- 0.020. The nonpaternity rates in Nuevo León were also seen to be inversely related with the socioeconomic status of the families, i.e., the highest in the low and the lowest in the high socioeconomic class. We further argue that with the moderately low (69.4%) power of exclusion for these seven blood group systems, the traditional critical values of paternity index (PI > or = 19) were not good indicators of true paternity, since a considerable fraction (307/364) of nonexcluded legal fathers had a paternity index below 19 based on the seven markers. Implications of these results in the context of genetic-epidemiological studies as well as for detection of true fathers for child-support adjudications are discussed, implying the need to employ a battery of genetic markers (possibly DNA-based tests) that yield a higher power of exclusion. We conclude that even though DNA markers are more informative, the probabilistic approach developed here would still be needed to estimate the true rate of nonpaternity in a population or to evaluate the precision of detecting true fathers.
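
    The idea behind the estimator can be illustrated with a back-of-the-envelope calculation: the expected number of detected exclusions equals the number of trios times the true nonpaternity rate times the average exclusion power, so the rate can be recovered by inverting that relation. A simplified moment-style sketch (not the authors' full maximum-likelihood machinery):

        # Numbers reported in the abstract.
        n_trios = 396
        n_excluded = 32
        p_exclusion = 0.694        # combined average probability of detecting a false father

        # Expected detected exclusions ~= n_trios * nonpaternity_rate * p_exclusion,
        # so invert that relation using the observed exclusion proportion.
        p_detected = n_excluded / n_trios
        rate = p_detected / p_exclusion

        # Approximate standard error, propagating the binomial error of p_detected.
        se = (p_detected * (1 - p_detected) / n_trios) ** 0.5 / p_exclusion
        print(f"estimated nonpaternity rate: {rate:.3f} +/- {se:.3f}")

    This simplified calculation gives roughly 0.116 +/- 0.020, close to the maximum likelihood estimate of 0.118 +/- 0.020 reported in the abstract.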

  5. Validating accelerometry estimates of energy expenditure across behaviours using heart rate data in a free-living seabird.

    Science.gov (United States)

    Hicks, Olivia; Burthe, Sarah; Daunt, Francis; Butler, Adam; Bishop, Charles; Green, Jonathan A

    2017-05-15

    Two main techniques have dominated the field of ecological energetics: the heart rate and doubly labelled water methods. Although well established, they are not without their weaknesses, namely expense, intrusiveness and lack of temporal resolution. A new technique has been developed using accelerometers; it uses the overall dynamic body acceleration (ODBA) of an animal as a calibrated proxy for energy expenditure. This method provides high-resolution data without the need for surgery. Significant relationships exist between the rate of oxygen consumption (V̇O2) and ODBA in controlled conditions across a number of taxa; however, it is not known whether ODBA represents a robust proxy for energy expenditure consistently in all natural behaviours, and there have been specific questions over its validity during diving, in diving endotherms. Here, we simultaneously deployed accelerometers and heart rate loggers in a wild population of European shags (Phalacrocorax aristotelis). Existing calibration relationships were then used to make behaviour-specific estimates of energy expenditure for each of these two techniques. Compared with heart rate-derived estimates, the ODBA method predicts energy expenditure well during flight and diving behaviour, but overestimates the cost of resting behaviour. We then combined these two datasets to generate a new calibration relationship between ODBA and V̇O2 that accounts for this by being informed by heart rate-derived estimates. Across behaviours we found a good relationship between ODBA and V̇O2. Within individual behaviours, we found useable relationships between ODBA and V̇O2 for flight and resting, and a poor relationship during diving. The error associated with these new calibration relationships mostly originates from the previous heart rate calibration rather than the error associated with the ODBA method. The equations provide tools for understanding how energy constrains ecology across the complex behaviour

  6. A Strain-Based Method to Detect Tires’ Loss of Grip and Estimate Lateral Friction Coefficient from Experimental Data by Fuzzy Logic for Intelligent Tire Development

    Directory of Open Access Journals (Sweden)

    Jorge Yunta

    2018-02-01

    Tires are a key vehicle sub-system with a major influence on comfort, fuel consumption and traffic safety. However, current tires are merely passive rubber elements that do not contribute actively to improving the driving experience or vehicle safety. The lack of information from the tire during driving motivates the development of an intelligent tire. The aim of the intelligent tire is therefore to monitor tire working conditions in real time, providing useful information to other systems and becoming an active system. In this paper, tire tread deformation is measured to provide a strong experimental base, with different experiments and test results obtained by means of a tire fitted with sensors. Tests under different working conditions, such as vertical load or slip angle, have been carried out with an indoor tire test rig. The experimental data analysis shows the strong relation that exists between lateral force and the maximum tensile and compressive strain peaks when the tire is not working at the limit of grip. In the last section, an estimation system built from experimental data has been developed and implemented in Simulink to show the potential of strain sensors for developing intelligent tire systems, with the major results being a signal to detect the tire's loss of grip and estimates of the lateral friction coefficient.

  7. A Strain-Based Method to Detect Tires' Loss of Grip and Estimate Lateral Friction Coefficient from Experimental Data by Fuzzy Logic for Intelligent Tire Development.

    Science.gov (United States)

    Yunta, Jorge; Garcia-Pozuelo, Daniel; Diaz, Vicente; Olatunbosun, Oluremi

    2018-02-06

    Tires are a key vehicle sub-system with a major influence on comfort, fuel consumption and traffic safety. However, current tires are merely passive rubber elements that do not contribute actively to improving the driving experience or vehicle safety. The lack of information from the tire during driving motivates the development of an intelligent tire. The aim of the intelligent tire is therefore to monitor tire working conditions in real time, providing useful information to other systems and becoming an active system. In this paper, tire tread deformation is measured to provide a strong experimental base, with different experiments and test results obtained by means of a tire fitted with sensors. Tests under different working conditions, such as vertical load or slip angle, have been carried out with an indoor tire test rig. The experimental data analysis shows the strong relation that exists between lateral force and the maximum tensile and compressive strain peaks when the tire is not working at the limit of grip. In the last section, an estimation system built from experimental data has been developed and implemented in Simulink to show the potential of strain sensors for developing intelligent tire systems, with the major results being a signal to detect the tire's loss of grip and estimates of the lateral friction coefficient.
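
    A minimal sketch (plain Python with hypothetical calibration constants and thresholds; the paper's fuzzy-logic system and Simulink implementation are not reproduced) of the underlying idea: map strain-peak amplitude to lateral force, normalize by vertical load to estimate the used lateral friction coefficient, and flag loss of grip when that estimate stops growing with increasing slip angle:

        # Hypothetical linear calibration from strain-peak amplitude (microstrain) to lateral force (N).
        K_STRAIN_TO_FORCE = 3.2

        def lateral_friction(strain_peak_ue, vertical_load_n):
            lateral_force_n = K_STRAIN_TO_FORCE * strain_peak_ue
            return lateral_force_n / vertical_load_n

        def grip_lost(mu_vs_slip, saturation_band=0.02):
            """Flag loss of grip when mu no longer increases over the last few slip-angle steps."""
            tail = mu_vs_slip[-3:]
            return len(tail) == 3 and (max(tail) - min(tail)) < saturation_band

        # Sweep of slip angles with hypothetical strain peaks at a 4000 N vertical load.
        strain_peaks = [260, 480, 660, 780, 795, 800]
        mu_track = [lateral_friction(s, 4000.0) for s in strain_peaks]
        print("mu estimates:", [round(m, 2) for m in mu_track], "grip lost:", grip_lost(mu_track))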

  8. A new method to estimate left ventricular circumferential midwall systolic function by standard echocardiography: Concordance between models and validation by speckle tracking.

    Science.gov (United States)

    Ballo, Piercarlo; Nistri, Stefano; Bocelli, Arianna; Mele, Donato; Dini, Frank L; Galderisi, Maurizio; Zuppiroli, Alfredo; Mondillo, Sergio

    2016-01-15

    Assessment of left ventricular circumferential (LVcirc) systolic function by standard echocardiography can be performed by estimating midwall fractional shortening (mFS) and stress-corrected mFS (ScmFS). Their determination is based on spherical or cylindrical LV geometric models, which often yield discrepant values. We developed a new model based on a more realistic truncated ellipsoid (TE) LV shape, and explored the concordance between models among hypertensive patients. We also compared the relationships of different mFS and ScmFS estimates with indexes of LVcirc systolic strain. In 364 hypertensive subjects, mFS was determined using the spherical (mFSspher), cylindrical (mFScyl), and TE model (mFSTE). Corresponding values of ScmFSspher, ScmFScyl, and ScmFSTE were obtained. Global circumferential strain (GCS) and systolic strain rate (GCSR) were also measured by speckle tracking. The three models showed poor concordance for the estimation of mFS, with average differences ranging between 11% and 30% and wide limits of agreement. Similar results were found for ScmFS, where reclassification rates for the identification of abnormal LVcirc systolic function ranged between 18% and 29%. When tested against strain indexes, mFSTE and ScmFSTE showed the best correlations (R=0.81 and R=0.51, p<0.0001 for both) with GCS and GCSR. Multivariable analysis confirmed that mFSTE and ScmFSTE showed the strongest independent associations with LVcirc strain measures. Substantial discrepancies in LVcirc midwall systolic indexes exist between different models, supporting the need for model-specific normative data. The use of the TE model might provide indexes that show the best associations with established strain measures of LVcirc systolic function. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Direct restoration modalities of fractured central maxillary incisors: A multi-levels validated finite elements analysis with in vivo strain measurements.

    Science.gov (United States)

    Davide, Apicella; Raffaella, Aversa; Marco, Tatullo; Michele, Simeone; Syed, Jamaluddin; Massimo, Marrelli; Marco, Ferrari; Antonio, Apicella

    2015-12-01

    To quantify the influence of fracture geometry and restorative material rigidity on the stress intensity and distribution of restored fractured central maxillary incisors (CMI), with particular investigation of the adhesive interfaces. Ancillary objectives are to present an innovative technology to measure the in vivo strain state of sound maxillary incisors and to present the collected data. An experimental biomechanics validation approach has been combined with finite element analysis. FEA models consisted of CMI, periodontal ligament and the corresponding alveolar bone process. Three models were created representing different orientations of the fracture planes. Three different angulations of the fracture plane in the buccal-palatal direction were modeled: the fracture plane perpendicular to the long axis in the buccal-palatal direction (0°); the fracture plane inclined bucco-palatally in the apical-coronal direction (-30°); the fracture plane inclined palatal-buccally in the apical-coronal direction (+30°). A first set of computing runs was performed for in vivo FE-model validation purposes. In the second part, a 50N force was applied on the buccal aspect of the CMI models. Ten patients were selected and subjected to strain measurement of CMI under controlled loading conditions. The main differences were noticed in the middle and incisal thirds of the incisor crowns, due to the presence of the incisal portion restoration. The stress intensity in -30° models is increased in the enamel structure close to the restoration, due to a thinning of the remaining natural tissues. The rigidity of the restoring material slightly reduces this phenomenon. The -30° model exhibits the highest interfacial stress in the adhesive layer with respect to the +30° and 0° models. The lowest stress intensity was noticed in the 0° models, and restoration material rigidity did not influence the interfacial stress state in 0° models. On the contrary, material rigidity influenced the interfacial stress state

  10. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test.

    Science.gov (United States)

    Stuiver, Martijn M; Kampshoff, Caroline S; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J M; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M

    2017-11-01

    To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (Vo2peak) and peak power output (Wpeak). Cross-sectional study. Multicenter. Cancer survivors (N=283) in 2 randomized controlled exercise trials. Not applicable. Prediction model accuracy was assessed by intraclass correlation coefficients (ICCs) and limits of agreement (LOA). Multiple linear regression was used for model extension. Clinical performance was judged by the percentage of accurate endurance exercise prescriptions. ICCs of SRT-predicted Vo2peak and Wpeak with these values as obtained by the cardiopulmonary exercise test were .61 and .73, respectively, using the previously published prediction models. 95% LOA were ±705mL/min with a bias of 190mL/min for Vo2peak and ±59W with a bias of 5W for Wpeak. Modest improvements were obtained by adding body weight and sex to the regression equation for the prediction of Vo2peak (ICC, .73; 95% LOA, ±608mL/min) and by adding age, height, and sex for the prediction of Wpeak (ICC, .81; 95% LOA, ±48W). Accuracy of endurance exercise prescription improved from 57% accurate prescriptions to 68% accurate prescriptions with the new prediction model for Wpeak. Predictions of Vo2peak and Wpeak based on the SRT are adequate at the group level, but insufficiently accurate in individual patients. The multivariable prediction model for Wpeak can be used cautiously (eg, supplemented with a Borg score) to aid endurance exercise prescription. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
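
    A minimal sketch (synthetic paired measurements, not the study data) of how the bias and 95% limits of agreement between SRT-predicted and directly measured Vo2peak are computed:

        import numpy as np

        # Synthetic paired values (mL/min) for illustration.
        vo2_cpet = np.array([2210, 1850, 2600, 1980, 2320, 1750, 2900, 2480])
        vo2_srt  = np.array([2350, 1700, 2750, 2150, 2200, 1900, 2700, 2650])

        diff = vo2_srt - vo2_cpet
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)            # half-width of the 95% limits of agreement
        print(f"bias: {bias:.0f} mL/min, 95% LOA: +/-{loa:.0f} mL/min")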

  11. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

    Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 (137Cs) and excess lead-210 (210Pbex), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by 137Cs and 210Pbex measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by 137Cs and 210Pbex measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land could be undertaken. The results demonstrate a close consistency between the measured rates of soil loss and

  12. Reliability and validity of the Turkish version of the Rapid Estimate of Adult Literacy in Dentistry (TREALD-30).

    Science.gov (United States)

    Peker, Kadriye; Köse, Taha Emre; Güray, Beliz; Uysal, Ömer; Erdem, Tamer Lütfi

    2017-04-01

    To culturally adapt the Turkish version of the Rapid Estimate of Adult Literacy in Dentistry (TREALD-30) for Turkish-speaking adult dental patients and to evaluate its psychometric properties. After translation and cross-cultural adaptation, TREALD-30 was tested in a sample of 127 adult patients who attended a dental school clinic in Istanbul. Data were collected through clinical examinations and self-completed questionnaires, including TREALD-30, the Oral Health Impact Profile (OHIP), the Rapid Estimate of Adult Literacy in Medicine (REALM), two health literacy screening questions, and socio-behavioral characteristics. Psychometric properties were examined using Classical Test Theory (CTT) and Rasch analysis. Internal consistency (Cronbach's Alpha = 0.91) and test-retest reliability (Intraclass correlation coefficient = 0.99) were satisfactory for TREALD-30. It exhibited good convergent and predictive validity. Monthly family income, years of education, dental flossing, health literacy, and health literacy skills were found to be stronger predictors of patients' oral health literacy (OHL). Confirmatory factor analysis (CFA) confirmed a two-factor model. The Rasch model explained 37.9% of the total variance in this dataset. In addition, TREALD-30 had eleven misfitting items, which indicated evidence of multidimensionality. The reliability indices provided by the Rasch analysis (person separation reliability = 0.91 and expected-a-posteriori/plausible reliability = 0.94) indicated that TREALD-30 had acceptable reliability. TREALD-30 showed satisfactory psychometric properties. It may be used to identify patients with low OHL. Socio-demographic factors, oral health behaviors and health literacy skills should be taken into account when planning future studies to assess OHL in both clinical and community settings.
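
    A minimal sketch (synthetic item responses, not the study data) of how the internal-consistency figure reported above, Cronbach's alpha, is computed from a respondents-by-items score matrix:

        import numpy as np

        def cronbach_alpha(scores):
            """scores: 2-D array, rows = respondents, columns = items (0/1 or Likert)."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Synthetic correlated binary items driven by a shared latent ability.
        rng = np.random.default_rng(0)
        ability = rng.normal(size=(120, 1))
        items = (ability + rng.normal(scale=0.8, size=(120, 10)) > 0).astype(int)
        print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")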

  13. Phytochemical investigation and simultaneous estimation of bioactive lupeol and stigmasterol in Abutilon indicum by validated HPTLC method

    Directory of Open Access Journals (Sweden)

    Md. Sarfaraj Hussain

    2014-05-01

    Objective: To perform a simultaneous quantitative estimation of two biologically active compounds, the triterpenoid lupeol and the steroid stigmasterol, in Abutilon indicum (A. indicum) using high-performance thin-layer chromatography (HPTLC). Methods: TLC aluminum plates precoated with silica-gel 60 F254 (20 cm×10 cm) were used with a mobile phase of toluene-methanol-formic acid (7.0:2.7:0.3, v/v/v), and densitometric determination of these compounds was carried out at 530 nm in reflectance/absorbance mode. Results: Compact bands for lupeol and stigmasterol were obtained at Rf 0.52±0.02 and 0.28±0.05. The limit of detection (45 and 18 ng/band), limit of quantification (135 and 54 ng/band), recovery (98.2%-99.7% and 97.2%-99.6%) and precision (≤2.18 and 1.91) were satisfactory for lupeol and stigmasterol, respectively. Linearity ranges for lupeol and stigmasterol were 100-1000 ng/band (r2=0.9994) and 50-500 ng/band (r2=0.9941), and the contents were estimated as (0.59±0.10)% and (0.83±0.10)% w/w, respectively. The total phenolic, flavonoid, proanthocyanidin, alkaloidal and saponin contents of the methanolic extract of A. indicum were also measured in this work. According to International Conference on Harmonization (ICH) guidelines, the method was validated for linearity, precision, accuracy, recovery, limit of detection, limit of quantification, specificity, and robustness. Conclusions: The HPTLC method was found to be reproducible, accurate, and precise and could detect these two compounds at the nanogram level in A. indicum.

  14. Development and validation of polar RP-HPLC method for screening for ectoine high-yield strains in marine bacteria with green chemistry.

    Science.gov (United States)

    Chen, Jun; Chen, Jianwei; Wang, Sijia; Zhou, Guangmin; Chen, Danqing; Zhang, Huawei; Wang, Hong

    2018-04-02

    A novel, green, rapid, and precise polar RP-HPLC method has been successfully developed for screening ectoine high-yield strains in marine bacteria. Ectoine is a polar and extremely useful solute which allows microorganisms to survive in extreme environmental salinity. This paper describes a polar HPLC method employing a polar RP-C18 column (5 μm, 250 × 4.6 mm) with pure water as the mobile phase, a column temperature of 30 °C, a flow rate of 1.0 mL/min, and UV detection at a wavelength of 210 nm. Our method validation demonstrates excellent linearity (R2 = 0.9993), accuracy (100.55%), and LOQ and LOD of 0.372 and 0.123 μg mL-1, respectively. These results clearly indicate that the developed polar RP-HPLC method for the separation and determination of ectoine is superior to earlier protocols.

  15. Numerical experiment to estimate the validity of negative ion diagnostic using photo-detachment combined with Langmuir probing

    Energy Technology Data Exchange (ETDEWEB)

    Oudini, N. [Laboratoire des plasmas de décharges, Centre de Développement des Technologies Avancées, Cité du 20 Aout BP 17 Baba Hassen, 16081 Algiers (Algeria); Sirse, N.; Ellingboe, A. R. [Plasma Research Laboratory, School of Physical Sciences and NCPST, Dublin City University, Dublin 9 (Ireland); Benallal, R. [Unité de Recherche Matériaux et Energies Renouvelables, BP 119, Université Abou Bekr Belkaïd, Tlemcen 13000 (Algeria); Taccogna, F. [Istituto di Metodologie Inorganiche e di Plasmi, CNR, via Amendola 122/D, 70126 Bari (Italy); Aanesland, A. [Laboratoire de Physique des Plasmas, (CNRS, Ecole Polytechnique, Sorbonne Universités, UPMC Univ Paris 06, Univ Paris-Sud), École Polytechnique, 91128 Palaiseau Cedex (France); Bendib, A. [Laboratoire d' Electronique Quantique, Faculté de Physique, USTHB, El Alia BP 32, Bab Ezzouar, 16111 Algiers (Algeria)

    2015-07-15

    This paper presents a critical assessment of the theory of the photo-detachment diagnostic method used to probe the negative ion density and electronegativity α = n-/ne. In this method, a laser pulse is used to photo-detach all negative ions located within the electropositive channel (laser spot region). The negative ion density is estimated based on the assumption that the increase of the current collected by an electrostatic probe biased positively with respect to the plasma results only from the creation of photo-detached electrons. In parallel, the background electron density and temperature are assumed to remain constant during the diagnostic. The numerical experiments performed here show, however, that the background electron density and temperature increase due to the formation of an electrostatic potential barrier around the electropositive channel. The time scale of the potential barrier rise is about 2 ns, which is comparable to the time required to completely photo-detach the negative ions in the electropositive channel (∼3 ns). We find that neglecting the effect of the potential barrier on the background plasma leads to an erroneous determination of the negative ion density. Moreover, the background electron velocity distribution function within the electropositive channel is not Maxwellian. This is due to the acceleration of these electrons through the electrostatic potential barrier. In this work, the validity of the photo-detachment diagnostic assumptions is questioned, and our results illustrate the weakness of these assumptions.

  16. Validity of the tritiated thymidine method for estimating bacterial growth rates: measurement of isotope dilution during DNA synthesis

    International Nuclear Information System (INIS)

    Pollard, P.C.; Moriarty, D.J.W.

    1984-01-01

    The rate of tritiated thymidine incorporation into DNA was used to estimate bacterial growth rates in aquatic environments. To be accurate, the calculation of growth rates has to include a factor for the dilution of isotope before incorporation. The validity of an isotope dilution analysis to determine this factor was verified in experiments reported here with cultures of a marine bacterium growing in a chemostat. Growth rates calculated from data on chemostat dilution rates and cell density agreed well with rates calculated by tritiated thymidine incorporation into DNA and isotope dilution analysis. With sufficiently high concentrations of exogenous thymidine, de novo synthesis of deoxythymidine monophosphate was inhibited, thereby preventing the endogenous dilution of isotope. The thymidine technique was also shown to be useful for measuring growth rates of mixed suspensions of bacteria growing anaerobically. Thymidine was incorporated into the DNA of a range of marine pseudomonads that were investigated. Three species did not take up thymidine. The common marine cyanobacterium Synechococcus species did not incorporate thymidine into DNA

  17. Field-level validation of a CLIMEX model for Cactoblastis cactorum (Lepidoptera: Pyralidae) using estimated larval growth rates.

    Science.gov (United States)

    Legaspi, Benjamin C; Legaspi, Jesusa Crisostomo

    2010-04-01

    Invasive pests, such as the cactus moth, Cactoblastis cactorum (Berg) (Lepidoptera: Pyralidae), have not reached equilibrium distributions and present unique opportunities to validate models by comparing predicted distributions with eventual realized geographic ranges. A CLIMEX model was developed for C. cactorum. Model validation was attempted at the global scale by comparing worldwide distribution against known occurrence records and at the field scale by comparing CLIMEX "growth indices" against field measurements of larval growth. Globally, CLIMEX predicted limited potential distribution in North America (from the Caribbean Islands to Florida, Texas, and Mexico), Africa (South Africa and parts of the eastern coast), southern India, parts of Southeast Asia, and the northeastern coast of Australia. Actual records indicate the moth has been found in the Caribbean (Antigua, Barbuda, Montserrat, Saint Kitts and Nevis, Cayman Islands, and U.S. Virgin Islands), Cuba, Bahamas, Puerto Rico, southern Africa, Kenya, Mexico, and Australia. However, the model did not predict that distribution would extend from India to the west into Pakistan. In the United States, comparison of the predicted and actual distribution patterns suggests that the moth may be close to its predicted northern range along the Atlantic coast. Parts of Texas and most of Mexico may be vulnerable to geographic range expansion of C. cactorum. Larval growth rates in the field were estimated by measuring differences in head capsules and body lengths of larval cohorts at weekly intervals. Growth indices plotted against measures of larval growth rates compared poorly when CLIMEX was run using the default historical weather data. CLIMEX predicted a single period conducive to insect development, in contrast to the three generations observed in the field. Only time and more complete records will tell whether C. cactorum will extend its geographical distribution to regions predicted by the CLIMEX model. In terms

  18. Estimation of residual stress in cold rolled iron-disks from strain measurements on the high resolution Fourier diffractometer

    International Nuclear Information System (INIS)

    Aksenov, V.L.; Balagurov, A.M.; Taran, Yu.V.

    1995-01-01

    The results of estimating residual stresses in cold rolled iron disks by measurements with the high resolution Fourier diffractometer (HRFD) at the IBR-2 pulsed reactor are presented. These measurements were made for calibration of magnetic and ultrasonic measurements carried out at the Fraunhofer-Institute for Nondestructive Testing in Saarbrucken (Germany). The tested objects were cold rolled steel disks of 2.5 mm thickness and diameter of about 500 mm used for forming small gas pressure tanks. Neutron diffraction experiments were carried out at the scattering angle 2θ=+152 deg with resolution Δd/d=1.5·10-3. The gauge volume was chosen according to the magnetic measurements lateral resolution of 20x20 mm2. In the near future, neutron diffraction measurements with cold rolled iron disks at the scattering angle 2θ=±90° are planned. Also the texture analysis will be included in the Rietveld refinement procedure for more correct calculation of residual stress fields in the cold rolled materials. 8 refs., 10 figs., 1 tab
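
    A minimal sketch (hypothetical lattice spacings and elastic constants; not the HRFD data) of the basic relation behind diffraction-based residual strain and stress estimation: the lattice strain is the relative shift of the measured interplanar spacing from its stress-free value, and stress follows from Hooke's law:

        # Hypothetical values for illustration only.
        d_measured = 1.16998e-10      # measured interplanar spacing (m)
        d_stress_free = 1.17050e-10   # stress-free reference spacing (m)
        E = 210e9                     # Young's modulus of steel (Pa)

        strain = (d_measured - d_stress_free) / d_stress_free
        # Uniaxial approximation; a full analysis measures the strain tensor in several
        # directions and uses the generalized Hooke's law with Poisson's ratio.
        stress = E * strain
        print(f"lattice strain: {strain:.2e}, approx. residual stress: {stress / 1e6:.0f} MPa")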

  19. Validity of a food frequency questionnaire to estimate long-chain polyunsaturated fatty acid intake among Japanese women in early and late pregnancy.

    Science.gov (United States)

    Kobayashi, Minatsu; Jwa, Seung Chik; Ogawa, Kohei; Morisaki, Naho; Fujiwara, Takeo

    2017-01-01

    The relative validity of food frequency questionnaires for estimating long-chain polyunsaturated fatty acid (LC-PUFA) intake among pregnant Japanese women is currently unclear. The aim of this study was to verify the external validity of a food frequency questionnaire, originally developed for non-pregnant adults, to assess the dietary intake of LC-PUFA using dietary records and serum phospholipid levels among Japanese women in early and late pregnancy. A validation study involving 188 participants in early pregnancy and 169 participants in late pregnancy was conducted. LC-PUFA intake was estimated using a food frequency questionnaire and evaluated against a 3-day dietary record and serum phospholipid concentrations in both early and late pregnancy. The food frequency questionnaire provided estimates of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) intake with higher precision than dietary records in both early and late pregnancy. Significant correlations were observed for LC-PUFA intake estimated using dietary records in both early and late pregnancy, particularly for EPA and DHA (correlation coefficients ranged from 0.34 to 0.40). The food frequency questionnaire, which was originally designed for non-pregnant adults and was evaluated in this study against dietary records and biological markers, has good validity for assessing LC-PUFA intake, especially EPA and DHA intake, among Japanese women in early and late pregnancy. Copyright © 2016 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  20. External validation of a forest inventory and analysis volume equation and comparisons with estimates from multiple stem-profile models

    Science.gov (United States)

    Christopher M. Oswalt; Adam M. Saunders

    2009-01-01

    Sound estimation procedures are a desideratum for generating credible population estimates to evaluate the status and trends in resource conditions. As such, volume estimation is an integral component of the U.S. Department of Agriculture, Forest Service, Forest Inventory and Analysis (FIA) program's reporting. In effect, reliable volume estimation procedures are...

  1. Using Clinical Factors and Mammographic Breast Density to Estimate Breast Cancer Risk: Development and Validation of a New Predictive Model

    Science.gov (United States)

    Tice, Jeffrey A.; Cummings, Steven R.; Smith-Bindman, Rebecca; Ichikawa, Laura; Barlow, William E.; Kerlikowske, Karla

    2009-01-01

    Background: Current models for assessing breast cancer risk are complex and do not include breast density, a strong risk factor for breast cancer that is routinely reported with mammography. Objective: To develop and validate an easy-to-use breast cancer risk prediction model that includes breast density. Design: Empirical model based on Surveillance, Epidemiology, and End Results incidence, and relative hazards from a prospective cohort. Setting: Screening mammography sites participating in the Breast Cancer Surveillance Consortium. Patients: 1 095 484 women undergoing mammography who had no previous diagnosis of breast cancer. Measurements: Self-reported age, race or ethnicity, family history of breast cancer, and history of breast biopsy. Community radiologists rated breast density by using 4 Breast Imaging Reporting and Data System categories. Results: During 5.3 years of follow-up, invasive breast cancer was diagnosed in 14 766 women. The breast density model was well calibrated overall (expected–observed ratio, 1.03 [95% CI, 0.99 to 1.06]) and in racial and ethnic subgroups. It had modest discriminatory accuracy (concordance index, 0.66 [CI, 0.65 to 0.67]). Women with low-density mammograms had 5-year risks less than 1.67% unless they had a family history of breast cancer and were older than age 65 years. Limitation: The model has only modest ability to discriminate between women who will develop breast cancer and those who will not. Conclusion: A breast cancer prediction model that incorporates routinely reported measures of breast density can estimate 5-year risk for invasive breast cancer. Its accuracy needs to be further evaluated in independent populations before it can be recommended for clinical use. PMID:18316752
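
    A minimal sketch (synthetic risk scores and outcomes; not the Breast Cancer Surveillance Consortium data) of the two headline validation statistics above: the expected/observed ratio and the concordance index, i.e., the probability that a randomly chosen case received a higher predicted risk than a randomly chosen non-case:

        import numpy as np

        rng = np.random.default_rng(1)
        risk = rng.uniform(0.001, 0.06, size=5000)           # predicted 5-year risks
        outcome = rng.random(5000) < risk * 1.05             # synthetic observed events

        expected, observed = risk.sum(), outcome.sum()
        print(f"expected/observed ratio: {expected / observed:.2f}")

        def concordance_index(risk, outcome):
            cases, controls = risk[outcome], risk[~outcome]
            # Pairwise comparison of every case with every control.
            wins = (cases[:, None] > controls[None, :]).sum()
            ties = (cases[:, None] == controls[None, :]).sum()
            return (wins + 0.5 * ties) / (cases.size * controls.size)

        print(f"concordance index: {concordance_index(risk, outcome):.2f}")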

  2. Systematized water content calculation in cartilage using T1-mapping MR estimations: design and validation of a mathematical model.

    Science.gov (United States)

    Shiguetomi-Medina, J M; Ramirez-Gl, J L; Stødkilde-Jørgensen, H; Møller-Madsen, B

    2017-09-01

    Up to 80 % of cartilage is water; the rest is collagen fibers and proteoglycans. Magnetic resonance (MR) T1-weighted measurements can be employed to calculate the water content of a tissue using T1 mapping. In this study, a method that translates T1 values into water content data was tested statistically. To develop a predictive equation, T1 values were obtained for tissue-mimicking gelatin samples. 1.5 T MRI was performed using inverse angle phase and an inverse sequence at 37 (±0.5) °C. Regions of interest were manually delineated and the mean T1 value was estimated in arbitrary units. Data were collected and modeled using linear regression. To validate the method, articular cartilage from six healthy pigs was used. The experiment was conducted in accordance with the Danish Animal Experiment Committee. Double measurements were performed for each animal. Ex vivo, all water in the tissue was extracted by lyophilization, thus allowing the volume of water to be measured. This was then compared with the predicted water content via Lin's concordance correlation coefficient at the 95 % confidence level. The mathematical model was highly significant when compared to a null model (p < 0.0001). 97.3 % of the variation in water content can be explained by absolute T1 values. Percentage water content could be predicted as 0.476 + (T1 value) × 0.000193 × 100 %. We found that there was 98 % concordance between the actual and predicted water contents. The results of this study demonstrate that MR data can be used to predict percentage water contents of cartilage samples. 3 (case-control study).
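
    A minimal sketch applying the reported prediction equation (percentage water content = (0.476 + 0.000193 × T1) × 100%) and checking agreement against reference measurements with Lin's concordance correlation coefficient; the T1 and lyophilization values below are hypothetical, not the study data:

        import numpy as np

        def water_content_pct(t1):
            """Percentage water content predicted from a T1 value (equation from the abstract)."""
            return (0.476 + 0.000193 * np.asarray(t1)) * 100.0

        def lin_ccc(x, y):
            """Lin's concordance correlation coefficient between two measurement series."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

        t1_values = np.array([1450, 1510, 1580, 1620, 1700])     # hypothetical T1 values (arbitrary units)
        predicted = water_content_pct(t1_values)
        measured = np.array([75.9, 77.2, 78.4, 79.1, 80.8])      # hypothetical lyophilization results (%)
        print("predicted %:", np.round(predicted, 1))
        print("Lin's CCC:", round(lin_ccc(predicted, measured), 3))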

  3. Rapid validated HPTLC method for estimation of piperine and piperlongumine in root of Piper longum extract and its commercial formulation

    Directory of Open Access Journals (Sweden)

    Anagha A. Rajopadhye

    2012-12-01

    Piperine and piperlongumine, alkaloids having diverse biological activities, commonly occur in roots of Piper longum L., Piperaceae, which have high commercial, economical and medicinal value. In the present study, a rapid, validated HPTLC method has been established for the determination of piperine and piperlongumine in a methanolic root extract and its commercial formulation 'Mahasudarshan churna®' following ICH guidelines. The use of Accelerated Solvent Extraction (ASE) as an alternative to conventional techniques has been explored. The methanol extracts of the root, its formulation and both standard solutions were applied on silica gel F254 HPTLC plates. The plates were developed in a twin chamber using a mobile phase of toluene:ethyl acetate (6:4, v/v) and scanned at 342 and 325 nm (λmax of piperine and piperlongumine, respectively) using a Camag TLC scanner 3 with CATS 4 software. A linear relationship was obtained between response (peak area) and amount of piperine and piperlongumine in the range of 20-100 and 30-150 ng/spot, respectively; the correlation coefficients were 0.9957 and 0.9941, respectively. Sharp, symmetrical and well resolved peaks of the piperine and piperlongumine spots resolved at Rf 0.51 and 0.74, respectively, from other components of the sample extracts. The HPTLC method showed good linearity, recovery and high precision for both markers. Extraction of the plant using ASE and the rapid HPTLC method provide a new and powerful approach to estimate piperine and piperlongumine as phytomarkers in the extract as well as its commercial formulations for routine quality control.

  4. Transferability of Skills: Convergent, Postdictive, Criterion-Related, and Construct Validation of Cross-Job Retraining Time Estimates

    National Research Council Canada - National Science Library

    Kavanagh, Michael

    1997-01-01

    ... (job learning difficulty and cross-AFS differences in aptitude requirements), (b) XJRThs exhibited some postdictive validity when evaluated against Airman Retraining Program Survey retraining ease criteria, (c...

  5. Variables influencing wearable sensor outcome estimates in individuals with stroke and incomplete spinal cord injury: a pilot investigation validating two research grade sensors.

    Science.gov (United States)

    Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun

    2018-03-13

    Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms based on data from healthy individuals to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairment such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics was also studied. 28 participants (Healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from gold standard measures. For verifying validity, a series of Kruskal Wallis ANOVA tests (Games-Howell multiple comparison for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of outcome metrics estimated by each of the devices in comparison with the designated gold standard measurements. The sensor type, sensor location

  6. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy.

    Science.gov (United States)

    Porto, Paolo; Walling, Des E

    2012-04-01

    Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by (137)Cs and (210)Pb(ex) measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by (137)Cs and (210)Pb(ex) measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of (137)Cs and (210)Pb(ex) measurements to estimate rates of soil loss from cultivated land could be undertaken. The results demonstrate a close consistency between the measured rates of soil

  7. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pbex measurements

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

    Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides, and more particularly caesium-137 (137Cs) and excess lead-210 (210Pbex), has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from 137Cs and 210Pbex measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape, and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed.

  8. COMBINING LIDAR ESTIMATES OF BIOMASS AND LANDSAT ESTIMATES OF STAND AGE FOR SPATIALLY EXTENSIVE VALIDATION OF MODELED FOREST PRODUCTIVITY. (R828309)

    Science.gov (United States)

    Extensive estimates of forest productivity are required to understand the relationships between shifting land use, changing climate and carbon storage and fluxes. Aboveground net primary production of wood (NPPAw) is a major component of total NPP and...

  9. Method for the validation and uncertainty estimation of tocopherol analysis applied to soybean oil with addition of spices and TBHQ

    Directory of Open Access Journals (Sweden)

    da Silva, M. G.

    2013-09-01

    The tocopherol contents of refined soybean oil with the addition of rosemary, oregano, garlic, annatto seeds and TBHQ were evaluated during storage at 25 °C and 35 °C for twelve months, in comparison with a control soybean oil without antioxidant addition. The method proposed to assess the tocopherol content was validated and its measurement uncertainty estimated. The method presented adequate linearity and precision, accuracy between 93% and 103%, and an expanded uncertainty of 2%. The contents of α-, γ- and δ-tocopherols of all the tested soybean oils remained constant during storage at 25 °C and 35 °C regardless of antioxidant addition, while the β-tocopherol content decreased. The addition of a mixture of rosemary, oregano, garlic and annatto seeds increased the concentration of γ- and δ-tocopherol. The oil with spices presented behavior similar to that of the oil with the addition of TBHQ.

  10. The Cross-Calibration of Spectral Radiances and Cross-Validation of CO2 Estimates from GOSAT and OCO-2

    Directory of Open Access Journals (Sweden)

    Fumie Kataoka

    2017-11-01

    The Greenhouse gases Observing SATellite (GOSAT), launched in January 2009, has provided radiance spectra with a Fourier Transform Spectrometer for more than eight years. The Orbiting Carbon Observatory 2 (OCO-2), launched in July 2014, collects radiance spectra using an imaging grating spectrometer. Both sensors observe sunlight reflected from Earth's surface and retrieve atmospheric carbon dioxide (CO2) concentrations, but use different spectrometer technologies, observing geometries, and ground track repeat cycles. To demonstrate the effectiveness of satellite remote sensing for CO2 monitoring, the GOSAT and OCO-2 teams have worked together pre- and post-launch to cross-calibrate the instruments and cross-validate their retrieval algorithms and products. In this work, we first compare observed radiance spectra within three narrow bands centered at 0.76, 1.60 and 2.06 µm, at temporally coincident and spatially collocated points from September 2014 to March 2017. We reconciled the differences in observation footprint size, viewing geometry and associated differences in surface bidirectional reflectance distribution function (BRDF). We conclude that the spectral radiances measured by the two instruments agree within 5% for all bands. Second, we estimated the mean bias and standard deviation of column-averaged CO2 dry air mole fraction (XCO2) retrieved from GOSAT and OCO-2 from September 2014 to May 2016. GOSAT retrievals used Build 7.3 (V7.3) of the Atmospheric CO2 Observations from Space (ACOS) algorithm while OCO-2 retrievals used Version 7 of the OCO-2 retrieval algorithm. The mean biases and standard deviations are −0.57 ± 3.33 ppm over land with high gain, −0.17 ± 1.48 ppm over ocean with high gain and −0.19 ± 2.79 ppm over land with medium gain. Finally, our study is complemented with an analysis of error sources: retrieved surface pressure (Psurf), aerosol optical depth (AOD), BRDF and surface albedo inhomogeneity. We found no change in XCO2
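
    A minimal sketch (synthetic coincident soundings, not the actual GOSAT/OCO-2 matchups) of the cross-validation statistic quoted above: the mean bias and standard deviation of paired XCO2 differences:

        import numpy as np

        rng = np.random.default_rng(2)
        xco2_oco2 = rng.normal(402.0, 4.0, size=800)               # ppm, synthetic reference retrievals
        xco2_gosat = xco2_oco2 + rng.normal(-0.5, 3.0, size=800)   # add a bias and retrieval scatter

        diff = xco2_gosat - xco2_oco2
        print(f"mean bias: {diff.mean():+.2f} ppm, std: {diff.std(ddof=1):.2f} ppm")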

  11. Validation of the finite element simulation to estimate the rolling resistance of a non-driving wheel with experimental tests

    Directory of Open Access Journals (Sweden)

    N Dibagar

    2015-09-01

    Full Text Available Introduction: Working the soil, from the viewpoint of management and production, has always been considered important, and the aim is to design soil-engaging tools and methods so that destructive consequences and energy waste, with their economic or environmental costs, are prevented as far as possible. Improving soil-engaging methods and their related equipment requires reliable tests under actual soil conditions. Considering the complexity and variety of the variables involved in soil-machine interaction, this is a hard task. Hence, numerical simulations are the key to such optimizations: they provide efficient models, remove the need for costly field tests and reduce research time. The tire is one of the main components in contact with the soil, and it can be studied both in the field and in software environments. Despite the complexities of soil behavior and tire geometry, modeling tire movement on soil has long been a research objective. Materials and methods: A non-linear finite element (FE) model of the interaction of a non-driving tire with the soil surface was developed to investigate the influence of forward speed, tire inflation pressure and vertical load on rolling resistance, using the ABAQUS/Explicit code. In this research, numerical and experimental tests were carried out under different conditions in order to estimate tire rolling resistance. In the numerical tests, the soil was simulated as a one-layer visco-elastic material with a Drucker-Prager model, using realistic soil properties. These elastic and plastic properties were obtained in the soil laboratory using the relevant tests. The soil samples were prepared from the soil inside the soil bin, and the same soil was used in the experimental tests. A finite-strain hyperelasticity model was developed to model the nearly incompressible

  12. A mathematical method for verifying the validity of measured information about the flows of energy resources based on the state estimation theory

    Science.gov (United States)

    Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.

    2015-11-01

    Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex should begin with setting up a high-tech automated system for monitoring and accounting of energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements aimed at reducing payment for energy resources on the consumer's side, which leads to commercial losses of energy resources. The article presents a universal mathematical method for verifying the validity of measurement information in networks transporting energy resources, such as electricity, heat, petroleum, gas, etc., based on the state estimation theory. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of the energy resources for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy all the state equations describing the energy resource transportation network. The state equations written in terms of the calculated estimates are free from residuals. The difference between a measurement and its calculated analog (estimate) is called the estimation remainder. Large values of the estimation remainders are an indicator of gross errors in particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the transportation network observability, to eliminate
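
    As an illustration of the state-estimation idea described in this record, the sketch below (a minimal Python example with invented meter readings and accuracies, not taken from the article) solves an equality-constrained weighted least-squares problem for a toy two-branch pipeline and reports the estimation remainders that would flag a suspect measurement.

    # Hedged sketch: equality-constrained weighted least squares on a toy
    # two-branch pipeline A -> B -> C; all numbers are illustrative.
    import numpy as np

    z = np.array([100.0, 92.0])          # raw meter readings for branches A-B and B-C
    sigma = np.array([1.0, 5.0])         # meter standard deviations (meter 2 is less accurate)
    W = np.diag(1.0 / sigma**2)          # weights = inverse error variances

    H = np.eye(2)                        # each measurement observes one state directly
    C = np.array([[1.0, -1.0]])          # conservation at node B: f_AB - f_BC = 0

    # KKT system for: minimize (z - Hx)' W (z - Hx)  subject to  C x = 0
    A = np.block([[2 * H.T @ W @ H, C.T],
                  [C, np.zeros((1, 1))]])
    b = np.concatenate([2 * H.T @ W @ z, np.zeros(1)])
    sol = np.linalg.solve(A, b)
    x_hat = sol[:2]                      # estimates ("calculated analogs") of the flows

    remainders = z - H @ x_hat           # estimation remainders
    print("estimates:", x_hat)           # both estimates coincide, satisfying conservation
    print("remainders:", remainders)     # the less consistent meter shows the larger remainder

    In a full network the single conservation row would be replaced by one balance equation per node of the transportation graph.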

  13. Development of a Reference Data Set (RDS) for dental age estimation (DAE) and testing of this with a separate Validation Set (VS) in a southern Chinese population.

    Science.gov (United States)

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2016-10-01

    Many countries have recently experienced a rapid increase in the demand for forensic age estimates of unaccompanied minors. Hong Kong is a major tourist and business center where there has been an increase in the number of people intercepted with false travel documents. An accurate estimation of age is only possible when the dataset used for age estimation has been derived from the corresponding ethnic population. Thus, the aim of this study was to develop and validate a Reference Data Set (RDS) for dental age estimation for southern Chinese. A total of 2306 subjects were selected from the patient archives of a large dental hospital and the chronological age for each subject was recorded. This age was assigned to each specific stage of dental development for each tooth to create a RDS. To validate this RDS, a further 484 subjects were randomly chosen from the patient archives and their dental age was assessed based on the scores from the RDS. Dental age was estimated using a meta-analysis approach corresponding to a random-effects statistical model. Chronological age (CA) and Dental Age (DA) were compared using the paired t-test. The overall difference between the chronological and dental age (CA-DA) was 0.05 years (2.6 weeks) for males and 0.03 years (1.6 weeks) for females. The paired t-test indicated that there was no statistically significant difference between the chronological and dental age (p > 0.05). The validated southern Chinese reference dataset based on dental maturation accurately estimated the chronological age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  14. Integral validation of estimates of the effective beta parameter for MOX reactors and incinerators

    Energy Technology Data Exchange (ETDEWEB)

    Zammit-Averlant, V

    1998-11-19

    β_eff, which represents the effective delayed neutron fraction, is an important parameter for nominal reactor operation as well as for studies of reactor behaviour in accidental situations. In order to improve the safety of nuclear reactors, we propose here to validate its calculation by using the ERANOS code with the ERALIB1 library and by taking into account the full physics of the fission process through the energy dependence of ν. To validate the quality of this calculation formalism, we calculated uncertainties as precisely as possible. The experimental values of β_eff, as well as their uncertainties, have also been re-evaluated for consistency, because these "experimental" values actually contain a calculated component. We therefore obtained an entirely coherent set of calculated and measured β_eff. The comparative study of the calculated and measured values showed that the JEF2.2 ν_d data are already sufficient, because the (E-C)/C values are below 3% on average and within their uncertainty bars. The experimental uncertainties, although slightly larger than those previously published, remain smaller than the uncertainties of the calculated values. This allowed us to adjust ν_d using β_eff. This adjustment brought an additional improvement to the recommended average ν_d values, both for the classical scheme (thermal energy, fast energy) and for the new scheme which accounts for the energy dependence of ν_d. β_eff for MOX or UOX fuel assemblies in thermal or fast configurations can therefore be obtained with an uncertainty due to the nuclear data of about 2.0%. (author) 110 refs.

  15. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pb(ex) measurements.

    Science.gov (United States)

    Porto, Paolo; Walling, Des E

    2012-10-01

    Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides, and more particularly caesium-137 (137Cs) and excess lead-210 (210Pb(ex)), has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from 137Cs and 210Pb(ex) measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Development and Validation of UV-Visible Spectrophotometric Methods for Simultaneous Estimation of Thiocolchicoside and Dexketoprofen in Bulk and Tablet Dosage Form

    OpenAIRE

    M. T. Harde; S. B. Jadhav; D. L. Dharam; P. D. Chaudhari

    2012-01-01

    Development and validation of two simple, accurate, precise and economical UV spectrophotometric methods for the simultaneous estimation of Thiocolchicoside and Dexketoprofen in bulk and in tablet dosage form are described. The methods employed were Method-1, an absorbance correction method, and Method-2, a first-order derivative spectroscopic method. In Method-1, absorbance is measured at two wavelengths: 370 nm, at which Dexketoprofen has no absorbance, and 255 nm, at which both drugs have considerable absorbance. In me...

  17. Validation of a novel modified wall motion score for estimation of left ventricular ejection fraction in ischemic and non-ischemic cardiomyopathy

    Energy Technology Data Exchange (ETDEWEB)

    Scholl, David, E-mail: David.Scholl@utoronto.ca [Imaging Research Laboratories, Robarts Research Institute, London, Ontario (Canada); Kim, Han W., E-mail: hanwkim@gmail.com [Duke Cardiovascular Magnetic Resonance Center, Division of Cardiology, Duke University, NC (United States); Shah, Dipan, E-mail: djshah@tmhs.org [The Methodist DeBakey Heart Center, Houston, TX (United States); Fine, Nowell M., E-mail: nowellfine@gmail.com [Division of Cardiology, Department of Medicine, Schulich School of Medicine and Dentistry, University of Western Ontario (Canada); Tandon, Shruti, E-mail: standon4@uwo.ca [Division of Cardiology, Department of Medicine, Schulich School of Medicine and Dentistry, University of Western Ontario (Canada); Thompson, Terry, E-mail: thompson@lawsonimaging.ca [Lawson Health Research Institute, London, Ontario (Canada); Department of Medical Biophysics, University of Western Ontario, London, Ontario (Canada); Drangova, Maria, E-mail: mdrangov@imaging.robarts.ca [Imaging Research Laboratories, Robarts Research Institute, London, Ontario (Canada); Department of Medical Biophysics, University of Western Ontario, London, Ontario (Canada); White, James A., E-mail: jwhite@imaging.robarts.ca [Division of Cardiology, Department of Medicine, Schulich School of Medicine and Dentistry, University of Western Ontario (Canada); Lawson Health Research Institute, London, Ontario (Canada); Imaging Research Laboratories, Robarts Research Institute, London, Ontario (Canada)

    2012-08-15

    Background: Visual determination of left ventricular ejection fraction (LVEF) by segmental scoring may be a practical alternative to volumetric analysis of cine magnetic resonance imaging (MRI). The accuracy and reproducibility of this approach has not been described. The purpose of this study was to validate a novel segmental visual scoring method for LVEF estimation using cine MRI. Methods: 362 patients with known or suspected cardiomyopathy were studied. A modified wall motion score (mWMS) was used to blindly score the wall motion of all cardiac segments from cine MRI. The same datasets were subjected to blinded volumetric analysis using endocardial contour tracing. The population was then separated into a model cohort (N = 181) and a validation cohort (N = 181), with the former used to derive a regression equation of mWMS versus true volumetric LVEF. The validation cohort was then used to test the accuracy of this regression model in estimating the true LVEF from a visually determined mWMS. Reproducibility testing of mWMS scoring was performed upon a randomly selected sample of 20 cases. Results: The regression equation relating mWMS to true LVEF in the model cohort was: LVEF = 54.23 − 0.5761 × mWMS. In the validation cohort this equation produced a strong correlation between mWMS-derived LVEF and true volumetric LVEF (r = 0.89). Bland and Altman analysis showed no systematic bias in the LVEF estimated using the mWMS (−0.3231%, 95% limits of agreement −12.22% to 11.58%). Inter-observer and intra-observer reproducibility was excellent (r = 0.93 and 0.97, respectively). Conclusion: The mWMS is a practical tool for reporting regional wall motion and provides reproducible estimates of LVEF from cine MRI.
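
    The regression reported above (LVEF = 54.23 − 0.5761 × mWMS) and a Bland-Altman style agreement check can be reproduced in a few lines; the sketch below uses hypothetical scores and volumetric values purely for illustration.

    # Sketch applying the reported regression; coefficients come from the abstract,
    # all sample scores and volumetric LVEF values below are made up.
    import numpy as np

    def lvef_from_mwms(mwms):
        """Estimate LVEF (%) from a modified wall motion score using the reported fit."""
        return 54.23 - 0.5761 * mwms

    mwms_scores = np.array([0, 10, 25, 40])          # hypothetical visual scores
    print(lvef_from_mwms(mwms_scores))               # a score of 0 maps to ~54% LVEF

    # Bland-Altman style agreement check between estimated and volumetric LVEF
    est = lvef_from_mwms(np.array([5, 12, 30, 45, 60]))
    vol = np.array([52.0, 47.0, 38.0, 30.0, 19.0])   # hypothetical volumetric LVEF values
    diff = est - vol
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    print(f"bias = {bias:.2f}%, limits of agreement = {bias - loa:.2f}% to {bias + loa:.2f}%")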

  18. Development and validation of NESSIE: a multi-criteria performance estimation tool for systems-on-chip.

    OpenAIRE

    Richard, Aliénor

    2010-01-01

    The work presented in this thesis aims at validating an original multicriteria performances estimation tool, NESSIE, dedicated to the prediction of performances to accelerate the design of electronic embedded systems. This tool has been developed in a previous thesis to cope with the limitations of existing design tools and offers a new solution to face the growing complexity of the current applications and electronic platforms and the multiple constraints they are subjected to. More precisel...

  19. The Potential of The Synergy of Sunphotometer and Lidar Data to Validate Vertical Profiles of The Aerosol Mass Concentration Estimated by An Air Quality Model

    Directory of Open Access Journals (Sweden)

    Siomos N.

    2016-01-01

    Full Text Available Vertical profiles of the aerosol mass concentration derived by the Lidar/Radiometer Inversion Code (LIRIC), which uses combined sunphotometer and lidar data, were used in order to validate the aerosol mass concentration profiles estimated by the air quality model CAMx. Lidar and CIMEL measurements performed at the Laboratory of Atmospheric Physics of the Aristotle University of Thessaloniki, Greece (40.5° N, 22.9° E) during the period 2013-2014 were used in this study.

  20. Validating fatty acid intake as estimated by an FFQ: how does the 24 h recall perform as reference method compared with the duplicate portion?

    Science.gov (United States)

    Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk

    2018-05-08

    To compare the performance of the commonly used 24 h recall (24hR) with the more distinct duplicate portion (DP) as reference method for validation of fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DP and by on average five 24hR and two FFQ. Plasma n-3 fatty acids and LA were used to objectively compare ranking of individuals based on DP and 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than those based on the 24hR as reference method. Furthermore, when using plasma fatty acids as reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0·33) and n-3:LA (0·34) than the 24hR (0·22 and 0·24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP, therefore use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.

  1. Ecosystem services - from assessments of estimations to quantitative, validated, high-resolution, continental-scale mapping via airborne LIDAR

    Science.gov (United States)

    Zlinszky, András; Pfeifer, Norbert

    2016-04-01

    service potential" which is the ability of the local ecosystem to deliver various functions (water retention, carbon storage etc.), but can't quantify how much of these are actually used by humans or what the estimated monetary value is. Due to its ability to measure both terrain relief and vegetation structure in high resolution, airborne LIDAR supports direct quantification of the properties of an ecosystem that lead to it delivering a given service (such as biomass, water retention, micro-climate regulation or habitat diversity). In addition, its high resolution allows direct calibration with field measurements: routine harvesting-based ecological measurements, local biodiversity indicator surveys or microclimate recordings all take place at the human scale and can be directly linked to the local value of LIDAR-based indicators at meter resolution. Therefore, if some field measurements with standard ecological methods are performed on site, the accuracy of LIDAR-based ecosystem service indicators can be rigorously validated. With this conceptual and technical approach high resolution ecosystem service assessments can be made with well established credibility. These would consolidate the concept of ecosystem services and support both scientific research and evidence-based environmental policy at local and - as data coverage is continually increasing - continental scale.

  2. Validation databases for simulation models: aboveground biomass and net primary productive, (NPP) estimation using eastwide FIA data

    Science.gov (United States)

    Jennifer C. Jenkins; Richard A. Birdsey

    2000-01-01

    As interest grows in the role of forest growth in the carbon cycle, and as simulation models are applied to predict future forest productivity at large spatial scales, the need for reliable and field-based data for evaluation of model estimates is clear. We created estimates of potential forest biomass and annual aboveground production for the Chesapeake Bay watershed...

  3. Validity of food frequency questionnaire-based estimates of long-term long-chain n-3 polyunsaturated fatty acid intake.

    Science.gov (United States)

    Wallin, Alice; Di Giuseppe, Daniela; Burgaz, Ann; Håkansson, Niclas; Cederholm, Tommy; Michaëlsson, Karl; Wolk, Alicja

    2014-01-01

    To evaluate how long-term dietary intake of long-chain n-3 polyunsaturated fatty acids (LCn-3 PUFAs), estimated by repeated food frequency questionnaires (FFQs) over 15 years, is correlated with LCn-3 PUFAs in adipose tissue (AT). Subcutaneous adipose tissue was obtained in 2003-2004 (AT-03) from 239 randomly selected women, aged 55-75 years, after completion of a 96-item FFQ (FFQ-03). All participants had previously returned an identical FFQ in 1997 (FFQ-97) and a 67-item version in 1987-1990 (FFQ-87). Pearson product-moment correlations were used to evaluate associations between intake of total and individual LCn-3 PUFAs as estimated by the three FFQ assessments and AT-03 content (% of total fatty acids). FFQ-estimated mean relative intake of LCn-3 PUFAs (% of total fat intake) increased between all three assessments (FFQ-87, 0.55 ± 0.34; FFQ-97, 0.74 ± 0.64; FFQ-03, 0.88 ± 0.56). Validity, in terms of Pearson correlations between FFQ-03 estimates and AT-03 content, was 0.41 (95% CI 0.30-0.51) for total LCn-3 PUFA and ranged from 0.29 to 0.48 for individual fatty acids; lower correlation was observed among participants with higher percentage body fat. With regard to long-term intake estimates, past dietary intake was also correlated with AT-03 content, with correlation coefficients in the range of 0.21-0.33 and 0.21-0.34 for FFQ-97 and FFQ-87, respectively. The correlations were improved by using average estimates from two or more FFQ assessments. Exclusion of fish oil supplement users (14%) did not alter the correlations. These data indicate reasonable validity of FFQ-based estimates of long-term (up to 15 years) LCn-3 PUFA intake, justifying their use in studies of diet-disease associations.

  4. Concurrent validity and reliability of torso-worn inertial measurement unit for jump power and height estimation.

    Science.gov (United States)

    Rantalainen, Timo; Gastin, Paul B; Spangler, Rhys; Wundersitz, Daniel

    2018-09-01

    The purpose of the present study was to evaluate the concurrent validity and test-retest repeatability of torso-worn IMU-derived power and jump height in a counter-movement jump test. Twenty-seven healthy recreationally active males (age, 21.9 [SD 2.0] y, height, 1.76 [0.7] m, mass, 73.7 [10.3] kg) wore an IMU and completed three counter-movement jumps a week apart. A force platform and a 3D motion analysis system were used to concurrently measure the jumps and subsequently derive power and jump height (based on take-off velocity and flight time). The IMU significantly overestimated power (mean difference = 7.3 W/kg; P jump heights exhibited poorer concurrent validity (ICC = 0.72 to 0.78) and repeatability (ICC = 0.68) than flight-time-derived jump heights, which exhibited excellent validity (ICC = 0.93 to 0.96) and reliability (ICC = 0.91). Since jump height and power are closely related, and flight-time-derived jump height exhibits excellent concurrent validity and reliability, flight-time-derived jump height could provide a more desirable measure compared to power when assessing athletic performance in a counter-movement jump with IMUs.

  5. A validated risk score to estimate mortality risk in patients with dementia and pneumonia: barriers to clinical impact

    NARCIS (Netherlands)

    van der Steen, J.T.; Albers, G.; Strunk, E.; Muller, M.T.; Ribbe, M.W.

    2011-01-01

    Background: The clinical impact of risk score use in end-of-life settings is unknown, with reports limited to technical properties. Methods: We conducted a mixed-methods study to evaluate clinical impact of a validated mortality risk score aimed at informing prognosis and supporting clinicians in

  6. Validation of five minimally obstructive methods to estimate physical activity energy expenditure in young adults in semi-standardized settings

    DEFF Research Database (Denmark)

    Schneller, Mikkel Bo; Pedersen, Mogens Theisen; Gupta, Nidhi

    2015-01-01

    We compared the accuracy of five objective methods, including two newly developed methods combining accelerometry and activity type recognition (Acti4), against indirect calorimetry, to estimate total energy expenditure (EE) of different activities in semi-standardized settings. Fourteen particip...

  7. Reproducibility and relative validity of a food frequency questionnaire to estimate intake of dietary phylloquinone and menaquinones

    Science.gov (United States)

    Background: Several observational studies have investigated the relation of dietary phylloquinone and menaquinone intake with occurrence of chronic diseases. Most of these studies relied on food frequency questionnaires (FFQ) to estimate the intake of phylloquinone and menaquinones. However, none of...

  8. Estimation of hand hygiene opportunities on an adult medical ward using 24-hour camera surveillance: validation of the HOW2 Benchmark Study.

    Science.gov (United States)

    Diller, Thomas; Kelly, J William; Blackhurst, Dawn; Steed, Connie; Boeker, Sue; McElveen, Danielle C

    2014-06-01

    We previously published a formula to estimate the number of hand hygiene opportunities (HHOs) per patient-day using the World Health Organization's "Five Moments for Hand Hygiene" methodology (HOW2 Benchmark Study). HHOs can be used as a denominator for calculating hand hygiene compliance rates when product utilization data are available. This study validates the previously derived HHO estimate using 24-hour video surveillance of health care worker hand hygiene activity. The validation study utilized 24-hour video surveillance recordings of 26 patients' hospital stays to measure the actual number of HHOs per patient-day on a medicine ward in a large teaching hospital. Statistical methods were used to compare these results to those obtained by episodic observation of patient activity in the original derivation study. Total hours of data collection were 81.3 and 1,510.8, resulting in 1,740 and 4,522 HHOs in the derivation and validation studies, respectively. Comparisons of the mean and median HHOs per 24-hour period did not differ significantly. HHOs were 71.6 (95% confidence interval: 64.9-78.3) and 73.9 (95% confidence interval: 69.1-84.1), respectively. This study validates the HOW2 Benchmark Study and confirms that expected numbers of HHOs can be estimated from the unit's patient census and patient-to-nurse ratio. These data can be used as denominators in calculations of hand hygiene compliance rates from electronic monitoring using the "Five Moments for Hand Hygiene" methodology. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  9. Relative Validity and Reproducibility of an Interviewer Administered 14-Item FFQ to Estimate Flavonoid Intake Among Older Adults with Mild-Moderate Dementia.

    Science.gov (United States)

    Kent, Katherine; Charlton, Karen

    2017-01-01

    There is a large burden on researchers and participants when attempting to accurately measure dietary flavonoid intake using dietary assessment. Minimizing participant and researcher burden when collecting dietary data may improve the validity of the results, especially in older adults with cognitive impairment. A short 14-item food frequency questionnaire (FFQ) to measure flavonoid intake and flavonoid subclasses (anthocyanins, flavan-3-ols, flavones, flavonols, and flavanones) was developed and assessed for validity and reproducibility against a 24-hour recall. Older adults with mild-moderate dementia (n = 49) attended two interviews 12 weeks apart. With the assistance of a family carer, a 24-h recall was collected at the first interview, and the flavonoid FFQ was interviewer-administered at both time-points. Validity and reproducibility were assessed using the Wilcoxon signed-rank sum test, Spearman's correlation coefficient, Bland-Altman plots, and Cohen's kappa. Mean flavonoid intake was determined (FFQ1 = 795 ± 492.7 mg/day, 24-h recall = 515.6 ± 384.3 mg/day). Tests of validity indicated the FFQ was better at estimating total flavonoid intake than individual flavonoid subclasses compared with the 24-h recall. There was a significant difference in total flavonoid intake estimates between the FFQ and the 24-h recall (Wilcoxon signed-rank sum test). For reproducibility, the Wilcoxon signed-rank sum test showed no significant difference, Spearman's correlation coefficient indicated excellent reliability (r = 0.75, p < 0.001), Bland-Altman plots visually showed small, nonsignificant bias and wide limits of agreement, and Cohen's kappa indicated fair agreement (κ = 0.429, p < 0.001). A 14-item FFQ developed to easily measure flavonoid intake in older adults with dementia demonstrates fair validity against a 24-h recall and good reproducibility.
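
    A rough sketch of the agreement statistics named in this abstract (Wilcoxon signed-rank test, Spearman correlation, Bland-Altman bias and Cohen's kappa) is given below; the intake values are simulated and the tertile cross-classification used for the kappa is an assumption, not the authors' exact procedure.

    # Hedged sketch of FFQ-vs-recall agreement statistics on simulated intakes.
    import numpy as np
    import pandas as pd
    from scipy.stats import wilcoxon, spearmanr
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(1)
    recall = rng.normal(500, 150, 49)            # 24-h recall total flavonoids (mg/day)
    ffq = recall * 1.5 + rng.normal(0, 80, 49)   # FFQ tends to overestimate

    stat, p = wilcoxon(ffq, recall)              # paired difference test
    rho, p_rho = spearmanr(ffq, recall)          # ranking agreement
    diff = ffq - recall
    bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)

    # Cross-classification into tertiles for Cohen's kappa
    ffq_tert = pd.qcut(ffq, 3, labels=False)
    rec_tert = pd.qcut(recall, 3, labels=False)
    kappa = cohen_kappa_score(ffq_tert, rec_tert)

    print(f"Wilcoxon p={p:.3f}, Spearman rho={rho:.2f}, bias={bias:.0f} mg/day, kappa={kappa:.2f}")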

  10. The German version of the Expanded Prostate Cancer Index Composite (EPIC): translation, validation and minimal important difference estimation.

    Science.gov (United States)

    Umbehr, Martin H; Bachmann, Lucas M; Poyet, Cedric; Hammerer, Peter; Steurer, Johann; Puhan, Milo A; Frei, Anja

    2018-02-20

    No official German translation exists for the 50-item Expanded Prostate Cancer Index Composite (EPIC), and no minimal important difference (MID) has been established yet. The aim of the study was to translate and validate a German version of the EPIC with cultural adaptation to the different German speaking countries and to establish the MID. We translated and culturally adapted the EPIC into German. For validation, we included a consecutive subsample of 92 patients with localized prostate cancer undergoing radical prostatectomy who participated the Prostate Cancer Outcomes Cohort. Baseline and follow-up assessments took place before and six weeks after prostatectomy in 2010 and 2011. We assessed the EPIC, EORTC QLQ-PR25, Feeling Thermometer, SF-36 and a global rating of health state change variable. We calculated the internal consistency, test-retest reliability, construct validity, responsiveness and MID. For most EPIC domains and subscales, our a priori defined criteria for reliability were fulfilled (construct reliability: Cronbach's alpha 0.7-0.9; test-retest reliability: intraclass-correlation coefficient ≥ 0.7). Cross-sectional and longitudinal correlations between EPIC and EORTC QLQ-PR25 domains ranged from 0.14-0.79, and 0.06-0.5 and 0.08-0.72 for Feeling Thermometer and SF-36, respectively. We established MID values of 10, 4, 12, and 6 for the urinary, bowel, sexual and hormonal domain. The German version of the EPIC is reliable, responsive and valid to measure HRQL in prostate cancer patients and is now available in German language. With the suggested MID we provide interpretation to what extent changes in HRQL are clinically relevant for patients. Hence, study results are of interest beyond German speaking countries.

  11. The validity of anthropometric leg muscle volume estimation across a wide spectrum: from able-bodied adults to individuals with a spinal cord injury.

    Science.gov (United States)

    Layec, Gwenael; Venturelli, Massimo; Jeong, Eun-Kee; Richardson, Russell S

    2014-05-01

    The assessment of muscle volume, and of changes over time, has significant clinical and research-related implications. Methods to assess muscle volume vary from simple and inexpensive to complex and expensive. Therefore this study sought to examine the validity of muscle volume estimated simply by anthropometry compared with the more complex proton magnetic resonance imaging (1H-MRI) across a wide spectrum of individuals, including those with a spinal cord injury (SCI), a group recognized to exhibit significant muscle atrophy. Accordingly, muscle volume of the thigh and lower leg of eight subjects with a SCI and eight able-bodied subjects (controls) was determined by anthropometry and 1H-MRI. With either method, muscle volumes were significantly lower in the SCI group compared with the controls. Anthropometric estimates of muscle volume were strongly correlated with the values assessed by 1H-MRI in both the thigh (r2 = 0.89) and the lower leg, but anthropometry overestimated muscle volume compared with 1H-MRI in both the thigh (mean bias = 2407 cm3) and the lower leg (mean bias = 170 cm3). Thus, with an appropriate correction for this systematic overestimation, muscle volume estimated from anthropometric measurements is a valid approach and provides acceptable accuracy across a spectrum of adults from normal muscle mass to a SCI and severe muscle atrophy. In practical terms this study provides the formulas that add validity to the already simple and inexpensive anthropometric approach to assess muscle volume in clinical and research settings.

  12. The relative validity and repeatability of an FFQ for estimating intake of zinc and its absorption modifiers in young and older Saudi adults.

    Science.gov (United States)

    Alsufiani, Hadeil M; Yamani, Fatmah; Kumosani, Taha A; Ford, Dianne; Mathers, John C

    2015-04-01

    To assess the relative validity and repeatability of a sixty-four-item FFQ for estimating dietary intake of Zn and its absorption modifiers in Saudi adults. In addition, we used the FFQ to investigate the effect of age and gender on these intakes. To assess validity, all participants completed the FFQ (FFQ1) and a 3 d food record. After 1 month, the FFQ was administered for a second time (FFQ2) to assess repeatability. Jeddah, Saudi Arabia. One hundred males and females aged 20-30 years and 60-70 years participated. Mean intakes of Zn and protein from FFQ1 were significantly higher than those from the food record while there were no detectable differences between tools for measurement of phytic acid intake. Estimated intakes of Zn, protein and phytate by both approaches were strongly correlated. For protein, the agreement between the two tools was consistent across the range of intakes, while for Zn and phytic acid the difference increased with increasing mean intake. Zn and protein intakes from FFQ1 and FFQ2 were highly correlated (r > 0·68). Older adults consumed less Zn and protein compared with young adults. Intakes of all dietary components were lower in females than in males. The FFQ developed and tested in the current study demonstrated reasonable relative validity and high repeatability and was capable of detecting differences in intakes between age and gender groups.

  13. Air temperature estimation with MSG-SEVIRI data: Calibration and validation of the TVX algorithm for the Iberian Peninsula

    DEFF Research Database (Denmark)

    Nieto Solana, Hector; Sandholt, Inge; Aguado, Inmaculada

    2011-01-01

    Air temperature can be estimated from remote sensing by combining information in thermal infrared and optical wavelengths. The empirical TVX algorithm is based on an estimated linear relationship between observed Land Surface Temperature (LST) and a Spectral Vegetation Index (NDVI). Air temperature...... variation, land cover, landscape heterogeneity and topography. Results showed that the newly calibrated NDVImax values perform well, with a Mean Absolute Error ranging between 2.8 °C and 4 °C. In addition, vegetation-specific NDVImax values improve the accuracy compared with a single NDVImax....
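
    A minimal sketch of the TVX idea follows, assuming the usual formulation in which LST is regressed on NDVI within a pixel window and the fitted line is evaluated at a full-cover NDVImax taken as a proxy for air temperature; the window values and NDVImax below are invented.

    # TVX-style air temperature estimate from a single pixel window (illustrative values).
    import numpy as np

    ndvi = np.array([0.25, 0.32, 0.41, 0.48, 0.55, 0.63])   # window NDVI values
    lst = np.array([38.1, 36.4, 34.0, 32.2, 30.5, 28.4])    # window LST values (°C)

    slope, intercept = np.polyfit(ndvi, lst, 1)              # negative slope expected
    ndvi_max = 0.85                                          # assumed full-cover NDVI
    t_air = slope * ndvi_max + intercept                     # extrapolate to full canopy
    print(f"estimated air temperature: {t_air:.1f} °C")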

  14. Optimisation and Validation of the ARAMIS Digital Image Correlation System for Use in Large-scale High-strain-rate Events

    Science.gov (United States)

    2013-08-01

    enamel paint. Under extreme plastic deformation, the relative deformation of the coating could cause the coating to separate, resulting in loss of...point for one to be found. If a discontinuity, such as a crack, occurs through the object separating the speckle pattern, then the strain data will only

  15. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    International Nuclear Information System (INIS)

    Kindberg, Katarina; Haraldsson, Henrik; Sigfridsson, Andreas; Engvall, Jan; Ingels, Neil B Jr; Ebbers, Tino; Karlsson, Matts

    2012-01-01

    The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strains values agree with previously reported myocardial strains in normal human hearts
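
    The polynomial-fit strain estimation described here can be illustrated with a first-order (affine) model: fit the displacement field over a small neighbourhood by least squares, read the displacement gradient from the coefficients, and form the Green-Lagrange strain tensor. The sketch below uses synthetic points and displacements, not DENSE data, and a linear rather than higher-order polynomial.

    # First-order local polynomial fit of a displacement field and the resulting strain tensor.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(20, 3))          # reference positions in the neighbourhood

    # Synthetic affine displacement field u = A_true @ X plus a little noise
    A_true = np.array([[0.05, 0.02, 0.00],
                       [0.00, -0.03, 0.01],
                       [0.01, 0.00, 0.02]])
    U = X @ A_true.T + rng.normal(0, 1e-3, size=X.shape)

    # Least-squares fit of u(X) = A X + b: one column of coefficients per displacement component
    design = np.hstack([X, np.ones((X.shape[0], 1))])          # (20, 4)
    coef, *_ = np.linalg.lstsq(design, U, rcond=None)          # (4, 3): rows for x, y, z, const
    A_fit = coef[:3, :].T                                      # displacement gradient du/dX

    F = np.eye(3) + A_fit                                      # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(3))                            # Green-Lagrange strain tensor
    print(np.round(E, 3))

    Higher-order polynomials simply add columns to the design matrix, with the gradient evaluated at the point of interest.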

  16. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kindberg Katarina

    2012-04-01

    Full Text Available Abstract Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE, make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strains values agree with previously reported myocardial strains in normal human hearts.

  17. Improved GRACE regional mass balance estimates of the Greenland ice sheet cross-validated with the input-output method

    NARCIS (Netherlands)

    Xu, Zheng; Schrama, Ernst J. O.; van der Wal, Wouter; van den Broeke, Michiel; Enderlin, Ellyn M.

    2016-01-01

    In this study, we use satellite gravimetry data from the Gravity Recovery and Climate Experiment (GRACE) to estimate regional mass change of the Greenland ice sheet (GrIS) and neighboring glaciated regions using a least squares inversion approach. We also consider results from the input–output

  18. Improved GRACE regional mass balance estimates of the Greenland ice sheet cross-validated with the input–output method

    NARCIS (Netherlands)

    Xu, Z.; Schrama, E.J.O.; van der Wal, W.; van den Broeke, MR; Enderlin, EM

    2016-01-01

    In this study, we use satellite gravimetry data from the Gravity Recovery and Climate Experiment (GRACE) to estimate regional mass change of the Greenland ice sheet (GrIS) and neighboring glaciated regions using a least squares inversion approach. We also consider results from the input–output

  19. Validation of abundance estimates from mark-recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Science.gov (United States)

    Amanda E. Rosenberger; Jason B. Dunham

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln–Petersen mark–recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams....
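
    For reference, the two standard estimators named in this record can be written down directly; the catch and recapture numbers below are hypothetical, and the Chapman bias-corrected form of the Lincoln-Petersen estimator is used.

    # Textbook abundance estimators, shown for illustration with made-up counts.
    def chapman_lincoln_petersen(marked, captured, recaptured):
        """Bias-corrected Lincoln-Petersen abundance estimate."""
        return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

    def two_pass_removal(c1, c2):
        """Two-pass removal estimate; assumes equal capture probability on both passes."""
        if c1 <= c2:
            raise ValueError("removal model requires a declining catch (c1 > c2)")
        return c1 ** 2 / (c1 - c2)

    print(chapman_lincoln_petersen(marked=60, captured=55, recaptured=20))  # ~162 trout
    print(two_pass_removal(c1=48, c2=18))                                   # ~77 trout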

  20. A validated calculator to estimate risk of cesarean after an induction of labor with an unfavorable cervix.

    Science.gov (United States)

    Levine, Lisa D; Downes, Katheryne L; Parry, Samuel; Elovitz, Michal A; Sammel, Mary D; Srinivas, Sindhu K

    2018-02-01

    Induction of labor occurs in >20% of pregnancies, which equates to approximately 1 million women undergoing an induction in the United States annually. Regardless of how common inductions are, our ability to predict induction success is limited. Although multiple risk factors for a failed induction have been identified, risk factors alone are not enough to quantify an actual risk of cesarean for an individual woman undergoing an induction. The objective of this study was to derive and validate a prediction model for cesarean after induction with an unfavorable cervix and to create a Web-based calculator to assist in patient counseling. Derivation and validation of a prediction model for cesarean delivery after induction was performed as part of a planned secondary analysis of a large randomized trial. A predictive model for cesarean delivery was derived using multivariable logistic regression from a large randomized trial on induction methods (n = 491) that took place from 2013 through 2015 at an academic institution. Full-term (≥37 weeks) women carrying a singleton gestation with intact membranes and an unfavorable cervix (Bishop score ≤6 and dilation ≤2 cm) undergoing an induction were included in this trial. Both nulliparous and multiparous women were included. Women with a prior cesarean were excluded. Refinement of the prediction model was performed using an observational cohort of women from the same institution who underwent an induction (n = 364) during the trial period. An external validation was performed utilizing a publicly available database (Consortium for Safe Labor) that includes information for >200,000 deliveries from 19 hospitals across the United States from 2002 through 2008. After applying the same inclusion and exclusion criteria utilized in the derivation cohort, a total of 8466 women remained for analysis. The discriminative power of each model was assessed using a bootstrap, bias-corrected area under the curve. The cesarean delivery
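
    A hedged sketch of the modelling steps described above (multivariable logistic regression plus a bootstrap, bias-corrected AUC) is shown below on a synthetic cohort; the predictors, coefficients and Harrell-style optimism correction are illustrative assumptions, not the authors' published calculator.

    # Logistic-regression risk model with bootstrap optimism-corrected AUC (synthetic data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.utils import resample

    rng = np.random.default_rng(42)
    n = 500
    X = np.column_stack([
        rng.integers(0, 2, n),          # nulliparity (0/1)
        rng.normal(30, 5, n),           # maternal age
        rng.normal(27, 4, n),           # BMI
        rng.integers(0, 7, n),          # Bishop score (0-6)
    ])
    logit = -2.0 + 1.1 * X[:, 0] + 0.03 * X[:, 2] - 0.25 * X[:, 3]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # cesarean yes/no

    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

    # Optimism = AUC on the bootstrap sample minus AUC of the bootstrap model on the original data.
    optimism = []
    for _ in range(200):
        Xb, yb = resample(X, y)
        mb = LogisticRegression(max_iter=1000).fit(Xb, yb)
        optimism.append(roc_auc_score(yb, mb.predict_proba(Xb)[:, 1])
                        - roc_auc_score(y, mb.predict_proba(X)[:, 1]))

    print(f"bias-corrected AUC = {apparent_auc - np.mean(optimism):.3f}")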

  1. Isokinetic strength assessment offers limited predictive validity for detecting risk of future hamstring strain in sport: a systematic review and meta-analysis.

    Science.gov (United States)

    Green, Brady; Bourne, Matthew N; Pizzari, Tania

    2018-03-01

    To examine the value of isokinetic strength assessment for predicting risk of hamstring strain injury, and to direct future research into hamstring strain injuries. Systematic review. Database searches for Medline, CINAHL, Embase, AMED, AUSPORT, SPORTDiscus, PEDro and Cochrane Library from inception to April 2017. Manual reference checks, ahead-of-press and citation tracking. Prospective studies evaluating isokinetic hamstrings, quadriceps and hip extensor strength testing as a risk factor for occurrence of hamstring muscle strain. Independent search result screening. Risk of bias assessment by independent reviewers using Quality in Prognosis Studies tool. Best evidence synthesis and meta-analyses of standardised mean difference (SMD). Twelve studies were included, capturing 508 hamstring strain injuries in 2912 athletes. Isokinetic knee flexor, knee extensor and hip extensor outputs were examined at angular velocities ranging 30-300°/s, concentric or eccentric, and relative (Nm/kg) or absolute (Nm) measures. Strength ratios ranged between 30°/s and 300°/s. Meta-analyses revealed a small, significant predictive effect for absolute (SMD=-0.16, P=0.04, 95% CI -0.31 to -0.01) and relative (SMD=-0.17, P=0.03, 95% CI -0.33 to -0.014) eccentric knee flexor strength (60°/s). No other testing speed or strength ratio showed statistical association. Best evidence synthesis found over half of all variables had moderate or strong evidence for no association with future hamstring injury. Despite an isolated finding for eccentric knee flexor strength at slow speeds, the role and application of isokinetic assessment for predicting hamstring strain risk should be reconsidered, particularly given costs and specialised training required. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
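
    The pooling step behind the reported SMDs can be illustrated with a generic DerSimonian-Laird random-effects calculation; the effect sizes and variances below are invented and do not reproduce the review's data.

    # Generic DerSimonian-Laird random-effects pooling of standardised mean differences.
    import numpy as np

    smd = np.array([-0.25, -0.10, -0.30, 0.05])     # per-study effect sizes (illustrative)
    var = np.array([0.010, 0.020, 0.015, 0.030])    # per-study variances (illustrative)

    w = 1 / var                                      # fixed-effect weights
    mu_fe = np.sum(w * smd) / np.sum(w)
    Q = np.sum(w * (smd - mu_fe) ** 2)               # heterogeneity statistic
    k = len(smd)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1 / (var + tau2)                          # random-effects weights
    mu_re = np.sum(w_re * smd) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    print(f"pooled SMD = {mu_re:.2f} (95% CI {mu_re - 1.96*se_re:.2f} to {mu_re + 1.96*se_re:.2f})")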

  2. Effect of cyclic plastic pre-strain on low cycle fatigue life

    International Nuclear Information System (INIS)

    Kanno, Satoshi; Nakane, Motoki; Yorikawa, Morio; Takagi, Yoshio

    2010-01-01

    In order to evaluate the structural integrity of nuclear components subjected to large seismic loads which produce local plastic strain, low cycle fatigue life was examined using cyclically plastic pre-strained materials of austenitic steel (SUS316, SUS316L, SUS304TP: JIS (Japanese Industrial Standards)) and ferritic steel (SFVQ1A, STS480, STPT410, SFVC2B, SS400: JIS). Cyclic plastic pre-strain up to a range of 16%, 2.5 times, was not found to affect low cycle fatigue life. The validity of the existing procedure of fatigue life estimation based on the usage factor was confirmed for the case in which a large seismic load subjects nuclear materials to cyclic plastic strain. (author)

  3. Validating a mass balance accounting approach to using 7Be measurements to estimate event-based erosion rates over an extended period at the catchment scale

    Science.gov (United States)

    Porto, Paolo; Walling, Des E.; Cogliandro, Vanessa; Callegari, Giovanni

    2016-07-01

    Use of the fallout radionuclides cesium-137 and excess lead-210 offers important advantages over traditional methods of quantifying erosion and soil redistribution rates. However, both radionuclides provide information on longer-term (i.e., 50-100 years) average rates of soil redistribution. Beryllium-7, with its half-life of 53 days, can provide a basis for documenting short-term soil redistribution and it has been successfully employed in several studies. However, the approach commonly used introduces several important constraints related to the timing and duration of the study period. A new approach proposed by the authors that overcomes these constraints has been successfully validated using an erosion plot experiment undertaken in southern Italy. Here, a further validation exercise undertaken in a small (1.38 ha) catchment is reported. The catchment was instrumented to measure event sediment yields and beryllium-7 measurements were employed to document the net soil loss for a series of 13 events that occurred between November 2013 and June 2015. In the absence of significant sediment storage within the catchment's ephemeral channel system and of a significant contribution from channel erosion to the measured sediment yield, the estimates of net soil loss for the individual events could be directly compared with the measured sediment yields to validate the former. The close agreement of the two sets of values is seen as successfully validating the use of beryllium-7 measurements and the new approach to obtain estimates of net soil loss for a sequence of individual events occurring over an extended period at the scale of a small catchment.

  4. Development and validation of a food photography manual, as a tool for estimation of food portion size in epidemiological dietary surveys in Tunisia

    Directory of Open Access Journals (Sweden)

    Mongia Bouchoucha

    2016-08-01

    Full Text Available Background: Estimation of food portion sizes has always been a challenge in dietary studies on free-living individuals. The aim of this work was to develop and validate a food photography manual to improve the accuracy of the estimated size of consumed food portions. Methods: A manual was compiled from digital photos of foods commonly consumed by the Tunisian population. The food was cooked and weighed before taking digital photographs of three portion sizes. The manual was validated by comparing the method of 24-hour recall (using photos) to the reference method [food weighing (FW)]. In both the methods, the comparison focused on food intake amounts as well as nutritional issues. Validity was assessed by Bland–Altman limits of agreement. In total, 31 male and female volunteers aged 9–89 participated in the study. Results: We focused on eight food categories and compared their estimated amounts (using the 24-hour recall method) to those actually consumed (using FW). Animal products and sweets were underestimated, whereas pasta, bread, vegetables, fruits, and dairy products were overestimated. However, the difference between the two methods is not statistically significant except for pasta (p<0.05) and dairy products (p<0.05). The coefficient of correlation between the two methods is highly significant, ranging from 0.876 for pasta to 0.989 for dairy products. Nutrient intake calculated for both methods showed insignificant differences except for fat (p<0.001) and dietary fiber (p<0.05). A highly significant correlation was observed between the two methods for all micronutrients. The test agreement highlights the lack of difference between the two methods. Conclusion: The difference between the 24-hour recall method using digital photos and the weighing method is acceptable. Our findings indicate that the food photography manual can be a useful tool for quantifying food portion sizes in epidemiological dietary surveys.

  5. Development and validation of a food photography manual, as a tool for estimation of food portion size in epidemiological dietary surveys in Tunisia.

    Science.gov (United States)

    Bouchoucha, Mongia; Akrout, Mouna; Bellali, Hédia; Bouchoucha, Rim; Tarhouni, Fadwa; Mansour, Abderraouf Ben; Zouari, Béchir

    2016-01-01

    Estimation of food portion sizes has always been a challenge in dietary studies on free-living individuals. The aim of this work was to develop and validate a food photography manual to improve the accuracy of the estimated size of consumed food portions. A manual was compiled from digital photos of foods commonly consumed by the Tunisian population. The food was cooked and weighed before taking digital photographs of three portion sizes. The manual was validated by comparing the method of 24-hour recall (using photos) to the reference method [food weighing (FW)]. In both the methods, the comparison focused on food intake amounts as well as nutritional issues. Validity was assessed by Bland-Altman limits of agreement. In total, 31 male and female volunteers aged 9-89 participated in the study. We focused on eight food categories and compared their estimated amounts (using the 24-hour recall method) to those actually consumed (using FW). Animal products and sweets were underestimated, whereas pasta, bread, vegetables, fruits, and dairy products were overestimated. However, the difference between the two methods is not statistically significant except for pasta (p<0.05) and dairy products (p<0.05). The coefficient of correlation between the two methods is highly significant, ranging from 0.876 for pasta to 0.989 for dairy products. Nutrient intake calculated for both methods showed insignificant differences except for fat (p<0.001) and dietary fiber (p<0.05). A highly significant correlation was observed between the two methods for all micronutrients. The test agreement highlights the lack of difference between the two methods. The difference between the 24-hour recall method using digital photos and the weighing method is acceptable. Our findings indicate that the food photography manual can be a useful tool for quantifying food portion sizes in epidemiological dietary surveys.

  6. Development and validation of a stability-indicating RP–HPLC method for estimation of atazanavir sulfate in bulk

    Directory of Open Access Journals (Sweden)

    S. Dey

    2017-04-01

    Full Text Available A stability-indicating reverse phase–high performance liquid chromatography (RP–HPLC) method was developed and validated for the determination of atazanavir sulfate in tablet dosage forms using a Phenomenex C18 column (250 mm × 4.6 mm, 5 μm) with a mobile phase consisting of 900 mL of HPLC grade methanol and 100 mL of HPLC grade water. The pH was adjusted to 3.55 with acetic acid. The mobile phase was sonicated for 10 min, filtered through a 0.45 μm membrane filter and delivered at a flow rate of 0.5 mL/min. The detection was carried out at 249 nm and the retention time of atazanavir sulfate was found to be 8.323 min. Linearity was observed from 10 to 90 μg/mL (coefficient of determination R2 = 0.999) with the equation y = 23.427x + 37.732. Atazanavir sulfate was subjected to stress conditions including acidic, alkaline, oxidative, photolytic and thermal degradation, and the results showed that it was more sensitive towards acidic degradation. The method was validated as per ICH guidelines.
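
    The reported calibration line (y = 23.427x + 37.732, R2 = 0.999) can be inverted to back-calculate concentration from a measured peak response, as in the short sketch below; the example response value is made up.

    # Inverse prediction from the reported linear calibration (x in µg/mL).
    def concentration_from_response(response, slope=23.427, intercept=37.732):
        """Invert the calibration line y = slope*x + intercept."""
        return (response - intercept) / slope

    print(f"{concentration_from_response(1209.1):.1f} µg/mL")   # ~50 µg/mL, inside the 10-90 µg/mL range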

  7. Development and validation of reversed-phase HPLC gradient method for the estimation of efavirenz in plasma.

    Directory of Open Access Journals (Sweden)

    Shweta Gupta

    Full Text Available Efavirenz is an anti-viral agent of non-nucleoside reverse transcriptase inhibitor category used as a part of highly active retroviral therapy for the treatment of infections of human immune deficiency virus type-1. A simple, sensitive and rapid reversed-phase high performance liquid chromatographic gradient method was developed and validated for the determination of efavirenz in plasma. The method was developed with high performance liquid chromatography using Waters X-Terra Shield, RP18 50 x 4.6 mm, 3.5 μm column and a mobile phase consisting of phosphate buffer pH 3.5 and Acetonitrile. The elute was monitored with the UV-Visible detector at 260 nm with a flow rate of 1.5 mL/min. Tenofovir disoproxil fumarate was used as internal standard. The method was validated for linearity, precision, accuracy, specificity, robustness and data obtained were statistically analyzed. Calibration curve was found to be linear over the concentration range of 1-300 μg/mL. The retention times of efavirenz and tenofovir disoproxil fumarate (internal standard were 5.941 min and 4.356 min respectively. The regression coefficient value was found to be 0.999. The limit of detection and the limit of quantification obtained were 0.03 and 0.1 μg/mL respectively. The developed HPLC method can be useful for quantitative pharmacokinetic parameters determination of efavirenz in plasma.

  8. Stability indicating method development and validation of assay method for the estimation of rizatriptan benzoate in tablet

    Directory of Open Access Journals (Sweden)

    Chandrashekhar K. Gadewar

    2017-05-01

    Full Text Available A simple, sensitive, precise and specific high performance liquid chromatography method was developed and validated for the determination of rizatriptan in rizatriptan benzoate tablets. The separation was carried out using a mobile phase consisting of acetonitrile : pH 3.4 phosphate buffer in a ratio of 20:80. The column used was a Zorbax SB CN (250 mm × 4.6 mm, 5 μ) with a flow rate of 1 ml/min using UV detection at 225 nm. The retention times of rizatriptan and benzoic acid were found to be 4.751 and 8.348 min respectively. A forced degradation study of rizatriptan benzoate in its tablet form was conducted under the conditions of hydrolysis, oxidation, thermal stress and photolysis. Rizatriptan was found to be stable in basic buffer, while in acidic buffer it was found to be degraded (water bath at 60 °C for 15 min). The detector response of rizatriptan is directly proportional to concentration ranging from 30% to 160% of the test concentration, i.e. 15.032 to 80.172 mcg/ml. Results of the analysis were validated statistically and by recovery studies (mean recovery = 99.44%). The results of the study showed that the proposed method is simple, rapid, precise and accurate, and is useful for the routine determination of rizatriptan in pharmaceutical dosage forms.

  9. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    Science.gov (United States)

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 software is replacing the former version that should not be used anymore. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.

  10. Validating the absolute reliability of a fat free mass estimate equation in hemodialysis patients using near-infrared spectroscopy.

    Science.gov (United States)

    Kono, Kenichi; Nishida, Yusuke; Moriyama, Yoshihumi; Taoka, Masahiro; Sato, Takashi

    2015-06-01

    The assessment of nutritional states using fat free mass (FFM) measured with near-infrared spectroscopy (NIRS) is clinically useful. This measurement should incorporate the patient's post-dialysis weight ("dry weight"), in order to exclude the effects of any change in water mass. We therefore used NIRS to investigate the regression, independent variables, and absolute reliability of FFM in dry weight. The study included 47 outpatients from the hemodialysis unit. Body weight was measured before dialysis, and FFM was measured using NIRS before and after dialysis treatment. Multiple regression analysis was used to estimate the FFM in dry weight as the dependent variable. The measured FFM before dialysis treatment (Mw-FFM), and the difference between measured and dry weight (Mw-Dw) were independent variables. We performed Bland-Altman analysis to detect errors between the statistically estimated FFM and the measured FFM after dialysis treatment. The multiple regression equation to estimate the FFM in dry weight was: Dw-FFM = 0.038 + (0.984 × Mw-FFM) + (-0.571 × [Mw-Dw]); R(2)  = 0.99). There was no systematic bias between the estimated and the measured values of FFM in dry weight. Using NIRS, FFM in dry weight can be calculated by an equation including FFM in measured weight and the difference between the measured weight and the dry weight. © 2015 The Authors. Therapeutic Apheresis and Dialysis © 2015 International Society for Apheresis.

  11. Validation of attenuation, beam blockage, and calibration estimation methods using two dual polarization X band weather radars

    Science.gov (United States)

    Diederich, M.; Ryzhkov, A.; Simmer, C.; Mühlbauer, K.

    2011-12-01

    The amplitude a of radar wave reflected by meteorological targets can be misjudged due to several factors. At X band wavelength, attenuation of the radar beam by hydro meteors reduces the signal strength enough to be a significant source of error for quantitative precipitation estimation. Depending on the surrounding orography, the radar beam may be partially blocked when scanning at low elevation angles, and the knowledge of the exact amount of signal loss through beam blockage becomes necessary. The phase shift between the radar signals at horizontal and vertical polarizations is affected by the hydrometeors that the beam travels through, but remains unaffected by variations in signal strength. This has allowed for several ways of compensating for the attenuation of the signal, and for consistency checks between these variables. In this study, we make use of several weather radars and gauge network measuring in the same area to examine the effectiveness of several methods of attenuation and beam blockage corrections. The methods include consistency checks of radar reflectivity and specific differential phase, calculation of beam blockage using a topography map, estimating attenuation using differential propagation phase, and the ZPHI method proposed by Testud et al. in 2000. Results show the high effectiveness of differential phase in estimating attenuation, and potential of the ZPHI method to compensate attenuation, beam blockage, and calibration errors.

  12. Temperature based validation of the analytical model for the estimation of the amount of heat generated during friction stir welding

    Directory of Open Access Journals (Sweden)

    Milčić Dragan S.

    2012-01-01

    Full Text Available Friction stir welding is a solid-state welding technique that utilizes thermomechanical influence of the rotating welding tool on parent material resulting in a monolith joint - weld. On the contact of welding tool and parent material, significant stirring and deformation of parent material appears, and during this process, mechanical energy is partially transformed into heat. Generated heat affects the temperature of the welding tool and parent material, thus the proposed analytical model for the estimation of the amount of generated heat can be verified by temperature: analytically determined heat is used for numerical estimation of the temperature of parent material and this temperature is compared to the experimentally determined temperature. Numerical solution is estimated using the finite difference method - explicit scheme with adaptive grid, considering influence of temperature on material's conductivity, contact conditions between welding tool and parent material, material flow around welding tool, etc. The analytical model shows that 60-100% of mechanical power given to the welding tool is transformed into heat, while the comparison of results shows the maximal relative difference between the analytical and experimental temperature of about 10%.

  13. Estimation of Resting Energy Expenditure: Validation of Previous and New Predictive Equations in Obese Children and Adolescents.

    Science.gov (United States)

    Acar-Tek, Nilüfer; Ağagündüz, Duygu; Çelik, Bülent; Bozbulut, Rukiye

    2017-08-01

    Accurate estimation of resting energy expenditure (REE) in childrenand adolescents is important to establish estimated energy requirements. The aim of the present study was to measure REE in obese children and adolescents by indirect calorimetry method, compare these values with REE values estimated by equations, and develop the most appropriate equation for this group. One hundred and three obese children and adolescents (57 males, 46 females) between 7 and 17 years (10.6 ± 2.19 years) were recruited for the study. REE measurements of subjects were made with indirect calorimetry (COSMED, FitMatePro, Rome, Italy) and body compositions were analyzed. In females, the percentage of accurate prediction varied from 32.6 (World Health Organization [WHO]) to 43.5 (Molnar and Lazzer). The bias for equations was -0.2% (Kim), 3.7% (Molnar), and 22.6% (Derumeaux-Burel). Kim's (266 kcal/d), Schmelzle's (267 kcal/d), and Henry's equations (268 kcal/d) had the lowest root mean square error (RMSE; respectively 266, 267, 268 kcal/d). The equation that has the highest RMSE values among female subjects was the Derumeaux-Burel equation (394 kcal/d). In males, when the Institute of Medicine (IOM) had the lowest accurate prediction value (12.3%), the highest values were found using Schmelzle's (42.1%), Henry's (43.9%), and Müller's equations (fat-free mass, FFM; 45.6%). When Kim and Müller had the smallest bias (-0.6%, 9.9%), Schmelzle's equation had the smallest RMSE (331 kcal/d). The new specific equation based on FFM was generated as follows: REE = 451.722 + (23.202 * FFM). According to Bland-Altman plots, it has been found out that the new equations are distributed randomly in both males and females. Previously developed predictive equations mostly provided unaccurate and biased estimates of REE. However, the new predictive equations allow clinicians to estimate REE in an obese children and adolescents with sufficient and acceptable accuracy.

  14. Continent-Wide Estimates of Antarctic Strain Rates from Landsat 8-Derived Velocity Grids and Their Application to Ice Shelf Studies

    Science.gov (United States)

    Alley, K. E.; Scambos, T.; Anderson, R. S.; Rajaram, H.; Pope, A.; Haran, T.

    2017-12-01

    Strain rates are fundamental measures of ice flow used in a wide variety of glaciological applications including investigations of bed properties, calculations of basal mass balance on ice shelves, application to Glen's flow law, and many other studies. However, despite their extensive application, strain rates are calculated using widely varying methods and length scales, and the calculation details are often not specified. In this study, we compare the results of nominal and logarithmic strain-rate calculations based on a satellite-derived velocity field of the Antarctic ice sheet generated from Landsat 8 satellite data. Our comparison highlights the differences between the two commonly used approaches in the glaciological literature. We evaluate the errors introduced by each code and their impacts on the results. We also demonstrate the importance of choosing and specifying a length scale over which strain-rate calculations are made, which can have large local impacts on other derived quantities such as basal mass balance on ice shelves. We present strain-rate data products calculated using an approximate viscous length-scale with satellite observations of ice velocity for the Antarctic continent. Finally, we explore the applications of comprehensive strain-rate maps to future ice shelf studies, including investigations of ice fracture, calving patterns, and stability analyses.

  15. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows Chi-squared distribution. Furthermore, a posteriori noise variance factor is derived by the quadratic form of CVEs. In order to detect blunders in the observations, estimated standardized CVE is proposed as the test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removing outliers, the root mean square (RMS) of CVEs and estimated noise standard deviation are reduced about 51 and 59%, respectively. In addition, RMS of LSC prediction error at data points and RMS of estimated noise of observations are decreased by 39 and 67%, respectively. However, RMS of LSC prediction error on a regular grid of interpolation points covering the area is only reduced about 4% which is a consequence of sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, RMS of this type of errors is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using restricted maximum-likelihood method via Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the

  16. Differences in the validity of a visual estimation method for determining patients' meal intake between various meal types and supplied food items.

    Science.gov (United States)

    Kawasaki, Yui; Akamatsu, Rie; Tamaura, Yuki; Sakai, Masashi; Fujiwara, Keiko; Tsutsuura, Satomi

    2018-02-12

    The aim of this study was to examine differences in the validity of a visual estimation method for determining patients' meal intake between various meal types and supplied food items in hospitals and to find factors influencing the validity of a visual estimation method. There are two procedures by which we obtained the information on dietary intake of the patients in these hospitals. These are both by visual assessment from the meal trays at the time of their clearing, by the attending nursing staff and by weighing conducted by researchers. The following criteria are set for the target trays: A) standard or therapeutic meals, which are monitored by a doctor, for energy and/or protein and/or sodium; B) regular, bite-sized, minced and pureed meal texture, and C) half-portion meals. Visual assessment results were tested for their validity by comparing with the corresponding results of weighing. Differences between these two methods indicated the estimated and absolute values of nutrient intake. A total of 255 (76.1%) trays were included in the analysis out of the 335 possible trays and the results indicated that the energy consumption estimates by visual or weighing procedures are not significantly different (412 ± 173 kcal, p = 0.15). However, the mean protein consumption was significantly different (16.3 ± 6.7 g/tray, p food items were significantly misestimated for energy intake (66 ± 58 kcal/tray) compared to trays with no additions (32 ± 39 kcal/tray, p food items were significantly associated with increased odds of a difference between the two methods (OR: 3.84; 95% confidence interval [CI]: 1.07-13.85). There were high correlations between the visual estimation method and the weighing method measuring patients' dietary intake for various meal types and textures, except for meals with added supplied food items. Nursing staff need to be attentive to supplied food items. Copyright © 2018 Elsevier Ltd and European Society for Clinical

  17. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach

    Directory of Open Access Journals (Sweden)

    Nan Xu

    2017-05-01

    Full Text Available Resting-state functional MRI (rs-fMRI is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD signal from different regions of interest (ROIs. However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1 Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2 On simulated data designed to display the “common driver” problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3 On experimental data, prediction correlation recovered the previously identified network organization of human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between regions of interest ROIs based on BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain.

  18. Validity of eyeball estimation for range of motion during the cervical flexion rotation test compared to an ultrasound-based movement analysis system.

    Science.gov (United States)

    Schäfer, Axel; Lüdtke, Kerstin; Breuel, Franziska; Gerloff, Nikolas; Knust, Maren; Kollitsch, Christian; Laukart, Alex; Matej, Laura; Müller, Antje; Schöttker-Königer, Thomas; Hall, Toby

    2018-08-01

    Headache is a common and costly health problem. Although pathogenesis of headache is heterogeneous, one reported contributing factor is dysfunction of the upper cervical spine. The flexion rotation test (FRT) is a commonly used diagnostic test to detect upper cervical movement impairment. The aim of this cross-sectional study was to investigate concurrent validity of detecting high cervical ROM impairment during the FRT by comparing measurements established by an ultrasound-based system (gold standard) with eyeball estimation. Secondary aim was to investigate intra-rater reliability of FRT ROM eyeball estimation. The examiner (6 years experience) was blinded to the data from the ultrasound-based device and to the symptoms of the patients. FRT test result (positive or negative) was based on visual estimation of range of rotation less than 34° to either side. Concurrently, range of rotation was evaluated using the ultrasound-based device. A total of 43 subjects with headache (79% female), mean age of 35.05 years (SD 13.26) were included. According to the International Headache Society Classification 23 subjects had migraine, 4 tension type headache, and 16 multiple headache forms. Sensitivity and specificity were 0.96 and 0.89 for combined rotation, indicating good concurrent reliability. The area under the ROC curve was 0.95 (95% CI 0.91-0.98) for rotation to both sides. Intra-rater reliability for eyeball estimation was excellent with Fleiss Kappa 0.79 for right rotation and left rotation. The results of this study indicate that the FRT is a valid and reliable test to detect impairment of upper cervical ROM in patients with headache.

  19. Validation of Left Atrial Volume Estimation by Left Atrial Diameter from the Parasternal Long-Axis View.

    Science.gov (United States)

    Canciello, Grazia; de Simone, Giovanni; Izzo, Raffaele; Giamundo, Alessandra; Pacelli, Filomena; Mancusi, Costantino; Galderisi, Maurizio; Trimarco, Bruno; Losi, Maria-Angela

    2017-03-01

    Measurement of left atrial (LA) volume (LAV) is recommended for quantification of LA size. Only LA anteroposterior diameter (LAd) is available in a number of large cohorts, trials, or registries. The aim of this study was to evaluate whether LAV may be reasonably estimated from LAd. One hundred forty consecutive patients referred to our outpatient clinics were prospectively enrolled to measure LAd from the long-axis view on two-dimensional echocardiography. LA orthogonal dimensions were also taken from apical four- and two-chamber views. LAV was measured using the Simpson, area-length, and ellipsoid (LAV e ) methods. The first 70 patients were the learning series and the last 70 the testing series (TeS). In the learning series, best-fitting regression analysis of LAV-LAd was run using all LAV methods, and the highest values of F were chosen among the regression equations. In the TeS, the best-fitting regressions were used to estimate LAV from LAd. In the learning series, the best-fitting regression was linear for the Spearman method (r 2  = 0.62, F = 111.85, P = .0001) and area-length method (r 2  = 0.62, F = 112.24, P = .0001) and powered for the LAV e method (r 2  = 0.81, F = 288.41, P = .0001). In the TeS, the r 2 value for LAV prediction was substantially better using the LAV e method (r 2  = 0.89) than the Simpson (r 2  = 0.72) or area-length (r 2  = 0.70) method, as was the intraclass correlation (ρ = 0.96 vs ρ = 0.89 and ρ = 0.89, respectively). In the TeS, the sensitivity and specificity of LA dilatation by the estimated LAV e method were 87% and 90%, respectively. LAV can be estimated from LAd using a nonlinear equation with an elliptical model. The proposed method may be used in retrospective analysis of existing data sets in which determination of LAV was not programmed. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  20. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    International Nuclear Information System (INIS)

    Pontailler, J.-Y.; Hymus, G.J.; Drake, B.G.

    2003-01-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m 2 plots in February 2000 and two 4m 2 plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)

  1. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pontailler, J.-Y. [Univ. Paris-Sud XI, Dept. d' Ecophysiologie Vegetale, Orsay Cedex (France); Hymus, G.J.; Drake, B.G. [Smithsonian Environmental Research Center, Kennedy Space Center, Florida (United States)

    2003-06-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m{sup 2} plots in February 2000 and two 4m{sup 2} plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)

  2. Development and validation of a new technique for estimating a minimum postmortem interval using adult blow fly (Diptera: Calliphoridae) carcass attendance.

    Science.gov (United States)

    Mohr, Rachel M; Tomberlin, Jeffery K

    2015-07-01

    Understanding the onset and duration of adult blow fly activity is critical to accurately estimating the period of insect activity or minimum postmortem interval (minPMI). Few, if any, reliable techniques have been developed and consequently validated for using adult fly activity to determine a minPMI. In this study, adult blow flies (Diptera: Calliphoridae) of Cochliomyia macellaria and Chrysomya rufifacies were collected from swine carcasses in rural central Texas, USA, during summer 2008 and Phormia regina and Calliphora vicina in the winter during 2009 and 2010. Carcass attendance patterns of blow flies were related to species, sex, and oocyte development. Summer-active flies were found to arrive 4-12 h after initial carcass exposure, with both C. macellaria and C. rufifacies arriving within 2 h of one another. Winter-active flies arrived within 48 h of one another. There was significant difference in degree of oocyte development on each of the first 3 days postmortem. These frequency differences allowed a minPMI to be calculated using a binomial analysis. When validated with seven tests using domestic and feral swine and human remains, the technique correctly estimated time of placement in six trials.

  3. Validation of a novel protocol for calculating estimated energy requirements and average daily physical activity ratio for the US population: 2005-2006.

    Science.gov (United States)

    Archer, Edward; Hand, Gregory A; Hébert, James R; Lau, Erica Y; Wang, Xuewen; Shook, Robin P; Fayad, Raja; Lavie, Carl J; Blair, Steven N

    2013-12-01

    To validate the PAR protocol, a novel method for calculating population-level estimated energy requirements (EERs) and average physical activity ratio (APAR), in a nationally representative sample of US adults. Estimates of EER and APAR values were calculated via a factorial equation from a nationally representative sample of 2597 adults aged 20 and 74 years (US National Health and Nutrition Examination Survey; data collected between January 1, 2005, and December 31, 2006). Validation of the PAR protocol-derived EER (EER(PAR)) values was performed via comparison with values from the Institute of Medicine EER equations (EER(IOM)). The correlation between EER(PAR) and EER(IOM) was high (0.98; Pmen to 148 kcal/d (5.7% higher) in obese women. The 2005-2006 EERs for the US population were 2940 kcal/d for men and 2275 kcal/d for women and ranged from 3230 kcal/d in obese (BMI ≥30) men to 2026 kcal/d in normal weight (BMI women. There were significant inverse relationships between APAR and both obesity and age. For men and women, the APAR values were 1.53 and 1.52, respectively. Obese men and women had lower APAR values than normal weight individuals (P¼.023 and P¼.015, respectively) [corrected], and younger individuals had higher APAR values than older individuals (Pphysical activity and health. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  4. Convergent validity between a discrete choice experiment and a direct, open-ended method: comparison of preferred attribute levels and willingness to pay estimates.

    Science.gov (United States)

    Marjon van der Pol; Shiell, Alan; Au, Flora; Johnston, David; Tough, Suzanne

    2008-12-01

    The Discrete Choice Experiment (DCE) has become increasingly popular as a method for eliciting patient or population preferences. If DCE estimates are to inform health policy, it is crucial that the answers they provide are valid. Convergent validity is tested in this paper by comparing the results of a DCE exercise with the answers obtained from direct, open-ended questions. The two methods are compared in terms of preferred attribute levels and willingness to pay (WTP) values. Face-to-face interviews were held with 292 women in Calgary, Canada. Similar values were found between the two methods with respect to preferred levels for two out of three of the attributes examined. The DCE predicted less well for levels outside the range than for levels inside the range reaffirming the importance of extensive piloting to ensure appropriate level range in DCEs. The mean WTP derived from the open-ended question was substantially lower than the mean derived from the DCE. However, the two sets of willingness to pay estimates were consistent with each other in that individuals who were willing to pay more in the open-ended question were also willing to pay more in the DCE. The difference in mean WTP values between the two approaches (direct versus DCE) demonstrates the importance of continuing research into the different biases present across elicitation methods.

  5. Reproducibility and validity of the food frequency questionnaire for estimating habitual dietary intake in children and adolescents

    Science.gov (United States)

    2011-01-01

    Background A previous study reported the development a 75-item food frequency questionnaire for Japanese children (CFFQ). The first aim was to examine the reproducibility and validity of the CFFQ in order to assess dietary intake among two groups; 3-11 year old children (YC group) and 12-16 year old children (AD group). The second aim was to use the CFFQ and the FFQ for adults (AFFQ), and to determine which was better suited for assessing the intake of children in each group. Methods A total of the 103 children participated in this study. The interval between the first CFFQ and AFFQ and the second CFFQ and AFFQ was one month. Four weighted dietary records (WDRs) were conducted once a week. Pearson's correlation coefficients between the first and second FFQs were calculated to test the reproducibility of each FFQ. Pearson's correlation coefficients between WDRs and the second FFQ were calculated for the unadjusted value and sex-, age-, and energy-adjusted values to determine the validity of each FFQ. Results The final number of subjects participating in the analysis was 89. The median correlation coefficients between the first and second CFFQs and AFFQs were 0.76 and 0.73, respectively. There was some over/underestimation of nutrients in the CFFQ of the YC group and in the AFFQ of the AD group. The medians of the sex-, age-, and energy-adjusted correlation coefficients were not different between the YC and AD groups for each FFQ. The correlation coefficient in sex-, age-, and energy-adjusted value revealed that the largest number of subject with high (0.50 or more) value was obtained by the CFFQ in the YC group. Conclusions This study indicated that the CFFQ might be a useful tool for assessing habitual dietary intake of children in the YC group. Although the CFFQ agreed moderately with habitual intake, it was found to underestimate intake in theAD group. However, for the AFFQ, the ability to rank habitual intake was low. Therefore, it is necessary to develop a new

  6. Validation of an extraction paper chromatography (EPC) technique for estimation of trace levels of 90Sr in 90Y solutions obtained from 90Sr/90Y generator systems

    International Nuclear Information System (INIS)

    Usha Pandey; Yogendra Kumar; Ashutosh Dash

    2014-01-01

    While the extraction paper chromatography (EPC) technique constitutes a novel paradigm for the determination of few Becquerels of 90 Sr in MBq quantities of 90 Y obtained from 90 Sr/ 90 Y generator, validation of the technique is essential to ensure its usefulness as a real time analytical tool. With a view to explore the relevance and applicability of EPC technique as a real time quality control (QC) technique for the routine estimation of 90 Sr content in generator produced 90 Y, a systematic validation study was carried out diligently not only to establish its worthiness but also to broaden its horizon. The ability of the EPC technique to separate trace amounts of Sr 2+ in the presence of large amounts of Y 3+ was verified. The specificity of the technique for Y 3+ was demonstrated with 90 Y obtained by neutron irradiation. The method was validated under real experimental conditions and compared with a QC method described in US Pharmacopeia for detection of 90 Sr levels in 90 Y radiopharmaceuticals. (author)

  7. Response of insect relative growth rate to temperature and host-plant phenology: estimation and validation from field data.

    Directory of Open Access Journals (Sweden)

    Mamadou Ciss

    Full Text Available Between 1975 to 2011, aphid Relative Growth Rates (RGR were modelled as a function of mean outdoor temperature and host plant phenology. The model was applied to the grain aphid Sitobion avenae using data on aphid counts in winter wheat at two different climate regions in France (oceanic climate, Rennes (western France; continental climate, Paris. Mean observed aphid RGR was higher in Paris compared to the Rennes region. RGR increased with mean temperature, which is explained by aphid reproduction, growth and development being dependent on ambient temperature. From the stem extension to the heading stage in wheat, there was either a plateau in RGR values (Rennes or an increase with a maximum at heading (Paris due to high intrinsic rates of increase in aphids and also to aphid immigration. From the wheat flowering to the ripening stage, RGR decreased in both regions due to the low intrinsic rate of increase in aphids and high emigration rate linked to reduced nutrient quality in maturing wheat. The model validation process showed that the fitted models have more predictive power in the Paris region than in the Rennes region.

  8. Validating automated kidney stone volumetry in computed tomography and mathematical correlation with estimated stone volume based on diameter.

    Science.gov (United States)

    Wilhelm, Konrad; Miernik, Arkadiusz; Hein, Simon; Schlager, Daniel; Adams, Fabian; Benndorf, Matthias; Fritz, Benjamin; Langer, Mathias; Hesse, Albrecht; Schoenthaler, Martin; Neubauer, Jakob

    2018-06-02

    To validate AutoMated UroLithiasis Evaluation Tool (AMULET) software for kidney stone volumetry and compare its performance to standard clinical practice. Maximum diameter and volume of 96 urinary stones were measured as reference standard by three independent urologists. The same stones were positioned in an anthropomorphic phantom and CT scans acquired in standard settings. Three independent radiologists blinded to the reference values took manual measurements of the maximum diameter and automatic measurements of maximum diameter and volume. An "expected volume" was calculated based on manual diameter measurements using the formula: V=4/3 πr³. 96 stones were analyzed in the study. We had initially aimed to assess 100. Nine were replaced during data acquisition due of crumbling and 4 had to be excluded because the automated measurement did not work. Mean reference maximum diameter was 13.3 mm (5.2-32.1 mm). Correlation coefficients among all measured outcomes were compared. The correlation between the manual and automatic diameter measurements to the reference was 0.98 and 0.91, respectively (pvolumetry is possible and significantly more accurate than diameter-based volumetric calculations. To avoid bias in clinical trials, size should be measured as volume. However, automated diameter measurements are not as accurate as manual measurements.

  9. Validation of Satellite Estimates (Tropical Rainfall Measuring Mission, TRMM for Rainfall Variability over the Pacific Slope and Coast of Ecuador

    Directory of Open Access Journals (Sweden)

    Bolívar Erazo

    2018-02-01

    Full Text Available A dense rain-gauge network within continental Ecuador was used to evaluate the quality of various products of rainfall data over the Pacific slope and coast of Ecuador (EPSC. A cokriging interpolation method is applied to the rain-gauge data yielding a gridded product at 5-km resolution covering the period 1965–2015. This product is compared with the Global Precipitation Climatology Centre (GPCC dataset, the Climatic Research Unit–University of East Anglia (CRU dataset, the Tropical Rainfall Measuring Mission (TRMM/TMPA 3B43 Version 7 dataset and the ERA-Interim Reanalysis. The analysis reveals that TRMM data show the most realistic features. The relative bias index (Rbias indicates that TRMM data is closer to the observations, mainly over lowlands (mean Rbias of 7% but have more limitations in reproducing the rainfall variability over the Andes (mean Rbias of −28%. The average RMSE and Rbias of 68.7 and −2.8% of TRMM are comparable with the GPCC (69.8 and 5.7% and CRU (102.3 and −2.3% products. This study also focuses on the rainfall inter-annual variability over the study region which experiences floods that have caused high economic losses during extreme El Niño events. Finally, our analysis evaluates the ability of TRMM data to reproduce rainfall events during El Niño years over the study area and the large basins of Esmeraldas and Guayas rivers. The results show that TRMM estimates report reasonable levels of heavy rainfall detection (for the extreme 1998 El Niño event over the EPSC and specifically towards the center-south of the EPSC (Guayas basin but present underestimations for the moderate El Niño of 2002–2003 event and the weak 2009–2010 event. Generally, the rainfall seasonal features, quantity and long-term climatology patterns are relatively well estimated by TRMM.

  10. Enhancing the Simplified Surface Energy Balance (SSEB) Approach for Estimating Landscape ET: Validation with the METRIC model

    Science.gov (United States)

    Senay, Gabriel B.; Budde, Michael E.; Verdin, James P.

    2011-01-01

    Evapotranspiration (ET) can be derived from satellite data using surface energy balance principles. METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) is one of the most widely used models available in the literature to estimate ET from satellite imagery. The Simplified Surface Energy Balance (SSEB) model is much easier and less expensive to implement. The main purpose of this research was to present an enhanced version of the Simplified Surface Energy Balance (SSEB) model and to evaluate its performance using the established METRIC model. In this study, SSEB and METRIC ET fractions were compared using 7 Landsat images acquired for south central Idaho during the 2003 growing season. The enhanced SSEB model compared well with the METRIC model output exhibiting an r2 improvement from 0.83 to 0.90 in less complex topography (elevation less than 2000 m) and with an improvement of r2 from 0.27 to 0.38 in more complex (mountain) areas with elevation greater than 2000 m. Independent evaluation showed that both models exhibited higher variation in complex topographic regions, although more with SSEB than with METRIC. The higher ET fraction variation in the complex mountainous regions highlighted the difficulty of capturing the radiation and heat transfer physics on steep slopes having variable aspect with the simple index model, and the need to conduct more research. However, the temporal consistency of the results suggests that the SSEB model can be used on a wide range of elevation (more successfully up 2000 m) to detect anomalies in space and time for water resources management and monitoring such as for drought early warning systems in data scarce regions. SSEB has a potential for operational agro-hydrologic applications to estimate ET with inputs of surface temperature, NDVI, DEM and reference ET.

  11. Strain measurement based battery testing

    Science.gov (United States)

    Xu, Jeff Qiang; Steiber, Joe; Wall, Craig M.; Smith, Robert; Ng, Cheuk

    2017-05-23

    A method and system for strain-based estimation of the state of health of a battery, from an initial state to an aged state, is provided. A strain gauge is applied to the battery. A first strain measurement is performed on the battery, using the strain gauge, at a selected charge capacity of the battery and at the initial state of the battery. A second strain measurement is performed on the battery, using the strain gauge, at the selected charge capacity of the battery and at the aged state of the battery. The capacity degradation of the battery is estimated as the difference between the first and second strain measurements divided by the first strain measurement.

  12. Development and validation of risk prediction equations to estimate future risk of blindness and lower limb amputation in patients with diabetes: cohort study.

    Science.gov (United States)

    Hippisley-Cox, Julia; Coupland, Carol

    2015-11-11

    Is it possible to develop and externally validate risk prediction equations to estimate the 10 year risk of blindness and lower limb amputation in patients with diabetes aged 25-84 years? This was a prospective cohort study using routinely collected data from general practices in England contributing to the QResearch and Clinical Practice Research Datalink (CPRD) databases during the study period 1998-2014. The equations were developed using 763 QResearch practices (n=454,575 patients with diabetes) and validated in 254 different QResearch practices (n=142,419) and 357 CPRD practices (n=206,050). Cox proportional hazards models were used to derive separate risk equations for blindness and amputation in men and women that could be evaluated at 10 years. Measures of calibration and discrimination were calculated in the two validation cohorts. Risk prediction equations to quantify absolute risk of blindness and amputation in men and women with diabetes have been developed and externally validated. In the QResearch derivation cohort, 4822 new cases of lower limb amputation and 8063 new cases of blindness occurred during follow-up. The risk equations were well calibrated in both validation cohorts. Discrimination was good in men in the external CPRD cohort for amputation (D statistic 1.69, Harrell's C statistic 0.77) and blindness (D statistic 1.40, Harrell's C statistic 0.73), with similar results in women and in the QResearch validation cohort. The algorithms are based on variables that patients are likely to know or that are routinely recorded in general practice computer systems. They can be used to identify patients at high risk for prevention or further assessment. Limitations include lack of formally adjudicated outcomes, information bias, and missing data. Patients with type 1 or type 2 diabetes are at increased risk of blindness and amputation but generally do not have accurate assessments of the magnitude of their individual risks. The new algorithms calculate

  13. Development and validation of an HPTLC method for the simultaneous estimation of Clonazepam and Paroxetine hydrochloride using a DOE approach

    Directory of Open Access Journals (Sweden)

    Purvi Shah

    2017-01-01

    Full Text Available The present study examines simultaneous multiple response optimization using Derringer's desirability function for the development of an HPTLC method to detect Clonazepam and Paroxetine hydrochloride in pharmaceutical dosage form. Central composite design (CCD was used to optimize the chromatographic conditions for HPTLC. The independent variables used for the optimization were the n-butanol content in the mobile phase, the chamber saturation time and the distance travelled. HPTLC separation was performed on aluminium plates pre-coated with silica gel 60 F254 as the stationary phase using n-butanol:glacial acetic acid:water (9:2:0.5% v/v/v as the mobile phase. Quantification was achieved based on a densitometric analysis of Clonazepam and Paroxetine hydrochloride over the concentration range of 40–240 ng/band and 300–1800 ng/band, respectively, at 288 nm. The method yielded compact and well-resolved bands at Rf of 0.77 ± 0.02 and 0.34 ± 0.02 for Clonazepam and Paroxetine hydrochloride, respectively. The linear regression analysis for the calibration plots produced r2 = 0.9958 and r2 = 0.9989 for Clonazepam and Paroxetine hydrochloride, respectively. The precision, accuracy, robustness, specificity, limit of detection and limit of quantitation of the method were validated according to the ICH guidelines. The factors evaluated in the robustness test were determined to have an insignificant effect on the selected responses. The results indicate that the method is suitable for the routine quality control testing of marketed tablet formulations.

  14. Development and validation of spectrophotometric methods for simultaneous estimation of citicoline and piracetam in tablet dosage form

    Directory of Open Access Journals (Sweden)

    Akhila Sivadas

    2013-01-01

    Full Text Available Context: Citicoline (CN and piracetam (PM combination in tablet formulation is newly introduced in market. It is necessary to develop suitable quality control methods for rapid and accurate determination of these drugs. Aim: The study aimed to develop the methods for simultaneous determination of CN and PM in combined dosage form. Materials and Methods: The first method was developed by formation and solving simultaneous equations using 280.3 and 264.1 nm as two analytical wavelengths. Second method was absorbance ratio in which wavelengths selected were 256.6 nm as its absorptive point and 280.3 nm as λmax of CN. According to International Conference on Harmonization (ICH norm, the parameters - linearity, precision, and accuracy were studied. The methods were validated statistically and by recovery studies. Results: Both the drugs obeyed Beer-Lambert′s law at the selected wavelengths in concentration range of 5-13 μg/ml for CN and 10-22 μg/ml for PM. The percentage of CN and PM in marketed tablet formulation was found to be 99.006 ± 0.173 and 99.257 ± 0.613, respectively; by simultaneous equation method. For Q-Absorption ratio method the percentage of CN and PM was found to be 99.078 ± 0.158 and 99.708 ± 0.838, respectively. Conclusions: The proposed methods were simple, reproducible, precise and robust. The methods can be successfully applied for routine analysis of tablets.

  15. Validating a High Performance Liquid Chromatography-Ion Chromatography (HPLC-IC) Method with Conductivity Detection After Chemical Suppression for Water Fluoride Estimation.

    Science.gov (United States)

    Bondu, Joseph Dian; Selvakumar, R; Fleming, Jude Joseph

    2018-01-01

    A variety of methods, including the Ion Selective Electrode (ISE), have been used for estimation of fluoride levels in drinking water. But as these methods suffer many drawbacks, the newer method of IC has replaced many of these methods. The study aimed at (1) validating IC for estimation of fluoride levels in drinking water and (2) to assess drinking water fluoride levels of villages in and around Vellore district using IC. Forty nine paired drinking water samples were measured using ISE and IC method (Metrohm). Water samples from 165 randomly selected villages in and around Vellore district were collected for fluoride estimation over 1 year. Standardization of IC method showed good within run precision, linearity and coefficient of variance with correlation coefficient R 2  = 0.998. The limit of detection was 0.027 ppm and limit of quantification was 0.083 ppm. Among 165 villages, 46.1% of the villages recorded water fluoride levels >1.00 ppm from which 19.4% had levels ranging from 1 to 1.5 ppm, 10.9% had recorded levels 1.5-2 ppm and about 12.7% had levels of 2.0-3.0 ppm. Three percent of villages had more than 3.0 ppm fluoride in the water tested. Most (44.42%) of these villages belonged to Jolarpet taluk with moderate to high (0.86-3.56 ppm) water fluoride levels. Ion Chromatography method has been validated and is therefore a reliable method in assessment of fluoride levels in the drinking water. While the residents of Jolarpet taluk (Vellore distict) are found to be at a high risk of developing dental and skeletal fluorosis.

  16. Part II: Strain- and sex-specific effects of adolescent exposure to THC on adult brain and behaviour: Variants of learning, anxiety and volumetric estimates.

    Science.gov (United States)

    Keeley, R J; Trow, J; Bye, C; McDonald, R J

    2015-07-15

    Marijuana is one of the most highly used psychoactive substances in the world, and its use typically begins during adolescence, a period of substantial brain development. Females across species appear to be more susceptible to the long-term consequences of marijuana use. Despite the identification of inherent differences between rat strains including measures of anatomy, genetics and behaviour, no studies to our knowledge have examined the long-term consequences of adolescent exposure to marijuana or its main psychoactive component, Δ(9)-tetrahydrocannabinol (THC), in males and females of two widely used rat strains: Long-Evans hooded (LER) and Wistar (WR) rats. THC was administered for 14 consecutive days following puberty onset, and once they reached adulthood, changes in behaviour and in the volume of associated brain areas were quantified. Rats were assessed in behavioural tests of motor, spatial and contextual learning, and anxiety. Some tasks showed effects of injection, since handled and vehicle groups were included as controls. Performance on all tasks, except motor learning, and the volume of associated brain areas were altered with injection or THC administration, although these effects varied by strain and sex group. Finally, analysis revealed treatment-specific correlations between performance and brain volumes. This study is the first of its kind to directly compare males and females of two rat strains for the long-term consequences of adolescent THC exposure. It highlights the importance of considering strain and identifies certain rat strains as susceptible or resilient to the effects of THC. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. A New Model of the Mean Albedo of the Earth: Estimation and Validation from the GRACE Mission and SLR Satellites.

    Science.gov (United States)

    Deleflie, F.; Sammuneh, M. A.; Coulot, D.; Pollet, A.; Biancale, R.; Marty, J. C.

    2017-12-01

    This talk provides new results of a study that we began last year, and that was the subject of a poster by the same authors presented during AGU FM 2016, entitled « Mean Effect of the Albedo of the Earth on Artificial Satellite Trajectories: an Update Over 2000-2015. »The emissivity of the Earth, split into a part in the visible domain (albedo) and the infrared domain (thermic emissivity), is at the origin of non gravitational perturbations on artificial satellite trajectories. The amplitudes and periods of these perturbations can be investigated if precise orbits can be carried out, and reveal some characteristics of the space environment where the satellite is orbiting. Analyzing the perturbations is, hence, a way to characterize how the energy from the Sun is re-emitted by the Earth. When led over a long period of time, such an approach enables to quantify the variations of the global radiation budget of the Earth.Additionally to the preliminary results presented last year, we draw an assessment of the validity of the mean model based on the orbits of the GRACE missions, and, to a certain extent, of some of the SLR satellite orbits. The accelerometric data of the GRACE satellites are used to evaluate the accuracy of the models accounting for non gravitational forces, and the ones induced by the albedo and the thermic emissivity in particular. Three data sets are used to investigate the mean effects on the orbit perturbations: Stephens tables (Stephens, 1980), ECMWF (European Centre for Medium-Range Weather Forecasts) data sets and CERES (Clouds and the Earth's Radiant Energy System) data sets (publickly available). From the trajectography point of view, based on post-fit residual analysis, we analyze what is the data set leading to the lowest residual level, to define which data set appears to be the most suitable one to derive a new « mean albedo model » from accelerometric data sets of the GRACE mission. The period of investigation covers the full GRACE

  18. Estimation of coolant void reactivity for CANDU-NG lattice using DRAGON and validation using MCNP5 and TRIPOLI-4.3

    International Nuclear Information System (INIS)

    Karthikeyan, R.; Tellier, R. L.; Hebert, A.

    2006-01-01

    The Coolant Void Reactivity (CVR) is an important safety parameter that needs to be estimated at the design stage of a nuclear reactor. It helps to have an a priori knowledge of the behavior of the system during a transient initiated by the loss of coolant. In the present paper, we have attempted to estimate the CVR for a CANDU New Generation (CANDU-NG) lattice, as proposed at an early stage of the Advanced CANDU Reactor (ACR) development. We have attempted to estimate the CVR with development version of the code DRAGON, using the method of characteristics. DRAGON has several advanced self-shielding models incorporated in it, each of them compatible with the method of characteristics. This study will bring to focus the performance of these self-shielding models, especially when there is voiding of such a tight lattice. We have also performed assembly calculations in 2 x 2 pattern for the CANDU-NG fuel, with special emphasis on checkerboard voiding. The results obtained have been validated against Monte Carlo codes MCNP5 and TRIPOLI-4.3. (authors)

  19. External validation of equations to estimate resting energy expenditure in 14952 adults with overweight and obesity and 1948 adults with normal weight from Italy.

    Science.gov (United States)

    Bedogni, Giorgio; Bertoli, Simona; Leone, Alessandro; De Amicis, Ramona; Lucchetti, Elisa; Agosti, Fiorenza; Marazzi, Nicoletta; Battezzati, Alberto; Sartorio, Alessandro

    2017-11-24

    We cross-validated 28 equations to estimate resting energy expenditure (REE) in a very large sample of adults with overweight or obesity. 14952 Caucasian men and women with overweight or obesity and 1498 with normal weight were studied. REE was measured using indirect calorimetry and estimated using two meta-regression equations and 26 other equations. The correct classification fraction (CCF) was defined as the fraction of subjects whose estimated REE was within 10% of measured REE. The highest CCF was 79%, 80%, 72%, 64%, and 63% in subjects with normal weight, overweight, class 1 obesity, class 2 obesity, and class 3 obesity, respectively. The Henry weight and height and Mifflin equations performed equally well with CCFs of 77% vs. 77% for subjects with normal weight, 80% vs. 80% for those with overweight, 72% vs. 72% for those with class 1 obesity, 64% vs. 63% for those with class 2 obesity, and 61% vs. 60% for those with class 3 obesity. The Sabounchi meta-regression equations offered an improvement over the above equations only for class 3 obesity (63%). The accuracy of REE equations decreases with increasing values of body mass index. The Henry weight & height and Mifflin equations are similarly accurate and the Sabounchi equations offer an improvement only in subjects with class 3 obesity. Copyright © 2017 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  20. Online dietary intake estimation: reproducibility and validity of the Food4Me food frequency questionnaire against a 4-day weighed food record.

    Science.gov (United States)

    Fallaize, Rosalind; Forster, Hannah; Macready, Anna L; Walsh, Marianne C; Mathers, John C; Brennan, Lorraine; Gibney, Eileen R; Gibney, Michael J; Lovegrove, Julie A

    2014-08-11

    Advances in nutritional assessment are continuing to embrace developments in computer technology. The online Food4Me food frequency questionnaire (FFQ) was created as an electronic system for the collection of nutrient intake data. To ensure its accuracy in assessing both nutrient and food group intake, further validation against data obtained using a reliable, but independent, instrument and assessment of its reproducibility are required. The aim was to assess the reproducibility and validity of the Food4Me FFQ against a 4-day weighed food record (WFR). Reproducibility of the Food4Me FFQ was assessed using test-retest methodology by asking participants to complete the FFQ on 2 occasions 4 weeks apart. To assess the validity of the Food4Me FFQ against the 4-day WFR, half the participants were also asked to complete a 4-day WFR 1 week after the first administration of the Food4Me FFQ. Level of agreement between nutrient and food group intakes estimated by the repeated Food4Me FFQ and the Food4Me FFQ and 4-day WFR were evaluated using Bland-Altman methodology and classification into quartiles of daily intake. Crude unadjusted correlation coefficients were also calculated for nutrient and food group intakes. In total, 100 people participated in the assessment of reproducibility (mean age 32, SD 12 years), and 49 of these (mean age 27, SD 8 years) also took part in the assessment of validity. Crude unadjusted correlations for repeated Food4Me FFQ ranged from .65 (vitamin D) to .90 (alcohol). The mean cross-classification into "exact agreement plus adjacent" was 92% for both nutrient and food group intakes, and Bland-Altman plots showed good agreement for energy-adjusted macronutrient intakes. Agreement between the Food4Me FFQ and 4-day WFR varied, with crude unadjusted correlations ranging from .23 (vitamin D) to .65 (protein, % total energy) for nutrient intakes and .11 (soups, sauces and miscellaneous foods) to .73 (yogurts) for food group intake. The mean cross
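    For readers who wish to reproduce the agreement analysis described above, here is a minimal sketch of the Bland-Altman bias and 95% limits of agreement between two paired intake estimates; the arrays are illustrative values, not Food4Me data.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return bias and 95% limits of agreement between two paired measurements."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b                      # per-subject difference (e.g. FFQ - WFR)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

if __name__ == "__main__":
    # Illustrative energy-adjusted intakes (not study data)
    ffq = [210.0, 185.0, 250.0, 300.0, 175.0]
    wfr = [200.0, 190.0, 230.0, 320.0, 180.0]
    bias, lo, hi = bland_altman(ffq, wfr)
    print(f"bias = {bias:.1f}, 95% limits of agreement = [{lo:.1f}, {hi:.1f}]")
```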

  1. Hunger and thirst numeric rating scales are not valid estimates for gastric content volumes: a prospective investigation in healthy children.

    Science.gov (United States)

    Buehrer, Sabin; Hanke, Ursula; Klaghofer, Richard; Fruehauf, Melanie; Weiss, Markus; Schmitz, Achim

    2014-03-01

    A rating scale for thirst and hunger was evaluated as a noninvasive, simple and commonly available tool to estimate preanesthetic gastric volume, a surrogate parameter for the risk of perioperative pulmonary aspiration, in healthy volunteer school age children. Numeric scales with scores from 0 to 10 combined with smileys to rate thirst and hunger were analyzed and compared with residual gastric volumes as measured by magnetic resonance imaging and fasting times in three settings: before and for 2 h after drinking clear fluid (group A, 7 ml/kg), before and for 4 vs 6 h after a light breakfast followed by clear fluid (7 ml/kg) after 2 vs 4 h (crossover, group B), and before and for 1 h after drinking clear fluid (crossover, group C, 7 vs 3 ml/kg). In 30 children aged 6.4-12.8 (median 9.8) years, participating on 1-5 (median two) study days, 496 sets of scores and gastric volumes were determined. Large inter- and intra-individual variations were seen at baseline and in response to fluid and food intake. Significant correlations were found between hunger and thirst ratings in all groups, with children generally being more hungry than thirsty. Correlations between scores and duration of fasting or gastric residual volumes were poor to moderate. Receiver operating characteristic (ROC) analysis revealed that thirst and hunger rating scales cannot predict gastric content. Hunger and thirst scores vary considerably inter- and intra-individually and cannot predict gastric volume, nor do they correlate with fasting times in school age children. © 2013 John Wiley & Sons Ltd.

  2. Estimation of the crystallographic strain limit during the reversible β ⇄ α″ martensitic transformation in titanium shape memory alloys

    Science.gov (United States)

    Zhukova, Yu. S.; Petrzhik, M. I.; Prokoshkin, S. D.

    2010-11-01

    Three methods are described to calculate the crystallographic strain limit that is determined by the maximum deformation of the crystal lattice in the reversible βbcc ⇄ α″orth martensitic transformation and ensures pseudoelastic deformation accumulation and shape recovery in Ti-Nb-Ta alloys.

  3. Attempt to validate breakpoint MIC values estimated from pharmacokinetic data obtained during oxolinic acid therapy of winter ulcer disease in Atlantic salmon ( Salmo salar )

    DEFF Research Database (Denmark)

    Coyne, R.; Bergh, Ø.; Samuelsen, O.

    2004-01-01

    Concentrations of oxolinic acid (OXA) were measured in the plasma, muscle, liver, and kidney of 48 Atlantic salmons (Salmo salar) 1 day after the end of an oral administration. OXA was administered over a period of 13 days to control an outbreak of winter ulcer disease in a commercial marine farm...... administration of OXA. A numerical description of the concentration of the antimicrobial agent achieved in therapy is necessary to determine the resistance or sensitivity of the bacteria involved in the infection. The degree of fish-to-fish variation in the concentrations of OXA, both within the healthy fish...... a useful parameter for describing the concentrations of agents achieved during therapy. The plasma data from this investigation were used to estimate clinically relevant breakpoint minimum inhibitory concentration (MIC) values. The validity of these breakpoint values was discussed with reference...

  4. Estimation and Validation of RapidEye-Based Time-Series of Leaf Area Index for Winter Wheat in the Rur Catchment (Germany

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2015-03-01

    Full Text Available Leaf Area Index (LAI) is an important variable for numerous processes in various disciplines of bio- and geosciences. In situ measurements are the most accurate source of LAI among the LAI measuring methods, but they have the limitation of being labor intensive and site specific. For spatially explicit applications (from regional to continental scales), satellite remote sensing is a promising source for obtaining LAI at different spatial resolutions. However, satellite-derived LAI measurements using empirical models require calibration and validation with in situ measurements. In this study, we attempted to validate a direct LAI retrieval method from remotely sensed images (RapidEye) against in situ LAI (LAIdestr). Remote sensing LAI (LAIrapideye) were derived using different vegetation indices, namely SAVI (Soil Adjusted Vegetation Index) and NDVI (Normalized Difference Vegetation Index). Additionally, the applicability of the newly available red-edge band (RE) was also analyzed through the Normalized Difference Red-Edge index (NDRE) and the Soil Adjusted Red-Edge index (SARE). The LAIrapideye obtained from vegetation indices with the red-edge band showed better correlation with LAIdestr (r = 0.88 and Root Mean Square Deviation, RMSD = 1.01 & 0.92). This study also investigated the need to apply radiometric/atmospheric correction methods to the time series of RapidEye Level 3A data prior to LAI estimation. Analysis of the RapidEye Level 3A data set showed that application of the radiometric/atmospheric correction did not improve the correlation of the estimated LAI with in situ LAI.
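    The vegetation indices named above have standard band-ratio definitions; the sketch below computes them from red, red-edge and near-infrared reflectances. The soil adjustment factor L = 0.5 and the exponential index-to-LAI model are assumptions for illustration, not the calibration used in the study.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    # Soil Adjusted Vegetation Index; L = 0.5 is the common default
    return (1.0 + L) * (nir - red) / (nir + red + L)

def ndre(nir, red_edge):
    # Normalized Difference Red-Edge index
    return (nir - red_edge) / (nir + red_edge)

if __name__ == "__main__":
    # Illustrative surface reflectances for a few pixels (not RapidEye data)
    red = np.array([0.05, 0.08, 0.12])
    red_edge = np.array([0.15, 0.20, 0.22])
    nir = np.array([0.45, 0.40, 0.30])

    for name, vi in [("NDVI", ndvi(nir, red)),
                     ("SAVI", savi(nir, red)),
                     ("NDRE", ndre(nir, red_edge))]:
        print(name, np.round(vi, 3))

    # Hypothetical empirical model LAI = a * exp(b * VI); a and b would be
    # fitted against destructive in situ LAI, the values here are placeholders.
    a, b = 0.3, 3.0
    print("LAI estimate from NDRE:", np.round(a * np.exp(b * ndre(nir, red_edge)), 2))
```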

  5. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo imaging of cancer. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a

  6. Validating the InterVA model to estimate the burden of mortality from verbal autopsy data: a population-based cross-sectional study.

    Directory of Open Access Journals (Sweden)

    Sebsibe Tadesse

    Full Text Available BACKGROUND: In countries with incomplete or no vital registration systems, verbal autopsy data are often reviewed by physicians in order to assign the probable cause of death. But in addition to being time- and energy-consuming, the method is liable to produce inconsistent results. The aim of this study is to validate the InterVA model for estimating the burden of mortality from verbal autopsy data by using physician review as a reference standard. METHODS AND FINDINGS: A population-based cross-sectional study was conducted from March to April, 2012. All adults aged ≥14 years who died between 01 January, 2010 and 15 February, 2012 were included in the study. The verbal autopsy interviews were reviewed by the InterVA model and physicians to estimate cause-specific mortality fractions. Cohen's kappa statistic, sensitivity, specificity, positive predictive value, and negative predictive value were applied to compare the agreement between the InterVA model and the physician review. A total of 408 adult deaths were studied. There was a general similarity and only slight differences between the InterVA model and the physicians in assigning cause-specific mortality. Both approaches showed an overall agreement in 298 (73%) cases [kappa = 0.49, 95% CI: 0.37-0.60]. The observed sensitivities and specificities across cause-of-death categories varied from 13.3% to 81.9% and 77.7% to 99.5%, respectively. CONCLUSIONS: In understanding the burden of disease and setting health intervention priorities in areas that lack reliable vital registration systems, an accurate analysis of verbal autopsies is essential. Therefore, users should be aware of the suboptimal performance of the InterVA model. Similar validation studies need to be undertaken considering the limitation of the physician review as a gold standard, since physicians may misinterpret some of the verbal autopsy data and reach a wrong conclusion about the cause of death.
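    A minimal sketch of the agreement statistics used above (Cohen's kappa, plus sensitivity and specificity for a single cause-of-death category), assuming two aligned lists of assigned causes; the labels are invented examples, not study data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters assigning categorical labels to the same cases."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / n**2
    return (observed - expected) / (1.0 - expected)

def sens_spec(labels_test, labels_ref, positive):
    """Sensitivity and specificity of the test method for one cause category."""
    tp = sum(t == positive and r == positive for t, r in zip(labels_test, labels_ref))
    fn = sum(t != positive and r == positive for t, r in zip(labels_test, labels_ref))
    tn = sum(t != positive and r != positive for t, r in zip(labels_test, labels_ref))
    fp = sum(t == positive and r != positive for t, r in zip(labels_test, labels_ref))
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    physician = ["TB", "HIV", "injury", "TB", "other", "HIV"]   # invented examples
    interva   = ["TB", "HIV", "other",  "TB", "other", "other"]
    print("kappa:", round(cohens_kappa(physician, interva), 2))
    print("sens/spec for TB:", sens_spec(interva, physician, "TB"))
```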

  7. Validating polyphenol intake estimates from a food-frequency questionnaire by using repeated 24-h dietary recalls and a unique method-of-triads approach with 2 biomarkers.

    Science.gov (United States)

    Burkholder-Cooley, Nasira M; Rajaram, Sujatha S; Haddad, Ella H; Oda, Keiji; Fraser, Gary E; Jaceldo-Siegl, Karen

    2017-03-01

    Background: The assessment of polyphenol intake in free-living subjects is challenging, mostly because of the difficulty in accurately measuring phenolic content and the wide presence of phenolics in foods. Objective: The aims of this study were to evaluate the validity of polyphenol intake estimated from food-frequency questionnaires (FFQs) by using the mean of 6 measurements of a 24-h dietary recall (24-HR) as a reference and to apply a unique method-of-triads approach to assess validity coefficients (VCs) between latent "true" dietary estimates, total urinary polyphenol (TUP) excretion, and a surrogate biomarker (plasma carotenoids). Design: Dietary intake data from 899 adults of the Adventist Health Study 2 (AHS-2; 43% blacks and 67% women) were obtained. Pearson correlation coefficients ( r ), corrected for attenuation from within-person variation in the recalls, were calculated between 24-HRs and FFQs and between 24-HRs and TUPs. VCs and 95% CIs between true intake and polyphenol intakes from FFQs, 24-HRs, and the biomarkers TUPs and plasma carotenoids were calculated. Results: Mean ± SD polyphenol intakes were 717 ± 646 mg/d from FFQs and 402 ± 345 mg/d from 24-HRs. The total polyphenol intake from 24-HRs was correlated with FFQs in crude ( r = 0.51, P < 0.001) and deattenuated ( r = 0.63; 95% CI: 0.61, 0.69) models . In the triad model, the VC between the FFQs and theoretical true intake was 0.46 (95% CI: 0.20, 0.93) and between 24-HRs and true intake was 0.61 (95% CI: 0.38, 1.00). Conclusions: The AHS-2 FFQ is a reasonable indicator of total polyphenol intake in the AHS-2 cohort. Urinary polyphenol excretion is limited by genetic variance, metabolism, and bioavailability and should be used in addition to rather than as a replacement for dietary intake assessment. © 2017 American Society for Nutrition.
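    The method-of-triads validity coefficients referred to above follow a standard algebraic form based on the three pairwise correlations, under the usual assumption of independent errors between methods; the sketch below implements it with placeholder correlations, not the published estimates.

```python
import math

def triad_validity_coefficients(r_qr, r_qm, r_rm):
    """
    Method-of-triads validity coefficients for a questionnaire (Q), a reference
    method (R) and a biomarker (M), assuming independent errors between methods.
    Inputs are the pairwise correlations Q-R, Q-M and R-M.
    """
    vc_q = math.sqrt(r_qr * r_qm / r_rm)   # FFQ vs. latent "true" intake
    vc_r = math.sqrt(r_qr * r_rm / r_qm)   # reference (e.g. 24-HR) vs. true intake
    vc_m = math.sqrt(r_qm * r_rm / r_qr)   # biomarker vs. true intake
    return vc_q, vc_r, vc_m

if __name__ == "__main__":
    # Placeholder pairwise correlations (Q-R, Q-M, R-M), not the study's values
    vc_q, vc_r, vc_m = triad_validity_coefficients(0.60, 0.25, 0.30)
    print(f"VC(FFQ) = {vc_q:.2f}, VC(24-HR) = {vc_r:.2f}, VC(biomarker) = {vc_m:.2f}")
```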

  8. Development and Validation of a Rapid RP-UPLC Method for the Simultaneous Estimation of Bambuterol Hydrochloride and Montelukast Sodium from Tablets.

    Science.gov (United States)

    Yanamandra, R; Vadla, C S; Puppala, U M; Patro, B; Murthy, Y L N; Parimi, A R

    2012-03-01

    A rapid, simple, sensitive and selective analytical method was developed using a reverse-phase ultra performance liquid chromatographic technique for the simultaneous estimation of bambuterol hydrochloride and montelukast sodium in combined tablet dosage form. The developed method is superior in technology to conventional high performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Elution time for the separation was 6 min and ultraviolet detection was carried out at 210 nm. Efficient separation was achieved on a BEH C18 sub-2-μm Acquity UPLC column using 0.025% (v/v) trifluoroacetic acid in water and acetonitrile as organic solvent in a linear gradient program. Resolution between bambuterol hydrochloride and montelukast sodium was found to be more than 31. The active pharmaceutical ingredients were extracted from the tablet dosage form using a mixture of methanol, acetonitrile and water as diluent. The calibration graphs were linear for bambuterol hydrochloride and montelukast sodium in the range of 6.25-37.5 μg/ml. The percentage recoveries for bambuterol hydrochloride and montelukast sodium were found to be in the range of 99.1-100.0% and 98.0-101.6%, respectively. The test solution was found to be stable for 7 days when stored in the refrigerator at 2-8°. The developed UPLC method was validated as per International Conference on Harmonization specifications for method validation. This method can be successfully employed for the simultaneous estimation of bambuterol hydrochloride and montelukast sodium in bulk drugs and formulations.

  9. Estimation of soil erosion for a sustainable land use planning: RUSLE model validation by remote sensing data utilization in the Kalikonto watershed

    Directory of Open Access Journals (Sweden)

    C. Andriyanto

    2015-10-01

    Full Text Available Geographic Information Systems (GIS) and Remote Sensing (RS) technologies are increasingly used for planning and natural resources management. Pixel-based GIS and RS are used as tools for spatial modeling to predict erosion. One of the methods developed for predicting erosion is the Revised Universal Soil Loss Equation (RUSLE). RUSLE predicts the erosion associated with runoff from five parameters, namely: rain erosivity (R), soil erodibility (K), length of slope (L), slope (S), and land management (CP). The main constraint encountered when operating the GIS is the calculation of the slope length factor (L). This study was designed to create a plan of sustainable land use and low erosion through RUSLE erosion modeling utilizing remote sensing data. With this approach, the study was divided into three activities, namely (1) the preparation and analysis of spatial data for the determination of the parameters and the estimation of erosion using the RUSLE model, (2) the validation and calibration of the RUSLE model by measuring soil erosion at plot scale in the field, and (3) creating a plan of sustainable land use and low erosion with RUSLE. The erosion validation shows R2 = 0.56 and r = 0.74. Results of this study showed that the RUSLE model could be used in the Kalikonto watershed. The estimated erosion for the actual conditions, the spatial plan (RTRW), and the land capability class in the Kalikonto watershed was 72 t/ha/year, 62 t/ha/year and 58 t/ha/year, respectively.
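    The RUSLE model cited above is a simple multiplicative formula, A = R x K x LS x C x P; the sketch below evaluates it element-wise for a small raster, with the slope length and steepness combined into a single LS factor. All parameter values are arbitrary placeholders, not the Kalikonto watershed inputs.

```python
import numpy as np

def rusle_soil_loss(R, K, LS, C, P):
    """
    RUSLE annual soil loss A (t/ha/yr) = R * K * LS * C * P, evaluated
    element-wise so the inputs can be per-pixel rasters.
    """
    return R * K * LS * C * P

if __name__ == "__main__":
    # Placeholder 2 x 2 rasters of the RUSLE factors (not study data)
    R  = np.full((2, 2), 2400.0)                    # rainfall erosivity
    K  = np.array([[0.20, 0.25], [0.30, 0.22]])     # soil erodibility
    LS = np.array([[1.5, 3.2], [0.8, 5.0]])         # slope length-steepness
    C  = np.array([[0.10, 0.45], [0.05, 0.30]])     # cover management
    P  = np.full((2, 2), 1.0)                       # support practice

    A = rusle_soil_loss(R, K, LS, C, P)
    print("Soil loss (t/ha/yr):\n", np.round(A, 1))
```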

  10. Liver stiffness value-based risk estimation of late recurrence after curative resection of hepatocellular carcinoma: development and validation of a predictive model.

    Directory of Open Access Journals (Sweden)

    Kyu Sik Jung

    Full Text Available Preoperative liver stiffness (LS) measurement using transient elastography (TE) is useful for predicting late recurrence after curative resection of hepatocellular carcinoma (HCC). We developed and validated a novel LS value-based predictive model for late recurrence of HCC. Patients who were due to undergo curative resection of HCC between August 2006 and January 2010 were prospectively enrolled and TE was performed prior to operations by study protocol. The predictive model of late recurrence was constructed based on a multiple logistic regression model. Discrimination and calibration were used to validate the model. Among a total of 139 patients who were finally analyzed, late recurrence occurred in 44 patients, with a median follow-up of 24.5 months (range, 12.4-68.1). We developed a predictive model for late recurrence of HCC using LS value, activity grade II-III, presence of multiple tumors, and indocyanine green retention rate at 15 min (ICG R15), which showed fairly good discrimination capability with an area under the receiver operating characteristic curve (AUROC) of 0.724 (95% confidence intervals [CIs], 0.632-0.816). In the validation, using a bootstrap method to assess discrimination, the AUROC remained largely unchanged between iterations, with an average AUROC of 0.722 (95% CIs, 0.718-0.724). When we plotted a calibration chart for predicted and observed risk of late recurrence, the predicted risk of late recurrence correlated well with observed risk, with a correlation coefficient of 0.873 (P<0.001). A simple LS value-based predictive model could estimate the risk of late recurrence in patients who underwent curative resection of HCC.
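    To illustrate the kind of multivariable logistic model and bootstrap check of discrimination described above, here is a minimal sketch using scikit-learn; the predictor names mirror the abstract, but the simulated data, coefficients and the simplified bootstrap (resampled evaluation of a single fitted model) are stand-ins, not the study's cohort or exact validation procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated stand-in data: liver stiffness (kPa), activity grade II-III (0/1),
# multiple tumours (0/1), ICG R15 (%); outcome = late recurrence (0/1).
n = 139
X = np.column_stack([
    rng.gamma(4.0, 2.5, n),          # liver stiffness
    rng.integers(0, 2, n),           # activity grade II-III
    rng.integers(0, 2, n),           # multiple tumours
    rng.normal(12.0, 5.0, n),        # ICG R15
])
logit = -3.0 + 0.12 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2] + 0.04 * X[:, 3]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Simplified bootstrap estimate of the AUROC (resampling the evaluation set)
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:
        continue                      # skip degenerate resamples
    aucs.append(roc_auc_score(y[idx], model.predict_proba(X[idx])[:, 1]))

print(f"bootstrap AUROC = {np.mean(aucs):.3f} "
      f"(95% CI {np.percentile(aucs, 2.5):.3f}-{np.percentile(aucs, 97.5):.3f})")
```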

  11. Validation of a Novel Immunoline Assay for Patient Stratification according to Virulence of the Infecting Helicobacter pylori Strain and Eradication Status

    Directory of Open Access Journals (Sweden)

    Luca Formichella

    2017-01-01

    Full Text Available Helicobacter pylori infection shows a worldwide prevalence of around 50%. However, only a minority of infected individuals develop clinical symptoms or diseases. The presence of H. pylori virulence factors, such as CagA and VacA, has been associated with disease development, but assessment of virulence factor presence requires gastric biopsies. Here, we evaluate the H. pylori recomLine test for risk stratification of infected patients by comparing the test score and immune recognition of type I or type II strains defined by the virulence factors CagA, VacA, GroEL, UreA, HcpC, and gGT with patient’s disease status according to histology. Moreover, the immune responses of eradicated individuals from two different populations were analysed. Their immune response frequencies and intensities against all antigens except CagA declined below the detection limit. CagA was particularly long lasting in both independent populations. An isolated CagA band often represents past eradication with a likelihood of 88.7%. In addition, a high recomLine score was significantly associated with high-grade gastritis, atrophy, intestinal metaplasia, and gastric cancer. Thus, the recomLine is a sensitive and specific noninvasive test for detecting serum responses against H. pylori in actively infected and eradicated individuals. Moreover, it allows stratifying patients according to their disease state.

  12. Using administrative data to estimate time to breast cancer diagnosis and percent of screen-detected breast cancers – a validation study in Alberta, Canada.

    Science.gov (United States)

    Yuan, Y; Li, M; Yang, J; Winget, M

    2015-05-01

    Appropriate use of administrative data enables the assessment of care quality at the population level. Our objective was to develop/validate methods for assessing quality of breast cancer diagnostic care using administrative data, specifically by identifying relevant medical tests to estimate the percentage screen/symptom-detected cancers and time to diagnosis. Two databases were created for all women diagnosed with a first-ever breast cancer in years 2007-2010 in Alberta, Canada, with dates of medical tests received in years 2006-2010. One purchased database had test results and was used to determine the 'true' first relevant test of a cancer diagnosis. The other free administrative database had test types but no test results. Receiver operating characteristic curves and concordance rates were used to assess estimates of percent screen/symptom-detected breast cancers; Log-rank test was used to assess time to diagnosis obtained from the two databases. Using a look-back period of 4-6 months from cancer diagnosis to identify relevant tests resulted in over 94% concordance, sensitivity and specificity for classifying patients into screen/symptom-detected group; good agreement between the distributions of time to diagnosis was also achieved. Our findings support the use of administrative data to accurately identify relevant tests for assessing the quality of breast cancer diagnostic care. © 2014 John Wiley & Sons Ltd.

  13. Validation of a GC-MS method for the estimation of dithiocarbamate fungicide residues and safety evaluation of mancozeb in fruits and vegetables.

    Science.gov (United States)

    Mujawar, Sumaiyya; Utture, Sagar C; Fonseca, Eddie; Matarrita, Jessie; Banerjee, Kaushik

    2014-05-01

    A sensitive and rugged residue analysis method was validated for the estimation of dithiocarbamate fungicides in a variety of fruit and vegetable matrices. The sample preparation method involved reaction of dithiocarbamates with tin(II) chloride in aqueous HCl. The CS2 produced was absorbed into an isooctane layer and estimated by GC-MS selected ion monitoring. The limit of quantification (LOQ) was ≤40 μg kg(-1) for grape, green chilli, tomato, potato, brinjal, pineapple and chayote, and the recoveries were within 75-104% (RSD < 15% at LOQ). The method could be satisfactorily applied for the analysis of real-world samples. Dissipation of mancozeb, the most-used dithiocarbamate fungicide, in the field followed first + first order kinetics, with pre-harvest intervals of 2 and 4 days in brinjal, 7 and 10 days in grapes and 0 days in chilli at single and double doses of agricultural application. Cooking practices were effective for removal of mancozeb residues from vegetables. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Cumulative Retrospective Exposure Assessment (REA) as a predictor of amphibole asbestos lung burden: validation procedures and results for industrial hygiene and pathology estimates.

    Science.gov (United States)

    Rasmuson, James O; Roggli, Victor L; Boelter, Fred W; Rasmuson, Eric J; Redinger, Charles F

    2014-01-01

    A detailed evaluation of the correlation and linearity of industrial hygiene retrospective exposure assessment (REA) for cumulative asbestos exposure with asbestos lung burden analysis (LBA) has not been previously performed, but both methods are utilized for case-control and cohort studies and other applications such as setting occupational exposure limits. (a) To correlate REA with asbestos LBA for a large number of cases from varied industries and exposure scenarios; (b) to evaluate the linearity, precision, and applicability of both industrial hygiene exposure reconstruction and LBA; and (c) to demonstrate validation methods for REA. A panel of four experienced industrial hygiene raters independently estimated the cumulative asbestos exposure for 363 cases with limited exposure details in which asbestos LBA had been independently determined. LBA for asbestos bodies was performed by a pathologist by both light microscopy and scanning electron microscopy (SEM) and free asbestos fibers by SEM. Precision, reliability, correlation and linearity were evaluated via intraclass correlation, regression analysis and analysis of covariance. Plaintiff's answers to interrogatories, work history sheets, work summaries or plaintiff's discovery depositions that were obtained in court cases involving asbestos were utilized by the pathologist to provide a summarized brief asbestos exposure and work history for each of the 363 cases. Linear relationships between REA and LBA were found when adjustment was made for asbestos fiber-type exposure differences. Significant correlation between REA and LBA was found with amphibole asbestos lung burden and mixed fiber-types, but not with chrysotile. The intraclass correlation coefficients (ICC) for the precision of the industrial hygiene rater cumulative asbestos exposure estimates and the precision of repeated laboratory analysis were found to be in the excellent range. The ICC estimates were performed independent of specific asbestos

  15. Genomic prediction using different estimation methodology, blending and cross-validation techniques for growth traits and visual scores in Hereford and Braford cattle.

    Science.gov (United States)

    Campos, G S; Reimann, F A; Cardoso, L L; Ferreira, C E R; Junqueira, V S; Schmidt, P I; Braccini Neto, J; Yokoo, M J I; Sollero, B P; Boligon, A A; Cardoso, F F

    2018-05-07

    The objective of the present study was to evaluate the accuracy and bias of direct and blended genomic predictions using different methods and cross-validation techniques for growth traits (weight and weight gains) and visual scores (conformation, precocity, muscling and size) obtained at weaning and at yearling in Hereford and Braford breeds. Phenotypic data contained 126,290 animals belonging to the Delta G Connection genetic improvement program, and a set of 3,545 animals genotyped with the 50K chip and 131 sires with the 777K. After quality control, 41,045 markers remained for all animals. An animal model was used to estimate (co)variance components and to predict breeding values, which were later used to calculate the deregressed estimated breeding values (DEBV). Animals with genotype and phenotype for the traits studied were divided into four or five groups by random and k-means clustering cross-validation strategies. The accuracies of the direct genomic values (DGV) were of moderate to high magnitude for traits measured at weaning and at yearling, ranging from 0.19 to 0.45 for the k-means and 0.23 to 0.78 for random clustering among all traits. The greatest gain in relation to the pedigree BLUP (PBLUP) was 9.5% with the BayesB method with both the k-means and the random clustering. Blended genomic value accuracies ranged from 0.19 to 0.56 for k-means and from 0.21 to 0.82 for random clustering. The analyses using the historical pedigree and phenotypes contributed additional information to calculate the GEBV, and in general the largest gains were for the single-step (ssGBLUP) method in bivariate analyses, with a mean increase of 43.00% among all traits measured at weaning and of 46.27% for those evaluated at yearling. The accuracy values for the marker effects estimation methods were lower for k-means clustering, indicating that the training set relationship to the selection candidates is a major factor affecting the accuracy of genomic predictions. The gains in
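    To illustrate the two cross-validation designs compared above (random versus k-means clustering of genotyped animals), the sketch below forms folds either at random or by k-means on the leading eigenvectors of a VanRaden-type genomic relationship matrix; the simulated marker matrix and fold count are stand-ins, not the real genotypes or the study's grouping.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in genotype matrix: 200 animals x 500 SNPs coded 0/1/2 (not real data)
M = rng.integers(0, 3, size=(200, 500)).astype(float)
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p                                   # centred genotypes
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))       # genomic relationship matrix

k = 5

# Random folds
random_fold = rng.integers(0, k, size=M.shape[0])

# k-means folds: cluster animals on the leading eigenvectors of G so that
# relatives tend to fall into the same fold (a harder validation scenario)
eigval, eigvec = np.linalg.eigh(G)
pcs = eigvec[:, -10:] * np.sqrt(np.maximum(eigval[-10:], 0.0))
kmeans_fold = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pcs)

print("random fold sizes:", np.bincount(random_fold, minlength=k))
print("k-means fold sizes:", np.bincount(kmeans_fold, minlength=k))
```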

  16. Cell type specific DNA methylation in cord blood: A 450K-reference data set and cell count-based validation of estimated cell type composition.

    Science.gov (United States)

    Gervin, Kristina; Page, Christian Magnus; Aass, Hans Christian D; Jansen, Michelle A; Fjeldstad, Heidi Elisabeth; Andreassen, Bettina Kulle; Duijts, Liesbeth; van Meurs, Joyce B; van Zelm, Menno C; Jaddoe, Vincent W; Nordeng, Hedvig; Knudsen, Gunn Peggy; Magnus, Per; Nystad, Wenche; Staff, Anne Cathrine; Felix, Janine F; Lyle, Robert

    2016-09-01

    Epigenome-wide association studies of prenatal exposure to different environmental factors are becoming increasingly common. These studies are usually performed in umbilical cord blood. Since blood comprises multiple cell types with specific DNA methylation patterns, confounding caused by cellular heterogeneity is a major concern. This can be adjusted for using reference data consisting of DNA methylation signatures in cell types isolated from blood. However, the most commonly used reference data set is based on blood samples from adult males and is not representative of the cell type composition in neonatal cord blood. The aim of this study was to generate a reference data set from cord blood to enable correct adjustment of the cell type composition in samples collected at birth. The purity of the isolated cell types was very high for all samples (>97.1%), and clustering analyses showed distinct grouping of the cell types according to hematopoietic lineage. We explored whether this cord blood and the adult peripheral blood reference data sets impact the estimation of cell type composition in cord blood samples from an independent birth cohort (MoBa, n = 1092). This revealed significant differences for all cell types. Importantly, comparison of the cell type estimates against matched cell counts both in the cord blood reference samples (n = 11) and in another independent birth cohort (Generation R, n = 195), demonstrated moderate to high correlation of the data. This is the first cord blood reference data set with a comprehensive examination of the downstream application of the data through validation of estimated cell types against matched cell counts.

  17. Accuracy and Feasibility of Estimated Tumour Volumetry in Primary Gastric Gastrointestinal Stromal Tumours: Validation Using Semi-automated Technique in 127 Patients

    Science.gov (United States)

    Tirumani, Sree Harsha; Shinagare, Atul B.; O’Neill, Ailbhe C.; Nishino, Mizuki; Rosenthal, Michael H.; Ramaiya, Nikhil H.

    2015-01-01

    Objective To validate estimated tumour volumetry in primary gastric gastrointestinal stromal tumours (GISTs) using semi-automated volumetry. Materials and Methods In this IRB-approved retrospective study, we measured the three longest diameters in x, y, z axes on CTs of primary gastric GISTs in 127 consecutive patients (52 women, 75 men, mean age: 61 years) at our institute between 2000 and 2013. Segmented volumes (Vsegmented) were obtained using commercial software by two radiologists. Estimate volumes (V1–V6) were obtained using formulae for spheres and ellipsoids. Intra- and inter-observer agreement of Vsegmented and agreement of V1–6 with Vsegmented were analysed with concordance correlation coefficients (CCC) and Bland-Altman plots. Results Median Vsegmented and V1–V6 were 75.9 cm3, 124.9 cm3, 111.6 cm3, 94.0 cm3, 94.4cm3, 61.7 cm3 and 80.3 cm3 respectively. There was strong intra- and inter-observer agreement for Vsegmented. Agreement with Vsegmented was highest for V6 (scalene ellipsoid, x≠y≠z), with CCC of 0.96 [95%CI: 0.95–0.97]. Mean relative difference was smallest for V6 (0.6%), while it was −19.1% for V5, +14.5% for V4, +17.9% for V3, +32.6 % for V2 and +47% for V1. Conclusion Ellipsoidal approximations of volume using three measured axes may be used to closely estimate Vsegmented when semi-automated techniques are unavailable. PMID:25991487
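    The best-performing estimate above (V6, scalene ellipsoid with x ≠ y ≠ z) corresponds to the standard ellipsoid volume formula V = (π/6)·x·y·z applied to the three measured diameters; a minimal sketch, with invented diameters rather than patient data:

```python
import math

def ellipsoid_volume(x_cm, y_cm, z_cm):
    """Scalene ellipsoid volume from three orthogonal diameters: V = pi/6 * x * y * z."""
    return math.pi / 6.0 * x_cm * y_cm * z_cm

def sphere_volume(d_cm):
    """Sphere volume from a single diameter: V = pi/6 * d^3."""
    return math.pi / 6.0 * d_cm ** 3

if __name__ == "__main__":
    # Illustrative tumour diameters in cm (not patient data)
    x, y, z = 6.2, 5.1, 4.3
    print(f"scalene ellipsoid estimate: {ellipsoid_volume(x, y, z):.1f} cm^3")
    print(f"sphere estimate from longest axis: {sphere_volume(x):.1f} cm^3")
```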

  18. Real-Time Wing-Vortex and Pressure Distribution Estimation on Wings Via Displacements and Strains in Unsteady and Transitional Flight Conditions

    Science.gov (United States)

    2016-09-07

    only the membranal strain tensor ε one actually accounts only for the first fundamental form, and not for the second. As a matter of fact, the membrane ... needed to account for the boundary condition u(ℓ) = 0. Note that two extreme cases could be wref/x ≡ 0, and wref/x ≡ ±√(2ε(m)x). The latter yields ... multibody-fluid dynamics simulation of flapping wings. In ASME IDETC/CIE, Portland, OR, August 4–7, 2013. ISBN 978-0-7918-5597-3. doi: 10.1115

  19. Estimates of evapotranspiration for riparian sites (Eucalyptus) in the Lower Murray -Darling Basin using ground validated sap flow and vegetation index scaling techniques

    Science.gov (United States)

    Doody, T.; Nagler, P. L.; Glenn, E. P.

    2014-12-01

    Water accounting is becoming critical globally, and balancing consumptive water demands with environmental water requirements is especially difficult in arid and semi-arid regions. Within the Murray-Darling Basin (MDB) in Australia, riparian water use has not been assessed across broad scales. This study therefore aimed to apply and validate an existing U.S. riparian ecosystem evapotranspiration (ET) algorithm for the MDB river systems to assist water resource managers in quantifying environmental water needs over a wide range of niche conditions. Ground-based sap flow ET was correlated with remotely sensed predictions of ET, to provide a method to scale annual rates of water consumption by riparian vegetation over entire irrigation districts. Sap flux was measured at nine locations on the Murrumbidgee River between July 2011 and June 2012. Remotely sensed ET was calculated using a combination of local meteorological estimates of potential ET (ETo) and rainfall and MODIS Enhanced Vegetation Index (EVI) from selected 250 m resolution pixels. The sap flow data correlated well with MODIS EVI. Sap flow ranged from 0.81 mm/day to 3.60 mm/day and corresponded to a MODIS-based ET range of 1.43 mm/day to 2.42 mm/day. We found that mean ET across sites could be predicted by EVI-ETo methods with a standard error of about 20% across sites, but that ET at any given site could vary much more due to differences in aquifer and soil properties among sites. Water use was within the expected range. We conclude that our algorithm developed for US arid land crops and riparian plants is applicable to this region of Australia. Future work includes the development of an adjusted algorithm using these sap flow validated results.
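    As a rough illustration of the vegetation-index scaling approach described above, the sketch below fits a simple linear relationship between ground sap-flow ET and the product EVI × ETo and then applies it to new pixels; the linear functional form and all numbers are assumptions for illustration, not the published algorithm or its coefficients.

```python
import numpy as np

# Illustrative paired observations (not Murrumbidgee data):
# MODIS EVI, reference ET (ETo, mm/day) and sap-flow ET (mm/day) at tower sites.
evi = np.array([0.20, 0.28, 0.35, 0.42, 0.50])
eto = np.array([4.0, 4.5, 5.0, 5.5, 6.0])
et_sap = np.array([0.9, 1.4, 1.9, 2.6, 3.3])

# Assume ET is linear in the product EVI * ETo and fit by least squares;
# this is a stand-in for the calibrated EVI-ETo scaling algorithm.
x = evi * eto
slope, intercept = np.polyfit(x, et_sap, 1)

def et_from_evi(evi_pixel, eto_pixel):
    """Scale remotely sensed EVI to ET (mm/day) with the fitted linear model."""
    return slope * evi_pixel * eto_pixel + intercept

print(f"fit: ET = {slope:.2f} * (EVI*ETo) {intercept:+.2f}")
print("predicted ET for EVI=0.3, ETo=5 mm/day:",
      round(et_from_evi(0.3, 5.0), 2), "mm/day")
```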

  20. Theoretically Guided Analytical Method Development and Validation for the Estimation of Rifampicin in a Mixture of Isoniazid and Pyrazinamide by UV Spectrophotometer.

    Science.gov (United States)

    Khan, Mohammad F; Rita, Shamima A; Kayser, Md Shahidulla; Islam, Md Shariful; Asad, Sharmeen; Bin Rashid, Ridwan; Bari, Md Abdul; Rahman, Muhammed M; Al Aman, D A Anwar; Setu, Nurul I; Banoo, Rebecca; Rashid, Mohammad A

    2017-01-01

    A simple, rapid, economic, accurate, and precise method for the estimation of rifampicin in a mixture of isoniazid and pyrazinamide by a UV spectrophotometric technique (guided by the theoretical investigation of physicochemical properties) was developed and validated. Theoretical investigations revealed that isoniazid and pyrazinamide were both freely soluble in water and slightly soluble in ethyl acetate, whereas rifampicin was practically insoluble in water but freely soluble in ethyl acetate. This indicates that ethyl acetate is an effective solvent for the extraction of rifampicin from an aqueous mixture of isoniazid and pyrazinamide. A computational study indicated that a pH range of 6.0-8.0 would favor the extraction of rifampicin. Rifampicin was separated from isoniazid and pyrazinamide at pH 7.4 ± 0.1 by extraction with ethyl acetate. The ethyl acetate was then analyzed at a λmax of 344.0 nm. The developed method was validated for linearity, accuracy and precision according to ICH guidelines. The proposed method exhibited good linearity over the concentration range of 2.5-35.0 μg/mL. The intraday and inter-day precision in terms of % RSD ranged from 1.09 to 1.70% and 1.63 to 2.99%, respectively. The accuracy (in terms of recovery) of the method varied from 96.7 ± 0.9 to 101.1 ± 0.4%. The LOD and LOQ were found to be 0.83 and 2.52 μg/mL, respectively. In addition, the developed method was successfully applied to determine rifampicin in combination brands (with isoniazid and pyrazinamide) available in Bangladesh.

  1. Development and validation of a reversed-phase HPLC method for simultaneous estimation of clotrimazole and beclomethasone dipropionate in lotion and cream dosage form

    Directory of Open Access Journals (Sweden)

    Komal R Dhudashia

    2013-01-01

    Full Text Available Background: The combination of clotrimazole and beclomethasone dipropionate is used as an anti-fungal and anti-inflammatory agent for external use in the form of cream and lotion. Aim: To develop a simple, specific, economic, precise, and accurate reversed-phase high performance liquid chromatographic method for the simultaneous estimation of clotrimazole (CT) and beclomethasone dipropionate (BD) in lotion and cream formulations. Materials and Methods: The chromatographic separation was achieved on a Kromasil C18 (150 mm × 4.6 mm, 5 μm) analytical column. A mixture of acetonitrile-water (70:30, v/v) was used as the mobile phase, at a flow rate of 1 ml/min and a detector wavelength of 254 nm. The validation of the proposed method was carried out for specificity, linearity, accuracy, precision, limit of detection, limit of quantitation, and system suitability test as per ICH guidelines. Results: The retention times of CT and BD were found to be 5.4 and 4 min, respectively. The linear dynamic ranges were 2-16 μg/ml and 80-640 μg/ml for BD and CT, respectively. The limits of detection and quantification were 0.039 and 0.12 μg/ml for BD, and 1.24 and 3.77 μg/ml for CT, respectively. Conclusions: The developed method was validated and found to be simple, specific, accurate and precise, and can be used for routine quality control analysis of the titled drugs in combination in lotion and cream formulations.

  2. A more sensitive, efficient and ISO 17025 validated Magnetic Capture real time PCR method for the detection of archetypal Toxoplasma gondii strains in meat.

    Science.gov (United States)

    Gisbert Algaba, Ignacio; Geerts, Manon; Jennes, Malgorzata; Coucke, Wim; Opsteegh, Marieke; Cox, Eric; Dorny, Pierre; Dierick, Katelijne; De Craeye, Stéphane

    2017-11-01

    Toxoplasma gondii is a globally prevalent, zoonotic parasite of major importance to public health. Various indirect and direct methods can be used for the diagnosis of toxoplasmosis. Whereas serological tests are useful to prove contact with the parasite has occurred, the actual presence of the parasite in the tissues of a seropositive animal is not demonstrated. For this, a bioassay is still the reference method. As an alternative, various PCR methods have been developed, but due to the limited amount of sample that can be tested, combined with a low tissue cyst density, those have proved to be insufficiently sensitive. A major improvement of the sensitivity was achieved with magnetic capture-based DNA extraction. By combining the hybridization of specific, biotinylated probes with the capture of those probes with streptavidin-coated paramagnetic beads, T. gondii DNA can selectively be "fished out" from a large volume of meat lysate. Still, several studies showed an insufficient sensitivity compared with the mouse bioassay. Here we present a method that is more sensitive (99% limit of detection: 65.4 tachyzoites per 100g of meat), economical and reliable (ISO 17025 validated) by adding a non-competitive PCR inhibition control (co-capture of cellular r18S) and making the release of the target DNA from the streptavidin-coated paramagnetic beads UV-dependent. The presented results demonstrate the potential of the modified Magnetic Capture real time PCR as a full alternative to the mouse bioassay for the screening of various types of tissues and meat, with the additional advantage of being quantitative. Copyright © 2017 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.

  3. GOCI Yonsei aerosol retrieval version 2 products: an improved algorithm and error analysis with uncertainty estimation from 5-year validation over East Asia

    Science.gov (United States)

    Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Holben, Brent; Eck, Thomas F.; Li, Zhengqiang; Song, Chul H.

    2018-01-01

    The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed to retrieve hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD had accuracy comparable to ground-based and other satellite-based observations but still had errors because of uncertainties in surface reflectance and simple cloud masking. In addition, near-real-time (NRT) processing was not possible because a monthly database for each year encompassing the day of retrieval was required for the determination of surface reflectance. This study describes the improved GOCI YAER algorithm version 2 (V2) for NRT processing with improved accuracy based on updates to the cloud-masking and surface-reflectance calculations using a multi-year Rayleigh-corrected reflectance and wind speed database, and inversion channels for surface conditions. The improved GOCI AOD τG is closer to that of the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) AOD than was the case for AOD from the YAER V1 algorithm. The V2 τG has a lower median bias and higher ratio within the MODIS expected error range (0.60 for land and 0.71 for ocean) compared with V1 (0.49 for land and 0.62 for ocean) in a validation test against Aerosol Robotic Network (AERONET) AOD τA from 2011 to 2016. A validation using the Sun-Sky Radiometer Observation Network (SONET) over China shows similar results. The bias of error (τG - τA) is within -0.1 and 0.1, and it is a function of AERONET AOD and Ångström exponent (AE), scattering angle, normalized difference vegetation index (NDVI), cloud fraction and homogeneity of retrieved AOD, and observation time, month, and year. In addition, the diagnostic and prognostic expected error (PEE) of τG are estimated. The estimated PEE of GOCI V2 AOD is well correlated with the actual error over East Asia, and the GOCI V2 AOD over South

  4. GOCI Yonsei aerosol retrieval version 2 aerosol products: improved algorithm description and error analysis with uncertainty estimation from 5-year validation over East Asia

    Science.gov (United States)

    Choi, M.; Kim, J.; Lee, J.; KIM, M.; Park, Y. J.; Holben, B. N.; Eck, T. F.; Li, Z.; Song, C. H.

    2017-12-01

    The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed for retrieving hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD showed accuracy comparable to ground-based and other satellite-based observations, but still had errors due to uncertainties in surface reflectance and simple cloud masking. Also, it was not capable of near-real-time (NRT) processing because it required a monthly database of each year encompassing the day of retrieval for the determination of surface reflectance. This study describes the improvement of the GOCI YAER algorithm to version 2 (V2) for NRT processing with improved accuracy, based on modifications of the cloud masking, of the surface reflectance determination using a multi-year Rayleigh-corrected reflectance and wind speed database, and of the inversion channels per surface condition. The improved GOCI AOD (τG) is closer to those of the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) AOD than was V1 of the YAER algorithm. The τG shows reduced median bias and an increased ratio within the expected error range (i.e. the absolute expected error range of MODIS AOD) compared to V1 in the validation results using Aerosol Robotic Network (AERONET) AOD (τA) from 2011 to 2016. The validation using the Sun-Sky Radiometer Observation Network (SONET) over China also shows similar results. The bias of error (τG - τA) is within the -0.1 and 0.1 range as a function of AERONET AOD and AE, scattering angle, NDVI, cloud fraction and homogeneity of retrieved AOD, observation time, month, and year. Also, the diagnostic and prognostic expected error (DEE and PEE, respectively) of τG are estimated. The estimated multiple PEE of GOCI V2 AOD is well matched with the actual error over East Asia, and the GOCI V2 AOD over Korea shows a higher ratio within PEE compared to over China and Japan. Hourly AOD products based on the

  5. The Incidence Patterns Model to Estimate the Distribution of New HIV Infections in Sub-Saharan Africa: Development and Validation of a Mathematical Model.

    Directory of Open Access Journals (Sweden)

    Annick Bórquez

    2016-09-01

    Full Text Available Programmatic planning in HIV requires estimates of the distribution of new HIV infections according to identifiable characteristics of individuals. In sub-Saharan Africa, robust routine data sources and historical epidemiological observations are available to inform and validate such estimates. We developed a predictive model, the Incidence Patterns Model (IPM), representing populations according to factors that have been demonstrated to be strongly associated with HIV acquisition risk: gender, marital/sexual activity status, geographic location, "key populations" based on risk behaviours (sex work, injecting drug use, and male-to-male sex), HIV and ART status within married or cohabiting unions, and circumcision status. The IPM estimates the distribution of new infections acquired by group based on these factors within a Bayesian framework accounting for regional prior information on demographic and epidemiological characteristics from trials or observational studies. We validated and trained the model against direct observations of HIV incidence by group in seven rounds of cohort data from four studies ("sites") conducted in Manicaland, Zimbabwe; Rakai, Uganda; Karonga, Malawi; and Kisesa, Tanzania. The IPM performed well, with the projections' credible intervals for the proportion of new infections per group overlapping the data's confidence intervals for all groups in all rounds of data. In terms of geographical distribution, the projections' credible intervals overlapped the confidence intervals for four out of seven rounds, which were used as proxies for administrative divisions in a country. We assessed model performance after internal training (within one site) and external training (between sites) by comparing mean posterior log-likelihoods and used the best model to estimate the distribution of HIV incidence in six countries (Gabon, Kenya, Malawi, Rwanda, Swaziland, and Zambia) in the region. We subsequently inferred the potential

  6. The Incidence Patterns Model to Estimate the Distribution of New HIV Infections in Sub-Saharan Africa: Development and Validation of a Mathematical Model.

    Science.gov (United States)

    Bórquez, Annick; Cori, Anne; Pufall, Erica L; Kasule, Jingo; Slaymaker, Emma; Price, Alison; Elmes, Jocelyn; Zaba, Basia; Crampin, Amelia C; Kagaayi, Joseph; Lutalo, Tom; Urassa, Mark; Gregson, Simon; Hallett, Timothy B

    2016-09-01

    Programmatic planning in HIV requires estimates of the distribution of new HIV infections according to identifiable characteristics of individuals. In sub-Saharan Africa, robust routine data sources and historical epidemiological observations are available to inform and validate such estimates. We developed a predictive model, the Incidence Patterns Model (IPM), representing populations according to factors that have been demonstrated to be strongly associated with HIV acquisition risk: gender, marital/sexual activity status, geographic location, "key populations" based on risk behaviours (sex work, injecting drug use, and male-to-male sex), HIV and ART status within married or cohabiting unions, and circumcision status. The IPM estimates the distribution of new infections acquired by group based on these factors within a Bayesian framework accounting for regional prior information on demographic and epidemiological characteristics from trials or observational studies. We validated and trained the model against direct observations of HIV incidence by group in seven rounds of cohort data from four studies ("sites") conducted in Manicaland, Zimbabwe; Rakai, Uganda; Karonga, Malawi; and Kisesa, Tanzania. The IPM performed well, with the projections' credible intervals for the proportion of new infections per group overlapping the data's confidence intervals for all groups in all rounds of data. In terms of geographical distribution, the projections' credible intervals overlapped the confidence intervals for four out of seven rounds, which were used as proxies for administrative divisions in a country. We assessed model performance after internal training (within one site) and external training (between sites) by comparing mean posterior log-likelihoods and used the best model to estimate the distribution of HIV incidence in six countries (Gabon, Kenya, Malawi, Rwanda, Swaziland, and Zambia) in the region. We subsequently inferred the potential contribution of each

  7. Analytical method development and validation of simultaneous estimation of rabeprazole, pantoprazole, and itopride by reverse-phase high-performance liquid chromatography

    Directory of Open Access Journals (Sweden)

    Senthamil Selvan Perumal

    2014-12-01

    Full Text Available A simple, selective, rapid, and precise reverse-phase high-performance liquid chromatography (RP-HPLC) method for the simultaneous estimation of rabeprazole (RP), pantoprazole (PP), and itopride (IP) has been developed. The compounds were well separated on a Phenomenex C18 (Luna) column (250 mm × 4.6 mm, dp = 5 μm) with a C18 guard column (4 mm × 3 mm × 5 μm) with a mobile phase consisting of buffer containing 10 mM potassium dihydrogen orthophosphate (adjusted to pH 6.8): acetonitrile (70:30 v/v) at a flow rate of 1.0 mL/min and ultraviolet detection at 288 nm. The retention times of RP, PP, and IP were 5.35, 7.92, and 11.16 minutes, respectively. Validation of the proposed method was carried out according to International Conference on Harmonisation (ICH) guidelines. Linearity was obtained for RP, PP, and IP over the concentration ranges of 2.5–25, 1–30, and 3–35 μg/mL and the r2 values were 0.994, 0.978, and 0.991, respectively. The calculated limit of detection (LOD) values were 1, 0.3, and 1 μg/mL and limit of quantitation (LOQ) values were 2.5, 1, and 3 μg/mL for RP, PP, and IP, correspondingly. Thus, the current study showed that the developed reverse-phase liquid chromatography method is sensitive and selective for the estimation of RP, PP, and IP in combined dosage form.

  8. Analytical method development and validation of simultaneous estimation of rabeprazole, pantoprazole, and itopride by reverse-phase high-performance liquid chromatography.

    Science.gov (United States)

    Perumal, Senthamil Selvan; Ekambaram, Sanmuga Priya; Raja, Samundeswari

    2014-12-01

    A simple, selective, rapid, and precise reverse-phase high-performance liquid chromatography (RP-HPLC) method for the simultaneous estimation of rabeprazole (RP), pantoprazole (PP), and itopride (IP) has been developed. The compounds were well separated on a Phenomenex C 18 (Luna) column (250 mm × 4.6 mm, dp = 5 μm) with C 18 guard column (4 mm × 3 mm × 5 μm) with a mobile phase consisting of buffer containing 10 mM potassium dihydrogen orthophosphate (adjusted to pH 6.8): acetonitrile (70:30 v/v) at a flow rate of 1.0 mL/min and ultraviolet detection at 288 nm. The retention time of RP, PP, and IP were 5.35, 7.92, and 11.16 minutes, respectively. Validation of the proposed method was carried out according to International Conference on Harmonisation (ICH) guidelines. Linearity range was obtained for RP, PP, and IP over the concentration range of 2.5-25, 1-30, and 3-35 μg/mL and the r 2 values were 0.994, 0.978, and 0.991, respectively. The calculated limit of detection (LOD) values were 1, 0.3, and 1 μg/mL and limit of quantitation (LOQ) values were 2.5, 1, and 3 μg/mL for RP, PP, and IP correspondingly. Thus, the current study showed that the developed reverse-phase liquid chromatography method is sensitive and selective for the estimation of RP, PP, and IP in combined dosage form. Copyright © 2014. Published by Elsevier B.V.

  9. Validation of fatty acid intakes estimated by a food frequency questionnaire using erythrocyte fatty acid profiling in the Montreal Heart Institute Biobank.

    Science.gov (United States)

    Turcot, V; Brunet, J; Daneault, C; Tardif, J C; Des Rosiers, C; Lettre, G

    2015-12-01

    To improve the prevention, treatment and risk prediction of cardiovascular diseases, genetic markers and gene-diet interactions are currently being investigated. The Montreal Heart Institute (MHI) Biobank is suitable for such studies because of its large sample size (currently, n = 17 000), the availability of biospecimens, and the collection of data on dietary intakes of saturated (SFAs) and n-3 and n-6 polyunsaturated (PUFAs) fatty acids estimated from a 14-item food frequency questionnaire (FFQ). We tested the validity of the FFQ by correlating dietary intakes of these fatty acids with their red blood cell (RBC) content in MHI Biobank participants. Seventy-five men and 75 women were selected from the Biobank. We successfully obtained RBC fatty acids for 142 subjects using gas chromatography coupled to mass spectrometry. Spearman correlation coefficients were used to test whether SFA scores and daily intakes (g day(-1)) of n-3 and n-6 PUFAs correlate with their RBC content. Based on covariate-adjusted analyses, intakes of n-3 PUFAs from vegetable sources were significantly correlated with RBC α-linolenic acid levels (ρ = 0.23, P = 0.007), whereas n-3 PUFA intakes from marine sources correlated significantly with RBC eicosapentaenoic acid (ρ = 0.29, P = 0.0008) and docosahexaenoic acid (ρ = 0.41, P = 9.2 × 10(-7)) levels. Intakes of n-6 PUFAs from vegetable sources correlated with RBC linoleic acid (ρ = 0.18, P = 0.04). SFA scores were not correlated with RBC total SFAs. The MHI Biobank 14-item FFQ can appropriately estimate daily intakes of n-3 PUFAs from vegetable and marine sources, as well as vegetable n-6 PUFAs, which enables the possibility of using these data in future studies. © 2014 The British Dietetic Association Ltd.

  10. Validation of a Food Frequency Questionnaire for Estimating Micronutrient Intakes in an Urban US Sample of Multi-Ethnic Pregnant Women.

    Science.gov (United States)

    Brunst, Kelly J; Kannan, Srimathi; Ni, Yu-Ming; Gennings, Chris; Ganguri, Harish B; Wright, Rosalind J

    2016-02-01

    To validate the Block98 food frequency questionnaire (FFQ) for estimating antioxidant, methyl-nutrient and polyunsaturated fatty acids (PUFA) intakes in a pregnant sample of ethnic/racial minority women in the United States (US). Participants (n = 42) were from the Programming of Intergenerational Stress Mechanisms study. Total micronutrient intakes from food and supplements was ascertained using the modified Block98 FFQ and two 24-h dietary recalls collected at random on nonconsecutive days subsequent to completion of the FFQ in mid-pregnancy. Correlation coefficients (r) corrected for attenuation from within-person variation in the recalls were calculated for antioxidants (n = 7), methyl-nutrients (n = 8), and PUFAs (n = 2). The sample was largely ethnic minorities (38 % Black, 33 % Hispanic) with 21 % being foreign born and 41 % having less than or equal to a high school degree. Significant and adequate deattenuated correlations (r ≥ 0.40) for total dietary intakes of antioxidants were observed for vitamin C, vitamin E, magnesium, and zinc. Reasonable deattenuated correlations were also observed for methyl-nutrient intakes of vitamin B6, betaine, iron, and n:6 PUFAs; however, they did not reach significance. Most women were classified into the same or adjacent quartiles (≥70 %) for total (dietary + supplements) estimates of antioxidants (5 out of 7) and methyl-nutrients (4 out of 5). The Block98 FFQ is an appropriate dietary method for evaluating antioxidants in pregnant ethnic/minorities in the US; it may be less efficient in measuring methyl-nutrient and PUFA intakes.

  11. Development and Validation of a HPTLC Method for Simultaneous Estimation of L-Glutamic Acid and γ-Aminobutyric Acid in Mice Brain.

    Science.gov (United States)

    Sancheti, J S; Shaikh, M F; Khatwani, P F; Kulkarni, Savita R; Sathaye, Sadhana

    2013-11-01

    A new robust, simple and economic high performance thin layer chromatographic method was developed for simultaneous estimation of L-glutamic acid and γ-amino butyric acid in brain homogenate. The high performance thin layer chromatographic separation of these amino acids was achieved using n-butanol:glacial acetic acid:water (22:3:5 v/v/v) as mobile phase and ninhydrin as a derivatising agent. Quantitation was achieved densitometrically at 550 nm over the concentration range of 10-100 ng/spot. The method showed good separation of the amino acids in brain homogenate, with Rf values of 21.67±0.58 for L-glutamic acid and 33.67±0.58 for γ-amino butyric acid. The limit of detection and limit of quantification were found to be 10 and 20 ng for L-glutamic acid and 4 and 10 ng for γ-amino butyric acid, respectively. The method was also validated in terms of accuracy, precision and repeatability. The developed method was found to be precise and accurate with good reproducibility, and it shows promising applicability for studying the pathological status of disease and the therapeutic significance of drug treatment.

  12. QbD-Based Development and Validation of a Stability-Indicating HPLC Method for Estimating Ketoprofen in Bulk Drug and Proniosomal Vesicular System.

    Science.gov (United States)

    Yadav, Nand K; Raghuvanshi, Ashish; Sharma, Gajanand; Beg, Sarwar; Katare, Om P; Nanda, Sanju

    2016-03-01

    The current studies entail systematic quality by design (QbD)-based development of a simple, precise, cost-effective and stability-indicating high-performance liquid chromatography method for estimation of ketoprofen. An analytical target profile was defined and critical analytical attributes (CAAs) were selected. Chromatographic separation was accomplished by isocratic, reversed-phase chromatography on a C-18 column with pH 6.8 phosphate buffer-methanol (50:50 v/v) as the mobile phase at a flow rate of 1.0 mL/min and UV detection at 258 nm. Systematic optimization of the chromatographic method was performed using a central composite design, evaluating theoretical plates and peak tailing as the CAAs. The method was validated as per International Conference on Harmonization guidelines, demonstrating high sensitivity and specificity, with linearity ranging between 0.05 and 250 µg/mL, a detection limit of 0.025 µg/mL and a quantification limit of 0.05 µg/mL. Precision was demonstrated by a relative standard deviation of 1.21%. Stress degradation studies performed using acid, base, peroxide, thermal and photolytic methods helped in identifying the degradation products in the proniosome delivery systems. The results successfully demonstrated the utility of QbD for optimizing the chromatographic conditions when developing a highly sensitive liquid chromatographic method for ketoprofen. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    Science.gov (United States)

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    Coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values to characterize the predictive performance of models. In this article we use R² and Q² in a reversed role, to detect uncommon, i.e. influential, points in any data set. The term (1 − Q²)/(1 − R²) corresponds to the ratio of the predictive residual sum of squares (PRESS) to the residual sum of squares (RSS). This ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 − Q²)/(1 − R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns the model builders to verify the training set, to perform influence analysis or even to change to robust modeling.
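
    A minimal sketch of the quantities involved: for an ordinary least-squares fit, the leave-one-out residuals follow from the hat matrix without refitting, so R², Q² and the (1 − Q²)/(1 − R²) ratio (equal to PRESS/RSS) are obtained directly. The function below is a generic illustration, not the authors' code, and it omits the approximate F test itself.

```python
import numpy as np

def influence_ratio(X, y):
    """Fit OLS, then return R^2, leave-one-out Q^2 and (1 - Q^2)/(1 - R^2) = PRESS/RSS."""
    X = np.column_stack([np.ones(len(y)), X])            # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss = np.sum(resid ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - rss / tss

    # Leave-one-out (PRESS) residuals via the hat matrix, no refitting needed for OLS
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    loo_resid = resid / (1.0 - np.diag(H))
    press = np.sum(loo_resid ** 2)
    q2 = 1.0 - press / tss

    ratio = (1.0 - q2) / (1.0 - r2)                      # large values hint at influential points
    return r2, q2, ratio
```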

  14. Strain Pattern in Supercooled Liquids

    Science.gov (United States)

    Illing, Bernd; Fritschi, Sebastian; Hajnal, David; Klix, Christian; Keim, Peter; Fuchs, Matthias

    2016-11-01

    Investigations of strain correlations at the glass transition reveal unexpected phenomena. The shear strain fluctuations show an Eshelby-strain pattern [∼cos(4θ)/r²], characteristic of elastic response, even in liquids, at long times. We address this using a mode-coupling theory for the strain fluctuations in supercooled liquids and data from both video microscopy of a two-dimensional colloidal glass former and simulations of Brownian hard disks. We show that the long-ranged and long-lived strain signatures follow a scaling law valid close to the glass transition. For large enough viscosities, the Eshelby-strain pattern is visible even on time scales longer than the structural relaxation time τ and after the shear modulus has relaxed to zero.

  15. Development and validation of RP-HPLC and UV-spectrophotometric methods for rapid simultaneous estimation of amlodipine and benazepril in pure and fixed dose combination

    Directory of Open Access Journals (Sweden)

    Abhi Kavathia

    2017-05-01

    Full Text Available High-performance liquid chromatographic (HPLC) and UV spectrophotometric methods were developed and validated for the quantitative determination of amlodipine besylate (AM) and benazepril hydrochloride (BZ). Different analytical performance parameters such as linearity, precision, accuracy, specificity, limit of detection (LOD) and limit of quantification (LOQ) were determined according to International Conference on Harmonization (ICH) Q2B guidelines. The RP-HPLC method was developed by the isocratic technique on a reversed-phase Shodex C-18 5e column. The retention times for AM and BZ were 4.43 min and 5.70 min, respectively. The UV spectrophotometric determinations were performed at 237 nm and 366 nm for AM and at 237 nm for BZ. A correlation between the absorbance of AM at 237 nm and 366 nm was established and, based on the developed correlation equation, estimation of BZ at 237 nm was carried out. The linearity of the calibration curves for each analyte in the desired concentration range was good (r2 > 0.999) by both the HPLC and UV methods. The method showed good reproducibility and recovery with percent relative standard deviation less than 5%. Moreover, the accuracy and precision obtained with HPLC correlated well with the UV method, which implies that UV spectroscopy can be a cheap, reliable and less time consuming alternative for chromatographic analysis. The proposed methods are highly sensitive, precise and accurate and hence were successfully applied for determining the assay and in vitro dissolution of a marketed formulation.
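
    The correlation-based step described above can be read as follows: AM absorbs at both 237 nm and 366 nm while BZ is measured only at 237 nm, so the AM contribution at 237 nm predicted from its 366 nm absorbance can be subtracted from the total. The sketch below assumes that reading; the slope and intercept of the AM correlation line are not given in the record and would have to be fitted from AM standards.

```python
def benazepril_abs_237(a_total_237, a_am_366, slope, intercept):
    """Estimate the BZ contribution at 237 nm by subtracting the AM contribution
    predicted from its 366 nm absorbance via the AM 237 nm vs 366 nm correlation
    line.  'slope' and 'intercept' are hypothetical placeholders for that fitted
    correlation equation; this assumes only AM absorbs appreciably at 366 nm."""
    a_am_237 = slope * a_am_366 + intercept   # AM absorbance predicted at 237 nm
    return a_total_237 - a_am_237             # remaining absorbance attributed to BZ
```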

  16. Dating Pupae of the Blow Fly Calliphora vicina Robineau–Desvoidy 1830 (Diptera: Calliphoridae) for Post Mortem Interval Estimation: Validation of Molecular Age Markers

    Directory of Open Access Journals (Sweden)

    Barbara K. Zajac

    2018-03-01

    Full Text Available Determining the age of juvenile blow flies is one of the key tasks of forensic entomology when providing evidence for the minimum post mortem interval. While the age determination of blow fly larvae is well established using morphological parameters, the current study focuses on molecular methods for estimating the age of blow flies during metamorphosis in the pupal stage, which lasts about half of the total juvenile development. It has already been demonstrated in several studies that the intraspecific variance in expression of the genes used so far in blow flies is often too high to assign a certain expression level to a distinct age, leading to an inaccurate prediction. To overcome this problem, we previously identified new markers, which show a very sharp age-dependent expression course during pupal development of the forensically-important blow fly Calliphora vicina Robineau–Desvoidy 1830 (Diptera: Calliphoridae), by analyzing massive parallel sequencing (MPS)-generated transcriptome data. We initially designed and validated two quantitative polymerase chain reaction (qPCR) assays for each of 15 defined pupal ages, representing daily progress during the total pupal development when grown at 17 °C. We also investigated whether the performance of these assays is affected by the ambient temperature by rearing pupae of C. vicina at three different constant temperatures, namely 17 °C, 20 °C and 25 °C. A temperature dependency of the performance could not be observed, except for one marker. Hence, for each of the defined development landmarks, we can present gene expression profiles of one to two markers defining the mentioned progress in development.

  17. Unsteady Aerodynamic Force Sensing from Measured Strain

    Science.gov (United States)

    Pak, Chan-Gi

    2016-01-01

    A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm. A cantilevered rectangular wing built and tested at the NASA Langley Research Center (Hampton, Virginia, USA) in 1959 is used to validate the simple approach. Unsteady aerodynamic forces as well as wing deflections, velocities, accelerations, and strains are computed using the CFL3D computational fluid dynamics (CFD) code and an MSC/NASTRAN code (MSC Software Corporation, Newport Beach, California, USA), and these CFL3D-based results are assumed as measured quantities. Based on the measured strains, wing deflections, velocities, accelerations, and aerodynamic forces are computed using the proposed approach. These computed deflections, velocities, accelerations, and unsteady aerodynamic forces are compared with the CFL3D/NASTRAN-based results. In general, computed aerodynamic forces based on the lifting surface theory in subsonic speeds are in good agreement with the target aerodynamic forces generated using CFL3D code with the Euler equation. Excellent aeroelastic responses are obtained even with unsteady strain data under the signal to noise ratio of -9.8dB. The deflections, velocities, and accelerations at each sensor location are independent of structural and aerodynamic models. Therefore, the distributed strain data together with the current proposed approaches can be used as distributed deflection
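
    One step of the pipeline described above, obtaining velocities and accelerations by least-squares curve fitting with analytical time derivatives, can be sketched as a local polynomial fit to the deflection history. The actual two-step/ARMA implementation used in the study is more elaborate; the signal and parameters below are synthetic placeholders.

```python
import numpy as np

def derivatives_from_deflection(t, w, order=4):
    """Estimate velocity and acceleration at the centre of a short time window by
    least-squares fitting a polynomial to the deflection history and differentiating
    the fit analytically (in the spirit of the curve-fitting step described above)."""
    p = np.polyfit(t, w, order)          # deflection fit w(t) ~ polynomial in t
    dp = np.polyder(p, 1)                # analytical first derivative -> velocity
    ddp = np.polyder(p, 2)               # analytical second derivative -> acceleration
    tc = t[len(t) // 2]                  # evaluate at the window centre
    return np.polyval(dp, tc), np.polyval(ddp, tc)

# Illustrative use on a synthetic deflection history (hypothetical 50 Hz bending response)
t = np.linspace(0.0, 0.02, 21)
w = 1e-3 * np.sin(2 * np.pi * 50 * t)
velocity, acceleration = derivatives_from_deflection(t, w)
```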

  18. Spontaneous abortion and physical strain around implantation

    DEFF Research Database (Denmark)

    Hjollund, N H; Jensen, Tina Kold; Bonde, Jens Peter

    2000-01-01

    pregnancy the women recorded physical strain prospectively in a structured diary. Physical strain around the time of implantation was associated with later spontaneous abortion. The adjusted risk ratio for women who reported physical strain higher than average at day 6 to 9 after the estimated date...

  19. Development and validation of specific equations to estimate the body fat percentage of male pubescent youths between ten and seventeen years old

    Directory of Open Access Journals (Sweden)

    Rogério Pereira

    2006-06-01

    Full Text Available Body composition evaluation of youth determined from anthropometric equations is lacking in Brazilian kinanthropometry. The aim of this study was to develop and validate equations to predict the fat percentage of pubescent boys between 10 and 17 years of age. This was a descriptive study with correlational characteristics; the values of body density, calculated through hydrostatic weighing, were converted by Lohman's equation and correlated with the variables: age, body weight, stature, eight skinfolds, six circumferences, three diameters and body mass index. One hundred boys were subdivided into two groups, a regression group (n = 75) and a validation group (n = 25), with the same stages of maturity determined by self-assessment. From multiple stepwise regression analyses using the SPSS program, equations were developed and validated for the mean age (13.3 ± 2.1 years). The equation %fat = 0.654252*(Thigh + Calf) − 0.002009*(Thigh + Calf)² was the most adequate model, developed from the variables with the highest correlations (r = 0.858 for thigh and r = 0.855 for calf) and the lowest standard error of estimate (3.7%), with R² = 0.90. No differences were observed with Boileau's equation (p = 0.605), although it differed from the equations of Mukherjee & Roche and of Slaughter (p values of 0.000 and 0.045, respectively). This research suggests the use of this equation with young pubescent boys of similar characteristics to provide a new reference for percent body fat in Brazilian boys.
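
    Assuming the reconstructed reading of the reported model, %fat = 0.654252·(thigh + calf) − 0.002009·(thigh + calf)², the equation can be applied as in the sketch below; the skinfold values in the comment are illustrative only.

```python
def percent_body_fat(thigh_mm: float, calf_mm: float) -> float:
    """Percent body fat from thigh and calf skinfolds (mm) using the equation
    reported above; assumes the reconstructed reading
    %fat = 0.654252*(thigh + calf) - 0.002009*(thigh + calf)**2 is correct."""
    s = thigh_mm + calf_mm
    return 0.654252 * s - 0.002009 * s ** 2

# e.g. thigh 15 mm and calf 12 mm -> about 16.2 % fat (illustrative values only)
```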

  20. Chronic disease prevalence from Italian administrative databases in the VALORE project: a validation through comparison of population estimates with general practice databases and national survey

    Science.gov (United States)

    2013-01-01

    Background Administrative databases are widely available and have been extensively used to provide estimates of chronic disease prevalence for the purpose of surveillance of both geographical and temporal trends. There are, however, other sources of data available, such as medical records from primary care and national surveys. In this paper we compare disease prevalence estimates obtained from these three different data sources. Methods Data from general practitioners (GP) and administrative transactions for health services were collected from five Italian regions (Veneto, Emilia Romagna, Tuscany, Marche and Sicily) belonging to all the three macroareas of the country (North, Center, South). Crude prevalence estimates were calculated by data source and region for diabetes, ischaemic heart disease, heart failure and chronic obstructive pulmonary disease (COPD). For diabetes and COPD, prevalence estimates were also obtained from a national health survey. When necessary, estimates were adjusted for completeness of data ascertainment. Results Crude prevalence estimates of diabetes in administrative databases (range: from 4.8% to 7.1%) were lower than corresponding GP (6.2%-8.5%) and survey-based estimates (5.1%-7.5%). Geographical trends were similar in the three sources and estimates based on treatment were the same, while estimates adjusted for completeness of ascertainment (6.1%-8.8%) were slightly higher. For ischaemic heart disease administrative and GP data sources were fairly consistent, with prevalence ranging from 3.7% to 4.7% and from 3.3% to 4.9%, respectively. In the case of heart failure administrative estimates were consistently higher than GPs’ estimates in all five regions, the highest difference being 1.4% vs 1.1%. For COPD the estimates from administrative data, ranging from 3.1% to 5.2%, fell into the confidence interval of the Survey estimates in four regions, but failed to detect the higher prevalence in the most Southern region (4.0% in

  1. On strain and stress in living cells

    Science.gov (United States)

    Cox, Brian N.; Smith, David W.

    2014-11-01

    Recent theoretical simulations of amelogenesis and network formation and new, simple analyses of the basic multicellular unit (BMU) allow estimation of the order of magnitude of the strain energy density in populations of living cells in their natural environment. A similar simple calculation translates recent measurements of the force-displacement relation for contacting cells (cell-cell adhesion energy) into equivalent volume energy densities, which are formed by averaging the changes in contact energy caused by a cell's migration over the cell's volume. The rates of change of these mechanical energy densities (energy density rates) are then compared to the order of magnitude of the metabolic activity of a cell, expressed as a rate of production of metabolic energy per unit volume. The mechanical energy density rates are 4-5 orders of magnitude smaller than the metabolic energy density rate in amelogenesis or bone remodeling in the BMU, which involve modest cell migration velocities, and 2-3 orders of magnitude smaller for innervation of the gut or angiogenesis, where migration rates are among the highest for all cell types. For representative cell-cell adhesion gradients, the mechanical energy density rate is 6 orders of magnitude smaller than the metabolic energy density rate. The results call into question the validity of using simple constitutive laws to represent living cells. They also imply that cells need not migrate as inanimate objects of gradients in an energy field, but are better regarded as self-powered automata that may elect to be guided by such gradients or move otherwise. Thus Ġ_el = d/dt{(1/2)[(C11 + C12)ε0² + 2μγ0²]} = (C11 + C12)ε0ε̇0 + 2μγ0γ̇0, or Ġ_el = ηEε0ε̇0 + η′Eγ0γ̇0, with 1.4 ≤ η ≤ 3.4 and 0.7 ≤ η′ ≤ 0.8 for Poisson's ratio in the range 0.2 ≤ ν ≤ 0.4, and η = 1.95 and η′ = 0.75 for ν = 0.3. The spatial distribution of shear strains arising within an individual cell as cells slide past one another during amelogenesis is not known

  2. Validity and Reproducibility of a Self-Administered Semi-Quantitative Food-Frequency Questionnaire for Estimating Usual Daily Fat, Fibre, Alcohol, Caffeine and Theobromine Intakes among Belgian Post-Menopausal Women

    Directory of Open Access Journals (Sweden)

    Selin Bolca

    2009-01-01

    Full Text Available A novel food-frequency questionnaire (FFQ) was developed and validated to assess the usual daily fat, saturated, mono-unsaturated and poly-unsaturated fatty acid, fibre, alcohol, caffeine, and theobromine intakes among Belgian post-menopausal women participating in dietary intervention trials with phyto-oestrogens. The relative validity of the FFQ was estimated by comparison with 7 day (d) estimated diet records (EDR, n 64) and its reproducibility was evaluated by repeated administrations 6 weeks apart (n 79). Although the questionnaire underestimated significantly all intakes compared to the 7 d EDR, it had a good ranking ability (r 0.47-0.94; weighted κ 0.25-0.66) and it could reliably distinguish extreme intakes for all the estimated nutrients, except for saturated fatty acids. Furthermore, the correlation between repeated administrations was high (r 0.71-0.87) with a maximal misclassification of 7% (weighted κ 0.33-0.80). In conclusion, these results compare favourably with those reported by others and indicate that the FFQ is a satisfactorily reliable and valid instrument for ranking individuals within this study population.

  3. Validity and reproducibility of a self-administered semi-quantitative food-frequency questionnaire for estimating usual daily fat, fibre, alcohol, caffeine and theobromine intakes among Belgian post-menopausal women.

    Science.gov (United States)

    Bolca, Selin; Huybrechts, Inge; Verschraegen, Mia; De Henauw, Stefaan; Van de Wiele, Tom

    2009-01-01

    A novel food-frequency questionnaire (FFQ) was developed and validated to assess the usual daily fat, saturated, mono-unsaturated and poly-unsaturated fatty acid, fibre, alcohol, caffeine, and theobromine intakes among Belgian post-menopausal women participating in dietary intervention trials with phyto-oestrogens. The relative validity of the FFQ was estimated by comparison with 7 day (d) estimated diet records (EDR, n 64) and its reproducibility was evaluated by repeated administrations 6 weeks apart (n 79). Although the questionnaire underestimated significantly all intakes compared to the 7 d EDR, it had a good ranking ability (r 0.47-0.94; weighted kappa 0.25-0.66) and it could reliably distinguish extreme intakes for all the estimated nutrients, except for saturated fatty acids. Furthermore, the correlation between repeated administrations was high (r 0.71-0.87) with a maximal misclassification of 7% (weighted kappa 0.33-0.80). In conclusion, these results compare favourably with those reported by others and indicate that the FFQ is a satisfactorily reliable and valid instrument for ranking individuals within this study population.
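
    The agreement statistics used in this and the preceding record, weighted kappa and cross-classification into the same or adjacent categories, can be sketched generically as below; this is not the authors' code, and quartiles are used purely for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def quartile_agreement(intake_a, intake_b):
    """Cross-classify two intake estimates (e.g. FFQ vs. 7 d EDR, or two FFQ
    administrations) into quartiles and report the linearly weighted kappa plus
    the share of subjects falling in the same or an adjacent quartile."""
    qa = pd.qcut(intake_a, 4, labels=False)        # quartile codes 0..3
    qb = pd.qcut(intake_b, 4, labels=False)
    kappa = cohen_kappa_score(qa, qb, weights="linear")
    same_or_adjacent = np.mean(np.abs(qa - qb) <= 1)
    return kappa, same_or_adjacent
```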

  4. On the determination of representative stress–strain relation of metallic materials using instrumented indentation

    International Nuclear Information System (INIS)

    Fu, Kunkun; Chang, Li; Zheng, Bailin; Tang, Youhong; Wang, Hongjian

    2015-01-01

    Highlights: • A method to convert indentation load–depth curve into representative stress–strain curve is presented. • Representative stress–strain curves of six metals are obtained using finite element analysis. • Different representative strain definitions are compared using finite element method. • Representative stress–strain curve of molybdenum films is obtained by nanoindentation tests. - Abstract: In this study, attempts have been made to estimate the representative stress–strain relation of metallic materials from indentation tests using an iterative method. Finite element analysis was performed to validate the method. The results showed that representative stress–strain relations of metallic materials using the present method were in a good agreement with those from tensile tests. Further, this method was extended to predict representative stress–strain relation of ultra-thin molybdenum films with a thickness of 485 nm using nanoindentation. Yielding strength and strain hardening exponent of the films were therefore obtained, which showed a good agreement with the published data

  5. The SGBS cell strain as a model for the in vitro study of obesity and cancer.

    LENUS (Irish Health Repository)

    Allott, Emma H

    2012-10-01

    The murine adipocyte cell line 3T3-L1 is well characterised and used widely, while the human pre-adipocyte cell strain, Simpson-Golabi-Behmel Syndrome (SGBS), requires validation for use in human studies. Obesity is currently estimated to account for up to 41 % of the worldwide cancer burden. A human in vitro model system is required to elucidate the molecular mechanisms for this poorly understood association. This work investigates the relevance of the SGBS cell strain for obesity and cancer research in humans.

  6. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  7. Strain expansion-reduction approach

    Science.gov (United States)

    Baqersad, Javad; Bharadwaj, Kedar

    2018-02-01

    Validating numerical models is one of the main aspects of engineering design. However, correlating the millions of degrees of freedom of numerical models to the few degrees of freedom of test models is challenging. Reduction/expansion approaches have traditionally been used to match these degrees of freedom. However, the conventional reduction/expansion approaches are limited to displacement, velocity or acceleration data. While in many cases only strain data are accessible (e.g. when a structure is monitored using strain-gages), the conventional approaches are not capable of expanding strain data. To bridge this gap, the current paper outlines a reduction/expansion technique to reduce/expand strain data. In the proposed approach, strain mode shapes of a structure are extracted using the finite element method or the digital image correlation technique. The strain mode shapes are used to generate a transformation matrix that can expand the limited set of measurement data. The proposed approach can be used to correlate experimental and analytical strain data. Furthermore, the proposed technique can be used to expand real-time operating data for structural health monitoring (SHM). In order to verify the accuracy of the approach, the proposed technique was used to expand a limited set of real-time operating data in a numerical model of a cantilever beam subjected to various types of excitations. The proposed technique was also applied to expand real-time operating data measured using a few strain gages mounted to an aluminum beam. It was shown that the proposed approach can effectively expand the strain data at limited locations to accurately predict the strain at locations where no sensors were placed.
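
    The expansion idea described above can be sketched in a few lines: with strain mode shapes from a finite element model, the modal coordinates that best reproduce the gage readings are found by least squares and then used to reconstruct the full strain field. This is a generic least-squares sketch of such a transformation, not the authors' implementation.

```python
import numpy as np

def expand_strain(strain_modes_full, measured_dofs, measured_strain):
    """Expand a few measured strains to all finite-element strain DOFs using
    strain mode shapes.

    strain_modes_full : (n_full, n_modes) strain mode shape matrix from FE/DIC
    measured_dofs     : indices of the instrumented (strain-gage) locations
    measured_strain   : (n_gages,) strains measured at those locations
    """
    phi_meas = strain_modes_full[measured_dofs, :]            # (n_gages, n_modes)
    # Modal coordinates that best reproduce the gage readings (least squares)
    q, *_ = np.linalg.lstsq(phi_meas, measured_strain, rcond=None)
    # Expanded strain field at every FE strain DOF
    return strain_modes_full @ q
```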

  8. Talking on a Wireless Cellular Device While Driving: Improving the Validity of Crash Odds Ratio Estimates in the SHRP 2 Naturalistic Driving Study

    Directory of Open Access Journals (Sweden)

    Richard A. Young

    2017-12-01

    Full Text Available Dingus and colleagues (Proc. Nat. Acad. Sci. U.S.A. 2016, 113, 2636–2641) reported a crash odds ratio (OR) estimate of 2.2 with a 95% confidence interval (CI) from 1.6 to 3.1 for hand-held cell phone conversation (hereafter, "Talk") in the SHRP 2 naturalistic driving database. This estimate is substantially higher than the effect sizes near one in prior real-world and naturalistic driving studies of conversation on wireless cellular devices (whether hand-held, hands-free portable, or hands-free integrated). Two upward biases were discovered in the Dingus study. First, it selected many Talk-exposed drivers who simultaneously performed additional secondary tasks besides Talk but selected Talk-unexposed drivers with no secondary tasks. This "selection bias" was removed by: (1) filtering out records with additional tasks from the Talk-exposed group; or (2) adding records with other tasks to the Talk-unexposed group. Second, it included records with driver behavior errors, a confounding bias that was also removed by filtering out such records. After removing both biases, the Talk OR point estimates declined to below 1, now consistent with prior studies. Pooling the adjusted SHRP 2 Talk OR estimates with prior study effect size estimates to improve precision, the population effect size for wireless cellular conversation while driving is estimated as 0.72 (CI 0.60–0.88).
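
    For reference, the sketch below shows the generic calculations underlying such results: an odds ratio with a Wald 95% confidence interval from a 2×2 exposure-by-crash table, and fixed-effect (inverse-variance) pooling of several OR estimates on the log scale. It is not the SHRP 2 case-cohort machinery, and all inputs are placeholders.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
    return or_, lo, hi

def pool_log_or(ors, cis):
    """Fixed-effect (inverse-variance) pooling of OR estimates on the log scale;
    'cis' is a list of (lower, upper) 95% CI bounds matching 'ors'."""
    log_or = np.log(ors)
    se = (np.log([hi for _, hi in cis]) - np.log([lo for lo, _ in cis])) / (2 * 1.96)
    w = 1 / se ** 2
    pooled = np.exp(np.sum(w * log_or) / np.sum(w))
    se_pooled = 1 / np.sqrt(np.sum(w))
    ci = np.exp(np.log(pooled) + np.array([-1.96, 1.96]) * se_pooled)
    return pooled, tuple(ci)
```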

  9. Development and Validation of the Total HUman Model for Safety (THUMS) Version 5 Containing Multiple 1D Muscles for Estimating Occupant Motions with Muscle Activation During Side Impacts.

    Science.gov (United States)

    Iwamoto, Masami; Nakahira, Yuko

    2015-11-01

    Accurate prediction of occupant head kinematics is critical for better understanding of head/face injury mechanisms in side impacts, especially for far-side occupants. Given that researchers have demonstrated that muscle activations, especially in the neck muscles, can affect occupant head kinematics, a human body finite element (FE) model that considers muscle activation is useful for predicting occupant head kinematics in real-world automotive accidents. In this study, we developed a human body FE model called the THUMS (Total HUman Model for Safety) Version 5 that contains 262 one-dimensional (1D) Hill-type muscle models over the entire body. The THUMS was validated against 36 series of PMHS (Post Mortem Human Surrogate) and volunteer test data in this study, and 16 series of PMHS and volunteer test data on side impacts are presented. Validation results with force-time curves were also evaluated quantitatively using the CORA (CORrelation and Analysis) method. The validation results suggest that the THUMS has good biofidelity in the responses of the regional or full body for side impacts, but relatively poor biofidelity in its local level of responses such as brain displacements. Occupant kinematics predicted by the THUMS with a muscle controller using 22 PID (Proportional-Integral-Derivative) controllers were compared with those of volunteer test data on low-speed lateral impacts. The THUMS with the muscle controller reproduced the head kinematics of the volunteer data more accurately than that without muscle activation, although further studies on validation of torso kinematics are needed for more accurate predictions of occupant head kinematics.
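
    A minimal discrete PID controller of the kind referenced above (one per muscle group, driving activation toward a target posture) might look like the sketch below; the gains, time step and error signal are illustrative assumptions, not the THUMS Version 5 calibration.

```python
class PIDController:
    """Minimal discrete PID controller for driving a muscle activation level
    toward a target posture signal (e.g. a joint or head angle)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        activation = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Muscle activation is bounded between 0 (passive) and 1 (fully active)
        return min(max(activation, 0.0), 1.0)

# Hypothetical usage with illustrative gains and a 1 ms time step:
# neck_pid = PIDController(kp=2.0, ki=0.5, kd=0.05, dt=1e-3)
# activation = neck_pid.update(target=0.0, measured=head_roll_angle)
```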

  10. Validation of the reference tissue model for estimation of dopaminergic D2-like receptor binding with [18F](N-methyl)benperidol in humans

    Energy Technology Data Exchange (ETDEWEB)

    Antenor-Dorsey, Jo Ann V. [Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO (United States); Markham, Joanne; Moerlein, Stephen M. [Department of Radiology, Washington University School of Medicine, St. Louis, MO (United States); Videen, Tom O. [Department of Radiology, Washington University School of Medicine, St. Louis, MO (United States); Department of Neurology, Washington University School of Medicine, St. Louis, MO (United States); Perlmutter, Joel S. [Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO (United States); Department of Radiology, Washington University School of Medicine, St. Louis, MO (United States); Department of Neurology, Washington University School of Medicine, St. Louis, MO (United States); Program in Physical Therapy, Washington University School of Medicine, St. Louis, MO (United States)], E-mail: joel@npg.wustl.edu

    2008-04-15

    Positron emission tomography measurements of dopaminergic D2-like receptors may provide important insights into disorders such as Parkinson's disease, schizophrenia, dystonia and Tourette's syndrome. The positron emission tomography (PET) radioligand [18F](N-methyl)benperidol ([18F]NMB) has high affinity and selectivity for D2-like receptors and is not displaced by endogenous dopamine. The goal of this study is to evaluate the use of a graphical method utilizing a reference tissue region for [18F]-NMB PET analysis by comparisons to an explicit three-compartment tracer kinetic model and graphical method that use arterial blood measurements. We estimated binding potential (BP) in the caudate and putamen using all three methods in 16 humans and found that the three-compartment tracer kinetic method provided the highest BP estimates while the graphical method using a reference region yielded the lowest estimates (P<.0001 by repeated-measures ANOVA). However, the three methods yielded highly correlated BP estimates for the two regions of interest. We conclude that the graphical method using a reference region still provides a useful estimate of BP comparable to methods using arterial blood sampling, especially since the reference region method is less invasive and computationally more straightforward, thereby simplifying these measurements.
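
    One widely used reference-region graphical method is the simplified Logan reference tissue plot, sketched below under the assumption that it is representative of the graphical approach evaluated here; the paper's exact formulation, including any k2′ term and the choice of t*, may differ.

```python
import numpy as np

def logan_reference_bp(t, c_roi, c_ref, t_star):
    """Binding potential from a simplified Logan reference-tissue graphical analysis.

    t, c_roi, c_ref are 1-D numpy arrays (frame mid-times, target-region and
    reference-region activities).  For t > t*, the plot of
    y = int(c_roi)/c_roi versus x = int(c_ref)/c_roi becomes linear; the slope
    estimates the distribution volume ratio (DVR) and BP = DVR - 1."""
    # Cumulative trapezoidal integrals of the time-activity curves
    int_roi = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_roi[1:] + c_roi[:-1]))))
    int_ref = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_ref[1:] + c_ref[:-1]))))
    mask = t >= t_star
    x = int_ref[mask] / c_roi[mask]
    y = int_roi[mask] / c_roi[mask]
    slope, _ = np.polyfit(x, y, 1)   # slope = DVR
    return slope - 1.0               # BP = DVR - 1
```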

  11. Validation of the reference tissue model for estimation of dopaminergic D2-like receptor binding with [18F](N-methyl)benperidol in humans

    International Nuclear Information System (INIS)

    Antenor-Dorsey, Jo Ann V.; Markham, Joanne; Moerlein, Stephen M.; Videen, Tom O.; Perlmutter, Joel S.

    2008-01-01

    Positron emission tomography measurements of dopaminergic D2-like receptors may provide important insights into disorders such as Parkinson's disease, schizophrenia, dystonia and Tourette's syndrome. The positron emission tomography (PET) radioligand [18F](N-methyl)benperidol ([18F]NMB) has high affinity and selectivity for D2-like receptors and is not displaced by endogenous dopamine. The goal of this study is to evaluate the use of a graphical method utilizing a reference tissue region for [18F]-NMB PET analysis by comparisons to an explicit three-compartment tracer kinetic model and graphical method that use arterial blood measurements. We estimated binding potential (BP) in the caudate and putamen using all three methods in 16 humans and found that the three-compartment tracer kinetic method provided the highest BP estimates while the graphical method using a reference region yielded the lowest estimates (P<.0001 by repeated-measures ANOVA). However, the three methods yie