WorldWideScience

Sample records for valid estimates based

  1. Development and validation of satellite based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2015-10-01

A satellite based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5% for classifying Clear (V ≥ 30 km), Moderate (10 km ≤ V < 30 km), and Low (V < 10 km) visibility conditions. The GOES-R ABI visibility retrieval can be used to augment measurements from the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network, and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.
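A minimal sketch of the multiple-linear-regression step in pure Python, assuming just two of the predictors named above (aerosol optical depth and fog/low-cloud probability); the training values and recovered coefficients are invented for illustration, not the published GOES-R model.

```python
# Hedged sketch: OLS regression of visibility on two illustrative predictors.
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y."""
    Xa = [[1.0] + row for row in X]      # prepend an intercept column
    n = len(Xa[0])
    XtX = [[sum(r[i] * r[j] for r in Xa) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(n)]
    return solve(XtX, Xty)

# Toy training set: [aerosol optical depth, fog/low-cloud probability]
# with an exactly linear synthetic "visibility" target in km.
X = [[0.1, 0.0], [0.5, 0.2], [0.9, 0.8], [0.3, 0.1], [0.7, 0.5]]
y = [30.0 - 20.0 * a - 8.0 * f for a, f in X]
b0, b_aod, b_fog = fit_ols(X, y)
```

Because the toy target is exactly linear in the predictors, the fit recovers the generating coefficients, which makes the sketch easy to verify.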

  2. Development and validation of satellite based estimates of surface visibility

    OpenAIRE

    Brunner, J.; R. B. Pierce; A. Lenzen

    2015-01-01

    A satellite based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorol...

  3. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    Energy Technology Data Exchange (ETDEWEB)

Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K. [Boston Children's Hospital, Harvard Medical School, Department of Radiology, Boston, MA (United States)

    2016-03-15

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)
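The model-development step above reduces to a one-variable least-squares fit, age = a + b × fibular length. A toy sketch, with invented, exactly linear lengths and ages rather than the study's data:

```python
# Hedged sketch: simple linear regression of bone age on fibular length.
def simple_linreg(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

lengths_mm = [52.0, 60.0, 68.0, 76.0, 84.0]     # fibular shaft length (mm), toy
ages_days  = [30.0, 90.0, 150.0, 210.0, 270.0]  # bone age (days), toy
a, b = simple_linreg(lengths_mm, ages_days)
predict_age = lambda length_mm: a + b * length_mm
```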

  4. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    Science.gov (United States)

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ: it requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  5. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

Full Text Available Abstract Background To estimate a classifier's error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier's generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier's generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
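The k-fold side of the comparison can be sketched with a nearest-centroid classifier on synthetic one-dimensional data; this illustrates the mechanics of held-out error estimation only and is not the paper's BCV implementation.

```python
import random

# Hedged sketch: k-fold CV estimate of misclassification error.
random.seed(0)

def nearest_centroid_error(train, test):
    c0 = [x for x, lab in train if lab == 0]
    c1 = [x for x, lab in train if lab == 1]
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    predict = lambda x: 0 if abs(x - m0) <= abs(x - m1) else 1
    return sum(predict(x) != lab for x, lab in test) / len(test)

def kfold_cv_error(data, k=5):
    data = data[:]
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]     # k roughly equal folds
    errs = []
    for i in range(k):
        test = folds[i]
        train = [d for j, fold in enumerate(folds) if j != i for d in fold]
        errs.append(nearest_centroid_error(train, test))
    return sum(errs) / k                       # average held-out error

data = ([(random.gauss(0.0, 1.0), 0) for _ in range(50)]
        + [(random.gauss(3.0, 1.0), 1) for _ in range(50)])
cv_error = kfold_cv_error(data, k=5)
```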

  6. Using AERONET to complement irradiance networks on the validation of satellite-based estimations

    Science.gov (United States)

    Oumbe, A.; Bru, H.; Ghedira, H.; Chiesa, M.; Blanc, P.; Wald, L.

    2012-12-01

Long-term measurements of surface solar irradiance (SSI) are essential for predicting the production of solar energy conversion systems. Ground-based SSI measurements are also needed for validation and calibration of models which convert satellite images into down-welling irradiances. Unfortunately, well-controlled data are publicly available for only a limited number of locations, especially when it comes to beam normal irradiance (BNI). In the Middle East in particular, there is only one publicly available research-class station: the Sede Boqer station, monitored by the BSRN (Baseline Surface Radiation Network). Thus, estimations of SSI have so far been difficult to validate in this region. Besides irradiance networks, the AERONET (Aerosol Robotic Network) program provides long-term, publicly accessible sun photometer measurements. Its main goal is to provide validation data for satellite retrievals of aerosol optical properties. Various atmospheric properties are measured: aerosol optical depth at several wavelengths, water vapor amount, and Angstrom coefficients. These data can be utilized for computation of the SSI in cloudless sky by means of a radiative transfer model (RTM). The appropriate conversion of AERONET atmospheric properties into irradiances would provide additional in-situ irradiance data. In this work, we select the AERONET data which are relevant for irradiance calculation, compute the direct and global irradiances using the RTM libRadtran, and validate the outcomes against the nearest actual irradiance measurements. The comparisons are made in the Middle East region. At Sede Boqer, where AERONET and BSRN measurements are simultaneously available, the standard deviation obtained is only 6% for BNI and 5% for GHI (global horizontal irradiance) between the computed and the measured hourly mean irradiances (see the attached figure). When the AERONET and BSRN stations considered are 100 km apart, the standard deviation between actually measured and AERONET-derived irradiances
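A deliberately minimal clear-sky calculation (Beer–Lambert attenuation of the beam by total optical depth) can stand in for a full radiative transfer run such as libRadtran; the Rayleigh and water-vapour optical depths and the solar geometry below are illustrative assumptions.

```python
import math

# Hedged sketch: direct normal irradiance from aerosol optical depth.
def direct_normal_irradiance(aod, tau_rayleigh=0.1, tau_water=0.05,
                             solar_zenith_deg=30.0, e0=1361.0):
    m = 1.0 / math.cos(math.radians(solar_zenith_deg))  # plane-parallel air mass
    tau = tau_rayleigh + aod + tau_water                # total optical depth
    return e0 * math.exp(-m * tau)                      # W/m^2

bni = direct_normal_irradiance(aod=0.2)
beam_horizontal = bni * math.cos(math.radians(30.0))    # horizontal beam component
```

More aerosol means a larger optical depth and hence a smaller beam irradiance, which is the sensitivity the AERONET-to-irradiance conversion exploits.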

  7. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen

cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis, a T1 intensity signal map software analyzer was used. Finally, the method was validated by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained...... data were analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique removes all the water that a tissue contains) and we measured the water they contained. Results: The 37 °C system and the analysis can be reproduced in a similar way. MR T1...... map based water content sequences can provide information that, after being analyzed using T1-map analysis software, can be interpreted as the water contained inside cartilage tissue. The amount of water estimated using this method was similar to the one obtained with the freeze-dry procedure......

  8. Estimation and validation of biomarker-based exposures for historical ammonium perfluorooctanoate.

    Science.gov (United States)

    Kreckmann, Kim H; Sakr, Carine J; Leonard, Robin C; Dawson, Barbara J

    2009-09-01

Ammonium perfluorooctanoate (APFO) exposures were estimated for use in an occupational mortality study using detailed work histories of cohort members and an exposure reconstruction model developed from occupational information and serum PFO(-) data collected in 2004 as part of a cross-sectional health survey. Measured serum PFO(-) levels of the health survey participants were linked with the job title held by the individuals at the time of sampling. The median, range, and distribution of serum levels were calculated to determine the typical exposure intensity for each job title. High variability was observed in the serum levels of workers within the same job titles. In addition, working in many "APFO-use" jobs did not result in higher exposure than working in "no APFO-use" jobs. Each job title was then assigned to one of three relative APFO job exposure categories (low, medium, or high). Participants' length of time in their job was examined in relation to their serum PFO(-) level and found unlikely to contribute to misclassification of job titles within exposure categories. The mean of the serum PFO(-) measurements for each job exposure category served as the mean intensity factor. Subsequently, the job exposure categories were applied to all historical job titles of the mortality cohort based on their correspondence with job titles represented in the health survey. The resulting job exposure matrix was validated with additional historical blood data collected between 1979 and 2002 from voluntary participants in a separate biomonitoring program. The validation analyses showed general agreement between estimated and measured exposure, reflecting the within-job-title variability observed in measured serum levels used to classify job exposure.
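The job-exposure-matrix construction can be sketched as grouping serum measurements by job title, categorizing each title by its median level, and recording the mean as the intensity factor; the job titles, serum values (arbitrary units), and category cut-points below are all invented for illustration, not taken from the study.

```python
from statistics import mean, median

# Hedged sketch: building a job-exposure matrix from serum measurements.
serum_by_job = {
    "fine powder operator": [420.0, 510.0, 390.0, 610.0],
    "maintenance":          [150.0, 220.0, 180.0],
    "office":               [40.0, 55.0, 35.0, 60.0],
}

def category(median_level, low_cut=100.0, high_cut=300.0):
    if median_level < low_cut:
        return "low"
    if median_level < high_cut:
        return "medium"
    return "high"

# Each job title gets a relative category plus a mean intensity factor.
jem = {job: {"category": category(median(vals)), "mean_intensity": mean(vals)}
       for job, vals in serum_by_job.items()}
```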

  9. Validation of a spectrophotometer-based method for estimating daily sperm production and deferent duct transit.

    Science.gov (United States)

    Froman, D P; Rhoads, D D

    2012-10-01

    The objectives of the present work were 3-fold. First, a new method for estimating daily sperm production was validated. This method, in turn, was used to evaluate testis output as well as deferent duct throughput. Next, this analytical approach was evaluated in 2 experiments. The first experiment compared left and right reproductive tracts within roosters. The second experiment compared reproductive tract throughput in roosters from low and high sperm mobility lines. Standard curves were constructed from which unknown concentrations of sperm cells and sperm nuclei could be predicted from observed absorbance. In each case, the independent variable was based upon hemacytometer counts, and absorbance was a linear function of concentration. Reproductive tracts were excised, semen recovered from each duct, and the extragonadal sperm reserve determined by multiplying volume by sperm cell concentration. Testicular sperm nuclei were procured by homogenization of a whole testis, overlaying a 20-mL volume of homogenate upon 15% (wt/vol) Accudenz (Accurate Chemical and Scientific Corporation, Westbury, NY), and then washing nuclei by centrifugation through the Accudenz layer. Daily sperm production was determined by dividing the predicted number of sperm nuclei within the homogenate by 4.5 d (i.e., the time sperm with elongated nuclei spend within the testis). Sperm transit through the deferent duct was estimated by dividing the extragonadal reserve by daily sperm production. Neither the efficiency of sperm production (sperm per gram of testicular parenchyma per day) nor deferent duct transit differed between left and right reproductive tracts (P > 0.05). Whereas efficiency of sperm production did not differ (P > 0.05) between low and high sperm mobility lines, deferent duct transit differed between lines (P < 0.001). On average, this process required 2.2 and 1.0 d for low and high lines, respectively. In summary, we developed and then tested a method for quantifying male
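The arithmetic of the method reduces to three steps: invert a linear standard curve to convert absorbance to nuclei concentration, divide the testicular count by the 4.5-day residence time to get daily sperm production, and divide the extragonadal reserve by daily production to get deferent duct transit. All numbers below are illustrative, not the paper's values.

```python
# Hedged sketch of the three-step calculation; slope and volumes are assumed.
def concentration_from_absorbance(absorbance, slope, intercept=0.0):
    """Invert the standard curve A = slope * C + intercept."""
    return (absorbance - intercept) / slope

slope = 2.0e-9                                    # absorbance per (nuclei/mL), toy
conc = concentration_from_absorbance(0.9, slope)  # nuclei per mL
total_nuclei = conc * 100.0                       # assume 100 mL of homogenate

dsp = total_nuclei / 4.5        # daily sperm production (4.5-day residence)
reserve = 2.2 * dsp             # toy extragonadal reserve
transit_days = reserve / dsp    # deferent duct transit time
```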

  10. Validation of satellite-based precipitation estimates over different African River Basins

    Science.gov (United States)

    Thiemig, V.; Rojas, R.; Levizzani, V.; De Roo, A.

    2012-04-01

Satellite-based precipitation products have become increasingly available and accessible in near real-time, increasingly encouraging the scientific community to use these data to replace or supplement sparse ground observations. Six satellite-based rainfall estimates (SRFE), namely CMORPH, RFE 2.0, TRMM 3B42, GPROF 6.0, PERSIANN, and GSMaP-MKV, and one reanalysis product (ERA-Interim) are validated against rain gauge data over four partly sparsely-gauged African river basins (Zambezi, Volta, Juba-Shabelle and Baro-Akobo). The objective is to provide the scientific community, which uses SRFE as input data for hydro-meteorological applications, with an intercomparable validation study of these products over different hydro-climatological conditions in Africa. The validation focuses on the general ability of the SRFE products to reproduce daily and monthly rainfall and, particularly, on rainfall characteristics that are relevant to hydro-meteorological applications, such as annual catchment totals, spatial distribution patterns within the river basin, seasonality of precipitation, number of rainy days per year, and timing and amount of heavy rainfall events. The accuracy of these products is assessed using a ground observation network comprising 203 stations with daily records between 2003 and 2006 (data coverage: 75 % of data for 38, 13, 18 and 31 % of stations, respectively). Considering the time and space variability of the different rainfall characteristics as well as the conventional hydrological working units, the validation is done on three spatially-aggregated levels: point, subcatchment, and river basin. For the latter two, the ground observations are interpolated using Kriging with External Drift, where the drift is defined as the terrain elevation. The performance is measured using standard statistical measures (MAE, RMSE, pBIAS, r, and NSeff) as well as visual inspection. 
The performance of the examined products varied depending on the spatially-aggregated level at which they were analyzed
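The skill scores named above (MAE, RMSE, percent bias, Pearson r, and Nash-Sutcliffe efficiency) have simple closed forms; a sketch with toy observed/simulated rainfall series follows. The pBIAS sign convention (simulated minus observed) is an assumption.

```python
import math

# Hedged sketch of standard validation statistics for paired series.
def scores(obs, sim):
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    mae = sum(abs(s - o) for o, s in zip(obs, sim)) / n
    rmse = math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / n)
    pbias = 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)
    r = (sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
         / math.sqrt(sum((o - mo) ** 2 for o in obs)
                     * sum((s - ms) ** 2 for s in sim)))
    nseff = 1.0 - (sum((s - o) ** 2 for o, s in zip(obs, sim))
                   / sum((o - mo) ** 2 for o in obs))
    return mae, rmse, pbias, r, nseff

obs = [1.0, 3.0, 5.0, 7.0]   # "gauge" rainfall, toy numbers
sim = [1.5, 2.5, 5.5, 6.5]   # "satellite" rainfall, toy numbers
mae, rmse, pbias, r, nseff = scores(obs, sim)
```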

  11. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    Science.gov (United States)

    Perez, Hector Eduardo

This dissertation focuses on developing and experimentally validating model based control techniques to safely enhance the operation of lithium-ion batteries. An overview of the contributions to address the challenges that arise is provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the

  12. Validity of the WHO VAW study instrument for estimating gender-based violence against women.

    Science.gov (United States)

    Schraiber, Lilia Blima; Latorre, Maria do Rosário Dias O; França, Ivan; Segri, Neuber José; D'Oliveira, Ana Flávia Pires Lucas

    2010-08-01

    To validate the instrument of the World Health Organization Violence Against Women (WHO VAW) study on psychological, physical and sexual violence against women perpetrated by intimate partners. This was a cross-sectional study conducted in several countries between 2000 and 2003, including Brazil. Representative random samples of women aged 15-49 years with intimate partners were selected, living in the city of São Paulo (n = 940) and in the Zona da Mata, Pernambuco (n = 1,188), southeastern and northeastern regions, respectively. Exploratory factor analysis on questions relating to violence was performed (four psychological, six physical and three sexual questions), with varimax rotation and creation of three factors. Cronbach's alpha was calculated to analyze the internal consistency. To validate through extreme groups, mean scores (0 to 13 points) for violence were tested in relation to the following outcomes: self-rated health, daily activities, presence of discomfort or pain, suicidal ideation or attempts, heavy alcohol consumption and presence of common mental disorders. Three factors were defined, with similar accumulated variance (0.6092 in São Paulo and 0.6350 in the Zona da Mata). For São Paulo, the first factor was determined by physical violence, the second by sexual violence and the third by psychological violence. For the Zona da Mata, the first factor was formed by psychological violence, the second by physical violence and the third by sexual violence. Cronbach's alpha coefficients were 0.88 in São Paulo and 0.89 in the Zona da Mata. The mean scores for violence were significantly higher for less favorable outcomes, with the exception of suicide attempts in São Paulo. The instrument was shown to be adequate for estimating gender-based violence against women perpetrated by intimate partners and can be used in studies on this subject. It has high internal consistency and a capacity to discriminate between different forms of violence
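The internal-consistency statistic reported above, Cronbach's alpha, is computed from the item variances and the variance of the total score; the item responses below are invented for illustration.

```python
# Hedged sketch: Cronbach's alpha for a set of questionnaire items.
def cronbach_alpha(items):
    """items: one response list per question, all covering the same subjects."""
    k = len(items)
    n = len(items[0])
    def var(xs):                       # population variance (the ratio is
        m = sum(xs) / len(xs)          # unchanged if sample variance is used)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three strongly agreeing toy items -> alpha close to 1.
items = [[1, 2, 3, 4, 5],
         [1, 2, 3, 4, 5],
         [2, 2, 3, 4, 4]]
alpha = cronbach_alpha(items)
```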

  13. Model-based PSF and MTF estimation and validation from skeletal clinical CT images

    Energy Technology Data Exchange (ETDEWEB)

    Pakdel, Amirreza [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Mainprize, James G.; Robert, Normand [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada); Fialkov, Jeffery [Division of Plastic Surgery, Sunnybrook Health Sciences Center, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Whyne, Cari M., E-mail: cari.whyne@sunnybrook.ca [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada)

    2014-01-15

Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread function (PSF). This study validates the accuracy of the PSFs estimated with this method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and along the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) were compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width at half-maximum (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates depended on whether the images were reconstructed with a standard kernel (Toshiba's FC68: mean PSF FWHM error 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high-resolution bone kernel (Toshiba's FC81: PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high-resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge
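For a Gaussian PSF the MTF is also Gaussian, so FWHM, sigma, and the frequency at a given MTF level are related in closed form, which is what makes the FWHM and MTF-level comparisons above well defined. The 0.8 mm FWHM below is an illustrative value, not one of the paper's results.

```python
import math

# Hedged sketch: Gaussian PSF <-> MTF relations.
FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def mtf_gaussian(f_cyc_per_mm, fwhm_mm):
    """MTF of a Gaussian PSF: exp(-2 * (pi * sigma * f)^2)."""
    sigma = fwhm_mm * FWHM_TO_SIGMA
    return math.exp(-2.0 * (math.pi * sigma * f_cyc_per_mm) ** 2)

def freq_at_mtf(level, fwhm_mm):
    """Invert the Gaussian MTF: frequency where it drops to `level`."""
    sigma = fwhm_mm * FWHM_TO_SIGMA
    return math.sqrt(-math.log(level)) / (math.sqrt(2.0) * math.pi * sigma)

fwhm = 0.8                      # mm, illustrative in-plane PSF width
f50 = freq_at_mtf(0.50, fwhm)   # cycles/mm at the 50% MTF level
```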

  14. Validation of proton stopping power ratio estimation based on dual energy CT using fresh tissue samples

    Science.gov (United States)

    Taasti, Vicki T.; Michalak, Gregory J.; Hansen, David C.; Deisher, Amanda J.; Kruse, Jon J.; Krauss, Bernhard; Muren, Ludvig P.; Petersen, Jørgen B. B.; McCollough, Cynthia H.

    2018-01-01

Dual energy CT (DECT) has been shown, in theoretical and phantom studies, to improve the stopping power ratio (SPR) determination used for proton treatment planning compared to the use of single energy CT (SECT). However, it has not been shown that this also extends to organic tissues. The purpose of this study was therefore to investigate the accuracy of SPR estimation for fresh pork and beef tissue samples used as surrogates of human tissues. The reference SPRs for fourteen tissue samples, which included fat, muscle and femur bone, were measured using proton pencil beams. The tissue samples were subsequently CT scanned using four different scanners with different dual energy acquisition modes, giving in total six DECT-based SPR estimations for each sample. The SPR was estimated using a proprietary algorithm (syngo.via DE Rho/Z Maps, Siemens Healthcare, Forchheim, Germany) for extracting the electron density and the effective atomic number. SECT images were also acquired and SECT-based SPR estimations were performed using a clinical Hounsfield look-up table. The mean and standard deviation of the SPR over large volumes-of-interest were calculated. For the six different DECT acquisition methods, the root-mean-square errors (RMSEs) for the SPR estimates over all tissue samples were between 0.9% and 1.5%. For the SECT-based SPR estimation the RMSE was 2.8%. For one DECT acquisition method, a positive bias was seen in the SPR estimates, having a mean error of 1.3%. The largest errors were found in the very dense cortical bone from a beef femur. This study confirms the advantages of DECT-based SPR estimation although good results were also obtained using SECT for most tissues.
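The physics underlying DECT-based SPR estimation can be sketched as follows: given a relative electron density and a mean excitation energy I (in practice derived from the electron density and effective atomic number), the ratio of Bethe stopping numbers gives the SPR relative to water. The I values, proton energy, and the 1.04 electron density below are illustrative assumptions, not the algorithm used in the study.

```python
import math

# Hedged sketch: SPR from relative electron density via the Bethe formula.
MEC2_EV = 0.511e6     # electron rest energy (eV)
I_WATER_EV = 75.0     # mean excitation energy of water (eV), a common choice

def spr(rho_e_rel, i_medium_ev, proton_energy_mev=200.0):
    mp_mev = 938.272                       # proton rest mass (MeV)
    gamma = 1.0 + proton_energy_mev / mp_mev
    beta2 = 1.0 - 1.0 / gamma ** 2         # (v/c)^2 of the proton
    def stopping_number(i_ev):             # Bethe, no correction terms
        return math.log(2.0 * MEC2_EV * beta2 / (i_ev * (1.0 - beta2))) - beta2
    return rho_e_rel * stopping_number(i_medium_ev) / stopping_number(I_WATER_EV)

spr_musclelike = spr(rho_e_rel=1.04, i_medium_ev=74.0)
```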

  15. Type-specific human papillomavirus biological features: validated model-based estimates.

    Directory of Open Access Journals (Sweden)

    Iacopo Baussano

Full Text Available Infection with high-risk (hr) human papillomavirus (HPV) is considered the necessary cause of cervical cancer. Vaccination against the HPV16 and 18 types, which are responsible for about 75% of cervical cancer worldwide, is expected to have a major global impact on cervical cancer occurrence. Valid estimates of the parameters that regulate the natural history of hrHPV infections are crucial to draw reliable projections of the impact of vaccination. We devised a mathematical model to estimate the probability of infection transmission, the rate of clearance, and the patterns of immune response following clearance of infection, for 13 hrHPV types. To test the validity of our estimates, we fitted the same transmission model to two large independent datasets from Italy and Sweden and assessed the consistency of the findings. The two populations, both unvaccinated, differed substantially in sexual behaviour, age distribution, and study setting (screening for cervical cancer or Chlamydia trachomatis infection). Estimated transmission probabilities of hrHPV types (80% for HPV16, 73%-82% for HPV18, and above 50% for most other types), clearance rates decreasing as a function of time since infection, and partial protection against re-infection with the same hrHPV type (approximately 20% for HPV16 and 50% for the other types) were similar in the two countries. The model could accurately predict the HPV16 prevalence observed in Italy among women who were not infected three years before. In conclusion, our models inform on biological parameters that cannot at the moment be measured directly from any empirical data but are essential for forecasting the impact of HPV vaccination programmes.
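A deliberately simplified compartmental sketch (susceptible, infected, previously cleared) of the kind of transmission dynamics described above, iterated with forward-Euler steps. The transmission probability (0.8) and post-clearance protection (20%) echo the ranges quoted for HPV16, but the partner-change rate, clearance rate, and time grid are illustrative assumptions, not the fitted model.

```python
# Hedged sketch: S-I-R-with-reinfection dynamics for one hrHPV type.
def simulate(p_transmit=0.8, partner_rate=2.0, clearance=1.0,
             protection=0.2, years=50.0, dt=0.01):
    s, i, r = 0.99, 0.01, 0.0   # susceptible, infected, previously cleared
    for _ in range(int(years / dt)):
        foi = p_transmit * partner_rate * i              # force of infection
        new_from_s = foi * s * dt
        new_from_r = foi * (1.0 - protection) * r * dt   # partial protection
        cleared = clearance * i * dt
        s -= new_from_s
        r += cleared - new_from_r
        i += new_from_s + new_from_r - cleared
    return s, i, r

s, i, r = simulate()   # i approximates the long-run type prevalence
```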

  16. Assessing the external validity of model-based estimates of the incidence of heart attack in England: a modelling study

    Directory of Open Access Journals (Sweden)

    Peter Scarborough

    2016-11-01

Full Text Available Abstract Background The DisMod II model is designed to estimate epidemiological parameters for diseases where measured data are incomplete, and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, total mortality rates, and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95% confidence intervals were derived around estimates from the external dataset and DisMod II estimates based on sampling variance and reported uncertainty in prevalence estimates, respectively. Results Estimates of the incidence rate for the whole population were higher in the DisMod II results than in the external dataset (+54% for men and +26% for women). Age-specific results showed that the DisMod II results overestimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion Compared with measured AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence of similar magnitude to measured estimates. The model should be used with caution when estimating age-specific incidence rates.
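A much-simplified version of the prevalence-to-incidence step that DisMod-type models perform: for a non-remitting condition, and ignoring differential mortality, prevalence obeys dP/da = i(a)·(1 − P), so incidence can be approximated as i ≈ (ΔP/Δa)/(1 − P). The ages and prevalences below are illustrative, not the study's survey inputs.

```python
# Hedged sketch: finite-difference incidence from age-specific prevalence.
ages = [40.0, 50.0, 60.0, 70.0]   # years, toy age grid
prev = [0.01, 0.03, 0.07, 0.13]   # ever-had-AMI prevalence, toy values

incidence = []
for k in range(len(ages) - 1):
    dp = prev[k + 1] - prev[k]
    da = ages[k + 1] - ages[k]
    p_mid = 0.5 * (prev[k] + prev[k + 1])        # midpoint prevalence
    incidence.append(dp / da / (1.0 - p_mid))    # per person-year
```

The real model additionally accounts for total and cause-specific mortality, which is one reason its estimates can diverge from registry-based incidence as in the comparison above.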

  17. An intercomparison and validation of satellite-based surface radiative flux estimates over the Arctic

    Science.gov (United States)

    Riihelä, Aku; Key, Jeffrey; Fokke Meirink, Jan; Kuipers Munneke, Peter; Palo, Timo; Karlsson, Karl-Göran

    2017-04-01

Accurate determination of radiative energy fluxes over the Arctic is of crucial importance for understanding atmosphere-surface interactions, melt and refreezing cycles of the snow and ice cover, and the role of the Arctic in the global energy budget. Satellite-based estimates can provide comprehensive spatiotemporal coverage, but the accuracy and comparability of the existing datasets must be ascertained to facilitate their use. Here we compare radiative flux estimates from CERES SYN/EBAF, GEWEX SRB and our own experimental Fluxnet-CLARA data against in situ observations over Arctic sea ice and the Greenland Ice Sheet during the summer of 2007. In general, CERES SYN1deg flux estimates agree best with in situ measurements, although with two particular limitations: (1) over sea ice the upwelling shortwave flux in CERES SYN1deg appears to be underestimated because of an underestimated surface albedo, and (2) the CERES SYN1deg upwelling longwave flux over sea ice saturates during midsummer. The AVHRR-based GEWEX and Fluxnet-CLARA flux estimates generally show a larger range in retrieval errors relative to CERES, with contrasting tendencies relative to each other. The largest source of retrieval error in the Fluxnet-CLARA downwelling shortwave flux is shown to be an overestimated cloud optical thickness. The results illustrate that satellite-based flux estimates over the Arctic are not yet homogeneous and further efforts are necessary to investigate the differences in the surface and cloud properties which lead to disagreements in flux retrievals.

  18. Validation of proton stopping power ratio estimation based on dual energy CT using fresh tissue samples

    DEFF Research Database (Denmark)

    Taasti, Vicki Trier; Michalak, Gregory James; Hansen, David C

    2017-01-01

    ) for extracting the electron density and the effective atomic number. SECT images were also acquired and SECT-based SPR estimations were performed using a clinical Hounsfield look-up table (HLUT). The mean and standard deviation of the SPR over large volumes of interest (VOIs) were calculated. For the six...

  19. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    Science.gov (United States)

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    To predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  20. An intercomparison and validation of satellite-based surface radiative energy flux estimates over the Arctic

    Science.gov (United States)

    Riihelä, Aku; Key, Jeffrey R.; Meirink, Jan Fokke; Kuipers Munneke, Peter; Palo, Timo; Karlsson, Karl-Göran

    2017-05-01

    Accurate determination of radiative energy fluxes over the Arctic is of crucial importance for understanding atmosphere-surface interactions, melt and refreezing cycles of the snow and ice cover, and the role of the Arctic in the global energy budget. Satellite-based estimates can provide comprehensive spatiotemporal coverage, but the accuracy and comparability of the existing data sets must be ascertained to facilitate their use. Here we compare radiative flux estimates from Clouds and the Earth's Radiant Energy System (CERES) Synoptic 1-degree (SYN1deg)/Energy Balanced and Filled, Global Energy and Water Cycle Experiment (GEWEX) surface energy budget, and our own experimental FluxNet / Satellite Application Facility on Climate Monitoring cLoud, Albedo and RAdiation (CLARA) data against in situ observations over Arctic sea ice and the Greenland Ice Sheet during summer of 2007. In general, CERES SYN1deg flux estimates agree best with in situ measurements, although with two particular limitations: (1) over sea ice the upwelling shortwave flux in CERES SYN1deg appears to be underestimated because of an underestimated surface albedo and (2) the CERES SYN1deg upwelling longwave flux over sea ice saturates during midsummer. The Advanced Very High Resolution Radiometer-based GEWEX and FluxNet-CLARA flux estimates generally show a larger range in retrieval errors relative to CERES, with contrasting tendencies relative to each other. The largest source of retrieval error in the FluxNet-CLARA downwelling shortwave flux is shown to be an overestimated cloud optical thickness. The results illustrate that satellite-based flux estimates over the Arctic are not yet homogeneous and that further efforts are necessary to investigate the differences in the surface and cloud properties which lead to disagreements in flux retrievals.

  1. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations

    Directory of Open Access Journals (Sweden)

    Checchi Francesco

    2013-01-01

    Full Text Available Abstract Background Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Methods Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons’ camps and two urban neighbourhoods with a mixture of residents and displaced), ranging in population from 1,969 to 90,547, and compared these to “gold standard” reference population figures from census or other robust methods. Results Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of Conclusions In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current applications. However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs and multi-level buildings. Based on these results, we discuss possible ways forward for the method’s development.
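The core arithmetic of the method described above is a single multiplication of the two inputs; a minimal sketch, with invented counts and occupancy (not values from the study):

```python
# Sketch of the paper's estimator: population ~ structure count x mean
# structure occupancy. All numbers below are illustrative only.

def estimate_population(structure_count: int, people_per_structure: float) -> float:
    """Remote population estimate from counted structures and occupancy."""
    return structure_count * people_per_structure

# e.g. 1,500 shelters counted on imagery, ~4.2 people per structure
estimate = estimate_population(1500, 4.2)
print(round(estimate))  # 6300
```

In practice the occupancy figure dominates the uncertainty, which is why the abstract notes that most available reports gave people per household rather than per structure.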

  2. Validity and reliability of dental age estimation of teeth root translucency based on digital luminance determination.

    Science.gov (United States)

    Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A

    2014-01-01

    In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, which is a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single root teeth employing a digital camera and imaging software for the measurement of the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method revealed stable values largely unrelated to observer experience, whereas requisite formulas proved to be camera-specific and should therefore be generated for an individual recording setting based on samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance for the regression formula. For the use of this primer multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to linear regression analysis employing the best contributor "arithmetic mean" of luminance yielded a regression equation with a standard error of 6.72 years (p age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age, and furthermore facilitate measurement and age classification due to its low dependence on observer experience.

  3. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Full Text Available Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about instruments. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits and the set of items that represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of 13 that documented content validity did so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item based on relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of 38 items, those with CVI over 0.75 remained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
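The item-level CVI used above can be sketched in a few lines: the proportion of experts rating an item 3 or 4 on the 4-point scale, with items at or below the 0.75 cut-off discarded. The ratings and item names below are invented for illustration:

```python
# Hedged sketch of an item-level Content Validity Index (CVI):
# proportion of expert ratings of 3 or 4 (relevant) on a 4-point scale.
# Items with CVI <= 0.75 are discarded, per the abstract.

def item_cvi(ratings):
    """Proportion of expert ratings of 3 or 4 for one item."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

ratings_by_item = {
    "item_1": [4, 4, 3],   # all three experts rate the item relevant
    "item_2": [2, 4, 1],   # only one expert rates the item relevant
}
retained = [i for i, r in ratings_by_item.items() if item_cvi(r) > 0.75]
print(retained)  # ['item_1']
```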

  4. Development and validation of satellite-based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2016-02-01

    A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5 % for classifying clear (V ≥ 30 km), moderate (10 km ≤ V GOES-R ABI visibility retrieval can be used to augment measurements from the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.
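The retrieval's multiple-linear-regression form can be illustrated with a toy fit, simplified here to a single synthetic predictor (the real retrieval combines aerosol optical depth, fog/low-cloud retrievals and NWP variables, and its coefficients are not given in the abstract; all data below are invented):

```python
# Toy least-squares fit of visibility against one synthetic predictor
# (aerosol optical depth). Synthetic data constructed as y = 35 - 20x.

xs = [0.1, 0.2, 0.4, 0.6, 0.8]       # synthetic AOD
ys = [33.0, 31.0, 27.0, 23.0, 19.0]  # synthetic visibility, km

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(round(slope, 1), round(intercept, 1))  # -20.0 35.0
```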

  5. MR-based Water Content Estimation in Cartilage: Design and Validation of a Method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sofie; Ringgaard, Steffen

    2012-01-01

    was customized and programmed. Finally, we validated the method after measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained data were analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique allows removal of all the water that a tissue...

  6. [Validity of an equation based on anthropometry to estimate body fat in older adults].

    Science.gov (United States)

    Huerta, Raquel Huerta; Esparza-Romero, Julián; Urquidez, Rene; Pacheco, Bertha I; Valencia, Mauro E; Alemán-Mateo, Heliodoro

    2007-12-01

    A prediction equation to estimate body fat mass from skinfold thickness for healthy elderly people was developed using a four-compartment (4C) model as the criterion method. This study included 202 subjects ≥ 60 y old. The measurements of total body water, bone mineral content and body density were included in the 4C model equation. The total sample was randomly partitioned. Sub-sample one was used to design the equations, which were applied in sub-sample two. Their accuracy and precision were evaluated by linear regression analysis, and bias by Bland and Altman analysis and simple linear regression. The best model included body mass, sex and the calf and triceps skinfold thicknesses, with an R2, standard error of the estimate and Cp of 0.85, 3.2 and 3.2, respectively. When the equation was applied in sub-sample two, it was accurate and precise; it showed no significant deviation from the line of identity (the intercept was not significantly different from zero, P>0.05), and the slope was different from zero (or similar to 1). Fat mass estimated by the equation accounted for 86% of the variability of the mean fat mass estimated by the 4C model, having a low standard error of the estimate (3.2 kg) and low pure error (3.1 kg). The new equation was accurate and precise as well as free of significant bias in men and women both together and separately. This equation can be a good option to estimate fat mass in elderly men and women with similar physical characteristics to the subjects of this study, and it can be used in clinical and epidemiological studies in this growing group.

  7. Multiple imputation was a valid approach to estimate absolute risk from a prediction model based on case-cohort data.

    Science.gov (United States)

    Mühlenbruch, Kristin; Kuxhaus, Olga; di Giuseppe, Romina; Boeing, Heiner; Weikert, Cornelia; Schulze, Matthias B

    2017-04-01

    To compare weighting methods for Cox regression and multiple imputation (MI) in a case-cohort study in the context of risk prediction modeling. Based on the European Prospective Investigation into Cancer and Nutrition Potsdam study, we estimated risk scores to predict incident type-2 diabetes using full cohort data and case-cohort data assuming missing information on waist circumference outside the case-cohort (∼90%). Varying weighting approaches and MI were compared with regard to the calculation of relative risks, absolute risks, and predictive abilities including C-index, the net reclassification improvement, and calibration. The full cohort comprised 21,845 participants, and the case-cohort comprised 2,703 participants. Relative risks were similar across all methods and compatible with full cohort estimates. Absolute risk estimates showed stronger disagreement mainly for Prentice and Self & Prentice weighting. Barlow and Langholz & Jiao weighting methods and MI were in good agreement with full cohort analysis. Predictive abilities were closest to full cohort estimates for MI or for Barlow and Langholz & Jiao weighting. MI seems to be a valid method for deriving or extending a risk prediction model from case-cohort data and might be superior for absolute risk calculation when compared to weighted approaches. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  8. Who r u?: On the (in)accuracy of incumbent-based estimates of range restriction in criterion-related and differential validity research.

    Science.gov (United States)

    Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Robbins, Steven B

    2017-05-01

    Correcting validity estimates for selection procedures for range restriction typically involves comparing variance in predictor scores between all job applicants and applicants who were selected. However, some research on criterion-related and differential validity of cognitive ability tests has relied on range restriction corrections based on data from job incumbents. Unfortunately, there remains ambiguity concerning the accuracy of this incumbent-based approach vis-à-vis the applicant-based approach. To address this issue, we conducted several Monte Carlo simulations, as well as an analysis of college admissions data. Our first simulation study showed that incumbent-based range restriction corrections result in downwardly biased estimates of criterion-related validity, whereas applicant-based corrections were quite accurate. Our second set of simulations showed that incumbent-based range restriction corrections can produce evidence of differential validity when there is no differential validity in the population. In contrast, applicant-based corrections tended to accurately estimate population parameters and showed little, if any, evidence of differential validity when there is no differential validity in the population. Analysis of data for the ACT as a predictor of academic performance revealed similar patterns of bias for incumbent-based corrections in an academic setting. Overall, the present findings raise serious concerns regarding the use of incumbent-based range restriction corrections in lieu of applicant-based corrections. They also cast doubt on recent evidence for differential validity of predictors of job performance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
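The correction at issue can be sketched with the standard direct range restriction formula (Thorndike Case II), in which the restricted validity is adjusted by the ratio of unrestricted to restricted predictor standard deviations. This is a generic textbook formula, not code from the study, and the numbers are illustrative; the abstract's point is that substituting incumbent SDs for applicant SDs changes this ratio and biases the corrected estimate:

```python
import math

# Direct range restriction correction (Thorndike Case II):
# r_corrected = r*u / sqrt(1 + r^2 * (u^2 - 1)), where
# u = SD(predictor, unrestricted group) / SD(predictor, restricted group).

def correct_for_range_restriction(r: float, sd_unrestricted: float,
                                  sd_restricted: float) -> float:
    u = sd_unrestricted / sd_restricted
    return (r * u) / math.sqrt(1.0 + r * r * (u * u - 1.0))

# Illustrative only: observed r = .30 in a selected group whose
# predictor SD is 60% of the applicant-pool SD.
print(round(correct_for_range_restriction(0.30, 1.0, 0.6), 3))  # 0.464
```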

  9. Trunk-acceleration based assessment of gait parameters in older persons: a comparison of reliability and validity of four inverted pendulum based estimations.

    Science.gov (United States)

    Zijlstra, Agnes; Zijlstra, Wiebren

    2013-09-01

    Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability and validity have not been investigated in older persons without a specific medical condition. The aim of this study was to compare reliability and validity of four different IP based estimations of mean step length in independent-living older persons. Participants were assessed twice and walked at different speeds while wearing a tri-axial accelerometer at the lower back. For all step-length estimators, test-retest intra-class correlations approached or were above 0.90. Intra-class correlations with reference step length were above 0.92 with a mean error of 0.0 cm when (1) multiplying the estimated center-of-mass displacement during a step by an individual correction factor in a simple IP model, or (2) adding an individual constant for bipedal stance displacement to the estimated displacement during single stance in a 2-phase IP model. When applying generic corrections or constants in all subjects (i.e. multiplication by 1.25, or adding 75% of foot length), correlations were above 0.75 with a mean error of respectively 2.0 and 1.2 cm. Although the results indicate that an individual adjustment of the IP models provides better estimations of mean step length, the ease of a generic adjustment can be favored when merely evaluating intra-individual differences. Further studies should determine the validity of these IP based estimations for assessing gait in daily life. Copyright © 2013 Elsevier B.V. All rights reserved.
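The simple IP estimation referred to above can be sketched as follows: the vertical center-of-mass excursion during a step, together with pendulum (leg) length, yields a horizontal displacement that is then scaled by a correction factor (the generic value 1.25 mentioned in the abstract, or an individually calibrated one). The input values below are invented for illustration:

```python
import math

# Simple inverted-pendulum step-length sketch: for pendulum length l
# and vertical center-of-mass displacement h during a step, the
# horizontal displacement is 2*sqrt(2*l*h - h^2), scaled by a
# correction factor (generic 1.25 per the abstract).

def ip_step_length(leg_length_m: float, com_vertical_disp_m: float,
                   correction: float = 1.25) -> float:
    """Step length (m) from a simple inverted-pendulum model."""
    l, h = leg_length_m, com_vertical_disp_m
    return correction * 2.0 * math.sqrt(2.0 * l * h - h * h)

# Illustrative values: 0.9 m leg length, 2.5 cm vertical COM excursion
print(round(ip_step_length(0.9, 0.025), 3))  # 0.527
```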

  10. Development and validation of satellite-based estimates of surface visibility

    OpenAIRE

    Brunner, J.; R. B. Pierce; A. Lenzen

    2016-01-01

    A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weat...

  11. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  12. Exploring the validity of HPQ-based presenteeism measures to estimate productivity losses in the health and education sectors.

    Science.gov (United States)

    Scuffham, Paul A; Vecchio, Nerina; Whiteford, Harvey A

    2014-01-01

    Illness-related presenteeism (suboptimal work performance) may be a significant factor in worker productivity. Until now, there has been no generally accepted best method of measuring presenteeism across different industries and occupations. This study sought to validate the Health and Work Performance Questionnaire (HPQ)-based measure of presenteeism across occupations and industries and assess the most appropriate method for data analysis. Work performance was measured using the modified version of the HPQ conducted in workforce samples from the education and health workforce in Queensland, Australia (N = 30,870) during 2005 and 2006. Three approaches to data analysis of presenteeism measures were assessed using absolute performance, the ratio of own performance to others' performance, and the difference between others' and own performance. The best measure is judged by its sensitivity to changes in health indicators. The measure that best correlated to health indicators was absolute presenteeism. For example, in the health sector, correlations between physical health status and absolute presenteeism were 4 to 5 times greater than the ratio or difference approaches, and in the education sector, these correlations were twice as large. Using this approach, the estimated cost of presenteeism in 2006 was $Aus8338 and $Aus8092 per worker per annum for the health and education sectors, respectively. The HPQ is a valid measure of presenteeism. Transforming responses by perceived performance of peers is unnecessary as absolute presenteeism correlated best with health indicators. Absolute presenteeism was more insightful for ascertaining the cost of presenteeism.
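The three scoring approaches compared in the study reduce to simple arithmetic on the HPQ's 0-10 performance self-ratings; a sketch with invented ratings (the study found the first, absolute, form most sensitive to health indicators):

```python
# The three candidate presenteeism scores from HPQ-style 0-10 ratings.
# Ratings below are invented for illustration.

own = 7.0     # self-rated own work performance, 0-10
others = 8.0  # rated performance of peers in a similar job, 0-10

absolute = own              # absolute presenteeism (preferred per study)
ratio = own / others        # own performance relative to others'
difference = others - own   # shortfall relative to others

print(absolute, round(ratio, 3), difference)  # 7.0 0.875 1.0
```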

  13. How to improve parameter estimates in GLM-based fMRI data analysis: cross-validated Bayesian model averaging.

    Science.gov (United States)

    Soch, Joram; Meyer, Achim Pascal; Haynes, John-Dylan; Allefeld, Carsten

    2017-09-01

    In functional magnetic resonance imaging (fMRI), model quality of general linear models (GLMs) for first-level analysis is rarely assessed. In recent work (Soch et al., 2016: "How to avoid mismodelling in GLM-based fMRI data analysis: cross-validated Bayesian model selection", NeuroImage, vol. 141, pp. 469-489; http://dx.doi.org/10.1016/j.neuroimage.2016.07.047), we have introduced cross-validated Bayesian model selection (cvBMS) to infer the best model for a group of subjects and use it to guide second-level analysis. While this is the optimal approach given that the same GLM has to be used for all subjects, there is a much more efficient procedure when model selection only addresses nuisance variables and regressors of interest are included in all candidate models. In this work, we propose cross-validated Bayesian model averaging (cvBMA) to improve parameter estimates for these regressors of interest by combining information from all models using their posterior probabilities. This is particularly useful as different models can lead to different conclusions regarding experimental effects and the most complex model is not necessarily the best choice. We find that cvBMS can prevent established effects from going undetected and that cvBMA can be more sensitive to experimental effects than using even the best model in each subject or the model which is best in a group of subjects. Copyright © 2017 Elsevier Inc. All rights reserved.
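The averaging idea behind cvBMA can be sketched in its simplest form: the combined parameter estimate is each candidate model's estimate weighted by that model's posterior probability. This is a generic Bayesian-model-averaging toy, not the authors' implementation, and all numbers are invented:

```python
# Toy posterior-weighted model averaging: combined estimate =
# sum over models of (per-model estimate x posterior probability).

def model_average(estimates, posteriors):
    """Posterior-probability-weighted average of per-model estimates."""
    assert abs(sum(posteriors) - 1.0) < 1e-9  # posteriors must sum to 1
    return sum(b * p for b, p in zip(estimates, posteriors))

# Three candidate GLMs give different estimates for one regressor of
# interest; averaging combines them instead of picking a single model.
betas = [0.80, 0.65, 0.95]
post = [0.50, 0.30, 0.20]
print(round(model_average(betas, post), 3))  # 0.785
```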

  14. Criterion-Related Validity of the Distance- and Time-Based Walk/Run Field Tests for Estimating Cardiorespiratory Fitness: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús

    2016-01-01

    Objectives The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Materials and Methods Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt’s psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. Results From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42–0.79), with the 1.5 mile (rp = 0.79, 0.73–0.85) and 12 min walk/run tests (rp = 0.78, 0.72–0.83) having the higher criterion-related validity for distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. Conclusions When the evaluation of an individual’s maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness. PMID:26987118
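The first step of the Hunter-Schmidt approach used in this meta-analysis can be sketched as a sample-size-weighted mean of the observed correlations (the full method also corrects for artifacts such as measurement error; study values below are invented):

```python
# Sample-size-weighted mean correlation, the starting point of a
# Hunter-Schmidt psychometric meta-analysis. Values are illustrative.

ns = [50, 120, 80]       # per-study sample sizes
rs = [0.70, 0.80, 0.75]  # per-study observed validity coefficients

r_bar = sum(n * r for n, r in zip(ns, rs)) / sum(ns)
print(round(r_bar, 3))  # 0.764
```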

  15. Criterion-Related Validity of the Distance- and Time-Based Walk/Run Field Tests for Estimating Cardiorespiratory Fitness: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús

    2016-01-01

    The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt's psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42-0.79), with the 1.5 mile (rp = 0.79, 0.73-0.85) and 12 min walk/run tests (rp = 0.78, 0.72-0.83) having the higher criterion-related validity for distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness.

  16. Criterion-Related Validity of the Distance- and Time-Based Walk/Run Field Tests for Estimating Cardiorespiratory Fitness: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Daniel Mayorga-Vega

    Full Text Available The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt's psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42-0.79), with the 1.5 mile (rp = 0.79, 0.73-0.85) and 12 min walk/run tests (rp = 0.78, 0.72-0.83) having the higher criterion-related validity for distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness.

  17. Heterogeneity in the validity of administrative-based estimates of immunization coverage across health districts in Burkina Faso: implications for measurement, monitoring and planning

    Science.gov (United States)

    Haddad, Slim; Bicaba, Abel; Feletto, Marta; Fournier, Pierre; Zunzunegui, Maria Victoria

    2010-01-01

    Background Data aggregation in national information systems begins at the district level. Decentralization has given districts a lead role in health planning and management; the validity of administrative-based estimates at that level is therefore important to improve the performance of immunization information systems. Objective To assess the validity of administrative-based immunization estimates and their usability for planning and monitoring activities at district level. Methods DTP3 and measles coverage rates from administrative sources were compared with estimates from the EPI cluster survey (ECS) and Demographic and Health Survey (DHS) carried out in 2003 at national and regional levels. ECS estimates were compared with administrative rates across the 52 districts, which were classified into three groups: those where administrative rates were underestimating, overestimating or concordant with ECS estimates (differences within 95% CI of ECS rate). Results National rates provided by administrative data and ECS are similar (74% and 71% for DTP3 and 68% and 66% for measles, respectively); DHS estimates are much lower. Regional administrative data show large discrepancies when compared against ECS and DHS data (differences sometimes reaching 30 percentage points). At district level, geographical area is correlated with over- or underestimation by administrative sources, which overestimate DTP3 and measles coverage in remote areas. Underestimation is observed in districts near urban and highly populated centres. Over- and underestimation are independent of the antigen under consideration. Conclusions Variability in immunization coverage across districts highlights the limitations of using nationally aggregated indicators. If district data are to be used in monitoring and planning immunization programmes as intended by decentralization, heterogeneity in their validity must be reduced. The authors recommend: (1) strengthening administrative data systems; (2

  18. Heterogeneity in the validity of administrative-based estimates of immunization coverage across health districts in Burkina Faso: implications for measurement, monitoring and planning.

    Science.gov (United States)

    Haddad, Slim; Bicaba, Abel; Feletto, Marta; Fournier, Pierre; Zunzunegui, Maria Victoria

    2010-09-01

    Data aggregation in national information systems begins at the district level. Decentralization has given districts a lead role in health planning and management; the validity of administrative-based estimates at that level is therefore important to improve the performance of immunization information systems. To assess the validity of administrative-based immunization estimates and their usability for planning and monitoring activities at district level. DTP3 and measles coverage rates from administrative sources were compared with estimates from the EPI cluster survey (ECS) and Demographic and Health Survey (DHS) carried out in 2003 at national and regional levels. ECS estimates were compared with administrative rates across the 52 districts, which were classified into three groups: those where administrative rates were underestimating, overestimating or concordant with ECS estimates (differences within 95% CI of ECS rate). National rates provided by administrative data and ECS are similar (74% and 71% for DTP3 and 68% and 66% for measles, respectively); DHS estimates are much lower. Regional administrative data show large discrepancies when compared against ECS and DHS data (differences sometimes reaching 30 percentage points). At district level, geographical area is correlated with over- or underestimation by administrative sources, which overestimate DTP3 and measles coverage in remote areas. Underestimation is observed in districts near urban and highly populated centres. Over- and underestimation are independent of the antigen under consideration. Variability in immunization coverage across districts highlights the limitations of using nationally aggregated indicators. If district data are to be used in monitoring and planning immunization programmes as intended by decentralization, heterogeneity in their validity must be reduced.
The authors recommend: (1) strengthening administrative data systems; (2) implementing indicators that are insensitive to population
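
The three-way district classification described above (administrative rate under-estimating, over-estimating, or concordant with the survey estimate's 95% CI) can be sketched as follows. This is an illustration, not the study's code: it uses a simple binomial standard error, ignoring the design effect of a cluster survey, and all names are hypothetical.

```python
import math

def classify_district(admin_rate, survey_rate, survey_n, z=1.96):
    """Classify an administrative coverage rate against a survey estimate:
    'concordant' if it falls inside the survey's approximate 95% binomial CI,
    otherwise 'overestimate' or 'underestimate'."""
    se = math.sqrt(survey_rate * (1 - survey_rate) / survey_n)
    lo, hi = survey_rate - z * se, survey_rate + z * se
    if admin_rate < lo:
        return "underestimate"  # administrative source under-reports coverage
    if admin_rate > hi:
        return "overestimate"   # administrative source over-reports coverage
    return "concordant"
```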

  19. A mathematical method for verifying the validity of measured information about the flows of energy resources based on the state estimation theory

    Science.gov (United States)

    Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.

    2015-11-01

    Efforts to improve energy efficiency in all branches of the fuel and energy complex should begin with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements to reduce payment for energy resources on the consumer's side, which leads to commercial losses of energy resources. The article presents a universal mathematical method for verifying the validity of measurement information in networks for transporting energy resources, such as electricity and heat, petroleum, gas, etc., based on the state estimation theory. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers, and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of energy resources for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy all state equations describing the energy resource transportation network; the state equations written in terms of calculated estimates are free from residuals. The difference between a measurement and its calculated analog (estimate) is called an estimation remainder in estimation theory. Large values of estimation remainders are an indicator of high errors in particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the transportation network observability, to eliminate
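
For a single balance constraint (inflow equals total outflow at one node), the reconciliation step described above reduces to a small weighted least-squares correction with a closed-form solution. The sketch below is illustrative, not the article's implementation: it adjusts raw measurements so the balance holds exactly and returns the estimation remainders that flag suspect meters.

```python
def reconcile_flows(measured, coeff, weights):
    """Weighted least-squares reconciliation of flow measurements subject to
    one linear balance constraint coeff . x = 0 (e.g. coeff = [1, -1, -1]
    means inflow minus the two outflows is zero). Returns estimates that
    satisfy the constraint exactly, plus per-measurement estimation
    remainders (measurement - estimate); a large remainder flags a bad meter."""
    # Constraint residual of the raw measurements
    r = sum(a * m for a, m in zip(coeff, measured))
    # Lagrange-multiplier solution: x = m - W^-1 a^T (a W^-1 a^T)^-1 r
    denom = sum(a * a / w for a, w in zip(coeff, weights))
    estimates = [m - (a / w) * r / denom
                 for m, a, w in zip(measured, coeff, weights)]
    remainders = [m - x for m, x in zip(measured, estimates)]
    return estimates, remainders
```

With measurements [100, 60, 43] (a 3-unit imbalance), equal weights spread the correction evenly and the reconciled flows balance exactly.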

  20. Trunk-acceleration based assessment of gait parameters in older persons : A comparison of reliability and validity of four inverted pendulum based estimations

    NARCIS (Netherlands)

    Zijlstra, Agnes; Zijlstra, Wiebren

    Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability

  1. Application of MET for the validation of satellite precipitation estimates

    Science.gov (United States)

    Kucera, P.; Brown, B.; Bullock, R.; Ahijevych, D.

    2009-04-01

    The goal of this study is to demonstrate the usefulness of the NCAR Model Evaluation Tools (MET) applied to the validation of high-resolution satellite rainfall estimates. MET provides grid-to-point, grid-to-grid, and advanced spatial validation techniques in one unified, modular toolkit that can be applied to a variety of spatial fields (e.g., satellite precipitation estimates). Most validation studies rely on the use of standard validation measures (mean error, bias, mean absolute error, root mean squared error, etc.) to quantify the quality of the precipitation estimates. Often these measures indicate poorer performance because, among other things, they are unable to account for small-scale variability or discriminate types of errors such as displacement in time and/or space (location, intensity, and orientation errors, etc.) in the precipitation estimates. This issue has motivated recent research and development of many new techniques such as, but not limited to, scale decomposition, fuzzy neighborhood, and object-oriented methods for evaluating spatial precipitation estimates. This study will compute statistics for high-resolution satellite estimates of precipitation using standard validation measures and compare them with object-oriented measures from the MET built-in Method for Object-based Diagnostic Evaluation (MODE) algorithm, using radar-rainfall estimates as the reference. Rainfall estimates generated by the TRMM Multi-satellite Precipitation Analysis (TMPA) and the CPC Morphing technique (CMORPH) will be used to demonstrate the new validation techniques.
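
The standard grid-to-point measures named above are straightforward to compute; a minimal plain-Python sketch with illustrative names:

```python
def validation_stats(est, ref):
    """Standard validation measures comparing an estimate field against a
    reference: mean error (bias), mean absolute error, and RMSE."""
    n = len(est)
    errors = [e - r for e, r in zip(est, ref)]
    me = sum(errors) / n                             # mean error (bias)
    mae = sum(abs(e) for e in errors) / n            # mean absolute error
    rmse = (sum(e * e for e in errors) / n) ** 0.5   # root mean squared error
    return me, mae, rmse
```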

  2. Validity of food frequency questionnaire-based estimates of long-term long-chain n-3 polyunsaturated fatty acid intake.

    Science.gov (United States)

    Wallin, Alice; Di Giuseppe, Daniela; Burgaz, Ann; Håkansson, Niclas; Cederholm, Tommy; Michaëlsson, Karl; Wolk, Alicja

    2014-01-01

    To evaluate how long-term dietary intake of long-chain n-3 polyunsaturated fatty acids (LCn-3 PUFAs), estimated by repeated food frequency questionnaires (FFQs) over 15 years, is correlated with LCn-3 PUFAs in adipose tissue (AT). Subcutaneous adipose tissue was obtained in 2003-2004 (AT-03) from 239 randomly selected women, aged 55-75 years, after completion of a 96-item FFQ (FFQ-03). All participants had previously returned an identical FFQ in 1997 (FFQ-97) and a 67-item version in 1987-1990 (FFQ-87). Pearson product-moment correlations were used to evaluate associations between intake of total and individual LCn-3 PUFAs as estimated by the three FFQ assessments and AT-03 content (% of total fatty acids). FFQ-estimated mean relative intake of LCn-3 PUFAs (% of total fat intake) increased between all three assessments (FFQ-87, 0.55 ± 0.34; FFQ-97, 0.74 ± 0.64; FFQ-03, 0.88 ± 0.56). Validity, in terms of Pearson correlations between FFQ-03 estimates and AT-03 content, was 0.41 (95% CI 0.30-0.51) for total LCn-3 PUFA and ranged from 0.29 to 0.48 for individual fatty acids; lower correlation was observed among participants with higher percentage body fat. With regard to long-term intake estimates, past dietary intake was also correlated with AT-03 content, with correlation coefficients in the range of 0.21-0.33 and 0.21-0.34 for FFQ-97 and FFQ-87, respectively. The correlations were improved by using average estimates from two or more FFQ assessments. Exclusion of fish oil supplement users (14%) did not alter the correlations. These data indicate reasonable validity of FFQ-based estimates of long-term (up to 15 years) LCn-3 PUFA intake, justifying their use in studies of diet-disease associations.
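
The Pearson product-moment correlations with confidence intervals reported above can be sketched with the Fisher z-transform. This is a stdlib-only illustration of the statistic, not the study's analysis code.

```python
import math

def pearson_with_ci(x, y, z=1.96):
    """Pearson product-moment correlation with an approximate 95% CI via
    the Fisher z-transform (z(r) is ~normal with SE 1/sqrt(n - 3))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    zr = math.atanh(r)                  # Fisher transform
    half = z / math.sqrt(n - 3)
    return r, math.tanh(zr - half), math.tanh(zr + half)
```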

  3. Multivariate dynamical systems-based estimation of causal brain interactions in fMRI: Group-level validation using benchmark data, neurophysiological models and human connectome project data.

    Science.gov (United States)

    Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Tu, Tao; Kochalka, John; Cai, Weidong; Menon, Vinod

    2016-08-01

    Causal estimation methods are increasingly being used to investigate functional brain networks in fMRI, but there are continuing concerns about the validity of these methods. Multivariate dynamical systems (MDS) is a state-space method for estimating dynamic causal interactions in fMRI data. Here we validate MDS using benchmark simulations as well as simulations from a more realistic stochastic neurophysiological model. Finally, we applied MDS to investigate dynamic causal interactions in a fronto-cingulate-parietal control network using human connectome project (HCP) data acquired during performance of a working memory task. Crucially, since the ground truth in experimental data is unknown, we conducted novel stability analysis to determine robust causal interactions within this network. MDS accurately recovered dynamic causal interactions with an area under the receiver operating characteristic curve (AUC) above 0.7 for benchmark datasets and AUC above 0.9 for datasets generated using the neurophysiological model. In experimental fMRI data, bootstrap procedures revealed a stable pattern of causal influences from the anterior insula to other nodes of the fronto-cingulate-parietal network. MDS is effective in estimating dynamic causal interactions in both the benchmark and neurophysiological model-based datasets in terms of AUC, sensitivity and false positive rates. Our findings demonstrate that MDS can accurately estimate causal interactions in fMRI data. Neurophysiological models and stability analysis provide a general framework for validating computational methods designed to estimate causal interactions in fMRI. The right anterior insula functions as a causal hub during working memory. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Comparison of anthropometric-based equations for estimation of body fat percentage in a normal-weight and overweight female cohort: validation via air-displacement plethysmography.

    Science.gov (United States)

    Temple, Derry; Denis, Romain; Walsh, Marianne C; Dicker, Patrick; Byrne, Annette T

    2015-02-01

    To evaluate the accuracy of the most commonly used anthropometric-based equations in the estimation of percentage body fat (%BF) in both normal-weight and overweight women using air-displacement plethysmography (ADP) as the criterion measure. A comparative study in which the equations of Durnin and Womersley (1974; DW) and Jackson, Pollock and Ward (1980) at three, four and seven sites (JPW₃, JPW₄ and JPW₇) were validated against ADP in three groups. Group 1 included all participants, group 2 included participants with a BMI <25·0 kg/m² and group 3 included participants with a BMI ≥25·0 kg/m². Human Performance Laboratory, Institute for Sport and Health, University College Dublin, Republic of Ireland. Forty-three female participants aged between 18 and 55 years. In all three groups, the %BF values estimated from the DW equation were closer to the criterion measure (i.e. ADP) than those estimated from the other equations. Of the three JPW equations, JPW₃ provided the most accurate estimation of %BF when compared with ADP in all three groups. In comparison to ADP, these findings suggest that the DW equation is the most accurate anthropometric method for the estimation of %BF in both normal-weight and overweight females.
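
Skinfold equations of the Durnin-Womersley type first estimate body density from the logarithm of a four-skinfold sum and then convert density to %BF, commonly via the Siri equation (%BF = 495/D - 450). The sketch below shows that two-step structure only; the c and m constants are illustrative placeholders, not the published values for any sex/age band, which must be taken from the original tables.

```python
import math

def body_fat_percent(skinfold_sum_mm, c=1.1599, m=0.0717):
    """Durnin-Womersley-style estimate: body density (g/ml) from the log of
    the four-skinfold sum (biceps, triceps, subscapular, suprailiac), then
    percent body fat via the Siri equation. c and m are sex/age specific;
    the defaults here are placeholders for illustration only."""
    density = c - m * math.log10(skinfold_sum_mm)
    return 495.0 / density - 450.0  # Siri conversion from density to %BF
```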

  5. Temperature based validation of the analytical model for the estimation of the amount of heat generated during friction stir welding

    Directory of Open Access Journals (Sweden)

    Milčić Dragan S.

    2012-01-01

    Friction stir welding is a solid-state welding technique that utilizes the thermomechanical influence of the rotating welding tool on the parent material, resulting in a monolithic joint - the weld. At the contact between the welding tool and the parent material, significant stirring and deformation of the parent material appears, and during this process mechanical energy is partially transformed into heat. The generated heat affects the temperature of the welding tool and parent material, thus the proposed analytical model for the estimation of the amount of generated heat can be verified by temperature: analytically determined heat is used for numerical estimation of the temperature of the parent material, and this temperature is compared to the experimentally determined temperature. The numerical solution is estimated using the finite difference method - an explicit scheme with adaptive grid, considering the influence of temperature on the material's conductivity, contact conditions between welding tool and parent material, material flow around the welding tool, etc. The analytical model shows that 60-100% of the mechanical power given to the welding tool is transformed into heat, while the comparison of results shows a maximal relative difference between the analytical and experimental temperature of about 10%.
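
An explicit finite-difference scheme of the kind named above updates each interior node from its neighbors at the previous time step. The sketch below is a minimal 1-D, fixed-grid illustration of that scheme type (with a volumetric heat source q), not the authors' adaptive-grid, temperature-dependent-conductivity implementation.

```python
def step_temperature(T, alpha, dx, dt, q=None):
    """One explicit finite-difference step of 1-D heat conduction:
        T_i(new) = T_i + r*(T_{i-1} - 2*T_i + T_{i+1}) + dt*q_i,
    with r = alpha*dt/dx**2, stable only for r <= 0.5.
    Boundary nodes are held fixed (Dirichlet conditions)."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for r > 0.5"
    if q is None:
        q = [0.0] * len(T)
    new = list(T)
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1]) + dt * q[i]
    return new
```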

  6. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were typically validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
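
Threshold-based MTV segmentation of the general kind described above keeps voxels whose uptake exceeds the background by a fraction of the background-corrected peak. The sketch below is a generic illustration: the 42% default is a common literature choice, not this study's calibrated threshold, and the background here is passed in rather than estimated by k-means clustering as in the paper.

```python
def segment_mtv(voxels, background, threshold_pct=0.42):
    """Fixed-threshold segmentation sketch: return indices of voxels whose
    value exceeds background + threshold_pct * (peak - background)."""
    peak = max(voxels)
    cut = background + threshold_pct * (peak - background)
    return [i for i, v in enumerate(voxels) if v > cut]
```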

  7. Validation of Core Temperature Estimation Algorithm

    Science.gov (United States)

    2016-01-20

    [Search-result excerpts; full abstract not indexed:] "...and risk of heat injury. An algorithm for estimating core temperature based on heart rate has been developed by others in order to avoid standard..." / "...Accepted standards for measuring core temperature include probes in the pulmonary artery, rectum, or esophagus, and an ingestible..." / "...temperature estimation from heart rate for first responders wearing different levels of personal protective equipment," Ergonomics, 2015.

  8. Towards valid 'serious non-fatal injury' indicators for international comparisons based on probability of admission estimates

    DEFF Research Database (Denmark)

    Cryer, Colin; Miller, Ted R; Lyons, Ronan A

    2017-01-01

    were calculated. RESULTS: The results confirmed that femoral fractures have high PrA across all countries studied. Strong evidence for high PrA also exists for fracture of base of skull with cerebral laceration and contusion; intracranial haemorrhage; open fracture of radius, ulna, tibia and fibula; pneumohaemothorax and injury to the liver and spleen. Slightly weaker evidence exists for cerebellar or brain stem laceration; closed fracture of the tibia and fibula; open and closed fracture of the ankle; haemothorax and injury to the heart and lung. CONCLUSIONS: Using a large study size, we identified injury...

  9. Remote Estimation of Chlorophyll-a in Inland Waters by a NIR-Red-Based Algorithm: Validation in Asian Lakes

    Directory of Open Access Journals (Sweden)

    Gongliang Yu

    2014-04-01

    Satellite remote sensing is a highly useful tool for monitoring chlorophyll-a concentration (Chl-a) in water bodies. Remote sensing algorithms based on near-infrared-red (NIR-red) wavelengths have demonstrated great potential for retrieving Chl-a in inland waters. This study tested the performance of a recently developed NIR-red based algorithm, SAMO-LUT (Semi-Analytical Model Optimizing and Look-Up Tables), using an extensive dataset collected from five Asian lakes. Results demonstrated that Chl-a retrieved by the SAMO-LUT algorithm was strongly correlated with measured Chl-a (R2 = 0.94), and the root-mean-square error (RMSE) and normalized root-mean-square error (NRMS) were 8.9 mg∙m−3 and 72.6%, respectively. However, the SAMO-LUT algorithm yielded large errors for sites where Chl-a was less than 10 mg∙m−3 (RMSE = 1.8 mg∙m−3 and NRMS = 217.9%). This was because differences in water-leaving radiances at the NIR-red wavelengths (i.e., 665 nm, 705 nm and 754 nm) used in the SAMO-LUT were too small due to low concentrations of water constituents. Using a blue-green algorithm (OC4E) instead of the SAMO-LUT for the waters with low constituent concentrations would have reduced the RMSE and NRMS to 1.0 mg∙m−3 and 16.0%, respectively. This indicates that (1) the NIR-red algorithm does not work well when water constituent concentrations are relatively low; (2) different algorithms should be used in light of water constituent concentration; and thus (3) it is necessary to develop a classification method for selecting the appropriate algorithm.
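
NIR-red algorithms of this family build on a simple band-arithmetic core: a three-band index of the form (1/R665 - 1/R705) * R754 that is approximately proportional to Chl-a in turbid inland waters. The sketch below shows only that index; SAMO-LUT's semi-analytical formulation and look-up-table optimization, and any calibration to mg∙m−3, are more involved and site-specific.

```python
def three_band_index(r665, r705, r754):
    """Three-band NIR-red index (1/R665 - 1/R705) * R754 from reflectances
    at 665, 705 and 754 nm; roughly proportional to Chl-a in turbid waters.
    Calibration to concentration units is omitted here."""
    return (1.0 / r665 - 1.0 / r705) * r754
```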

  10. Estimation of spatial-temporal gait parameters in level walking based on a single accelerometer: validation on normal subjects by standard gait analysis.

    Science.gov (United States)

    Bugané, F; Benedetti, M G; Casadio, G; Attala, S; Biagi, F; Manca, M; Leardini, A

    2012-10-01

    This paper investigates the ability of a single wireless inertial sensing device stuck on the lower trunk to provide spatial-temporal parameters during level walking. The 3-axial acceleration signals were filtered and the timing of the main gait events identified. Twenty-two healthy subjects were analyzed with this system for validation, and the estimated parameters were compared with those obtained with state-of-the-art gait analysis, i.e. stereophotogrammetry and dynamometry. For each side, from four to six gait cycles were measured with the device, of which two were validated by gait analysis. The new acquisition system is easy to use and does not interfere with regular walking. No statistically significant differences were found between the acceleration-based measurements and the corresponding ones from gait analysis for most of the spatial-temporal parameters, i.e. stride length, stride duration, cadence and speed, etc.; significant differences were found for the gait cycle phases, i.e. single and double support duration, etc. The system therefore shows promise also for a future routine clinical use. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
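
Temporal gait parameters of the kind listed above follow from detecting gait events in the trunk-acceleration trace. The toy sketch below counts steps as upward threshold crossings of a vertical-acceleration signal and derives cadence; it is a stand-in for illustration, far simpler than the filtering and event-identification pipeline the paper describes, and all names are hypothetical.

```python
def step_times(acc, fs, thresh):
    """Naive step detection: a step is counted at each upward crossing of
    `thresh` in the acceleration samples `acc` (sampling rate `fs` Hz).
    Returns the crossing times (s) and cadence (steps/min)."""
    times = [i / fs for i in range(1, len(acc))
             if acc[i - 1] < thresh <= acc[i]]
    duration = len(acc) / fs
    cadence = 60.0 * len(times) / duration
    return times, cadence
```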

  11. Validity of treadmill- and track-based individual calibration methods for estimating free-living walking speed and VO2 using the Actigraph accelerometer.

    Science.gov (United States)

    Barnett, Anthony; Cerin, Ester; Vandelanotte, Corneel; Matsumoto, Aya; Jenkins, David

    2015-01-01

    For many patients clinical prescription of walking will be beneficial to health and accelerometers can be used to monitor their walking intensity, frequency and duration over many days. Walking intensity should include establishment of individual specific accelerometer count, walking speed and energy expenditure (VO2) relationships and this can be achieved using a walking protocol on a treadmill or overground. However, differences in gait mechanics during treadmill compared to overground walking may result in inaccurate estimations of free-living walking speed and VO2. The aims of this study were to compare the validity of track- and treadmill-based calibration methods for estimating free-living level walking speed and VO2 and to explain between-method differences in accuracy of estimation. Fifty healthy adults [32 women and 18 men; mean (SD): 40 (13) years] walked at four pre-determined speeds on an outdoor track and a treadmill, and completed three 1-km self-paced level walks while wearing an Actigraph monitor and a mobile oxygen analyser. Speed- and VO2-to-Actigraph count individual calibration equations were computed for each calibration method. Between-method differences in calibration equation parameters, prediction errors, and relationships of walking speed with VO2 and Actigraph counts were assessed. The treadmill-calibration equation overestimated free-living walking speed (on average, by 0.7 km·h−1) and VO2 (by 4.99 ml·kg−1·min−1), while the track-calibration equation did not. This was because treadmill walking, from which the calibration equation was derived, produced lower Actigraph counts and higher VO2 for a given walking speed compared to walking on a track. The prediction error associated with the use of the treadmill-calibration method increased with free-living walking speed. This issue was not observed when using the track-calibration method. The proposed track-based individual accelerometer calibration method can
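
An individual calibration equation of the kind described above is, at its simplest, an ordinary least-squares line mapping accelerometer counts to walking speed (or VO2) for one person. The sketch below fits such a line and predicts speed from a new count value; the counts-per-minute and speed numbers are hypothetical, not data from the study.

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(x, y))
         / sum((u - mx) ** 2 for u in x))
    return my - b * mx, b

# Hypothetical per-person track protocol: counts/min vs walking speed (km/h)
counts = [1500.0, 2500.0, 3500.0, 4500.0]
speeds = [3.0, 4.0, 5.0, 6.0]
a, b = fit_line(counts, speeds)
predicted = a + b * 3000.0  # estimated free-living speed at 3000 counts/min
```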

  12. Constraint-based feature validation

    OpenAIRE

    Dohmen, M.H.P.J.

    1998-01-01

    The feature modeling paradigm combines geometric and functional product information in one model. In an ideal product development environment, multiple views of a product in terms of features coexist. Feature validation concerns the validity of the feature information in all these views, focusing on validity specification and maintenance. This thesis presents a feature validation scheme based on constraints. It enables flexible and expressive feature validity maintenance. The scheme ensures t...

  13. Joint framework for motion validity and estimation using block overlap.

    Science.gov (United States)

    Santoro, Michael; AlRegib, Ghassan; Altunbasak, Yucel

    2013-04-01

    This paper presents a block-overlap-based validity metric for use as a measure of motion vector (MV) validity and to improve the quality of the motion field. In contrast to other validity metrics in the literature, the proposed metric is not sensitive to image features and does not require the use of neighboring MVs or manual thresholds. Using a hybrid de-interlacer, it is shown that the proposed metric outperforms other block-based validity metrics in the literature. To help regularize the ill-posed nature of motion estimation, the proposed validity metric is also used as a regularizer in an energy minimization framework to determine the optimal MV. Experimental results show that the proposed energy minimization framework outperforms several existing motion estimation methods in the literature in terms of MV and interpolation quality. For interpolation quality, our algorithm outperforms all other block-based methods as well as several complex optical flow methods. In addition, it is one of the fastest implementations at the time of this writing.

  14. Estimating the validity of administrative variables

    NARCIS (Netherlands)

    Bakker, B.F.M.

    2012-01-01

    Administrative data have become more important for both official statistics and academic research. One possible problem with such data is that they are biased and have a low validity. Although this problem is often mentioned in a qualitative respect, the validity is seldom quantitatively measured.

  15. Food composition table-based estimation of energy and major nutrient intake in comparison with chemical analysis: a validation study in Korea.

    Science.gov (United States)

    Kim, Eul-Sang; Ko, Yang-Sook; Kim, Junghun; Matsuda-Inoguchi, Naoko; Nakatsuka, Haruo; Watanabe, Takao; Shimbo, Shinichiro; Ikeda, Masayuki

    2003-05-01

    This study was initiated to examine the accuracy of conventional food composition table-based estimation of intakes of energy, protein, lipid and carbohydrate, in comparison with chemical analysis. For this purpose, 66 women (aged 29 to 54 years) in three locations in Jeju Island, Republic of Korea, volunteered to offer 24-hour food duplicate samples. Half of them were housewives, and the remaining half were farmers or fishers. The duplicate samples were subjected 1) to chemical analysis for daily intake of energy, protein, lipid and carbohydrate following official methods in Korea (measured values), and 2) to estimation of intakes of the same items using the Korean Food Composition Tables (estimated values). The two sets of results, i.e., the measured and estimated values, were compared by paired and unpaired t-test, and linear regression analysis. The estimated values correlated closely with the measured values, irrespective of energy or the three major nutrients. A close agreement was observed for energy intake (the estimated/measured ratio of > 98%), and it was also the case for protein intake (101%). Under- and over-estimation was observed, however, for carbohydrate (by - 8%) and lipid intake (by + 24%), respectively. It was concluded that the Korean Food Composition Tables are sufficiently accurate when applied for estimation of total energy intake as well as protein intake. Care should be taken, however, in applying the tables for estimation of lipid and carbohydrate intake, because there may be a risk of over- and under-estimation for the former and the latter, respectively.

  16. Online cross-validation-based ensemble learning.

    Science.gov (United States)

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
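
The online cross-validation idea can be sketched directly: each incoming batch is scored by every candidate learner before any of them trains on it, so every recorded loss is honestly held-out, and the discrete selector picks the candidate with the lowest cumulative loss. The toy learners below are illustrative stand-ins, not the paper's candidate library.

```python
class ZeroLearner:
    """Trivial candidate: always predicts 0."""
    def predict(self, x): return 0.0
    def update(self, x, y): pass

class RunningMean:
    """Candidate that predicts the running mean of past outcomes."""
    def __init__(self): self.n, self.s = 0, 0.0
    def predict(self, x): return self.s / self.n if self.n else 0.0
    def update(self, x, y): self.n += 1; self.s += y

def online_cv_select(batches, learners):
    """Online cross-validation: score each batch with every learner before
    updating (held-out loss), then update all learners on it. Returns the
    index of the candidate with the lowest cumulative squared-error loss."""
    cum_loss = [0.0] * len(learners)
    for x, y in batches:
        for k, learner in enumerate(learners):
            cum_loss[k] += (learner.predict(x) - y) ** 2
        for learner in learners:
            learner.update(x, y)
    return min(range(len(learners)), key=cum_loss.__getitem__)
```

On a stream of constant outcomes, the running-mean candidate quickly dominates the zero predictor and is selected.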

  17. A practical strategy for sEMG-based knee joint moment estimation during gait and its validation in individuals with cerebral palsy.

    Science.gov (United States)

    Kwon, Suncheol; Park, Hyung-Soon; Stanley, Christopher J; Kim, Jung; Kim, Jonghyun; Damiano, Diane L

    2012-05-01

    Individuals with cerebral palsy have neurological deficits that may interfere with motor function and lead to abnormal walking patterns. It is important to know the joint moment generated by the patient's muscles during walking in order to assist the suboptimal gait patterns. In this paper, we describe a practical strategy for estimating the internal moment of a knee joint from surface electromyography (sEMG) and knee joint angle measurements. This strategy requires only isokinetic knee flexion and extension tests to obtain a relationship between the sEMG and the knee internal moment, and it does not necessitate comprehensive laboratory calibration, which typically requires a 3-D motion capture system and ground reaction force plates. Four estimation models were considered based on different assumptions about the functions of the relevant muscles during the isokinetic tests and the stance phase of walking. The performance of the four models was evaluated by comparing the estimated moments with the gold standard internal moment calculated from inverse dynamics. The results indicate that an optimal estimation model can be chosen based on the degree of cocontraction. The estimation error of the chosen model is acceptable (normalized root-mean-squared error: 0.15-0.29, R: 0.71-0.93) compared to previous studies (Doorenbosch and Harlaar, 2003; Doorenbosch and Harlaar, 2004; Doorenbosch, Joosten, and Harlaar, 2005), and this strategy provides a simple and effective solution for estimating knee joint moment from sEMG.

  18. Validation of Core Temperature Estimation Algorithm

    Science.gov (United States)

    2016-01-29

    a constant heart rate until the CT estimate converges, with convergence defined as the time for CT to fall within 0.5% of the final temperature...range of error within which 95% of the estimated errors should fall assuming a normal distribution, which is consistent with the error distribution...and B.C. Ruby, "Core-Temperature Sensor Ingestion Timing and Measurement Variability," Journal of Athletic Training, vol. 45, no. 6, pp. 594–600

  19. Validated Extractive Spectrophotometric Estimation of Tadalafil in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    J. Adlin Jino Nesalin

    2009-01-01

    Two simple spectrophotometric methods have been developed for the estimation of tadalafil in both pure and tablet dosage form. Methods A and B are based on the formation of ion-pair complexes of the drug with the dyes bromothymol blue (BTB) and bromocresol green (BCG) in acidic buffer solution, followed by extraction with chloroform to form a yellow-colored chromogen with absorption maxima at 420 nm and 415 nm, respectively. Beer's law is valid in the concentration range of 10-50 mcg/mL for both methods. The developed methods were validated for precision, accuracy, ruggedness and robustness. Statistical analysis proves that the methods are reproducible and selective for the routine analysis of the said drug.
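    The quantitative step behind such a method is an ordinary Beer's law standard curve: absorbance is fitted linearly against standard concentrations within the validated 10-50 mcg/mL range, then inverted for unknown samples. The absorbance values below are invented for illustration.

```python
import numpy as np

# Beer's law calibration: absorbance is linear in concentration over the
# validated range, so a least-squares standard curve converts a measured
# absorbance back to a concentration.
conc = np.array([10, 20, 30, 40, 50], dtype=float)   # standards, mcg/mL
absorb = np.array([0.12, 0.24, 0.36, 0.48, 0.60])    # at 420 nm (synthetic)

slope, intercept = np.polyfit(conc, absorb, 1)

unknown_abs = 0.30
estimated_conc = (unknown_abs - intercept) / slope   # invert the curve
```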

  20. Estimation and Validation of RapidEye-Based Time-Series of Leaf Area Index for Winter Wheat in the Rur Catchment (Germany)

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2015-03-01

    Leaf Area Index (LAI) is an important variable for numerous processes in various disciplines of the bio- and geosciences. In situ measurements are the most accurate source of LAI, but they have the limitation of being labor intensive and site specific. For spatially explicit applications (from regional to continental scales), satellite remote sensing is a promising source of LAI at different spatial resolutions. However, satellite-derived LAI estimates using empirical models require calibration and validation with in situ measurements. In this study, we attempted to validate a direct LAI retrieval method from remotely sensed images (RapidEye) with in situ LAI (LAIdestr). Remote sensing LAI (LAIrapideye) were derived using different vegetation indices, namely SAVI (Soil Adjusted Vegetation Index) and NDVI (Normalized Difference Vegetation Index). Additionally, the applicability of the newly available red-edge band (RE) was analyzed through the Normalized Difference Red-Edge index (NDRE) and the Soil Adjusted Red-Edge index (SARE). The LAIrapideye obtained from the vegetation indices with the red-edge band showed better correlation with LAIdestr (r = 0.88 and Root Mean Square Deviation, RMSD = 1.01 & 0.92). This study also investigated the need to apply radiometric/atmospheric correction methods to the time-series of RapidEye Level 3A data prior to LAI estimation. Analysis of the RapidEye Level 3A data set showed that application of the radiometric/atmospheric correction did not improve the correlation of the estimated LAI with in situ LAI.
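    The four indices named in the abstract have standard per-pixel definitions from the red, red-edge, and near-infrared reflectance bands. A minimal sketch, assuming illustrative reflectance values and the common SAVI soil factor L = 0.5 (the abstract does not state the factor used):

```python
# Standard vegetation-index formulas; inputs are surface reflectances.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    return (nir - red_edge) / (nir + red_edge)

def savi(nir, red, L=0.5):
    return (1 + L) * (nir - red) / (nir + red + L)

def sare(nir, red_edge, L=0.5):       # soil-adjusted red-edge variant
    return (1 + L) * (nir - red_edge) / (nir + red_edge + L)

# Illustrative vegetated-pixel reflectances (not RapidEye data).
red, red_edge, nir = 0.05, 0.15, 0.45
values = (ndvi(nir, red), ndre(nir, red_edge), savi(nir, red))
```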

  1. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    Science.gov (United States)

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides

  2. Modified cross-validation as a method for estimating parameter

    Science.gov (United States)

    Shi, Chye Rou; Adnan, Robiah

    2014-12-01

    Best subsets regression is an effective approach for identifying models that achieve their objectives with as few predictors as is prudent. Subset models may estimate the regression coefficients and predict future responses with smaller variance than the full model using all predictors. The question of how to pick the subset size λ depends on the bias and variance. There are various methods for picking the subset size λ; a common rule is to pick the smallest model that minimizes an estimate of the expected prediction error. Because data sets are often small, repeated K-fold cross-validation is the most widely used method for estimating prediction error and selecting the model; the data are reshuffled and re-stratified before each round. However, the "one-standard-error" rule of repeated K-fold cross-validation always picks the most parsimonious model. The objective of this research is to modify the existing cross-validation method to avoid overfitting and underfitting the model; a modified cross-validation method is proposed. This paper compares the existing and the modified cross-validation methods. Our results indicate that the modified cross-validation method is better at submodel selection and evaluation than the other methods.
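    A minimal sketch of the underlying procedure — K-fold cross-validation over candidate subset sizes — assuming, for brevity, that predictors are entered in a fixed order rather than searched over all combinations, and using synthetic data with two truly relevant predictors:

```python
import numpy as np

# K-fold CV estimate of prediction error for each candidate subset size.
rng = np.random.default_rng(1)
n, p = 120, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

def kfold_mse(X, y, k=5):
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        Xtr = np.column_stack([np.ones(len(tr)), X[tr]])
        Xte = np.column_stack([np.ones(len(fold)), X[fold]])
        beta, *_ = np.linalg.lstsq(Xtr, y[tr], rcond=None)
        errs.append(np.mean((y[fold] - Xte @ beta) ** 2))
    return float(np.mean(errs))

cv = [kfold_mse(X[:, :m], y) for m in range(1, p + 1)]
best_size = int(np.argmin(cv)) + 1   # size minimizing estimated error
```

    The "one-standard-error" rule discussed above would instead pick the smallest size whose CV error is within one standard error of this minimum.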

  3. Validation of Some Anthropometric Equations for Estimating Body ...

    African Journals Online (AJOL)

    The study validated some of the equations that are being used for estimation of percent body fat in Nigerian women. A sample of convenience was used to select seventy-five female undergraduates of Obafemi Awolowo University, Ile-Ife, Nigeria, with a mean age of 22.92 ± 3.02 years. Participants' weight was measured under ...

  4. Practice effects distort translational validity estimates for a Neurocognitive Battery.

    Science.gov (United States)

    Ibrahim, Ibtihal; Tobar, Salwa; Elassy, Mai; Mansour, Hader; Chen, Kehui; Wood, Joel; Gur, Ruben C; Gur, Raquel E; El Bahaei, Wafaa; Nimgaonkar, Vishwajit

    2015-01-01

    With the globalization of biomedical research and the advent of "precision medicine," there is an increased need for translation of neuropsychological tests, such as computerized batteries that can be incorporated in large-scale genomic studies. Estimates of translational validity are obtained by administering the test in the original and the translated versions to bilingual individuals. We investigated the translation of a neuropsychological battery from English to Arabic and how practice effects influence translational validity estimates. The Penn computerized neurocognitive battery (Penn CNB) includes tests that were validated with functional neuroimaging and provides measures of accuracy and speed of performance in several cognitive domains. To develop an Arabic version of the CNB, the English version was translated into Arabic, then back-translated and revised. The Arabic and the original English versions were administered in a randomized crossover design to bilingual participants (N = 22). Performance varied by cognitive domain, but generally improved at the second session regardless of the language of the initial test. When performance on the English and Arabic versions was compared, significant positive correlations were detected for accuracy in 8/13 cognitive domains and for speed in 4/13 domains (r = .02 to .97). When practice estimates using linear models were incorporated, the translational validity estimates improved substantially (accuracy, r = .50-.96; speed, r = .63-.92; all correlations, p = .05 or better). While crossover designs control for order effects on average performance, practice effects, regardless of language, still need to be removed to obtain estimates of translational validity. When the practice effect is controlled for, the Arabic and English versions of the Penn CNB are well correlated, and the Arabic version is suitable for use in research.

  5. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundancy relations (ARRs).

  6. Observer-Based Human Knee Stiffness Estimation.

    Science.gov (United States)

    Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen

    2017-05-01

    We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and the torques of the muscle-tendon complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of dominating model nonlinearities and the nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and on a test bench specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation, even in cases of knee stiffening due to antagonistic coactivation. We have shown the principle function of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.

  7. Rule-Based Flight Software Cost Estimation

    Science.gov (United States)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), Proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW using SEER-SEM as an illustration of these principles when appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Obtaining consistency in code counting will be presented as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data is obtained, a mapping into the JPL Work Breakdown Structure (WBS) from the SEER-SEM output is illustrated. For across the board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions with the SLOC, data description, brief mission description and the most relevant SEER-SEM parameter values is given to illustrate an encapsulation of the used and calculated data involved in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. This system's incarnation is achieved via the C Language Integrated Production System (CLIPS) and will be addressed at the end of this paper.

  8. View Estimation Based on Value System

    Science.gov (United States)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

    Estimating the caregiver's view is one of the most important capabilities a child needs in order to understand behavior demonstrated by the caregiver, that is, to infer the intention behind the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: while imitating behavior observed from the caregiver, he/she updates a model of his/her own estimated view by minimizing the estimation error of the reward during the behavior. From this viewpoint, this paper presents a method for acquiring such a capability based on a value system in which values are obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view while imitating observed caregiver behavior, is discussed.
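    The core mechanism — driving parameter updates with the TD error — is the standard TD(0) rule. A toy two-state sketch (the states, rewards, and learning rates here are invented for illustration, not the robot experiment):

```python
import random

# TD(0) on a deterministic two-state chain: the same TD error that updates
# state values can, by analogy, drive updates of other estimated parameters
# (such as view-estimation parameters in the paper).
random.seed(0)
V = {0: 0.0, 1: 0.0}
alpha, gamma = 0.1, 0.9

for _ in range(2000):
    s = random.choice([0, 1])
    s_next = 1 - s                        # deterministic transition
    r = 1.0 if s_next == 1 else 0.0       # reward on reaching state 1
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error              # parameter update from TD error
```

    The fixed point satisfies V(0) = 1 + γ·V(1) and V(1) = γ·V(0), i.e. V(0) = 1/(1 − γ²) ≈ 5.26 with γ = 0.9.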

  9. A Practical Strategy for sEMG-Based Knee Joint Moment Estimation During Gait and Its Validation in Individuals With Cerebral Palsy

    OpenAIRE

    Kwon, Suncheol; Park, Hyung-Soon; Stanley, Christopher J.; Kim, Jung; Kim, Jonghyun; Damiano, Diane L.

    2012-01-01

    Individuals with cerebral palsy have neurological deficits that may interfere with motor function and lead to abnormal walking patterns. It is important to know the joint moment generated by the patient’s muscles during walking in order to assist the suboptimal gait patterns. In this paper, we describe a practical strategy for estimating the internal moment of a knee joint from surface electromyography (sEMG) and knee joint angle measurements. This strategy requires only isokinetic knee flexi...

  10. Estimation of pelvis kinematics in level walking based on a single inertial sensor positioned close to the sacrum: validation on healthy subjects with stereophotogrammetric system.

    Science.gov (United States)

    Buganè, Francesca; Benedetti, Maria Grazia; D'Angeli, Valentina; Leardini, Alberto

    2014-10-21

    Kinematic measures from inertial sensors have value in the clinical assessment of pathological gait, to track quantitatively the outcome of interventions and rehabilitation programs. For these sensors to become a standard tool for clinicians, it is necessary to evaluate their capability to provide reliable and comprehensible information, possibly by comparison with the traditional gait analysis. The aim of this study was to assess, by state-of-the-art gait analysis, the reliability of a single inertial device attached to the sacrum for measuring pelvis kinematics during level walking. The output signals of the three-axis gyroscope were processed to estimate the spatial orientation of the pelvis in the sagittal (tilt angle), frontal (obliquity) and transverse (rotation) anatomical planes. These estimated angles were compared with those provided by an 8-camera stereophotogrammetric system utilizing a standard experimental protocol, with four markers on the pelvis. This was observed in a group of sixteen healthy subjects while performing three repetitions of level walking along a 10-meter walkway at slow, normal and fast speeds. The determination coefficient, the scale factor and the bias of a linear regression model were calculated to represent the differences between the angular patterns from the two measurement systems. For the intra-subject variability, one volunteer was asked to repeat walking at normal speed 10 times. A good match was observed for the obliquity and rotation angles. For the tilt angle, the pattern and range of motion were similar, but a bias was observed, due to the different initial inclination angle in the sagittal plane of the inertial sensor with respect to the pelvis anatomical frame. Good intra-subject consistency was also shown by the small variability of the pelvic angles as estimated by the new system, confirmed by very small values of standard deviation for all three angles. These results suggest that this inertial device is a

  11. Constraint-based feature validation

    NARCIS (Netherlands)

    Dohmen, M.H.P.J.

    1998-01-01

    The feature modeling paradigm combines geometric and functional product information in one model. In an ideal product development environment, multiple views of a product in terms of features coexist. Feature validation concerns the validity of the feature information in all these views, focusing on

  12. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  13. Validation of GPM precipitation estimates over the Eastern Mediterranean

    Science.gov (United States)

    Retalis, Adrianos; Katsanos, Dimitrios; Tymvios, Filippos; Michaelides, Silas

    2017-04-01

    The performance of a Global Precipitation Measurement (GPM) high-resolution product regarding extreme rainfall is validated against rain gauges over the island of Cyprus. The precipitation estimates are available in both high temporal (half-hourly) and spatial (10 km) resolution and combine data from all passive microwave instruments in the GPM constellation. The comparison is performed with data from a dense and reliable network of meteorological stations, also available in high temporal (hourly) resolution. The period of study covers 12 months of data, focusing on cases with extreme rainfall rates. The aim of the study is to verify the GPM rain estimates, given that such a high-resolution dataset can be used in a series of applications, including assimilation in numerical weather prediction models or the study of flash floods with hydrological models.
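    Validation of satellite precipitation against gauges typically reports continuous scores (bias, RMSE) plus contingency-table scores at a heavy-rain threshold. A sketch with invented paired values standing in for GPM/gauge data (the 10 mm threshold is illustrative, not the study's definition of extreme rainfall):

```python
import numpy as np

# Paired hourly accumulations (mm): gauge reference vs. satellite estimate.
gauge = np.array([0.0, 0.2, 5.0, 12.0, 0.0, 30.0, 4.0, 0.0])
gpm   = np.array([0.0, 0.0, 6.5,  8.0, 0.3, 24.0, 11.0, 0.0])

bias = np.mean(gpm - gauge)                       # mean error
rmse = np.sqrt(np.mean((gpm - gauge) ** 2))      # root-mean-square error

thr = 10.0                                        # heavy-rain threshold (mm)
hits         = np.sum((gpm >= thr) & (gauge >= thr))
misses       = np.sum((gpm <  thr) & (gauge >= thr))
false_alarms = np.sum((gpm >= thr) & (gauge <  thr))
pod = hits / (hits + misses)                      # probability of detection
far = false_alarms / (hits + false_alarms)        # false-alarm ratio
```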

  14. Formal Computer Validation of the Quantum Phase Estimation Algorithm

    Science.gov (United States)

    Witzel, Wayne; Rudinger, Kenneth; Sarovar, Mohan; Carr, Robert

    While peer review and scientific consensus provide some assurance to the validity of ideas, people do make mistakes that can slip through the cracks. A plethora of formal methods tools exist and are in use in a variety of settings where high assurance is demanded. Existing tools, however, require a great deal of expertise and lack versatility, demanding a non-trivial translation between a high-level description of a problem and the formal system. Our software, called Prove-It, allows a nearly direct translation between human-recognizable formulations and the underlying formal system. While Prove-It is not designed for particularly efficient automation, a primary goal of other formal methods tools, it is extremely flexible in following a desired line of reasoning (proof structure). This approach is particularly valuable for validating proofs that are already known. We will demonstrate a validation of the Quantum Phase Estimation Algorithm using Prove-It. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories.

  15. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  16. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared...

  17. Validation of Persian rapid estimate of adult literacy in dentistry.

    Science.gov (United States)

    Pakpour, Amir H; Lawson, Douglas M; Tadakamadla, Santosh K; Fridlund, Bengt

    2016-05-01

    The aim of the present study was to establish the psychometric properties of the Rapid Estimate of Adult Literacy in Dentistry-99 (REALD-99) in the Persian language for use in an Iranian population (IREALD-99). A total of 421 participants with a mean age of 28 years (59% male) were included in the study. Participants included those who were 18 years or older and those residing in Qazvin (a city close to Tehran), Iran. A forward-backward translation process was used for the IREALD-99. The Test of Functional Health Literacy in Dentistry (TOFHLiD) was also administered. The validity of the IREALD-99 was investigated by comparing IREALD-99 scores across categories of education and income levels. To investigate further, the correlation of the IREALD-99 with the TOFHLiD was computed. A principal component analysis (PCA) was performed on the data to assess unidimensionality and a strong first factor. The Rasch mathematical model was used to evaluate the contribution of each item to the overall measure, and whether the data were invariant to differences in sex. Reliability was estimated with Cronbach's α and test-retest correlation. Cronbach's α for the IREALD-99 was 0.98, indicating strong internal consistency. The test-retest correlation was 0.97. IREALD-99 scores differed by education level. IREALD-99 scores were positively related to TOFHLiD scores (ρ = 0.72, P < 0.01). In addition, the IREALD-99 showed positive correlation with self-rated oral health status (ρ = 0.31, P < 0.01) as evidence of convergent validity. The PCA indicated a strong first component, five times the strength of the second component and nine times the third. The empirical data were a close fit with the Rasch mathematical model. There was not a significant difference in scores with respect to income level (P = 0.09), and only the very lowest income level was significantly different (P < 0.01). The IREALD-99 exhibited excellent reliability on repeated administrations, as well as internal
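    The internal-consistency measure used here, Cronbach's α, can be computed directly from a respondents-by-items score matrix via α = k/(k−1) · (1 − Σ item variances / variance of total score). The small binary response matrix below is fabricated for illustration, not IREALD-99 data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha; rows are respondents, columns are items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)         # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

scores = [[1, 1, 1, 1],
          [1, 1, 1, 0],
          [1, 0, 1, 0],
          [0, 0, 0, 0],
          [1, 1, 0, 1]]
alpha = cronbach_alpha(scores)
```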

  18. Validation of a partial coherence interferometry method for estimating retinal shape

    OpenAIRE

    Verkicharla, Pavan K.; Suheimat, Marwan; Pope, James M.; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L.; Atchison, David A.

    2015-01-01

    To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertain...

  19. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.

  20. Invasive validation of arteriograph estimates of central blood pressure in patients with type 2 diabetes.

    Science.gov (United States)

    Rossen, Niklas Blach; Laugesen, Esben; Peters, Christian Daugaard; Ebbehøj, Eva; Knudsen, Søren Tang; Poulsen, Per Løgstrup; Bøtker, Hans Erik; Hansen, Klavs Würgler

    2014-05-01

    Central blood pressure (BP) has attracted increasing interest because of its potential superiority over brachial BP in predicting cardiovascular morbidity and mortality. Several devices that estimate central BP noninvasively are now available. The aim of our study was to determine the validity of the Arteriograph, a brachial cuff-based, oscillometric device, in patients with type 2 diabetes. We measured central BP invasively and compared it with the Arteriograph-estimated values in 22 type 2 diabetic patients referred for elective coronary angiography. The difference (invasively measured BP minus Arteriograph-estimated BP) in central systolic BP (SBP) was 4.4±8.7 mm Hg (P = 0.03). The limits of agreement were ±17.1 mm Hg. Compared with invasively measured central SBP, we found a systematic underestimation by the Arteriograph. However, the limits of agreement were similar to those in the previous Arteriograph validation study and in the invasive validation studies of other brachial cuff-based, oscillometric devices. A limitation of our study was the large number of patients (n = 14 of 36) in whom the Arteriograph was unable to analyze the pressure curves. In a research setting, the Arteriograph seems applicable in patients with type 2 diabetes. ClinicalTrials.gov ID NCT01538290.
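    The reported mean difference and limits of agreement follow the standard Bland-Altman computation: mean of the paired differences ± 1.96 times their standard deviation. A sketch with synthetic pressure pairs (not the study's measurements):

```python
import numpy as np

# Paired central SBP readings (mm Hg): invasive reference vs. cuff device.
invasive = np.array([132, 118, 145, 126, 139, 150, 121, 134], dtype=float)
device   = np.array([128, 115, 138, 124, 133, 146, 119, 128], dtype=float)

diff = invasive - device
mean_diff = diff.mean()              # systematic under-/overestimation
sd_diff = diff.std(ddof=1)
loa = (mean_diff - 1.96 * sd_diff,   # lower limit of agreement
       mean_diff + 1.96 * sd_diff)  # upper limit of agreement
```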

  1. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
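    A concrete example of the difference-based idea is the simplest first-order (Rice-type) residual-variance estimator, where first differences cancel a smooth mean function and leave mostly noise. This is a generic sketch on synthetic data, not the paper's proposed estimator:

```python
import numpy as np

# Rice estimator: sigma^2 ≈ sum (y_{i+1} - y_i)^2 / (2(n-1)).
# Differencing removes the smooth signal, so the squared differences
# are dominated by 2*sigma^2 of noise.
rng = np.random.default_rng(42)
n = 5000
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)  # true var = 0.09

sigma2_hat = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
```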

  2. Improving Sample Estimate Reliability and Validity with Linked Ego Networks

    CERN Document Server

    Lu, Xin

    2012-01-01

    Respondent-driven sampling (RDS) is currently widely used in public health, especially for the study of hard-to-access populations such as injecting drug users and men who have sex with men. The method works like a snowball sample but can, given that some assumptions are met, generate unbiased population estimates. However, recent studies have shown that traditional RDS estimators are likely to generate large variance and estimation error. To improve the performance of traditional estimators, we propose a method to generate estimates with ego network data collected by RDS. By simulating RDS processes on an empirical human social network with known population characteristics, we show that the precision of estimates of the composition of network link types is greatly improved with ego network data. The proposed estimator for population characteristics shows a clear advantage over traditional RDS estimators, and most importantly, the new method exhibits strong robustness to the recruitment preference of res...
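    For context, the traditional estimator the abstract seeks to improve on is the RDS-II (Volz-Heckathorn) estimator, which weights each respondent by the inverse of their reported network degree to correct the bias toward high-degree individuals. The degrees and trait values below are invented; this sketch is not the proposed ego-network method:

```python
import numpy as np

# RDS-II: weight respondents by 1/degree, since RDS recruitment reaches
# high-degree individuals with probability roughly proportional to degree.
degrees = np.array([2, 10, 4, 8, 2, 20, 5, 4])   # reported network sizes
trait   = np.array([1,  0, 1, 0, 1,  0, 1, 1])   # characteristic of interest

w = 1.0 / degrees
prevalence_rds = np.sum(w * trait) / np.sum(w)   # degree-weighted estimate
prevalence_naive = trait.mean()                  # unweighted sample mean
```

    Here the trait is concentrated among low-degree respondents, so the weighted estimate exceeds the naive mean.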

  3. Importance of Statistical Evidence in Estimating Valid DEA Scores.

    Science.gov (United States)

    Barnum, Darold T; Johnson, Matthew; Gleason, John M

    2016-03-01

    Data Envelopment Analysis (DEA) allows healthcare scholars to measure productivity in a holistic manner. It combines a production unit's multiple outputs and multiple inputs into a single measure of its overall performance relative to other units in the sample being analyzed. It accomplishes this task by aggregating a unit's weighted outputs and dividing the output sum by the unit's aggregated weighted inputs, choosing output and input weights that maximize its output/input ratio when the same weights are applied to other units in the sample. Conventional DEA assumes that inputs and outputs are used in different proportions by the units in the sample. So, for the sample as a whole, inputs have been substituted for each other and outputs have been transformed into each other. Variables are assigned different weights based on their marginal rates of substitution and marginal rates of transformation. If, in truth, inputs have not been substituted and outputs have not been transformed, then there will be no marginal rates and therefore no valid basis for differential weights. This paper explains how to statistically test for the presence of substitutions among inputs and transformations among outputs. It then applies these tests to the input and output data from three healthcare DEA articles, in order to identify the effects on DEA scores when input substitutions and output transformations are absent from the sample data. It finds that DEA scores are badly biased when substitution and transformation are absent and conventional DEA models are used.

  4. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
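As a sketch of the analogy-based idea (not the NASA model itself), a k-nearest-neighbor effort predictor: a new project's effort is estimated from the most similar completed projects. All feature values and efforts below are invented:

```python
# Analogy-based effort estimation via k-nearest neighbors: predict effort
# as the mean of the k most similar past projects, using Euclidean distance
# in a normalized feature space. History data are fabricated.
import math

def knn_effort(history, query, k=2):
    """history: list of (feature_vector, effort); returns mean effort of k nearest."""
    ranked = sorted(history, key=lambda p: math.dist(p[0], query))
    return sum(effort for _, effort in ranked[:k]) / k

# (normalized [size, complexity], effort in person-months)
history = [([0.2, 0.1], 10.0), ([0.8, 0.9], 80.0),
           ([0.3, 0.2], 14.0), ([0.7, 0.6], 55.0)]
print(knn_effort(history, [0.25, 0.15]))  # → 12.0
```

Clustering-based variants group the history first and estimate within the matched cluster, which is the direction the paper's data-mining work takes.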

  5. Validity of Broselow tape for estimating weight of Indian children

    National Research Council Canada - National Science Library

    Vivek Shah; Sandeep B Bavdekar

    2017-01-01

    Background & objectives: The Broselow tape has been validated in both ambulatory and simulated emergency situations in the United States and is believed to reduce complications arising from inaccurate drug dosing...

  6. Validation of travel times to hospital estimated by GIS

    Directory of Open Access Journals (Sweden)

    Sauerzapf Violet

    2006-09-01

    Full Text Available Abstract Background An increasing number of studies use GIS estimates of car travel times to health services, without presenting any evidence that the estimates are representative of real travel times. This investigation compared GIS estimates of travel times with the actual times reported by a sample of 475 cancer patients who had travelled by car to attend clinics at eight hospitals in the North of England. Methods Car travel times were estimated by GIS using the shortest road route between home address and hospital and average speed assumptions. These estimates were compared with reported journey times and straight line distances using graphical, correlation and regression techniques. Results There was a moderately strong association between reported times and estimated travel times (r = 0.856). Reported travel times were similarly related to straight line distances. Altogether, 50% of travel time estimates were within five minutes of the time reported by respondents, 77% were within ten minutes and 90% were within fifteen minutes. The distribution of over- and under-estimates was symmetrical, but estimated times tended to be longer than reported times with increasing distance from hospital. Almost all respondents rounded their travel time to the nearest five or ten minutes. The reason for many cases of reported journey times exceeding the estimated times was confirmed by respondents' comments as traffic congestion. Conclusion GIS estimates of car travel times were moderately close approximations to reported times. GIS travel time estimates may be superior to reported travel times for modelling purposes because reported times contain errors and can reflect unusual circumstances. Comparison with reported times did not suggest that estimated times were a more sensitive measure than straight line distance.

  7. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  8. Validation of statistical models for estimating hospitalization associated with influenza and other respiratory viruses.

    Directory of Open Access Journals (Sweden)

    Lin Yang

    Full Text Available BACKGROUND: Reliable estimates of disease burden associated with respiratory viruses are key to the deployment of preventive strategies such as vaccination and resource allocation. Such estimates are particularly needed in tropical and subtropical regions where some methods commonly used in temperate regions are not applicable. While a number of alternative approaches to assess the influenza associated disease burden have been recently reported, none of these models have been validated with virologically confirmed data. Even fewer methods have been developed for other common respiratory viruses such as respiratory syncytial virus (RSV), parainfluenza and adenovirus. METHODS AND FINDINGS: We had recently conducted a prospective population-based study of virologically confirmed hospitalization for acute respiratory illnesses in persons <18 years residing in Hong Kong Island. Here we used this dataset to validate two commonly used models for estimation of influenza disease burden, namely the rate difference model and Poisson regression model, and also explored the applicability of these models to estimate the disease burden of other respiratory viruses. The Poisson regression models with different link functions all yielded estimates well correlated with the virologically confirmed influenza associated hospitalization, especially in children older than two years. The disease burden estimates for RSV, parainfluenza and adenovirus were less reliable with wide confidence intervals. The rate difference model was not applicable to RSV, parainfluenza and adenovirus and grossly underestimated the true burden of influenza associated hospitalization. CONCLUSION: The Poisson regression model generally produced satisfactory estimates in calculating the disease burden of respiratory viruses in a subtropical region such as Hong Kong.
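The rate-difference model the abstract evaluates can be sketched very simply: excess hospitalization attributed to the virus is the admission rate during virus-active weeks minus the baseline rate, scaled to the active period. The weekly counts below are invented:

```python
# Rate-difference model sketch: excess admissions attributable to influenza
# = (mean rate in influenza-active weeks - mean baseline rate) * active weeks.
# Weekly admission counts are fabricated for illustration.

def rate_difference_excess(active_weeks, baseline_weeks):
    active_rate = sum(active_weeks) / len(active_weeks)
    baseline_rate = sum(baseline_weeks) / len(baseline_weeks)
    return (active_rate - baseline_rate) * len(active_weeks)

active = [42, 55, 61, 48]                 # weeks with high influenza activity
baseline = [30, 28, 33, 29, 31, 30]       # off-season weeks
print(rate_difference_excess(active, baseline))
```

The Poisson-regression alternative instead models weekly counts against virus-activity covariates with a log (or other) link, which is why the abstract compares several link functions.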

  9. Instrumental variable estimation based on grouped data

    NARCIS (Netherlands)

    Bekker, Paul A.; Ploeg, Jan van der

    2000-01-01

    The paper considers the estimation of the coefficients of a single equation in the presence of dummy instruments. We derive pseudo ML and GMM estimators based on moment restrictions induced either by the structural form or by the reduced form of the model. The performance of the estimators is

  10. Instrumental variable estimation based on grouped data

    NARCIS (Netherlands)

    Bekker, PA; van der Ploeg, Jan

    The paper considers the estimation of the coefficients of a single equation in the presence of dummy instruments. We derive pseudo ML and GMM estimators based on moment restrictions induced either by the structural form or by the reduced form of the model. The performance of the estimators is

  11. Validity of common ultrasound methods of fetal weight estimation in ...

    African Journals Online (AJOL)

    Abstract. Background: Accuracy of some ultrasound equations used in our locality for fetal weight estimation is doubtful. Objective: To assess the accuracy of common ultrasound equations used for fetal weight estimation. Subjects and Methods: A longitudinal study was conducted on selected Nigerian obstetric population at ...

  12. Validity of common ultrasound methods of fetal weight estimation in ...

    African Journals Online (AJOL)

    Background: Accuracy of some ultrasound equations used in our locality for fetal weight estimation is doubtful. Objective: To assess the accuracy of common ultrasound equations used for fetal weight estimation. Subjects and Methods: A longitudinal study was conducted on selected Nigerian obstetric population at Central ...

  13. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits giving a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach, we will show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre and post conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  14. Validation of Transverse Oscillation Vector Velocity Estimation In-Vivo

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten

    2007-01-01

    method Transverse Oscillation (TO), which combines estimates of the axial and the transverse velocity components in the scan plane, makes it possible to estimate the vector velocity of the blood regardless of the Doppler angle. The present study evaluates the TO method with magnetic resonance angiography...... was constructed where the mean difference was 0.2 ml with limits of agreement at -1.4 ml and 1.9 ml (95 % CI for mean difference: -0.3 ml to 0.8 ml). The strong correlation and the low mean difference between the TO method and MRA indicate that reliable vector velocity estimates can be obtained in vivo using

  15. Developing and validating a highway construction project cost estimation tool.

    Science.gov (United States)

    2004-01-01

    In May 2002, Virginia's Commonwealth Transportation Commissioner tasked his Chief of Technology, Research & Innovation with leading an effort to develop a definitive, consistent, and well-documented approach for estimating the cost of delivering cons...

  16. Validation of a partial coherence interferometry method for estimating retinal shape

    Science.gov (United States)

    Verkicharla, Pavan K.; Suheimat, Marwan; Pope, James M.; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L.; Atchison, David A.

    2015-01-01

    To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling, with <4% and <7% differences in retinal shapes along the horizontal and vertical meridians, respectively. Stages 2 and 3 gave slightly different retinal co-ordinates than Stage 1, and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data. PMID:26417496

  17. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Kirchner, G. [Univ. of Bremen (Germany)]; Peterson, R. [AECL, Chalk River, ON (Canada)] [and others]

    1996-11-01

    variation between the best estimate predictions of the group. The assumptions of the users result in more uncertainty in the predictions (taking into account the 95% confidence intervals) than is shown by the confidence interval on the predictions of one user. Mistakes, being examples of incorrect user assumptions, cannot be ignored and must be accepted as contributing to the variability seen in the spread of predictions. The user's confidence in his/her understanding of a scenario description and/or confidence in working with a code does not necessarily mean that the predictions will be more accurate. Choice of parameter values contributed most to user-induced uncertainty followed by scenario interpretation. The contribution due to code implementation was low, but may have been limited due to the decision of the majority of the group not to submit predictions using the most complex of the three codes. Most modelers had difficulty adapting the models for certain expected output. Parameter values for wet and dry deposition, transfer from forage to milk and concentration ratios were mostly taken from the extensive database of Chernobyl fallout radionuclides, no matter what the scenario. Examples provided in the code manuals may influence code users considerably when preparing their own input files. A major problem concerns pasture concentrations given in fresh or dry weight: parameter values in codes have to be based on one or the other and the request for predictions in the scenario description may or may not be the same unit. This is a surprisingly common source of error. Most of the predictions showed order of magnitude discrepancies when best estimates are compared with the observations, although the participants had a highly professional background in radioecology and a good understanding of the importance of the processes modelled. When uncertainties are considered, however, mostly there was overlap between predictions and observations. 
A failure to reproduce the

  18. Evaluating Cardiovascular Health Disparities Using Estimated Race/Ethnicity: A Validation Study.

    Science.gov (United States)

    Bykov, Katsiaryna; Franklin, Jessica M; Toscano, Michele; Rawlins, Wayne; Spettell, Claire M; McMahill-Walraven, Cheryl N; Shrank, William H; Choudhry, Niteesh K

    2015-12-01

    Methods of estimating race/ethnicity using administrative data are increasingly used to examine and target disparities; however, there has been no validation of these methods using clinically relevant outcomes. To evaluate the validity of the indirect method of race/ethnicity identification based on place of residence and surname for assessing clinically relevant outcomes. A total of 2387 participants in the Post-MI Free Rx Event and Economic Evaluation (MI FREEE) trial who had both self-reported and Bayesian Improved Surname Geocoding method (BISG)-estimated race/ethnicity information available. We used tests of interaction to compare differences in the effect of providing full drug coverage for post-MI medications on adherence and rates of major vascular events or revascularization for white and nonwhite patients based upon self-reported and indirect racial/ethnic assignment. The impact of full coverage on clinical events differed substantially when based upon self-identified race (HR=0.97 for whites, HR=0.65 for nonwhites; interaction P-value=0.05); however, it did not differ among race/ethnicity groups classified using indirect methods (HR=0.87 for whites and nonwhites; interaction P-value=0.83). The impact on adherence was the same for self-reported and BISG-estimated race/ethnicity for 2 of the 3 medication classes studied. Quantitatively and qualitatively different results were obtained when indirectly estimated race/ethnicity was used, suggesting that these techniques may not accurately describe aspects of race/ethnicity related to actual health behaviors.

  19. Validity of sports watches when estimating energy expenditure during running.

    Science.gov (United States)

    Roos, Lilian; Taube, Wolfgang; Beeler, Nadja; Wyss, Thomas

    2017-01-01

    The aim of this study was to assess the accuracy of three different sport watches in estimating energy expenditure during aerobic and anaerobic running. Twenty trained subjects ran at different intensities while wearing three commercial sport watches (Suunto Ambit2, Garmin Forerunner920XT, and Polar V800). Indirect calorimetry was used as the criterion measure for assessing energy expenditure. Different formulas were applied to compute energy expenditure from the gas exchange values for aerobic and anaerobic running. The accuracy of the energy expenditure estimations was intensity-dependent for all tested watches. During aerobic running (4-11 km/h), mean percentage error values of -25.16% to +38.09% were observed, with the Polar V800 performing most accurately (stage 1: -12.20%, stage 2: -3.61%, and stage 3: -4.29%). The Garmin Forerunner920XT significantly underestimated energy expenditure during the slowest stage (stage 1: -25.16%), whereas the Suunto Ambit2 significantly overestimated energy expenditure during the two slowest stages (stage 1: 38.09%, stage 2: 36.29%). During anaerobic running (14-17 km/h), all three watches significantly underestimated energy expenditure, with errors of -21.62% to -49.30%. Therefore, the error in estimating energy expenditure systematically increased as the anaerobic running speed increased. To estimate energy expenditure during aerobic running, the Polar V800 is recommended. By contrast, the other two watches either significantly overestimated or underestimated energy expenditure during most running intensities. The energy expenditure estimations generated during anaerobic exercises revealed large measurement errors in all tested sport watches. Therefore, the algorithms for estimating energy expenditure during intense activities must be improved before they can be used to monitor energy expenditure during high-intensity physical activities.
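The signed percentage errors reported above can be computed as a simple mean percentage error of each watch against indirect calorimetry. The kilocalorie values below are invented, not study data:

```python
# Mean signed percentage error of a watch's energy-expenditure estimates
# against an indirect-calorimetry criterion. Values are fabricated.

def mean_pct_error(estimates_kcal, criterion_kcal):
    errs = [100 * (e - c) / c for e, c in zip(estimates_kcal, criterion_kcal)]
    return sum(errs) / len(errs)

watch_kcal     = [310, 450, 380, 520]   # watch readings per run stage
calorimetry    = [400, 470, 430, 600]   # criterion per run stage
print(round(mean_pct_error(watch_kcal, calorimetry), 2))
```

A negative result indicates systematic underestimation, matching the pattern the study reports for anaerobic running.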

  20. Validity and reliability of Optojump photoelectric cells for estimating vertical jump height.

    Science.gov (United States)

    Glatthorn, Julia F; Gouge, Sylvain; Nussbaumer, Silvio; Stauffacher, Simone; Impellizzeri, Franco M; Maffiuletti, Nicola A

    2011-02-01

    Vertical jump is one of the most prevalent acts performed in several sport activities. It is therefore important to ensure that the measurements of vertical jump height made as a part of research or athlete support work have adequate validity and reliability. The aim of this study was to evaluate concurrent validity and reliability of the Optojump photocell system (Microgate, Bolzano, Italy) with force plate measurements for estimating vertical jump height. Twenty subjects were asked to perform maximal squat jumps and countermovement jumps, and flight time-derived jump heights obtained by the force plate were compared with those provided by Optojump, to examine its concurrent (criterion-related) validity (study 1). Twenty other subjects completed the same jump series on 2 different occasions (separated by 1 week), and jump heights of session 1 were compared with session 2, to investigate test-retest reliability of the Optojump system (study 2). Intraclass correlation coefficients (ICCs) for validity were very high (0.997-0.998), even though a systematic difference was consistently observed between force plate and Optojump (-1.06 cm; p < 0.001) ... height. We propose the following equation that allows force plate and Optojump results to be used interchangeably: force plate jump height (cm) = 1.02 × Optojump jump height + 0.29. In conclusion, the use of Optojump photoelectric cells is legitimate for field-based assessments of vertical jump height.
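Both devices derive jump height from flight time via the standard ballistic relation h = g·t²/8; the abstract's conversion equation then maps Optojump heights onto force-plate heights. The flight time below is illustrative:

```python
# Flight-time-derived jump height (h = g * t^2 / 8, standard ballistic
# assumption: takeoff and landing posture match), plus the study's
# cross-calibration: force-plate height (cm) = 1.02 * Optojump height + 0.29.
G = 9.81  # m/s^2

def jump_height_cm(flight_time_s):
    return 100 * G * flight_time_s ** 2 / 8

optojump_h = jump_height_cm(0.50)               # e.g. 0.50 s flight time
force_plate_equiv = 1.02 * optojump_h + 0.29    # equation from the abstract
print(round(optojump_h, 2), round(force_plate_equiv, 2))
```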

  1. Theory and validation of magnetic resonance fluid motion estimation using intensity flow data.

    Directory of Open Access Journals (Sweden)

    Kelvin Kian Loong Wong

    Full Text Available BACKGROUND: Motion tracking based on spatial-temporal radio-frequency signals from the pixel representation of magnetic resonance (MR) imaging of a non-stationary fluid is able to provide two dimensional vector field maps. This supports the underlying fundamentals of magnetic resonance fluid motion estimation and generates a new methodology for flow measurement that is based on registration of nuclear signals from moving hydrogen nuclei in fluid. However, there is a need to validate the computational aspect of the approach by using velocity flow field data that we will assume as the true reference information or ground truth. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we create flow vectors based on an ideal analytical vortex, and generate artificial signal-motion image data to verify our computational approach. The analytical and computed flow fields are compared to provide an error estimate of our methodology. The comparison shows that the fluid motion estimation approach using simulated MR data is accurate and robust enough for flow field mapping. To verify our methodology, we have tested the computational configuration on magnetic resonance images of cardiac blood and proved that the theory of magnetic resonance fluid motion estimation can be applicable practically. CONCLUSIONS/SIGNIFICANCE: The results of this work will allow us to progress further in the investigation of fluid motion prediction based on imaging modalities that do not require velocity encoding. This article describes a novel theory of motion estimation based on magnetic resonating blood, which may be directly applied to cardiac flow imaging.

  2. Development and validation of aboveground biomass estimations for four Salix clones in central New York

    Energy Technology Data Exchange (ETDEWEB)

    Arevalo, Carmela B.M.; Volk, Timothy A.; Bevilacqua, Eddie; Abrahamson, Lawrence [Faculty of Forest and Natural Resources Management, State University of New York, College of Environmental Science and Forestry, 1 Forestry Drive, Syracuse, NY 13210 (United States)

    2007-01-15

    Commercial and research scale plantings of short-rotation woody crops require reliable and efficient estimations of biomass yield before time of harvest. Biomass equations currently exist but the accuracy and efficiency of estimation procedures at the level of specificity needs to be quantified for clones being used in North America. Diameter-based allometric equations for aboveground biomass for four clones of willow (Salix discolor, Salix alba, Salix dasyclados, and Salix sachalinensis), between two sites (Canastota and Tully, NY), and across four years (1998-2001), were developed using ordinary least-squares regression (OLSR) on log-transformed variables, weighted least squares regression (WLSR) on log-transformed variables, and nonlinear regression (NLR) methods and validated using independent data sets. Biomass estimations derived from clone, age, and site (Specific) using OLSR equations had the highest R² and lowest percent bias (<2.3%) allowing for accurate estimations of standing biomass. Values for specific equations using WLSR were similar, but bias was higher for NLR (0.7-12.5%). However, the amount of time and effort required to develop specific equations, is large and in many situations prohibitive. Biomass estimates derived from clone and age, regardless of site (Intermediate), resulted in small increases in prediction error and a small increase in percent bias using OLSR (<0.4%) and WLSR (<1.7%). The increase in percent bias was larger (1.1-5.7%) for NLR equations. Intermediate models correspond to the loss of only a small amount of accuracy while gaining more efficiency in estimating standing biomass. Estimates of biomass derived from clone alone (general) equations, considering neither age nor site, had the weakest prediction abilities that may lead to large errors for biomass estimations using OLSR (7.0-9.5%), WLSR (1.1-21.7%) or NLR (31.9-143.4%). (author)

  3. Validation of a spectrophotometric method to estimate the adsorption on nanoemulsions of an antimalarial oligonucleotide

    Directory of Open Access Journals (Sweden)

    Fernanda Bruxel

    2011-09-01

    Full Text Available This study describes the validation of a spectrophotometric method to estimate oligonucleotides association with cationic nanoemulsions. Phosphodiester and phosphorothioate oligonucleotides targeting Plasmodium falciparum topoisomerase II were analyzed at 262 nm. Linear response (r > 0.998) was observed from 0.4 to 1.0 nmol/mL, the relative standard deviation values for the intra- and inter-days precision were lower than 2.6% and the recovery ranged from 98.8 to 103.6% for both oligonucleotides. The association efficiency was estimated based on an ultrafiltration/centrifugation method. Oligonucleotides recovery through 30 kDa-membranes was higher than 92%. The extent of oligonucleotides association (42 to 98%) varied with the composition of nanoemulsions.
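The ultrafiltration/centrifugation bookkeeping behind the association efficiency is simple arithmetic: the unassociated (free) fraction passes the membrane, and association is computed relative to the total. The concentrations below are invented:

```python
# Association efficiency from an ultrafiltration/centrifugation assay:
# the free oligonucleotide appears in the filtrate, so association is
# (total - free) / total. Concentrations (nmol/mL) are fabricated.

def association_efficiency(total_nmol_ml, free_nmol_ml):
    """Percent of oligonucleotide associated with the nanoemulsion droplets."""
    return 100 * (total_nmol_ml - free_nmol_ml) / total_nmol_ml

print(association_efficiency(1.0, 0.12))
```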

  4. In-vivo Validation of Fast Spectral Velocity Estimation Techniques

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2010-01-01

    Spectrograms in medical ultrasound are usually estimated with Welch’s method (WM). WM is dependent on an observation window (OW) of up to 256 emissions per estimate to achieve sufficient spectral resolution and contrast. Two adaptive filterbank methods have been suggested to reduce the OW: Blood ......, and that OW can be reduced to 32 using BPC and 16 using BAPES without reducing the usefulness of the spectrogram. This could potentially increase the temporal resolution of the spectrogram or the frame-rate of the interleaved B-mode images........ BAPES and BPC compared to WM had better resolution (lower FWHM) for all OWs and better contrast (higher ratio). According to the scores given by the radiologists, BAPES, BPC and W.HAN performed equally well (p > 0.05) at OW 128 and 64, while W.BOX scored less (p

  5. Channel Estimation in DCT-Based OFDM

    Science.gov (United States)

    Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing

    2014-01-01

    This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparse property of the wireless channel into account. Simulation results have shown that the CS based channel estimation is expected to have better performance than LS. However, MMSE can achieve optimal performance because of prior knowledge of the channel statistics. PMID:24757439

  6. Channel estimation in DCT-based OFDM.

    Science.gov (United States)

    Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing

    2014-01-01

    This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparse property of the wireless channel into account. Simulation results have shown that the CS based channel estimation is expected to have better performance than LS. However, MMSE can achieve optimal performance because of prior knowledge of the channel statistics.
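The pilot-aided LS step mentioned in both records reduces, per pilot tone, to dividing the received value by the known pilot symbol. A toy sketch with synthetic complex gains (no DCT/OFDM machinery, just the estimation step):

```python
# Pilot-aided least-squares channel estimation sketch: with known pilot
# symbols X and received values Y = H*X + N, the per-tone LS estimate is
# H_ls = Y / X. Channel gains, pilots, and noise are synthetic.

true_h = [0.9 + 0.2j, 0.7 - 0.4j, 1.1 + 0.0j]    # per-tone channel gains
pilots = [1 + 0j, -1 + 0j, 1 + 0j]               # known BPSK pilot symbols
noise  = [0.01 - 0.02j, -0.015 + 0.01j, 0.02 + 0.005j]

received = [h * x + n for h, x, n in zip(true_h, pilots, noise)]
h_ls = [y / x for y, x in zip(received, pilots)]  # LS estimate per pilot tone

for h_hat, h in zip(h_ls, true_h):
    print(f"estimate {h_hat:.3f}  (true {h:.3f}, error {abs(h_hat - h):.3f})")
```

The MMSE estimator improves on this by additionally weighting with the channel's covariance and the noise variance, which is why the abstract notes it needs prior knowledge of the channel statistics.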

  7. Vehicle Sideslip Angle Estimation Based on General Regression Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Wei

    2016-01-01

    Full Text Available To improve the accuracy of estimating a vehicle's mass-center sideslip angle, an estimation method based on a general regression neural network (GRNN) and a driver-vehicle closed-loop system has been proposed: the vehicle's sideslip angle is treated as a time-series mapping of yaw speed and lateral acceleration; a uniform experimental design is used to optimize the training samples; and the mapping relationship among sideslip angle, yaw speed, and lateral acceleration is built. At the same time, the vehicle's sideslip angle is measured experimentally to verify the validity of the method. Estimation results of the neural network and real-vehicle experiments show the same trend. The mean error is within 10% of the test result's amplitude. Results show that GRNN can estimate a vehicle's sideslip angle correctly and can serve as a reference for vehicle state estimation in stability control systems.
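A GRNN is essentially Nadaraya-Watson kernel regression over the stored training samples, with a single smoothing parameter. A minimal sketch applied to a toy version of the mapping above (all inputs, targets, and the sigma value are fabricated):

```python
# General regression neural network (GRNN) sketch: Gaussian-kernel-weighted
# average of stored training targets, applied to a toy sideslip mapping.
import math

def grnn_predict(x, samples, sigma=0.5):
    """Nadaraya-Watson prediction: kernel-weighted mean of training targets."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                        / (2 * sigma ** 2))
               for xi, _ in samples]
    return sum(w * y for w, (_, y) in zip(weights, samples)) / sum(weights)

# ([yaw rate (rad/s), lateral accel (m/s^2)], sideslip angle (deg)) -- invented
samples = [([0.0, 0.0], 0.0), ([0.2, 1.5], 0.8),
           ([0.4, 3.0], 1.9), ([0.6, 4.5], 3.2)]
print(round(grnn_predict([0.3, 2.2], samples), 3))
```

Because prediction only averages stored samples, a GRNN needs no iterative training, which is one reason it suits sample-limited vehicle-state problems.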

  8. World Equity Premium based Risk Aversion Estimates

    NARCIS (Netherlands)

    L.C.G. Pozzi (Lorenzo)

    2010-01-01

    textabstractThe equity premium puzzle holds that the coefficient of relative risk aversion estimated from the consumption based CAPM under power utility is excessively high. Moreover, estimates in the literature vary considerably across countries. We gauge the uncertainty pertaining to the country

  9. Criterion-Related Validity of the 20-M Shuttle Run Test for Estimating Cardiorespiratory Fitness: A Meta-Analysis.

    Science.gov (United States)

    Mayorga-Vega, Daniel; Aguilar-Soto, Pablo; Viciana, Jesús

    2015-09-01

    The main purpose of the present meta-analysis was to examine the criterion-related validity of the 20-m shuttle run test for estimating cardiorespiratory fitness. Relevant studies were searched from twelve electronic databases up to December 2014, as well as from several alternative modes of searching. The Hunter-Schmidt's psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the 20-m shuttle run test. From 57 studies that were included in the present meta-analysis, a total of 78 correlation values were analyzed. The overall results showed that the performance score of the 20-m shuttle run test had a moderate-to-high criterion-related validity for estimating maximum oxygen uptake (r_p = 0.66-0.84), being higher when other variables (e.g. sex, age or body mass) were used (r_p = 0.78-0.95). The present meta-analysis also showed that the criterion-related validity of Léger's protocol was statistically higher for adults (r_p = 0.94, 0.87-1.00) than for children (r_p = 0.78, 0.72-0.85). However, sex and maximum oxygen uptake level do not seem to affect the criterion-related validity values. When direct measurement of an individual's maximum oxygen uptake during a laboratory-based test is not feasible, the 20-m shuttle run test seems to be a useful alternative for estimating cardiorespiratory fitness. In adults the performance score alone seems to be a strong estimator of cardiorespiratory fitness; among children, in contrast, the performance score should be combined with other variables. Nevertheless, as in the application of any physical fitness field test, evaluators must be aware that the performance score of the 20-m shuttle run test is simply an estimation and not a direct measure of cardiorespiratory fitness. Key points: Overall the 20-m shuttle run test has a moderate-to-high mean criterion-related validity for estimating cardiorespiratory fitness. The criterion-related validity of the 20-m shuttle run test is significantly
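The core pooling step of the Hunter-Schmidt approach is a sample-size-weighted mean of the study correlations (the full method additionally corrects for artifacts such as measurement error and range restriction, which this sketch omits). Study r values and sample sizes below are invented:

```python
# Hunter-Schmidt pooling sketch: population correlation estimated as the
# sample-size-weighted mean of study correlations. Data are fabricated.

def weighted_mean_r(studies):
    """studies: list of (r, n) pairs -> N-weighted mean correlation."""
    total_n = sum(n for _, n in studies)
    return sum(r * n for r, n in studies) / total_n

studies = [(0.78, 120), (0.84, 60), (0.66, 200), (0.91, 45)]
print(round(weighted_mean_r(studies), 3))
```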

  10. In-vivo validation of fast spectral velocity estimation techniques.

    Science.gov (United States)

    Hansen, K L; Gran, F; Pedersen, M M; Holfort, I K; Jensen, J A; Nielsen, M B

    2010-01-01

Spectrograms in medical ultrasound are usually estimated with Welch's method (WM). WM depends on an observation window (OW) of up to 256 emissions per estimate to achieve sufficient spectral resolution and contrast. Two adaptive filterbank methods have been suggested to reduce the OW: Blood spectral Power Capon (BPC) and the Blood Amplitude and Phase EStimation method (BAPES). Ten volunteers were scanned over the carotid artery. From each data set, 28 spectrograms were produced by combining four approaches (WM with a Hanning window (W.HAN), WM with a boxcar window (W.BOX), BPC and BAPES) and seven OWs (128, 64, 32, 16, 8, 4, 2). The full-width-at-half-maximum (FWHM) and the ratio between main- and side-lobe levels were calculated at end-diastole for each spectrogram. Furthermore, all 280 spectrograms were randomized and presented to nine radiologists for visual evaluation: useful/not useful. BAPES and BPC had better resolution (lower FWHM) than WM for all OWs. In the visual evaluation, W.HAN, BPC and BAPES were scored as equally useful (p > 0.05) at OW 128 and 64, while W.BOX scored less (p < 0.05). Intra-observer variability showed on average good agreement (90%, kappa = 0.79) and inter-observer variability showed moderate agreement (78%, kappa = 0.56). The results indicated that BPC and BAPES had better resolution, and BAPES better contrast, than WM, and that the OW can be reduced to 32 using BPC and to 16 using BAPES without reducing the usefulness of the spectrogram. This could potentially increase the temporal resolution of the spectrogram or the frame rate of the interleaved B-mode images.
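The averaged-periodogram idea behind Welch's method (WM) can be sketched with plain NumPy; the signal below is a synthetic tone rather than ultrasound data, and the segment length stands in for the observation window.

```python
import numpy as np

def welch_psd(x, seg_len, window):
    """Welch's method: average windowed periodograms of overlapping segments."""
    step = seg_len // 2                      # 50% overlap
    w = window(seg_len)
    scale = np.sum(w ** 2)
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, step)]
    psds = [np.abs(np.fft.rfft(w * s)) ** 2 / scale for s in segs]
    return np.mean(psds, axis=0)

# Toy signal: a single tone at 0.1 cycles/sample in light noise.
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(n)

psd_hann = welch_psd(x, 128, np.hanning)  # W.HAN: Hanning window
psd_box = welch_psd(x, 128, np.ones)      # W.BOX: boxcar window
peak = int(np.argmax(psd_hann))           # expect a peak near bin 0.1 * 128
```

Shrinking `seg_len` mirrors shrinking the OW: fewer emissions per estimate, hence coarser spectral resolution, which is exactly the trade-off the adaptive methods aim to relax.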

  11. External validation of EPICON: a grouping system for estimating morbidity rates from electronic medical records.

    NARCIS (Netherlands)

    Biermans, M.C.J.; Elbers, G.H.; Verheij, R.A.; Veen, W.J. van der; Zielhuis, G.A.; Vries Robbé, P.F. de

    2008-01-01

    OBJECTIVE: To externally validate EPICON, a computerized system for grouping diagnoses from EMRs in general practice into episodes of care. These episodes can be used for estimating morbidity rates. DESIGN: Comparative observational study. MEASUREMENTS: Morbidity rates from an independent dataset,

  12. Validity of consumer-based physical activity monitors.

    Science.gov (United States)

    Lee, Jung-Min; Kim, Youngwon; Welk, Gregory J

    2014-09-01

Many consumer-based monitors are marketed as providing personal information on levels of physical activity and daily energy expenditure (EE), but little or no information is available to substantiate their validity. This study aimed to examine the validity of EE estimates from a variety of consumer-based physical activity monitors under free-living conditions. Sixty healthy adults (26.4 ± 5.7 yr; 30 males, 30 females) wore eight different activity monitors simultaneously while completing a 69-min protocol. The monitors included the BodyMedia FIT armband worn on the left arm, the DirectLife monitor around the neck, the Fitbit One, the Fitbit Zip, and the ActiGraph worn on the belt, as well as the Jawbone Up, the Basis B1 Band, and the NikeFuel Band monitors worn on the wrist. The validity of the EE estimates from each monitor was evaluated relative to criterion values obtained concurrently from a portable metabolic system (i.e., Oxycon Mobile). Differences from the criterion measures were expressed as mean absolute percent errors and were evaluated using 95% equivalence testing. For overall group comparisons, the mean absolute percent error values (computed as the average absolute value of the group-level errors) were 9.3%, 10.1%, 10.4%, 12.2%, 12.6%, 12.8%, 13.0%, and 23.5% for the BodyMedia FIT, Fitbit Zip, Fitbit One, Jawbone Up, ActiGraph, DirectLife, NikeFuel Band, and Basis B1 Band, respectively. The results from the equivalence testing showed that the estimates from the BodyMedia FIT, Fitbit Zip, and NikeFuel Band (90% confidence interval = 341.1-359.4) were each within the 10% equivalence zone around the indirect calorimetry estimate. The indicators of agreement clearly favored the BodyMedia FIT armband, but promising preliminary findings were also observed for the Fitbit Zip.
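The group-level error metric used in the study, mean absolute percent error, is straightforward to reproduce; the EE values below are invented for illustration.

```python
def mape(estimates, criterion):
    """Mean absolute percent error of estimates relative to criterion values."""
    errs = [abs(e - c) / c * 100 for e, c in zip(estimates, criterion)]
    return sum(errs) / len(errs)

# Illustrative EE values in kcal; not the study's data.
oxycon = [350, 420, 380, 400]   # criterion (portable indirect calorimetry)
monitor = [320, 450, 360, 410]  # a hypothetical consumer monitor
err = mape(monitor, oxycon)
print(round(err, 1))  # → 5.9
```

The equivalence-testing step then asks whether the 90% confidence interval of the monitor's mean lies entirely inside ±10% of the criterion mean, which is a stricter question than a low MAPE alone.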

  13. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

The paper proposes a subspace-based blind sparse channel estimation method using ℓ1-ℓ2 optimization, replacing the ℓ2-norm minimization in the conventional subspace-based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve the estimation accuracy for sparse channels, while achieving the same performance as the conventional subspace method when the channel is dense. Moreover, the proposed method enables estimation of the channel response with unknown channel order if the channel is sufficiently sparse.

  14. Validity of a digital diet estimation method for use with preschool children

    Science.gov (United States)

The validity of using the Remote Food Photography Method (RFPM) for measuring minority preschool children's food intake is not well documented. The aim of the study was to determine the validity of intake estimations made by human raters using the RFPM compared with those obtained by weigh...

  15. Validation of refractive index structure parameter estimation for certain infrared bands.

    Science.gov (United States)

    Sivaslıgil, Mustafa; Erol, Cemil Berin; Polat, Özgür Murat; Sarı, Hüseyin

    2013-05-10

Variation of the atmospheric refractive index due to turbulent fluctuations is one of the key factors that affect the performance of electro-optical and infrared systems and sensors. Therefore, any prior knowledge about the degree of variation in the refractive index is critical to the success of field activities such as search and rescue missions, military applications, and remote sensing studies in which these systems are frequently used. There are many studies in the literature in which optical turbulence effects are modeled by estimating the refractive index structure parameter, Cn², from meteorological data for all levels of the atmosphere. This paper presents a modified approach to bulk-method-based Cn² estimation. In this approach, conventional wind speed, humidity, and temperature values from at least two levels above the surface are used as input data for Monin-Obukhov similarity theory, estimating the similarity scaling constants with a finite difference approximation and then Cn² with the bulk method. Compared with the bulk method, this approach offers the potential to use more than two levels of standard meteorological data, to apply the scintillation effects of the estimated Cn² to images, and a much simpler solution than traditional ones owing to the elimination of the roughness parameters, which are difficult to obtain and which increase the complexity, the execution time, and the number of input parameters of the algorithm. As a result of these studies, the Atmospheric Turbulence Model Software was developed, and its results were validated against the Cn² model presented by Tunick.

  16. Estimation and Q-Matrix Validation for Diagnostic Classification Models

    Science.gov (United States)

    Feng, Yuling

    2013-01-01

    Diagnostic classification models (DCMs) are structured latent class models widely discussed in the field of psychometrics. They model subjects' underlying attribute patterns and classify subjects into unobservable groups based on their mastery of attributes required to answer the items correctly. The effective implementation of DCMs depends…

  17. Validation and use of a QuEChERS-based gas chromatographic-tandem mass spectrometric method for multiresidue pesticide analysis in blackcurrants including studies of matrix effects and estimation of measurement uncertainty.

    Science.gov (United States)

    Walorczyk, Stanisław

    2014-03-01

A triple quadrupole GC-QqQ-MS/MS method was optimized for multiresidue analysis of over 180 pesticides in blackcurrants. The samples were prepared using a modified quick, easy, cheap, effective, rugged and safe (QuEChERS) analytical protocol. To reduce matrix co-extractives in the final extract, the supernatant was cleaned up by dispersive solid-phase extraction (dispersive-SPE) with a mixture of sorbents: primary secondary amine (PSA), octadecyl (C18) and graphitized carbon black (GCB). The validation results demonstrated the fitness for purpose of the streamlined method. The overall recoveries at the three spiking levels of 0.01, 0.05 and 0.2 mg kg-1 spanned between 70% and 116% (102% on average), with relative standard deviation (RSD) values between 3% and 19%, except for chlorothalonil (23%). Response linearity was studied in the range between 0.005 and 0.5 mg kg-1. The matrix effect for each individual compound was evaluated through the ratios of the slopes obtained in solvent and in blackcurrant matrix. The optimized method produced a small matrix effect (<10%) for most compounds, while for the remaining compounds the matrix effect fell in the ranges 10-20%, 20-30% and >30%. Following the application of a "top-down" approach, the expanded measurement uncertainty was estimated at 21% on average (coverage factor k = 2, confidence level 95%). Compared with samples of other crops, the analyses of blackcurrants revealed a high percentage of exceedances of the legislative maximum residue levels (MRLs), as well as some instances of detection of pesticides unapproved for this crop. © 2013 Published by Elsevier B.V.
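The slope-ratio evaluation of matrix effects described above can be sketched as follows; the concentrations and detector responses are fabricated, and the 15% signal enhancement is purely illustrative.

```python
import numpy as np

def slope(x, y):
    """Least-squares slope of the calibration line (with intercept)."""
    return np.polyfit(x, y, 1)[0]

def matrix_effect(conc, resp_solvent, resp_matrix):
    """Matrix effect (%) = (slope in matrix / slope in solvent - 1) * 100."""
    return (slope(conc, resp_matrix) / slope(conc, resp_solvent) - 1) * 100

conc = np.array([0.005, 0.05, 0.1, 0.25, 0.5])  # mg/kg, illustrative levels
resp_solvent = 1000 * conc                      # response in pure solvent
resp_matrix = 1150 * conc                       # 15% enhancement in matrix
me = matrix_effect(conc, resp_solvent, resp_matrix)
print(round(me, 1))  # → 15.0
```

A positive value indicates signal enhancement by co-extractives and a negative value suppression; the <10% / 10-20% / 20-30% / >30% bands above are bins of this quantity's magnitude.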

  18. Validation of walk score for estimating neighborhood walkability: an analysis of four US metropolitan areas.

    Science.gov (United States)

    Duncan, Dustin T; Aldstadt, Jared; Whalen, John; Melly, Steven J; Gortmaker, Steven L

    2011-11-01

Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability based on GIS (objective) indicators of neighborhood walkability, using addresses from four US metropolitan areas and several street network buffer distances (i.e., 400, 800, and 1,600 meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5-11 years and their families participating in YMCA-administered after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participants' residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated, as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators, such as density of retail destinations and intersection density, supporting Walk Score® as an indicator of neighborhood walkability. Correlations generally became stronger at larger spatial scales, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure for estimating certain aspects of neighborhood walkability, particularly at the 1,600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure for estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales.
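A Spearman correlation of the kind computed in the study is just the Pearson correlation of ranks; a minimal sketch, assuming no ties and using invented Walk Score and GIS values:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    This simple double-argsort ranking assumes there are no ties."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx ** 2) * np.sum(ry ** 2)))

# Illustrative data: Walk Score vs a GIS indicator (intersection density).
walk_score = [35, 60, 72, 88, 94, 50]
intersections = [20, 45, 70, 90, 85, 40]
rho = spearman(walk_score, intersections)
print(round(rho, 3))  # → 0.943
```

Rank-based correlation is a natural choice here because Walk Score and GIS indicators are on different scales and need only agree monotonically, not linearly.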

  19. Model-based estimation for official statistics

    NARCIS (Netherlands)

    van den Brakel, J.; Bethlehem, J.

    2008-01-01

Design-based and model-assisted estimation procedures are widely applied by most of the European national statistical institutes. There are, however, situations where model-based approaches can have additional value in the production of official statistics, e.g. to deal with small sample sizes,

  20. Validity of Carrea's index in stature estimation among two racial populations in India

    Science.gov (United States)

    Anita, P.; Madankumar, P. D.; Sivasamy, Shyam; Balan, I. Nanda

    2016-01-01

Background: Stature is considered one of the “big fours” in forensic anthropology. Though Carrea's index was published as early as 1920, it has not been validated in any population apart from the Brazilians. Aim: The present study was conducted to validate Carrea's index for stature estimation in two different racial populations in India. Materials and Methods: The study was carried out in a sample of 100 persons comprising 25 Aryan males, 25 Aryan females, 25 Dravidian males, and 25 Dravidian females in the age group of 18-30 years. The maximum and minimum stature of each individual was estimated by Carrea's index. The actual stature was measured with an anthropometer. The estimated stature was compared with the actual stature and the percentage of success was calculated. Results: Carrea's index was found to be valid in predicting the stature of 80% of Dravidian and 84% of Aryan males, the difference being statistically insignificant (Fisher exact test = 0.16; P = 0.99). The stature of 76% of females in both the Aryan and Dravidian races was successfully predicted by Carrea's index. Regression analysis showed that the minimum estimated height was more valid in estimating the stature of the Aryan and Dravidian populations. Conclusion: Carrea's index was evaluated and found to be valid for use in the Aryan and Dravidian populations. PMID:27555731
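The percentage-of-success criterion, i.e. whether the measured stature falls inside the minimum-maximum interval predicted by Carrea's index, can be sketched as follows (all statures invented for illustration):

```python
def success_rate(actual, est_min, est_max):
    """Fraction of subjects whose measured stature falls inside the
    [min, max] interval predicted by Carrea's index."""
    hits = sum(lo <= a <= hi for a, lo, hi in zip(actual, est_min, est_max))
    return hits / len(actual)

# Illustrative statures in cm; not the study's data.
actual = [172.0, 165.5, 181.2, 158.0, 176.4]
est_min = [168.0, 160.0, 183.0, 155.0, 171.0]
est_max = [175.0, 168.0, 190.0, 162.0, 178.0]
print(success_rate(actual, est_min, est_max))  # → 0.8 (4 of 5 inside)
```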

  1. Evaluating Expert Estimators Based on Elicited Competences

    Directory of Open Access Journals (Sweden)

    Hrvoje Karna

    2015-07-01

Utilization of the expert effort estimation approach shows promising results when applied to the software development process. It is based on judgment and decision making and, owing to its comparative advantages, is extensively used, especially in situations where classic models cannot be applied. This becomes even more accentuated in today's highly dynamic project environments. Confronted with these facts, companies are placing ever greater focus on their employees, specifically on their competences. Competences are defined as the knowledge, skills and abilities required to perform job assignments. During the effort estimation process, different underlying expert competences influence the outcome, i.e. the judgments experts express. A special problem here is the elicitation, from an input collection, of those competences that are responsible for accurate estimates. Based on these findings, different measures can be taken to enhance the estimation process. The approach used in the study presented in this paper targeted the elicitation of the expert estimator competences responsible for producing accurate estimates. Based on the individual competence scores resulting from the modeling, experts were ranked using a weighted scoring method and their performance evaluated. The results confirm that experts with higher scores in the competences identified by the applied models in general exhibit higher accuracy during the estimation process. For the purpose of modeling, data mining methods were used, specifically the multilayer perceptron neural network and the classification and regression decision tree algorithms. Among others, the applied methods are suitable for the purpose of elicitation as, in a sense, they mimic the way human brains operate. The data used in the study were collected from real projects in a company specialized in the development of IT solutions in the telecom domain.
The proposed model, the applied methodology for eliciting expert competences, and the obtained results give evidence that in
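The weighted scoring step, ranking experts by a weighted sum of competence scores, might look like the following sketch; the competence names, scores and weights are hypothetical, not those elicited in the study.

```python
def rank_experts(scores, weights):
    """Rank experts by the weighted sum of their competence scores.

    scores:  {expert: {competence: value}}
    weights: {competence: weight} for the competences the models flagged
             as relevant to estimation accuracy.
    """
    totals = {
        name: sum(weights[c] * comp.get(c, 0.0) for c in weights)
        for name, comp in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical competence scores and weights.
scores = {
    "A": {"domain": 0.9, "technical": 0.7, "estimation": 0.8},
    "B": {"domain": 0.6, "technical": 0.9, "estimation": 0.5},
    "C": {"domain": 0.8, "technical": 0.8, "estimation": 0.9},
}
weights = {"domain": 0.3, "technical": 0.3, "estimation": 0.4}
ranking = rank_experts(scores, weights)
print([name for name, _ in ranking])  # → ['C', 'A', 'B']
```

In the study the weights themselves came out of the data mining step (the neural network and decision tree models), rather than being set by hand as here.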

  2. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

The question of how much cities are the sources of their own air pollution is not only theoretical, as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach for estimating the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5 these two assumptions are far from being fulfilled for many large and medium-sized cities. For such cities, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve confidence in the model results.
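The decomposition described above can be written out with toy numbers: the city impact equals the urban increment plus two extra components, one per assumption. All concentration values below are invented.

```python
# Toy decomposition of the city impact vs the urban increment.
# C_u / C_r: concentrations at the urban / rural background site with full
# emissions; C_u0 / C_r0: the same sites with zero city emissions.
C_u, C_r = 18.0, 11.0          # e.g. PM2.5 in ug/m3, invented values
C_u0, C_r0 = 9.0, 8.5

impact = C_u - C_u0            # what the city's emissions actually add: 9.0
increment = C_u - C_r          # what the incremental approach measures: 7.0
rural_influence = C_r - C_r0   # violates assumption 1 if nonzero
background_gap = C_r0 - C_u0   # violates assumption 2 if nonzero

# Identity: impact = increment + rural_influence + background_gap
assert abs(impact - (increment + rural_influence + background_gap)) < 1e-12
print(impact, increment)  # → 9.0 7.0: the increment underestimates by 2.0
```

When both assumptions hold (`rural_influence = 0` and `background_gap = 0`), the identity collapses to impact = increment, which is exactly the condition the paper tests with SHERPA.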

  3. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    Science.gov (United States)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but offering the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  4. Risk Probability Estimating Based on Clustering

    DEFF Research Database (Denmark)

    Chen, Yong; Jensen, Christian D.; Gray, Elizabeth

    2003-01-01

...of prior experiences, recommendations from a trusted entity or the reputation of the other entity. In this paper we propose a dynamic mechanism for estimating the risk probability of a certain interaction in a given environment using hybrid neural networks. We argue that traditional risk assessment models from the insurance industry do not directly apply to ubiquitous computing environments. Instead, we propose a dynamic mechanism for risk assessment, which is based on pattern matching, classification and prediction procedures. This mechanism uses an estimator of risk probability, which is based...

  5. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means of accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists.
This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior

  6. Validity of Carrea's index in stature estimation among two racial populations in India

    OpenAIRE

    P Anita; P D Madankumar; Shyam Sivasamy; I Nanda Balan

    2016-01-01

Background: Stature is considered to be one of the “big fours” in forensic anthropology. Though Carrea's Index was published as early as 1920 it has not been validated in any other population apart from the Brazilians. Aim: The present study was conducted to validate Carrea's index in stature estimation in two different racial populations in India. Materials and Methods: The study was carried out in a sample of 100 persons comprising of 25 Aryan males, 25 Aryan females, 25 Dravidian males, an...

  7. Estimation of cardiac reserve by peak power: validation and initial application of a simplified index

    OpenAIRE

    Armstrong, G; Carlier, S.; Fukamachi, K; Thomas, J; Marwick, T

    1999-01-01

    OBJECTIVES—To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise and to correlate this with functional capacity.
DESIGN—Development of a simplified method of measurement and observational study.
SETTING—Tertiary referral centre for cardiothoracic disease.
SUBJECTS—For validation of SPP with TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. ...

  8. Validation of equations and proposed reference values to estimate fat mass in Chilean university students.

    Science.gov (United States)

    Gómez Campos, Rossana; Pacheco Carrillo, Jaime; Almonacid Fierro, Alejandro; Urra Albornoz, Camilo; Cossío-Bolaños, Marco

    2018-03-01

(i) To propose regression equations based on anthropometric measures to estimate fat mass (FM) using dual energy X-ray absorptiometry (DXA) as the reference method, and (ii) to establish population reference standards for equation-derived FM. A cross-sectional study of 6,713 university students (3,354 males and 3,359 females) from Chile aged 17.0 to 27.0 years. Anthropometric measures (weight, height, waist circumference) were taken in all participants. Whole-body DXA was performed in 683 subjects. A total of 478 subjects were selected to develop the regression equations, and 205 for their cross-validation. Data from 6,030 participants were used to develop the reference standards for FM. Equations were generated using stepwise multiple regression analysis. Percentiles were developed using the LMS method. Equations for men were: (i) FM = -35,997.486 + 232.285 × weight + 432.216 × CC (R² = 0.73, SEE = 4.1); (ii) FM = -37,671.303 + 309.539 × weight + 66,028.109 × ICE (R² = 0.76, SEE = 3.8); equations for women were: (iii) FM = -13,216.917 + 461.302 × weight + 91.898 × CC (R² = 0.70, SEE = 4.6); and (iv) FM = -14,144.220 + 464.061 × weight + 16,189.297 × ICE (R² = 0.70, SEE = 4.6), where CC is waist circumference and ICE the waist-to-height index. The percentiles proposed included p10, p50, p85, and p95. The developed equations provide valid and accurate estimation of FM in both sexes. The values obtained using the equations may be analyzed with percentiles that allow body fat levels to be categorized by age and sex. Copyright © 2017 SEEN y SED. Publicado por Elsevier España, S.L.U. All rights reserved.
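Equation (i) for men, as printed in the abstract, can be wrapped in a small function. The units are assumptions on my part (kg for weight, cm for waist circumference CC, grams for FM); verify against the original paper before any real use.

```python
def fat_mass_men_g(weight_kg, waist_cm):
    """Equation (i) from the abstract (men): FM from weight and waist
    circumference (CC). Coefficients transcribed as printed; units of
    kg, cm and grams are assumed, not stated in the abstract."""
    return -35997.486 + 232.285 * weight_kg + 432.216 * waist_cm

fm = fat_mass_men_g(70.0, 85.0)  # a hypothetical 70 kg man, 85 cm waist
print(round(fm / 1000, 2))       # → 17.0 (FM expressed in kg)
```

Plugging a plausible physique into the printed coefficients yields a plausible fat mass, which supports (but does not prove) the assumed units.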

  9. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    Science.gov (United States)

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. 
In addition, to assess the accuracy of each method in estimating

  10. Estimation of dynamic rotor loads for the rotor systems research aircraft: Methodology development and validation

    Science.gov (United States)

    Duval, R. W.; Bahrami, M.

    1985-01-01

The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify them directly from the flight data. A maximum likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.

  11. Entropy-based adaptive attitude estimation

    Science.gov (United States)

    Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.

    2018-03-01

Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of the system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. Innovation-based adaptive filtering has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed positive definiteness of the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address these drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms, the extended Kalman filter and the cubature Kalman filter, for attitude estimation of a low Earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by comprehensive sensitivity analysis of the system and environmental parameters, using extensive independent Monte Carlo simulations.
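For contrast with the entropy-based scheme, the classic innovation-based adaptation the authors build on can be sketched for a scalar Kalman filter: the measurement-noise variance is re-estimated from a sliding window of innovations. Everything below (the random-walk model, window size, noise levels) is an illustrative assumption, not the paper's satellite setup.

```python
import numpy as np

# 1-D random-walk state observed in noise; the measurement-noise variance R
# is unknown and adapted from a sliding window of innovations.
rng = np.random.default_rng(1)
q_true, r_true = 0.01, 4.0
n, window = 400, 30

x_true = np.cumsum(rng.normal(0.0, np.sqrt(q_true), n))
z = x_true + rng.normal(0.0, np.sqrt(r_true), n)

x, p, r_est = 0.0, 1.0, 1.0
innovations = []
for zi in z:
    p += q_true                      # predict (state is a random walk)
    nu = zi - x                      # innovation
    innovations = (innovations + [nu])[-window:]
    if len(innovations) == window:
        # var(innovation) ~= P_prior + R, so estimate R by subtraction;
        # the floor keeps the estimate positive (the abstract's second issue).
        r_est = max(np.var(innovations) - p, 1e-3)
    k = p / (p + r_est)              # update
    x += k * nu
    p *= (1 - k)

print(round(r_est, 1))  # should land in the vicinity of r_true = 4.0
```

The two weaknesses the abstract names are visible here: the result depends on the arbitrary `window`, and positive definiteness has to be forced with a floor, which is what the entropy/confidence-level techniques aim to handle more gracefully.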

  12. Uncertainty estimation for map-based analyses

    Science.gov (United States)

    Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker

    2010-01-01

    Traditionally, natural resource managers have asked the question, “How much?” and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question, “Where?” and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases, access to...

  13. Development and validation of a method to estimate body weight in ...

    African Journals Online (AJOL)

    This model was validated against two samples, the National Health and Nutrition Examination Survey datasets and data from two previous PAWPER tape studies. The primary outcome measure was to achieve >70% of estimations within 10% of measured weight (PW10 >70%) and >95% within 20% of measured weight ...

  14. Development and validation of GFR-estimating equations using diabetes, transplant and weight

    DEFF Research Database (Denmark)

    Stevens, Lesley A; Schmid, Christopher H; Zhang, Yaping L

    2010-01-01

    BACKGROUND: We have reported a new equation (CKD-EPI equation) that reduces bias and improves accuracy for GFR estimation compared to the MDRD study equation while using the same four basic predictor variables: creatinine, age, sex and race. Here, we describe the development and validation of thi...

  15. Research of DOA Estimation Based on Single MEMS Vector Hydrophone.

    Science.gov (United States)

    Zhang, Wen Dong; Guan, Ling Gang; Zhang, Guo Jun; Xue, Chen Yang; Zhang, Kai Rui; Wang, Jian Ping

    2009-01-01

    The MEMS vector hydrophone is a novel acoustic sensor with a "four-beam-cilia" structure. Based on the MEMS vector hydrophone with this structure, the paper studies methods of direction of arrival (DOA) estimation. According to various research papers, many algorithms can be applied to vector hydrophones. The beam-forming approach and bar-graph approach are described in detail. Laboratory tests by means of a standing-wave tube are performed to validate the theoretical results. Both the theoretical analysis and the test results prove that the proposed MEMS vector hydrophone possesses the desired directional function.

  16. SPECTRAL data-based estimation of soil heat flux

    Science.gov (United States)

    Singh, R.K.; Irmak, A.; Walter-Shea, Elizabeth; Verma, S.B.; Suyker, A.E.

    2011-01-01

    Numerous existing spectral-based soil heat flux (G) models have shown wide variation in performance for maize and soybean cropping systems in Nebraska, indicating the need for localized calibration and model development. The objectives of this article are to develop a semi-empirical model to estimate G from a normalized difference vegetation index (NDVI) and net radiation (Rn) for maize (Zea mays L.) and soybean (Glycine max L.) fields in the Great Plains, and to present the suitability of the developed model for estimating G under similar and different soil and management conditions. Soil heat fluxes measured in both irrigated and rainfed fields in eastern and south-central Nebraska were used for model development and validation. An exponential model that uses NDVI and Rn was found to be the best for estimating G, based on r2 values. The effects of geographic location, crop, and water management practices were used to develop semi-empirical models under four case studies. Each case study has the same exponential model structure but a different set of coefficients and exponents to represent the crop, soil, and management practices. Results showed that the semi-empirical models can be used effectively for G estimation in nearby fields with similar soil properties for independent years, regardless of differences in crop type, crop rotation, and irrigation practices, provided that the crop residue from the previous year is more than 4000 kg ha-1. The coefficients calibrated from particular fields can be used at nearby fields in order to capture temporal variation in G. However, there is a need for further investigation of the models to account for the interaction effects of crop rotation and irrigation. Validation at an independent site having different soil and crop management practices showed the limitation of the semi-empirical model in estimating G under different soil and environmental conditions.
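
    As a concrete illustration, a semi-empirical model of the form described above can be written G = Rn · a · exp(b · NDVI). The coefficients below are hypothetical placeholders, since the article calibrates a separate pair for each of its four case studies:

```python
import math

# Hypothetical coefficients for illustration only; the article fits a
# distinct (A, B) pair per case study (crop/soil/management combination).
A, B = 0.30, -2.1

def soil_heat_flux(ndvi, rn):
    """Exponential G model: G = Rn * A * exp(B * NDVI).
    ndvi: normalized difference vegetation index (unitless)
    rn:   net radiation (W m-2)"""
    return rn * A * math.exp(B * ndvi)
```

    With a negative exponent, G falls off as canopy cover (NDVI) increases, matching the physical expectation that a dense canopy shades the soil surface.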

  17. Ensemble-based observation impact estimates using the NCEP GFS

    Directory of Open Access Journals (Sweden)

    Yoichiro Ota

    2013-09-01

    Full Text Available The impacts of the assimilated observations on the 24-hour forecasts are estimated with the ensemble-based method proposed by Kalnay et al. using an ensemble Kalman filter (EnKF). This method estimates the relative impact of observations in data assimilation similar to the adjoint-based method proposed by Langland and Baker, but without using the adjoint model. It is implemented on the National Centers for Environmental Prediction (NCEP) Global Forecasting System EnKF that has been used as part of the operational global data assimilation system at NCEP since May 2012. The result quantifies the overall positive impact of the assimilated observations and the relative importance of the satellite radiance observations compared to other types of observations, especially for the moisture fields. A simple moving localisation based on the average wind, although not optimal, seems to work well. The method is also used to identify the cause of local forecast failures in the 24-hour forecasts. Data-denial experiments for the observations identified as producing a negative impact are performed, and forecast errors are reduced as estimated, thus validating the impact estimation.

  18. Validity of Two New Brief Instruments to Estimate Vegetable Intake in Adults

    Directory of Open Access Journals (Sweden)

    Janine Wright

    2015-08-01

    Full Text Available Cost-effective population-based monitoring tools are needed for nutritional surveillance and interventions. The aim was to evaluate the relative validity of two new brief instruments (three-item: VEG3 and five-item: VEG5) for estimating usual total vegetable intake in comparison to a 7-day dietary record (7DDR). Sixty-four Australian adult volunteers aged 30 to 69 years participated (30 males, mean age ± SD 56.3 ± 9.2 years, and 34 females, mean age ± SD 55.3 ± 10.0 years). Pearson correlations between the 7DDR and VEG3 and VEG5 were modest, at 0.50 and 0.56, respectively. VEG3 significantly (p < 0.001) underestimated mean vegetable intake compared to 7DDR measures (2.9 ± 1.3 vs. 3.6 ± 1.6 serves/day, respectively), whereas mean vegetable intake assessed by VEG5 did not differ from 7DDR measures (3.3 ± 1.5 vs. 3.6 ± 1.6 serves/day). VEG5 was also able to correctly identify 95%, 88% and 75% of those subjects not consuming five, four and three serves/day of vegetables according to their 7DDR classification. VEG5, but not VEG3, can estimate the usual total vegetable intake of population groups and had superior performance to VEG3 in identifying those not meeting different levels of vegetable intake. VEG5, a brief instrument, shows measurement characteristics useful for population-based monitoring and intervention targeting.

  19. Development and validation of GFR-estimating equations using diabetes, transplant and weight

    DEFF Research Database (Denmark)

    Stevens, L.A.; Schmid, C.H.; Zhang, Y.L.

    2009-01-01

    BACKGROUND: We have reported a new equation (CKD-EPI equation) that reduces bias and improves accuracy for GFR estimation compared to the MDRD study equation while using the same four basic predictor variables: creatinine, age, sex and race. Here, we describe the development and validation......) in the development, internal validation and external validation datasets, respectively. In external validation, an equation that included a linear age term and spline terms in creatinine to account for a reduction in the magnitude of the slope at low serum creatinine values exhibited the best performance (bias = 2.......5, RMSE = 0.250) among models using the four basic predictor variables. Addition of terms for diabetes and transplant did not improve performance. Equations with weight showed a small improvement in the subgroup with BMI

  20. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    Science.gov (United States)

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.

  1. Robust correlation coefficient based on Qn estimator

    Science.gov (United States)

    Zakaria, Nur Amira; Abdullah, Suhaida; Ahad, Nor Aishah

    2017-11-01

    This paper presents a new robust correlation coefficient called the Qn correlation coefficient. This coefficient is developed as an alternative to the classical correlation coefficient, whose performance deteriorates badly under data contamination. This study applies the robust scale estimator Qn because this estimator has a high breakdown point. Simulation studies are carried out to determine the performance of the new robust correlation coefficient. Clean and contaminated data are generated to assess the performance of the coefficient. The performance of the Qn correlation coefficient is compared with the classical correlation coefficient based on the value of the coefficient, average bias and standard error. The outcome of the simulation studies shows that the performance of the Qn correlation coefficient is superior to both the classical and existing robust correlation coefficients.
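
    The abstract does not spell out the construction, but a standard way to turn a robust scale estimator into a correlation coefficient, plausibly what is meant here, is the Gnanadesikan-Kettenring identity applied with Qn as the scale. A naive O(n²)-memory sketch, for illustration only:

```python
import numpy as np

def qn_scale(x):
    """Naive Qn scale estimator (Rousseeuw & Croux): an order statistic
    of the pairwise absolute differences, scaled for Gaussian consistency."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    pairwise = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]
    h = n // 2 + 1
    k = h * (h - 1) // 2                 # k-th smallest pairwise difference
    return 2.2219 * np.sort(pairwise)[k - 1]

def qn_correlation(x, y):
    """Robust correlation via the Gnanadesikan-Kettenring identity,
    using Qn in place of the standard deviation."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    u = x / qn_scale(x) + y / qn_scale(y)
    v = x / qn_scale(x) - y / qn_scale(y)
    qu2, qv2 = qn_scale(u) ** 2, qn_scale(v) ** 2
    return (qu2 - qv2) / (qu2 + qv2)
```

    Because the coefficient is built from order statistics of pairwise differences, a single gross outlier barely moves it, unlike Pearson's r.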

  2. Postprocessing MPEG based on estimated quantization parameters

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2009-01-01

    Postprocessing of MPEG(-2) video is widely used to attenuate coding artifacts; especially deblocking, but also deringing, has been addressed. The focus has been on filters where the decoder has access to the code stream and e.g. utilizes information about the quantization parameter. We consider...... the case where the coded stream is not accessible, or from an architectural point of view not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used...... postprocessing compares favorably to a reference postprocessing filter which has access to the quantization parameters not only for I-frames but also for P- and B-frames....

  3. Validating Dose Uncertainty Estimates Produced by AUTODIRECT: An Automated Program to Evaluate Deformable Image Registration Accuracy.

    Science.gov (United States)

    Kim, Hojin; Chen, Josephine; Phillips, Justin; Pukala, Jason; Yom, Sue S; Kirby, Neil

    2017-01-01

    Deformable image registration is a powerful tool for mapping information, such as radiation therapy dose calculations, from one computed tomography image to another. However, deformable image registration is susceptible to mapping errors. Recently, an automated deformable image registration evaluation of confidence tool was proposed to predict voxel-specific deformable image registration dose mapping errors on a patient-by-patient basis. The purpose of this work is to conduct an extensive analysis of automated deformable image registration evaluation of confidence tool to show its effectiveness in estimating dose mapping errors. The proposed format of automated deformable image registration evaluation of confidence tool utilizes 4 simulated patient deformations (3 B-spline-based deformations and 1 rigid transformation) to predict the uncertainty in a deformable image registration algorithm's performance. This workflow is validated for 2 DIR algorithms (B-spline multipass from Velocity and Plastimatch) with 1 physical and 11 virtual phantoms, which have known ground-truth deformations, and with 3 pairs of real patient lung images, which have several hundred identified landmarks. The true dose mapping error distributions closely followed the Student t distributions predicted by automated deformable image registration evaluation of confidence tool for the validation tests: on average, the automated deformable image registration evaluation of confidence tool-produced confidence levels of 50%, 68%, and 95% contained 48.8%, 66.3%, and 93.8% and 50.1%, 67.6%, and 93.8% of the actual errors from Velocity and Plastimatch, respectively. Despite the sparsity of landmark points, the observed error distribution from the 3 lung patient data sets also followed the expected error distribution. The dose error distributions from automated deformable image registration evaluation of confidence tool also demonstrate good resemblance to the true dose error distributions. Automated

  4. Validity of anthropometric equations to estimate infant fat mass at birth and in early infancy.

    Science.gov (United States)

    Cauble, Jennifer S; Dewi, Mira; Hull, Holly R

    2017-03-27

    In newborns and children, body fat estimation equations are often used at ages different from the age used to develop the equations. Limited validation studies exist for newborn body fat estimation equations at birth or later in infancy. The study purpose was to validate 4 newborn fat mass (FM) estimation equations in comparison to FM measured by air displacement plethysmography (ADP; the Pea Pod) at birth and 3 months. Ninety-five newborns (1-3 days) had their body composition measured by ADP and anthropometrics assessed by skinfolds. Sixty-three infants had repeat measures taken (3 months). FM measured by ADP was compared to FM from the skinfold estimation equations (Deierlein, Catalano, Lingwood, and Aris). Paired t-tests assessed mean differences, linear regression assessed accuracy, precision was assessed by R2 and the standard error of the estimate (SEE), and bias was assessed by Bland-Altman plots. At birth, FM measured by ADP differed from FM estimated by the Deierlein, Lingwood and Aris equations, but did not differ from the Catalano equation. At 3 months, FM measured by ADP differed from all equations. At both time points, poor precision and accuracy were detected. Bias was detected in almost all equations. Poor agreement, precision, and accuracy were found between prediction equations and the criterion at birth and 3 months.

  5. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time.

    Science.gov (United States)

    Martin, Corby K; Correa, John B; Han, Hongmei; Allen, H Raymond; Rood, Jennifer C; Champagne, Catherine M; Gunturk, Bahadir K; Bray, George A

    2012-04-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1's objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake (EI) with the Remote Food Photography Method (RFPM) over 6 days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, EI estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n = 24) or Customized Prompts (n = 16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating EI when Standard (mean ± s.d. = -895 ± 770 kcal/day, P < 0.0001), but not Customized Prompts (-270 ± 748 kcal/day, P = 0.22) were used. Error (EI from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM's ability to accurately estimate EI in free-living adults (N = 50) over 6 days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living EI (-152 ± 694 kcal/day, P = 0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake.

  6. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time

    Science.gov (United States)

    Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.

    2014-01-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, energy intake estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n=24) or Customized Prompts (n=16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating energy intake when Standard (mean±SD = −895±770 kcal/day, p<.0001), but not Customized Prompts (−270±748 kcal/day, p=.22) were used. Error (energy intake from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM’s ability to accurately estimate energy intake in free-living adults (N=50) over six days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living energy intake (−152±694 kcal/day, p=0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake. PMID:22134199

  7. ESTIMATION OF STATURE BASED ON FOOT LENGTH

    Directory of Open Access Journals (Sweden)

    Vidyullatha Shetty

    2015-01-01

    Full Text Available BACKGROUND: Stature is the height of a person in the upright posture. It is an important measure of physical identity. Estimation of body height from its segments or dismembered parts has important considerations for the identification of a living or dead human body, or of remains recovered from disasters or other similar conditions. OBJECTIVE: Stature is an important indicator for identification. There are numerous means to establish stature, and their significance lies in the simplicity of measurement, applicability and accuracy in prediction. The aim of the study was to review the relationship between foot length and body height. METHODS: The present study reviews various prospective studies which were done to estimate stature. All the measurements were taken using standard measuring devices and standard anthropometric techniques. RESULTS: This review shows there is a correlation between stature and foot dimensions; it is found to be positive and statistically highly significant. Prediction of stature was found to be most accurate by multiple regression analysis. CONCLUSIONS: Stature and gender estimation can be done using foot measurements, and the study will help in medico-legal cases in establishing the identity of an individual; this would be useful for anatomists and anthropologists to calculate stature based on foot length.
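
    The simple-regression approach the review describes amounts to fitting stature = a + b × foot length by least squares. A minimal sketch with entirely hypothetical paired measurements (not data from any reviewed study):

```python
import numpy as np

# Hypothetical paired measurements in cm; illustrative values only.
foot_length = np.array([23.1, 24.0, 24.8, 25.5, 26.2, 27.0, 27.9])
stature     = np.array([158.0, 162.5, 165.0, 169.0, 172.0, 176.5, 180.0])

# Ordinary least-squares fit: stature ~ a + b * foot_length
b, a = np.polyfit(foot_length, stature, 1)

def estimate_stature(fl_cm):
    """Predict stature (cm) from foot length (cm) via the fitted line."""
    return a + b * fl_cm
```

    The positive slope b reflects the positive, statistically significant correlation the review reports; actual coefficients would come from population-specific regression, as the studies emphasize.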

  8. Robustifying Correspondence Based 6D Object Pose Estimation

    DEFF Research Database (Denmark)

    Hietanen, Antti; Halme, Jussi; Buch, Anders Glent

    2017-01-01

    We propose two methods to robustify point correspondence based 6D object pose estimation. The first method, curvature filtering, is based on the assumption that low curvature regions provide false matches, and removing points in these regions improves robustness. The second method, region pruning......, is more general by making no assumptions about local surface properties. Our region pruning segments a model point cloud into cluster regions and searches good region combinations using a validation set. The robustifying methods are general and can be used with any correspondence based method....... For the experiments, we evaluated three correspondence selection methods, Geometric Consistency (GC) [1], Hough Grouping (HG) [2] and Search of Inliers (SI) [3] and report systematic improvements for their robustified versions with two distinct datasets....

  9. A food frequency questionnaire validated for estimating dietary flavonoid intake in an Australian population.

    Science.gov (United States)

    Somerset, Shawn; Papier, Keren

    2014-01-01

    Flavonoids, a broad category of nonnutrient food components, are potential protective dietary factors in the etiology of some cancers. However, previous epidemiological studies showing associations between flavonoid intake and cancer risk have used unvalidated intake assessment methods. A 62-item food frequency questionnaire (FFQ) based on usual intake of a representative Australian adult population sample was validated against a 3-day diet diary method in 60 young adults. Spearman's rank correlations showed 17 of 25 individual flavonoids, 3 of 5 flavonoid subgroups, and total flavonoids having strong/moderate correlation coefficients (0.40-0.70), and 8 of 25 individual flavonoids and 2 of 5 flavonoid subgroups having weak/insignificant correlations (0.01-0.39) between the 2 methods. Bland-Altman plots showed most subjects within ±1.96 SD for intakes of flavonoid subgroups and total flavonoids. The FFQ classified 73-90% of participants for all flavonoids except isorhamnetin, cyanidin, delphinidin, peonidin, and pelargonidin; 73.3-85.0% for all flavonoid subgroups except Anthocyanidins; and 86.7% for total flavonoid intake in the same/adjacent quartile determined by the 3-day diary. Weighted kappa values ranged from 0.00 (Isorhamnetin, Pelargonidin) to 0.60 (Myricetin) and were statistically significant for 18 of 25 individual flavonoids, 3 of 5 subgroups, and total flavonoids. This FFQ provides a simple and inexpensive means to estimate total flavonoid and flavonoid subgroup intake.

  10. [Estimating glomerular filtration rate based on serum cystatin C].

    Science.gov (United States)

    Lü, Rui-Xue; Li, Yi-Song; Huang, Heng-Jian; Peng, Zhi-Ying; Ying, Bin-Wu; An, Zhen-Mei

    2012-01-01

    To develop an estimation formula for the glomerular filtration rate (GFR) based on serum cystatin C in patients with chronic kidney disease (CKD). Clinical characteristics of 242 CKD patients were collected. The patients were randomly divided into a modeling group and a model validation group. The rGFR obtained from the 99mTc-DTPA clearance rate was used as the reference value of GFR. s-cystatin C was detected by a latex-enhanced immunoturbidimetric method. Preliminary linear regression analysis followed by multiple linear regression was performed to investigate the association between s-cystatin C and rGFR. The validity of the estimation formula was tested in the model validation group in comparison with the Hoek formula and the Orebro formula. After standardised reciprocal transformation, s-cystatin C showed a linear correlation with rGFR, with a correlation coefficient of 0.773. The multiple correlation coefficient, determination coefficient, adjusted R square and std. error of the estimation model were 0.863, 0.745, 0.742, and 0.207, respectively. The residuals P-P probability plot analysis showed that the model residuals fitted a normal distribution with homogeneity of variance. The formula was: eGFR = 67/s-cystatin C + 3. No significant difference was found between the distributions of eGFR and rGFR. The 30% and 50% accuracies of our formula were no less than those obtained from the Hoek and Orebro formulae. The new formula also had acceptable bias and high precision. Bland-Altman analysis and ROC curve analysis showed good applicability of the new formula. The GFR prediction formula we established has good prediction performance compared with other formulae, and could be used to measure GFR in CKD patients.
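
    The reported equation is simple enough to state directly; a one-line helper, assuming s-cystatin C in mg/L and eGFR in mL/min/1.73 m² (the conventional units for such formulas, not stated explicitly in the abstract):

```python
def egfr_from_cystatin_c(s_cystatin_c):
    """eGFR estimate from serum cystatin C using the formula reported
    above: eGFR = 67 / s-cystatin C + 3."""
    return 67.0 / s_cystatin_c + 3.0
```

    For example, an s-cystatin C of 1.0 mg/L gives an eGFR of 70; the reciprocal term mirrors the reciprocal transformation used to linearize the s-cystatin C vs. rGFR relationship.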

  11. Self-reported smoking in online surveys: prevalence estimate validity and item format effects.

    Science.gov (United States)

    Klein, Jonathan D; Thomas, Randall K; Sutter, Erika J

    2007-07-01

    We assessed validity of self-reported smoking prevalence estimates from an online sample, and explored the impact of different item response formats on estimates. Self-reported current smoking status was obtained from 110,837 respondents from the Harris Poll Online (HPOL) panel from April 2004 to January 2005. Current smoking prevalence was compared with national estimates from the 2004 Behavioral Risk Factor Surveillance System (BRFSS), 2003 National Health Interview Survey (NHIS), and 2001-2002 National Health and Nutrition Examination Survey (NHANES). All estimates were weighted to reflect the US population. A separate survey section measured smoking prevalence using randomly assigned response formats, including yes/no grid, multiple response, numeric box, category grid, and drop-down box formats. 24.0% (95% confidence interval [CI] = 23.7-24.4) of HPOL respondents reported current smoking. BRFSS, NHIS, and NHANES estimates found 20.9%, 21.5% (95% CI = 20.9-22.1), and 24.9% (95% CI = 22.4-27.5), respectively, reporting current smoking. An additional 4.5% of NHANES respondents reporting not smoking had cotinine levels ≥15 ng/mL, indicating current smoking. Estimates of smoking prevalence varied by prevalence period and response format. Prevalence estimates obtained from the HPOL panel are comparable to those from national surveys. Online response format choices result in variation in estimated behavioral prevalence. Online surveys may be useful for public health surveillance of the US population.

  12. A Novel Rules Based Approach for Estimating Software Birthmark

    Directory of Open Access Journals (Sweden)

    Shah Nazir

    2015-01-01

    Full Text Available Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark.
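
    To make the idea concrete, here is a toy zero-order Sugeno-style fuzzy sketch: triangular memberships for "low"/"high" credibility and resilience, four hand-written rules, and weighted-average defuzzification. The membership breakpoints, rule set, and output levels are all invented for illustration and are not the rules of the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def birthmark_quality(credibility, resilience):
    """Zero-order Sugeno sketch: AND = min, output = weighted average
    of rule output levels. Inputs and output are on a 0..1 scale."""
    low_c, high_c = tri(credibility, -0.5, 0.0, 0.6), tri(credibility, 0.4, 1.0, 1.5)
    low_r, high_r = tri(resilience, -0.5, 0.0, 0.6), tri(resilience, 0.4, 1.0, 1.5)
    rules = [  # (firing strength, output level)
        (min(high_c, high_r), 1.0),   # both high  -> quality high
        (min(high_c, low_r), 0.5),    # mixed      -> quality medium
        (min(low_c, high_r), 0.5),
        (min(low_c, low_r), 0.0),     # both low   -> quality low
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

    With fully high credibility but fully low resilience, only the mixed rule fires and the sketch returns the medium quality level of 0.5.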

  13. A personalized-model-based central aortic pressure estimation method.

    Science.gov (United States)

    Jiang, Sheng; Zhang, Zhi-Qiang; Wang, Fang; Wu, Jian-Kang

    2016-12-08

    Central Aortic Pressure (CAP) can be used to predict cardiovascular structural damage and cardiovascular events, and the development of simple, well-validated and non-invasive methods for CAP waveform estimation is critical to facilitate routine clinical applications of CAP. Existing widely applied methods, such as the generalized transfer function (GTF-CAP) method and the N-Point Moving Average (NPMA-CAP) method, are based on clinical practice and lack a mathematical foundation. Those methods also have the inherent drawback that there is no personalisation, so individual aortic characteristics are missed. To overcome this pitfall, we present a personalized-model-based central aortic pressure estimation method (PM-CAP) in this paper. This PM-CAP has a mathematical foundation: a human aortic network model is proposed, developed based on viscous fluid mechanics theory, which can be personalized conveniently. Via measuring the pulse wave at the proximal and distal ends of the radial artery, the least squares method is then proposed to estimate patient-specific circuit parameters. Thus the central aortic pulse wave can be obtained by calculating the transfer function between the radial artery and the central aorta. An invasive validation study with 18 subjects comparing PM-CAP with direct aortic root pressure measurements during percutaneous transluminal coronary intervention was carried out at the Beijing Hospital. The experimental results show better performance of the PM-CAP method compared to the GTF-CAP and NPMA-CAP methods, which illustrates the feasibility and effectiveness of the proposed method.

  14. Air temperature estimation with MSG-SEVIRI data: Calibration and validation of the TVX algorithm for the Iberian Peninsula

    DEFF Research Database (Denmark)

    Nieto Solana, Hector; Sandholt, Inge; Aguado, Inmaculada

    2011-01-01

    Air temperature can be estimated from remote sensing by combining information in thermal infrared and optical wavelengths. The empirical TVX algorithm is based on an estimated linear relationship between observed Land Surface Temperature (LST) and a Spectral Vegetation Index (NDVI). Air temperature is assumed to be equal to the LST corresponding to the effective full vegetation cover, and is found by extrapolating the line to a maximum value of NDVImax. The algorithm has been tested and reported in the literature previously. However, the effect of vegetation types and climates and the potential variation in NDVI of the effective full cover has not been subject to investigation. The present study proposes a novel methodology to estimate NDVImax that uses observed air temperature to calibrate the NDVImax for each vegetation type. To assess the validity of this methodology, we have compared...
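
    A minimal sketch of the TVX step described above: fit the LST-NDVI line over a contextual window of pixels, then extrapolate it to the full-cover NDVImax. The 0.86 default below is a placeholder; the study's point is precisely that this value should be calibrated per vegetation type against observed air temperature.

```python
import numpy as np

NDVI_MAX = 0.86   # placeholder; the study calibrates this per vegetation type

def tvx_air_temperature(ndvi_window, lst_window, ndvi_max=NDVI_MAX):
    """TVX air-temperature estimate: fit LST = a + b * NDVI over a
    moving window of pixels and evaluate the line at ndvi_max."""
    b, a = np.polyfit(ndvi_window, lst_window, 1)
    return a + b * ndvi_max
```

    On exactly linear synthetic data the extrapolation is exact; on real windows the slope is typically negative (denser canopy is cooler), so the extrapolated value approximates the temperature of a fully vegetated, well-coupled surface.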

  15. Least squares support vector machines for direction of arrival estimation with error control and validation.

    Energy Technology Data Exchange (ETDEWEB)

    Christodoulou, Christos George (University of New Mexico, Albuquerque, NM); Abdallah, Chaouki T. (University of New Mexico, Albuquerque, NM); Rohwer, Judd Andrew

    2003-02-01

    The paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival (DOA) estimation in a CDMA system. For any estimation or classification system, the algorithm's capabilities and performance must be evaluated. Specifically, for classification algorithms, a high confidence level must exist along with a technique to tag misclassifications automatically. The presented learning algorithm includes error control and validation steps for generating statistics on the multiclass evaluation path and the signal subspace dimension. The error statistics provide a confidence level for the classification accuracy.

  16. Validation of the Visible Occlusal Plaque Index (VOPI) in estimating caries lesion activity

    DEFF Research Database (Denmark)

    Carvalho, J.C.; Mestrinho, H D; Oliveira, L S

    2017-01-01

    to heavy plaque. VOPI scores and caries status on permanent molars were mapped and recorded at individual anatomical sites of the groove-fossa-system and at surface level. Outcomes were presence of sound site/surface and site/surface with active or inactive caries lesions (non-cavitated or cavitated...... also showed convergent validity since the likelihood that anatomical sites with no or thin plaque had inactive lesions simultaneously with sites with thick plaque (score 2) or heavy plaque (score 3) having active lesions were overall significant (RR=1.0-7.8). At surface level, discriminant validity......: The VOPI has construct as well as convergent and discriminant validity and is therefore recommended as an additional clinical tool to estimate caries lesions activity and support treatment decisions. CLINICAL SIGNIFICANCE: The Visible Occlusal Plaque Index is an additional clinical tool to the assessment...

  17. Validation and Intercomparison of Ocean Color Algorithms for Estimating Particulate Organic Carbon in the Oceans

    Directory of Open Access Journals (Sweden)

    Hayley Evers-King

    2017-08-01

Particulate Organic Carbon (POC) plays a vital role in the ocean carbon cycle. Though relatively small compared with other carbon pools, the POC pool is responsible for large fluxes and is linked to many important ocean biogeochemical processes. The satellite ocean-color signal is influenced by particle composition, size, and concentration and provides a way to observe variability in the POC pool at a range of temporal and spatial scales. Providing accurate estimates of POC concentration from satellite ocean color data requires algorithms that are well validated, with uncertainties characterized. Here, a number of algorithms to derive POC using different optical variables are applied to merged satellite ocean color data provided by the Ocean Color Climate Change Initiative (OC-CCI) and validated against the largest database of in situ POC measurements currently available. The results of this validation exercise indicate satisfactory levels of performance from several algorithms (the highest performance was observed for the algorithms of Loisel et al., 2002, and Stramski et al., 2008) and uncertainties that are within the requirements of the user community. Estimates of the standing stock of POC can be made by applying these algorithms, yielding an estimated mixed-layer integrated global stock of between 0.77 and 1.3 Pg C. Performance of the algorithms varies regionally, suggesting that blending region-specific algorithms may provide the best way forward for generating global POC products.
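Matchup validation of ocean-color products of this kind is conventionally summarized by bias and RMSD computed in log10 space, since POC spans orders of magnitude. A hedged sketch of those summary statistics (the matchup values below are synthetic, not from the OC-CCI database):

```python
import math

def log_validation_stats(satellite, in_situ):
    """Bias and RMSD of log10(satellite) - log10(in situ) over matchup pairs."""
    d = [math.log10(s) - math.log10(o) for s, o in zip(satellite, in_situ)]
    n = len(d)
    bias = sum(d) / n
    rmsd = math.sqrt(sum(x * x for x in d) / n)
    return bias, rmsd

# Synthetic matchups: satellite-derived POC vs. in situ POC (mg m^-3).
sat = [52.0, 110.0, 35.0, 80.0]
obs = [50.0, 100.0, 40.0, 80.0]
bias, rmsd = log_validation_stats(sat, obs)
```

A bias near zero with a small log-space RMSD is the kind of result that would rank an algorithm highly in an intercomparison such as this one.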

  18. Inertial sensor-based knee flexion/extension angle estimation.

    Science.gov (United States)

Cooper, Glen; Sheret, Ian; McMillan, Louise; Siliverdis, Konstantinos; Sha, Ning; Hodgins, Diana; Kenney, Laurence; Howard, David

    2009-12-11

A new method for estimating knee joint flexion/extension angles from segment acceleration and angular velocity data is described. The approach uses a combination of Kalman filters and biomechanical constraints based on anatomical knowledge. In contrast to many recently published methods, the proposed approach does not make use of the earth's magnetic field and hence is insensitive to the complex field distortions commonly found in modern buildings. The method was validated experimentally by calculating knee angle from measurements taken from two IMUs placed on adjacent body segments. In contrast to many previous studies, which have validated their approach during relatively slow activities or over short durations, the performance of the algorithm was evaluated during both walking and running over 5 min periods. Seven healthy subjects were tested at various speeds from 1 to 5 mph. Errors were estimated by comparing the results against data obtained simultaneously from a 10-camera motion tracking system (Qualisys). The average measurement error ranged from 0.7 degrees for slow walking (1 mph) to 3.4 degrees for running (5 mph). The joint constraint used in the IMU analysis was derived from the Qualisys data. Limitations of the method, its clinical application and its possible extension are discussed.
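The fusion idea behind IMU angle estimation, gyroscope integration corrected by an inclination reference, can be illustrated with a complementary filter, which is a simpler stand-in for the paper's Kalman-filter approach. All signals and the gain below are synthetic assumptions:

```python
import math

def complementary_angle(gyro, ref_angle, dt, k=0.02):
    """Fuse gyro rate (rad/s) with a slowly trusted reference angle (rad)."""
    angle = ref_angle[0]
    out = []
    for w, a in zip(gyro, ref_angle):
        # Integrate the gyro, but bleed a small fraction k toward the
        # reference each step so the bias-driven drift cannot accumulate.
        angle = (1 - k) * (angle + w * dt) + k * a
        out.append(angle)
    return out

dt = 0.01
t = [i * dt for i in range(500)]                       # 5 s at 100 Hz
true = [0.5 * math.sin(2 * math.pi * x) for x in t]    # 1 Hz joint swing
# Gyro = true angular rate plus a constant 0.05 rad/s bias.
gyro = [0.5 * 2 * math.pi * math.cos(2 * math.pi * x) + 0.05 for x in t]
est = complementary_angle(gyro, true, dt)
err = max(abs(e - g) for e, g in zip(est, true))
```

Raw integration of this biased gyro would drift by about 0.25 rad over the 5 s trial; the corrected estimate stays within a small fraction of that, which is the effect the paper's Kalman filter achieves with a principled gain.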

  19. Population-based absolute risk estimation with survey data.

    Science.gov (United States)

    Kovalchik, Stephanie A; Pfeiffer, Ruth M

    2014-04-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level.
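With constant (one-piece exponential) cause-specific hazards, the absolute risk of cause 1 by time T has the closed form AR1 = h1/(h1+h2) · (1 − exp(−(h1+h2)·T)), which a numerical integration of the general definition should reproduce. A hedged sketch with invented hazard values (the paper's models are individualized and piecewise, not this simple):

```python
import math

def absolute_risk(h1, h2, T, steps=100000):
    """Numerically integrate AR1 = ∫_0^T h1 * exp(-(h1+h2)*t) dt (midpoint rule)."""
    dt = T / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        # Hazard of cause 1 at t, times probability of being event-free at t.
        total += h1 * math.exp(-(h1 + h2) * t) * dt
    return total

h1, h2, T = 0.02, 0.05, 10.0          # cause-specific hazards per year, 10-year horizon
ar_numeric = absolute_risk(h1, h2, T)
ar_closed = h1 / (h1 + h2) * (1 - math.exp(-(h1 + h2) * T))
```

The competing hazard h2 pulls the 10-year cause-1 risk well below the naive 1 − exp(−h1·T), which is exactly why competing-risk absolute risk, rather than the cause-specific survival alone, is the right population-level quantity.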

  20. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. The results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. In contrast, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait discriminated significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing can lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical applications of classification with cross-validation.
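The contrast the authors draw can be reproduced directly: a binomial p-value assumes independent Bernoulli trials, whereas a permutation test rebuilds the null distribution from relabelled data and so respects structure such as balanced classes. A minimal sketch with synthetic predictions and labels (counts and chance level are illustrative, not from the studies):

```python
import math, random

def binomial_p(k, n, p=0.5):
    """One-sided binomial p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def permutation_p(preds, labels, n_perm=2000, seed=0):
    """Fraction of label permutations with accuracy >= the observed accuracy."""
    rng = random.Random(seed)
    obs = sum(p == l for p, l in zip(preds, labels))
    lab = list(labels)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(lab)
        if sum(p == l for p, l in zip(preds, lab)) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

labels = [0, 1] * 10                                   # 20 balanced trials
preds = labels[:14] + [1 - l for l in labels[14:]]     # 14/20 correct
p_bin = binomial_p(14, 20)
p_perm = permutation_p(preds, labels)
```

Here the permutation null, tied to the balanced label structure, is wider than the binomial one, so the permutation p-value comes out larger than the binomial p-value of about 0.058, mirroring the paper's point that the binomial test can overstate significance.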

  1. Estimating Evapotranspiration Using an Observation Based Terrestrial Water Budget

    Science.gov (United States)

    Rodell, Matthew; McWilliams, Eric B.; Famiglietti, James S.; Beaudoing, Hiroko K.; Nigro, Joseph

    2011-01-01

Evapotranspiration (ET) is difficult to measure at the scales of climate models and climate variability. While satellite retrieval algorithms do exist, their accuracy is limited by the sparseness of in situ observations available for calibration and validation, which themselves may be unrepresentative of 500 m and larger scale satellite footprints and grid pixels. Here, we use a combination of satellite and ground-based observations to close the water budgets of seven continental scale river basins (Mackenzie, Fraser, Nelson, Mississippi, Tocantins, Danube, and Ubangi), estimating mean ET as a residual. For any river basin, ET must equal total precipitation minus net runoff minus the change in total terrestrial water storage (TWS), in order for mass to be conserved. We make use of precipitation from two global observation-based products, archived runoff data, and TWS changes from the Gravity Recovery and Climate Experiment satellite mission. We demonstrate that while uncertainty in the water budget-based estimates of monthly ET is often too large for those estimates to be useful, the uncertainty in the mean annual cycle is small enough that it is practical for evaluating other ET products. Here, we evaluate five land surface model simulations, two operational atmospheric analyses, and a recent global reanalysis product based on our results. An important outcome is that the water budget-based ET time series in two tropical river basins, one in Brazil and the other in central Africa, exhibit a weak annual cycle, which may help to resolve debate about the strength of the annual cycle of ET in such regions and how ET is constrained throughout the year. The methods described will be useful for water and energy budget studies, weather and climate model assessments, and satellite-based ET retrieval optimization.
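The budget closure itself is a one-line identity per basin and month: ET = P − Q − ΔTWS, where P is precipitation, Q is net runoff, and ΔTWS is the GRACE-observed storage change. A small sketch with made-up monthly basin values (mm/month, purely illustrative):

```python
def et_residual(precip, runoff, dtws):
    """Basin evapotranspiration as the terrestrial water-budget residual."""
    return [p - q - ds for p, q, ds in zip(precip, runoff, dtws)]

# Synthetic monthly basin-mean values, mm/month.
precip = [90.0, 80.0, 110.0]
runoff = [30.0, 25.0, 40.0]
dtws   = [10.0, -5.0, 20.0]   # GRACE-style change in total water storage
et = et_residual(precip, runoff, dtws)
```

In practice each term carries its own uncertainty, which is why the paper finds monthly residual ET noisy but the mean annual cycle usable.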

  2. Validity of selected cardiovascular field-based test among Malaysian ...

    African Journals Online (AJOL)

Prompted by the emerging obesity problem among Malaysians, this research was formulated to validate published tests among healthy female adults. Selected tests, namely the 20 m multi-stage shuttle run, the 2.4 km run test, the 1-mile walk test and the Harvard step test, were correlated with a laboratory test (Bruce protocol) to find the criterion validity ...

  3. Estimating relative physical workload using heart rate monitoring: a validation by whole-body indirect calorimetry.

    Science.gov (United States)

    Garet, Martin; Boudet, Gil; Montaurier, Christophe; Vermorel, Michel; Coudert, Jean; Chamoux, Alain

    2005-05-01

Measuring physical workload in occupational medicine is fundamental for risk prevention. An indirect measurement of total and relative energy expenditure (EE) from heart rate (HR) is widely used, but it has never been validated. The aim of this study was to validate this HR-estimated energy expenditure (HREEE) method against whole-body indirect calorimetry. Twenty-four-hour HR and EE values were recorded continuously in a calorimetric chamber for 52 adult males and females (19-65 years). An 8-h working period was retained, comprising several exercise sessions on a cycloergometer at intensities up to 65% of the peak rate of oxygen uptake. HREEE was calculated with reference to cardiac reserve. A corrected HREEE (CHREEE) was also calculated with a modification to the lowest value of cardiac reserve. Both values were further compared to established methods: the flex-HR method, and the use of a 3rd-order polynomial relationship to estimate total and relative EE. No significant difference was found in total EE for the working period whether measured in the calorimetric chamber or estimated from CHREEE. A perfect linear identity relationship was found between CHREEE and energy reserve values for intensities ranging from 15% to 65%. Relative physical workload can therefore be accurately assessed from HR recordings expressed as CHREEE between 15% and 65%, and EE can be accurately estimated using the CHREEE method.
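The cardiac-reserve scaling behind this kind of method can be sketched simply: percent heart-rate reserve maps a measured HR onto a fraction of the reserve, which is then equated to the same fraction of the energy reserve. All HR and EE values below are illustrative assumptions, not the study's data:

```python
def percent_hr_reserve(hr, hr_rest, hr_max):
    """Fraction of cardiac reserve at a given heart rate (Karvonen-style)."""
    return (hr - hr_rest) / (hr_max - hr_rest)

def estimated_ee(hr, hr_rest, hr_max, ee_rest, ee_peak):
    """Map %HR reserve linearly onto the energy reserve (kcal/min)."""
    f = percent_hr_reserve(hr, hr_rest, hr_max)
    return ee_rest + f * (ee_peak - ee_rest)

hr_rest, hr_max = 60.0, 190.0
frac = percent_hr_reserve(125.0, hr_rest, hr_max)          # fraction of reserve
ee = estimated_ee(125.0, hr_rest, hr_max, 1.3, 12.0)       # kcal/min
```

The study's CHREEE correction adjusts the lower anchor of the reserve; the linear mapping above is the uncorrected baseline idea.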

  4. Validation and comparison of shank and lumbar-worn IMUs for step time estimation.

    Science.gov (United States)

    Johnston, William; Patterson, Matthew; O'Mahony, Niamh; Caulfield, Brian

    2017-10-26

    Gait assessment is frequently used as an outcome measure to determine changes in an individual's mobility and disease processes. Inertial measurement units (IMUs) are quickly becoming commonplace in gait analysis. The purpose of this study was to determine and compare the validity of shank and lumbar IMU mounting locations in the estimation of temporal gait features. Thirty-seven adults performed 20 walking trials each over a gold standard force platform while wearing shank and lumbar-mounted IMUs. Data from the IMUs were used to estimate step times using previously published algorithms and were compared with those derived from the force platform. There was an excellent level of correlation between the force platform and shank (r=0.95) and lumbar-mounted (r=0.99) IMUs. Bland-Altman analysis demonstrated high levels of agreement between the IMU and the force platform step times. Confidence interval widths were 0.0782 s for the shank and 0.0367 s for the lumbar. Both IMU mounting locations provided accurate step time estimations, with the lumbar demonstrating a marginally superior level of agreement with the force platform. This validation indicates that the IMU system is capable of providing step time estimates within 2% of the gold standard force platform measurement.
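The agreement analysis reported here is a standard Bland-Altman computation: the bias is the mean paired difference and the 95% limits of agreement are bias ± 1.96 SD of the differences. A sketch on synthetic step-time pairs (values invented, not the study's data):

```python
import math

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

imu      = [0.52, 0.49, 0.51, 0.50, 0.53, 0.48]   # step times (s), IMU estimate
forcepl  = [0.51, 0.50, 0.50, 0.50, 0.52, 0.49]   # step times (s), force platform
bias, lo, hi = bland_altman(imu, forcepl)
```

The study's reported confidence-interval widths (0.0782 s shank, 0.0367 s lumbar) are exactly the span between such limits of agreement for each mounting site.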

  5. Estimating Transitional Probabilities with Cross-Sectional Data to Assess Smoking Behavior Progression: A Validation Analysis.

    Science.gov (United States)

    Chen, Xinguang; Lin, Feng

    2012-09-03

New analytical tools are needed to advance tobacco research, tobacco control planning and tobacco use prevention practice. In this study, we validated a method to extract information from cross-sectional surveys for quantifying population dynamics of adolescent smoking behavior progression. With a 3-stage, 7-path model, probabilities of smoking behavior progression were estimated employing the Probabilistic Discrete Event System (PDES) method and the cross-sectional data from the 1997-2006 National Survey on Drug Use and Health (NSDUH). Validity of the PDES method was assessed using data from the National Longitudinal Survey of Youth 1997 and trends in smoking transition covering the period during which funding for tobacco control was cut substantively in 2003 in the United States. Probabilities for all seven smoking progression paths were successfully estimated with the PDES method and the NSDUH data. The absolute differences in the estimated probabilities between the two approaches varied from 0.002 to 0.076 (p>0.05 for all) and the estimates were highly correlated with each other (R(2)=0.998, p<0.001), supporting the validity of estimating transitional probabilities from cross-sectional survey data. The estimated transitional probabilities add new evidence supporting more advanced tobacco research, tobacco control planning and tobacco use prevention practice. This method can be easily extended to study other health risk behaviors.

  6. Effective connectivity maps in the swine somatosensory cortex estimated from electrocorticography and validated with intracortical local field potential measurements.

    Science.gov (United States)

    Tanosaki, Masato; Ishibashi, Hideaki; Zhang, Tongsheng; Okada, Yoshio

    2014-03-01

Macroscopic techniques are increasingly being used to estimate functional connectivity in the brain, which provides valuable information about brain networks. In any such endeavors it is important to understand capabilities and limitations of each technique through direct validation, which is often lacking. This study evaluated a multiple dipole source analysis technique based on electrocorticography (ECOG) data in estimating effective connectivity maps and validated the technique with intracortical local field potential (LFP) recordings. The study was carried out in an animal model (swine) with a large brain to avoid complications caused by spreading of the volume current. The evaluation was carried out for the cortical projections from the trigeminal nerve and corticocortical connectivity from the first rostrum area (R1) in the primary somatosensory cortex. Stimulation of the snout and layer IV of the R1 did not activate all projection areas in each animal, although whenever an area was activated in a given animal, its location was consistent with the intracortical LFP. The two types of connectivity maps based on ECOG analysis were consistent with each other and also with those estimated from the intracortical LFP, although there were small discrepancies. The discrepancies in mean latency based on ECOG and LFP were all very small and nonsignificant: snout stimulation, -1.1 to 2.0 msec (contralateral hemisphere) and 3.9 to 8.5 msec (ipsilateral hemisphere); R1 stimulation, -1.4 to 2.2 msec for the ipsilateral and 0.6 to 1.4 msec for the contralateral hemisphere. Dipole source analysis based on ECOG appears to be quite useful for estimating effective connectivity maps in the brain.

  7. Validation Methodology for Agent-based Simulations

    Science.gov (United States)

    2007-06-01

Overview of validation methodology for agent-based simulations, directly applicable to the irregular warfare (IW) problem set. In addition to the agent-based combat models ISAAC, Pythagoras, and MANA, the scope considers decision rules, knowledge-based systems, cellular automata, and population dynamics, with discussion of verification, validation, and accreditation (VV&A) goals for the target of interest.

  8. Long-term monitoring of endangered Laysan ducks: Index validation and population estimates 1998–2012

    Science.gov (United States)

    Reynolds, Michelle H.; Courtot, Karen; Brinck, Kevin W.; Rehkemper, Cynthia; Hatfield, Jeffrey

    2015-01-01

Monitoring endangered wildlife is essential to assessing management or recovery objectives and learning about population status. We tested assumptions of a population index for endangered Laysan duck (or teal; Anas laysanensis) monitored using mark–resight methods on Laysan Island, Hawai’i. We marked 723 Laysan ducks between 1998 and 2009 and identified seasonal surveys through 2012 that met accuracy and precision criteria for estimating population abundance. Our results provide a 15-y time series of seasonal population estimates at Laysan Island. We found differences in detection among seasons and how observed counts related to population estimates. The highest counts and the strongest relationship between count and population estimates occurred in autumn (September–November). The best autumn surveys yielded population abundance estimates that ranged from 674 (95% CI = 619–730) in 2003 to 339 (95% CI = 265–413) in 2012. A population decline of 42% was observed between 2010 and 2012 after consecutive storms and Japan’s Tōhoku earthquake-generated tsunami in 2011. Our results show positive correlations between the seasonal maximum counts and population estimates from the same date, and support the use of standardized bimonthly counts of unmarked birds as a valid index to monitor trends among years within a season at Laysan Island.
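Mark-resight abundance estimation of the kind used here can be illustrated with the Chapman-corrected Lincoln-Petersen estimator, a simple special case of the study's mark-resight models. The counts below are invented, not Laysan data:

```python
def chapman_estimate(n_marked, n_seen, n_marked_seen):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate."""
    return (n_marked + 1) * (n_seen + 1) / (n_marked_seen + 1) - 1

# Hypothetical survey: 100 birds carry marks, 120 birds are resighted,
# and 40 of the resighted birds are marked.
n_hat = chapman_estimate(100, 120, 40)
```

The intuition is that the marked fraction of the resighted sample (40/120) estimates the marked fraction of the whole population, so the population is roughly 100 / (1/3) ≈ 300 birds; the Chapman correction reduces small-sample bias.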

  9. Validity of a multi-sensor armband in estimating rest and exercise energy expenditure.

    Science.gov (United States)

    Fruin, Margaret L; Rankin, Janet Walberg

    2004-06-01

The SenseWear Armband (SWA; BodyMedia, Inc.), using multiple sensors, was designed to estimate energy expenditure (EE) in free-living individuals. To examine the reliability and validity of the SWA during rest and exercise compared with indirect calorimetry (IC). EE was assessed with SWA and IC in 13 males during two resting and one cycle ergometry (40 min at 60% VO2peak) sessions. In a second experiment, 20 adults walked on a treadmill for 30 min at three intensities (80.5 m·min(-1) at 0% grade; 107.3 m·min(-1) at 0% grade; 107.3 m·min(-1) at 5% grade) while IC and SWA measured EE. At rest, no significant differences were found between EE measurements from the SWA (1.3 +/- 0.1 kcal·min(-1)) and IC (1.3 +/- 0.1 kcal·min(-1)), and the two methods were highly correlated (r = 0.76; P < 0.004). The SWA EE estimation was reliable when comparing the two resting visits (r = 0.93; P < 0.001). For the ergometer protocol, no significant differences were found between the SWA and IC measurements of EE early, mid, or late in exercise or for the total bout, although the measurements were poorly correlated (r = 0.03-0.12). The SWA EE estimate of walking increased with treadmill speed but not with incline. The SWA significantly overestimated (13-27%) the EE of walking with no grade (P < 0.02) and significantly underestimated (22%) EE on the 5% grade (P < 0.002). The SWA estimation of EE correlated moderately with IC (r = 0.47-0.69). The SWA provided valid and reliable estimates of EE at rest and generated similar mean estimates of EE as IC on the ergometer; however, individual error was large. The SWA overestimated the EE of flat walking and underestimated inclined walking EE.

  10. Constrained map-based inventory estimation

    Science.gov (United States)

    Paul C. Van Deusen; Francis A. Roesch

    2007-01-01

    A region can conceptually be tessellated into polygons at different scales or resolutions. Likewise, samples can be taken from the region to determine the value of a polygon variable for each scale. Sampled polygons can be used to estimate values for other polygons at the same scale. However, estimates should be compatible across the different scales. Estimates are...
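One simple way to force compatibility across scales, in the spirit of this abstract, is proration: rescale the fine-resolution polygon estimates so they sum to the coarse-resolution total. This is a hedged stand-in; the paper's actual constrained estimator is more sophisticated, and the numbers below are invented:

```python
def prorate(fine_estimates, coarse_total):
    """Rescale fine-scale polygon estimates to match a coarse-scale total."""
    s = sum(fine_estimates)
    return [x * coarse_total / s for x in fine_estimates]

fine = [40.0, 35.0, 15.0]           # estimates for three sub-polygons
adjusted = prorate(fine, 100.0)     # the enclosing coarse polygon estimate is 100
```

After proration the sub-polygon estimates are mutually compatible with the coarse estimate while preserving their relative proportions.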

  11. Global temperature estimates in the troposphere and stratosphere: a validation study of COSMIC/FORMOSAT-3 measurements

    Directory of Open Access Journals (Sweden)

    P. Kishore

    2009-02-01

This paper focuses on the validation of temperature estimates derived with the newly launched Constellation Observing System for Meteorology, Ionosphere, and Climate/Formosa Satellite 3 (COSMIC/FORMOSAT-3) system. The analysis is based on the radio occultation (RO) data samples collected during the first year of observation, from April 2006 to April 2007. For the validation, we have used operational stratospheric analyses including the National Centers for Environmental Prediction Reanalysis (NCEP), the Japanese 25-year Reanalysis (JRA-25), and the United Kingdom Met Office (MetO) data sets. Comparisons in several formats reveal good agreement between the COSMIC and reanalysis outputs. Spatially, the largest deviations occur at polar latitudes; in height, the largest differences (2–4 K) occur in the tropical tropopause region. We found that among the three reanalysis data sets, the NCEP data sets bear the best resemblance to the COSMIC measurements.

  12. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    Science.gov (United States)

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

With medical education transitioning from knowledge-based to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, the California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning, and between the total ACT score and deductive reasoning, were demonstrated, with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher-quality medical professionals.
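The comparison performed here is a plain Pearson product-moment correlation between paired scores on the two instruments. A sketch with fabricated student scores (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient for paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

act   = [70, 82, 65, 90, 75]   # custom assessment scores (fabricated)
cctst = [18, 21, 16, 24, 19]   # external instrument scores (fabricated)
r = pearson_r(act, cctst)
```

The fabricated data are deliberately concordant, giving r near 1; the study's point is that its real instruments showed the opposite, with r of only 0.22-0.24 on two sub-scores and no other significant correlations.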

  13. Validating diagnoses from hospital discharge registers change risk estimates for acute coronary syndrome

    DEFF Research Database (Denmark)

    Joensen, Albert Marni; Schmidt, Erik Berg; Dethlefsen, Claus

    2007-01-01

Objectives: Hospital discharge registers are cost-efficient data sources; however, their usability is highly dependent on the quality of the registered data. No previous studies have examined the effect of validating discharge diagnoses on relative risk estimates. We examined whether a validation … of acute coronary syndrome (ACS) diagnoses identified in a hospital discharge register changed the relative risk estimates of well-established risk factors for ACS. Methods: All first-time ACS diagnoses (n=1138) in the Danish National Patient Registry were identified among male participants in the Danish … -established cardiovascular risk factors appeared higher when using validated rather than crude hospital discharge data: smoking: 2.47 (2.13-2.87) vs. 2.06 (1.83-2.31); hypertension: 1.77 (1.57-1.98) vs. 1.74 (1.58-1.91); hypercholesterolemia: 1.74 (1.42-2.14) vs. 1.68 (1.43-1.90); diabetes mellitus: 1.57 (1…

  14. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals.

    Science.gov (United States)

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. First, the array signal model for multicomponent chirp signals is presented, and array processing is applied in the time-frequency analysis to mitigate cross-terms. Based on the results of the array processing, a Hough transform is performed and the estimate of the time-frequency signature is obtained. Subsequently, a subspace method for DOA estimation based on the STFD matrix is applied. Simulation results demonstrate the validity of the proposed method.

  15. A novel flux estimator based on SOGI with FLL for induction machine drives

    DEFF Research Database (Denmark)

    Zhao, Rende; Xin, Zhen; Loh, Poh Chiang

    2016-01-01

… by the initial conditions, with no need for magnitude and phase compensation. Because the dc and harmonic components in the estimated flux are inversely proportional to the speed, the performance of the single-SOGI-based estimator becomes worse at low speed. A multiple-SOGI-based flux estimator is then proposed … to solve the problem. It can deeply attenuate the dc and harmonic components and therefore has excellent performance over a wide speed range. Theoretical study, simulation and experimental results validate the effectiveness of the proposed estimator.
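A second-order generalized integrator can be sketched as a resonant band-pass filter that, at its tuned frequency, reproduces the input's fundamental together with its quadrature. A minimal forward-Euler discretization is shown below; the gain, frequency, and step size are illustrative assumptions, and the paper's estimator additionally uses an FLL and multiple SOGIs:

```python
import math

def sogi_filter(u, w0, dt, k=1.4):
    """Discrete SOGI: v tracks the fundamental of u at w0, qv its quadrature."""
    v, qv = 0.0, 0.0
    out_v, out_qv = [], []
    for x in u:
        dv = (k * (x - v) - qv) * w0    # damped resonant dynamics
        dqv = v * w0                    # integrator producing the quadrature
        v += dv * dt
        qv += dqv * dt
        out_v.append(v)
        out_qv.append(qv)
    return out_v, out_qv

w0 = 2 * math.pi * 50.0                               # tuned to 50 Hz
dt = 1e-5
u = [math.sin(w0 * n * dt) for n in range(40000)]     # 0.4 s of a 50 Hz input
v, qv = sogi_filter(u, w0, dt)
# After the transient, (v, qv) should trace a unit circle: |v + j*qv| ≈ 1.
amp = max(math.hypot(a, b) for a, b in zip(v[-2000:], qv[-2000:]))
```

In a flux estimator the quadrature output plays the role of the integrated (flux) signal, obtained without a pure integrator and hence without drift from dc offsets or initial conditions.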

  16. Supersensitive ancilla-based adaptive quantum phase estimation

    Science.gov (United States)

    Larson, Walker; Saleh, Bahaa E. A.

    2017-10-01

    The supersensitivity attained in quantum phase estimation is known to be compromised in the presence of decoherence. This is particularly patent at blind spots—phase values at which sensitivity is totally lost. One remedy is to use a precisely known reference phase to shift the operation point to a less vulnerable phase value. Since this is not always feasible, we present here an alternative approach based on combining the probe with an ancillary degree of freedom containing adjustable parameters to create an entangled quantum state of higher dimension. We validate this concept by simulating a configuration of a Mach-Zehnder interferometer with a two-photon probe and a polarization ancilla of adjustable parameters, entangled at a polarizing beam splitter. At the interferometer output, the photons are measured after an adjustable unitary transformation in the polarization subspace. Through calculation of the Fisher information and simulation of an estimation procedure, we show that optimizing the adjustable polarization parameters using an adaptive measurement process provides globally supersensitive unbiased phase estimates for a range of decoherence levels, without prior information or a reference phase.

  17. Criterion-related validity of field-based fitness tests in youth: a systematic review.

    Science.gov (United States)

    Castro-Piñero, J; Artero, E G; España-Romero, V; Ortega, F B; Sjöström, M; Suni, J; Ruiz, J R

    2010-10-01

The objective of this systematic review was to comprehensively study the criterion-related validity of existing field-based fitness tests used in children and adolescents. The studies were scored according to the number of subjects, description of the study population, and statistical analysis. Each study was classified as high, low or very low quality. Three levels of evidence were constructed: strong evidence, when consistent findings were observed in three or more high-quality studies; moderate evidence, when consistent findings were observed in two high-quality studies; and limited evidence, when consistency of findings and/or the number of studies did not achieve the criteria for moderate. The results of 73 studies (50 of high quality) addressing the criterion-related validity of field-based fitness tests in children and adolescents indicate the following: there is strong evidence that the 20 m shuttle run test is a valid test to estimate cardiorespiratory fitness, that the handgrip strength test is a valid measure of musculoskeletal fitness, that skinfold thickness and body mass index are good estimates of body composition, and that waist circumference is a valid measure of central body fat. Moderate evidence was found that the 1-mile run/walk test is a valid test to estimate cardiorespiratory fitness. A large number of other field-based fitness tests presented limited evidence, mainly due to a limited number of studies (one for each test). The results of the present systematic review should be interpreted with caution due to the substantial lack of consistency in the reporting and design of the existing validity studies.

  18. Reliability and validity of food portion size estimation from images using manual flexible digital virtual meshes.

    Science.gov (United States)

    Beltran, Alicia; Dadabhoy, Hafza; Ryan, Courtney; Dholakia, Ruchita; Baranowski, Janice; Li, Yuecheng; Yan, Guifang; Jia, Wenyan; Sun, Mingui; Baranowski, Tom

    2018-02-12

The eButton takes frontal images at 4 s intervals throughout the day. A three-dimensional, manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional images. The present paper reports a test of the inter-rater reliability and validity of the wire mesh procedure. Seventeen foods of diverse shapes and sizes served on plates, in bowls and in cups were selected to rigorously test the portion assessment procedure. A dietitian not involved in the inter-rater reliability assessment used standard cups to independently measure the quantities of foods to generate the 'true' value for a total of seventy-five 'served' and seventy-five smaller 'left' images with diverse portion sizes. The images appeared on the computer to which the digital wire meshes were applied. Two dietitians and three engineers independently estimated portion size of the larger ('served') and smaller ('left') images for the same foods. The engineers had higher reliability and validity than the dietitians. The dietitians had lower reliabilities and validities for the smaller, more irregular images, but the engineers did not, suggesting training could overcome this limitation. The lower reliabilities and validities for foods served in bowls, compared with plates, suggest difficulties with the curved shape of the bowls. The wire mesh procedure is an important step forward in quantifying portion size, which has been subject to substantial self-report error. Improved training procedures are needed to overcome the identified problems.

  19. Relative validity of fruit and vegetable intake estimated by the food frequency questionnaire used in the Danish National Birth Cohort

    DEFF Research Database (Denmark)

    Mikkelsen, Tina B.; Olsen, Sjurdur F.; Rasmussen, Salka E.

    2007-01-01

Objective: To validate the fruit and vegetable intake estimated from the Food Frequency Questionnaire (FFQ) used in the Danish National Birth Cohort (DNBC). Subjects and setting: The DNBC is a cohort of 101,042 pregnant women in Denmark, who received a FFQ by mail in gestation week 25. A validation study with 88 participants was made. A seven-day weighed food diary (FD) and three different biomarkers were employed as comparison methods. Results: Significant correlations between FFQ- and FD-based estimates were found for fruit (r=0.66); vegetables (r=0.32); juice (r=0.52); fruit and vegetables (F&V) (r=0.57); and fruit, vegetables, and juice (F&V&J) (r=0.62). Sensitivities of correct classification by FFQ into the two lowest and the two highest quintiles of F&V&J intake were 58-67% and 50-74%, respectively, and specificities were 71-79% and 65-83%, respectively. F&V&J intake estimated from …

  20. Consistency-based respiratory motion estimation in rotational angiography.

    Science.gov (United States)

    Unberath, Mathias; Aichert, André; Achenbach, Stephan; Maier, Andreas

    2017-09-01

    Rotational coronary angiography enables 3D reconstruction but suffers from intra-scan cardiac and respiratory motion. While gating handles cardiac motion, respiratory motion requires compensation. State-of-the-art algorithms rely on 3D-2D registration that depends on initial reconstructions of sufficient quality. We propose a compensation method that is applied directly in projection domain. It overcomes the need for reconstruction and thus complements the state-of-the-art. Virtual single-frame background subtraction based on vessel segmentation and spectral deconvolution yields non-truncated images of the contrasted lumen. This allows motion compensation based on data consistency conditions. We compensate craniocaudal shifts by optimizing epipolar consistency to (a) devise an image-based surrogate for cardiac motion and (b) compensate for respiratory motion. We validate our approach in two numerical phantom studies and three clinical cases. Correlation of the image-based surrogate for cardiac motion with the ECG-based ground truth was excellent yielding a Pearson correlation of 0.93 ± 0.04. Considering motion compensation, the target error measure decreased by 98% and 69%, respectively, for the phantom experiments while for the clinical cases the same figure of merit improved by 46 ± 21%. The proposed method is entirely image-based and accurately estimates craniocaudal shifts due to respiration and cardiac contraction. Future work will investigate experimental trajectories and possibilities for simplification of the single-frame subtraction pipeline. © 2016 American Association of Physicists in Medicine.

  1. Validation of an elastic registration technique to estimate anatomical lung modification in Non-Small-Cell Lung Cancer Tomotherapy

    Directory of Open Access Journals (Sweden)

    Persano Diego

    2011-04-01

    Full Text Available Abstract Background The study of lung parenchyma anatomical modification is useful to estimate dose discrepancies during the radiation treatment of Non-Small-Cell Lung Cancer (NSCLC) patients. We propose and validate a method, based on free-form deformation and mutual information, to elastically register planning kVCT with daily MVCT images, to estimate lung parenchyma modification during Tomotherapy. Methods We analyzed 15 registrations between the planning kVCT and 3 MVCT images for each of the 5 NSCLC patients. Image registration accuracy was evaluated by visual inspection and, quantitatively, by Correlation Coefficients (CC) and Target Registration Errors (TRE). Finally, a lung volume correspondence analysis was performed to specifically evaluate registration accuracy in the lungs. Results Results showed that elastic registration was always satisfactory, both qualitatively and quantitatively: TRE after elastic registration (average value of 3.6 mm) remained comparable to and often smaller than the voxel resolution. Lung volume variations were well estimated by elastic registration (average volume and centroid errors of 1.78% and 0.87 mm, respectively). Conclusions Our results demonstrate that this method is able to estimate lung deformations in thorax MVCT, with an accuracy (within 3.6 mm) comparable to or smaller than the voxel dimension of the kVCT and MVCT images. It could be used to estimate lung parenchyma dose variations in thoracic Tomotherapy.
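
    The similarity metric named above, mutual information, can be sketched directly from a joint intensity histogram of the two images. This is a generic illustration of the metric, not the authors' free-form deformation pipeline:

```python
import numpy as np

# Histogram-based mutual information between two aligned images --
# the similarity metric that drives intensity-based kVCT/MVCT
# registration. A generic sketch, not the paper's implementation.
def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                  # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))
```

    An optimizer then deforms one image to maximize this quantity: mutual information peaks when the intensity correspondence between the two modalities is tightest.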

  2. Decay ratio estimation based on time-frequency representations

    Energy Technology Data Exchange (ETDEWEB)

    Torres-Fernandez, Jose E.; Prieto-Guerrero, Alfonso [Division de Ciencias Basicas e Ingenieria, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico); Espinosa-Paredes, Gilberto, E-mail: gepe@xanum.uam.m [Division de Ciencias Basicas e Ingenieria, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico)

    2010-02-15

    A novel method based on bilinear time-frequency representations (TFRs) is proposed to determine the time evolution of the linear stability parameters of a boiling water reactor (BWR) using neutronic noise signals. TFRs allow us to track the instantaneous frequencies contained in a signal to estimate an instantaneous decay ratio (IDR) that closely follows the signal envelope changes in time, making the IDR a measure of local stability. In order to account for long-term changes in BWR stability, the accumulated decay ratio (ACDR) is introduced as the accumulated product of the local IDRs. As shown in this paper, the ACDR measure clearly reflects major long-term changes in BWR stability. Finally, to validate our method, synthetic and real neutronic signals were used: the methodology was tested on Laguna Verde Unit 1 data and on two events reported in the Forsmark stability benchmark.
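
    The decay ratio of a second-order, BWR-like system is the ratio of successive maxima of its impulse response. A minimal sketch with an illustrative damping ratio (this is the classical time-domain definition the IDR tracks, not the paper's TFR-based estimator):

```python
import numpy as np

# Decay ratio (DR) of a second-order system: ratio of successive maxima
# of its impulse response. Damping ratio and frequency are illustrative,
# not reactor data.
zeta, wn = 0.05, 2 * np.pi * 0.5            # damping ratio, natural freq (rad/s)
wd = wn * np.sqrt(1 - zeta ** 2)            # damped oscillation frequency
t = np.arange(0.0, 20.0, 0.01)
h = np.exp(-zeta * wn * t) * np.sin(wd * t)

# Indices of local maxima of the impulse response
peaks = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
dr = h[peaks[1]] / h[peaks[0]]              # ratio of two successive maxima

# Closed-form value for a second-order system, for comparison
dr_theory = np.exp(-2 * np.pi * zeta / np.sqrt(1 - zeta ** 2))
```

    An accumulated measure in the spirit of the ACDR would then be the running product (e.g. np.cumprod) of decay ratios estimated over successive time windows.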

  3. Estimation of cardiac reserve by peak power: validation and initial application of a simplified index

    Science.gov (United States)

    Armstrong, G. P.; Carlier, S. G.; Fukamachi, K.; Thomas, J. D.; Marwick, T. H.

    1999-01-01

    OBJECTIVES: To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise and to correlate this with functional capacity. DESIGN: Development of a simplified method of measurement and observational study. SETTING: Tertiary referral centre for cardiothoracic disease. SUBJECTS: For validation of SPP with TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. To assess feasibility and clinical significance in humans, 40 subjects were studied (26 patients; 14 normal controls). METHODS: In the animal validation study, TPP was derived from ascending aortic pressure and flow probe, and from Doppler measurements of flow. SPP, calculated using the different flow measures, was compared with peak instantaneous power under different loading conditions. For the assessment in humans, SPP was measured at rest and during maximum exercise. Peak aortic flow was measured with transthoracic continuous wave Doppler, and systolic and diastolic blood pressures were derived from brachial sphygmomanometry. The difference between exercise and rest simplified peak power (Delta SPP) was compared with maximum oxygen uptake (VO(2)max), measured from expired gas analysis. RESULTS: SPP estimates using peak flow measures correlated well with true peak instantaneous power (r = 0.89 to 0.97), despite marked changes in systemic pressure and flow induced by manipulation of loading conditions. In the human study, VO(2)max correlated with Delta SPP (r = 0.78) better than Delta ejection fraction (r = 0.18) and Delta rate-pressure product (r = 0.59). CONCLUSIONS: The simple product of mean arterial pressure and peak aortic flow (simplified peak power, SPP) correlates with peak instantaneous power over a range of loading conditions in dogs. 
In humans, it can be estimated during exercise echocardiography, and it correlates with maximum oxygen uptake better than ejection fraction.
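
    The simplified index is just mean arterial pressure times peak aortic flow. A worked example with made-up but physiologic numbers; the cuff-based MAP formula is our assumption, since the abstract only states that pressures came from brachial sphygmomanometry:

```python
# Simplified peak power (SPP): mean arterial pressure x peak aortic flow.
# Numbers are invented but physiologic; the conventional cuff-based MAP
# formula below is an assumption, not stated in the paper.
sbp, dbp = 120.0, 80.0            # systolic / diastolic pressure, mmHg
peak_flow = 0.45                  # peak aortic flow from Doppler, L/s

map_mmhg = dbp + (sbp - dbp) / 3.0            # conventional MAP estimate
# SI conversion: 1 mmHg = 133.322 Pa, 1 L = 1e-3 m^3; Pa * (m^3/s) = W
spp_watts = (map_mmhg * 133.322) * (peak_flow * 1e-3)
```

    With these numbers SPP is a few watts at rest; the study's Delta SPP is the difference between this quantity at peak exercise and at rest.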

  4. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Eric; Gonder, Jeff; Jehlik, Forrest

    2017-01-01

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle are used to refine and validate this modeling approach. Model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  5. Estimation of the GPS to Galileo Time Offset and its validation on a mass market receiver

    OpenAIRE

    GIOIA CIRO; FORTUNY GUASCH Joaquim; PISONI Fabio

    2014-01-01

    The European GNSS, Galileo, is currently in its In-Orbit Validation (IOV) phase, in which four satellites are available for computing the user position. Galileo E1 OS and GPS C/A represent a very effective constellation pair in a consumer grade receiver: the signals are conveyed on the same analog path and measurements can be combined in a single position, velocity and time (PVT) solution, provided that the Galileo to GPS Time Offset (GGTO) is available. Algorithms for GGTO estimation are presente...
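
    In a combined GPS + Galileo solution, the GGTO can be estimated at the receiver as a fifth unknown alongside position and the receiver clock bias. A hedged single-epoch Gauss-Newton sketch on synthetic pseudoranges (geometry and values are invented; a real receiver would add weighting, atmospheric corrections, and handling of the broadcast GGTO):

```python
import numpy as np

# Single-epoch GPS + Galileo least squares with the GGTO as a fifth
# unknown: x, y, z, receiver clock bias vs GPS time, and GGTO (all in
# metres). Synthetic, noise-free illustration only.
rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0, 3.0, 10.0, 4.0])   # x, y, z, clock, ggto

sats = rng.normal(size=(8, 3)) * 2.0e7          # crude satellite positions
is_gal = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

def model(x):
    geo = np.linalg.norm(sats - x[:3], axis=1)
    return geo + x[3] + is_gal * x[4]           # Galileo ranges absorb the GGTO

rho = model(truth)                              # simulated pseudoranges

est = np.zeros(5)                               # Gauss-Newton from a cold start
for _ in range(10):
    geo = np.linalg.norm(sats - est[:3], axis=1)
    H = np.hstack([(est[:3] - sats) / geo[:, None],   # line-of-sight partials
                   np.ones((8, 1)), is_gal[:, None]])
    est += np.linalg.lstsq(H, rho - model(est), rcond=None)[0]
```

    The extra column of the design matrix (ones only on Galileo rows) is what separates the inter-system offset from the common receiver clock bias; it costs one satellite of redundancy.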

  6. Validation of an equation for estimating maximal oxygen consumption of nonexpert adult swimmers.

    Science.gov (United States)

    da Costa, Adalberto Veronese; Costa, Manoel da Cunha; de Oliveira, Saulo Fernandes Melo; de Albuquerque, Fabíola Lima; de Sá Pereira Guimarães, Fernando José; Barbosa, Tiago Manuel

    2013-01-01

    To validate an equation to estimate the maximal oxygen consumption (VO2max) of nonexpert adult swimmers. Participants were 22 nonexpert swimmers, male, aged between 18 and 30 years (age: 23.1 ± 3.59 years; body mass: 73.6 ± 7.39 kg; height 176.6 ± 5.53 cm; and body fat percentage: 15.9% ± 4.39%), divided into two subgroups: G1 - eleven swimmers for the VO2max oximetry and modeling of the equation; and G2 - eleven swimmers for application of the equation modeled on G1 and verification of their validation. The test used was the adapted Progressive Swim Test, in which there occurs an increase in the intensity of the swim every two laps. For normality and homogeneity of data, Shapiro-Wilk and Levene tests were used, with the descriptive values of the average and standard deviation. The statistical steps were: (1) reliability of the Progressive Swim Test - through the paired t-test, intraclass correlation coefficient (ICC), and the Pearson linear correlation (R) relative to the reproducibility, the coefficient of variation (CV), and standard error of measurement (SEM) for the absolute reproducibility; (2) in the model equation to estimate VO2max, a relative VO2 was established, and a stepwise multiple regression model was performed with G1 - so the variables used were analysis of variance regression (AR), coefficient of determination (R²), adjusted coefficient of determination (R²a), standard error of estimate (SEE), and Durbin-Watson (DW); (3) validation of the equation - the results were presented in graphs, where direct (G1) and estimated (G2) VO2max were compared using independent t-test, linear regression (stressing the correlation between groups), and Bland-Altman (the bias agreement of the results). All considered a statistical significance level of P 0.80, CV equation model, VO2max has been considered the third model as recommended due to the values found (AR equation, no significant differences occurred between G1 and G2 (P > 0.01), linear regression stressed a

  7. ANN Approach for State Estimation of Hybrid Systems and Its Experimental Validation

    Directory of Open Access Journals (Sweden)

    Shijoh Vellayikot

    2015-01-01

    Full Text Available A novel artificial neural network based state estimator has been proposed to ensure robustness in the state estimation of autonomous switching hybrid systems under various uncertainties. Taking the autonomous switching three-tank system as the benchmark hybrid model, working under various additive and multiplicative uncertainties such as process noise, measurement error, process-model parameter variation, initial state mismatch, and hand-valve faults, a real-time performance evaluation was carried out by comparing it with other state estimators such as the extended Kalman filter and the unscented Kalman filter. The experimental results reported with the proposed approach show considerable improvement in the robustness of performance under the considered uncertainties.

  8. Is visual estimation of passive range of motion in the pediatric lower limb valid and reliable

    Directory of Open Access Journals (Sweden)

    Dagher Fernand

    2009-10-01

    Full Text Available Abstract Background Visual estimation (VE) is an essential tool for the evaluation of range of motion. Few papers have discussed its validity in pediatric orthopedic practice. The purpose of our study was to assess the validity and reliability of VE for passive range of motion (PROM) of children's lower limbs. Methods Fifty typically developing children (100 lower limbs) were examined. Visual estimations of PROMs of the hip (flexion, adduction, abduction, internal and external rotations), knee (flexion and popliteal angle) and ankle (dorsiflexion and plantarflexion) were made by a pediatric orthopaedic surgeon (POS) and a 5th-year resident in orthopaedics. A last-year medical student made goniometric measurements. Three weeks later, the same measurements were performed to assess the reliability of visual estimation for each examiner. Results Visual estimations of the POS were highly reliable for hip flexion, hip rotations and popliteal angle (ρc ≥ 0.8). Reliability was good for hip abduction, knee flexion, ankle dorsiflexion and plantarflexion (ρc ≥ 0.7) but poor for hip adduction (ρc = 0.5). Reproducibility for all PROMs was verified. The resident's VE showed high reliability (ρc ≥ 0.8) for hip flexion and popliteal angle. Good correlation was found for hip rotations and knee flexion (ρc ≥ 0.7). Poor results were obtained for ankle PROMs. Conclusion The accuracy of VE of passive hip flexion and knee PROMs is high regardless of the examiner's experience. The same accuracy can be found for hip rotations and abduction whenever VE is performed by an experienced examiner. Goniometric evaluation is recommended for passive hip adduction and for ankle PROMs.

  9. A comparison of biomarker based incidence estimators.

    Directory of Open Access Journals (Sweden)

    Thomas A McWalter

    Full Text Available BACKGROUND: Cross-sectional surveys utilizing biomarkers that test for recent infection provide a convenient and cost-effective way to estimate HIV incidence. In particular, the BED assay has been developed for this purpose. Controversy surrounding the way in which false positive results from the biomarker should be handled has led to a number of different estimators that account for imperfect specificity. We compare the estimators proposed by McDougal et al., Hargrove et al. and McWalter & Welte. METHODOLOGY/PRINCIPAL FINDINGS: The three estimators are analyzed and compared. An identity relating the calibration parameters in the McDougal methodology is derived. When the three estimators are tested under a steady state epidemic, which includes individuals who fail to progress on the biomarker, only the McWalter/Welte method recovers an unbiased result. CONCLUSIONS/SIGNIFICANCE: Our analysis shows that the McDougal estimator can be reduced to a formula that only requires calibration of a mean window period and a long-term specificity. This allows simpler calibration techniques to be used and shows that all three estimators can be expressed using the same set of parameters. The McWalter/Welte method is applicable under the least restrictive assumptions and is the least prone to bias of the methods reviewed.
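
    An estimator of the reduced kind described above needs only a mean window period and a long-term specificity. The following toy steady-state calculation is our own simplification (sensitivity assumed to be 1, all counts invented, and not any of the three published estimators verbatim); it shows how the false-recent correction inverts the misclassification:

```python
# Toy steady-state window-period incidence calculation, in the spirit of
# the reduced (mean window + long-term specificity) formula. Assumptions:
# assay sensitivity of 1 within the window; all counts are invented.
S = 90_000           # HIV-negative (susceptible) respondents
I_true = 0.02        # incidence, per person-year
omega = 0.5          # mean recency window, years
sigma = 0.98         # long-term specificity of the assay
N_pos = 10_000       # HIV-positive respondents

R_true = I_true * S * omega                        # truly recent infections
R_obs = R_true + (1 - sigma) * (N_pos - R_true)    # plus false-recent results

# Estimator: invert the misclassification, then divide by person-time
R_hat = (R_obs - (1 - sigma) * N_pos) / sigma
I_hat = R_hat / (S * omega)
```

    On data generated exactly by this model the estimator recovers I_true; the paper's point is about which estimators stay unbiased when the model's assumptions (e.g. non-progressors) are relaxed.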

  10. Simulation Based Studies in Software Engineering: A Matter of Validity

    Directory of Open Access Journals (Sweden)

    Breno Bernard Nicolau de França

    2015-04-01

    Full Text Available Despite the possible lack of validity when compared with other science areas, Simulation-Based Studies (SBS) in Software Engineering (SE) have supported the achievement of some results in the field. However, as happens with any other sort of experimental study, it is important to identify and deal with threats to validity, aiming to increase their strength and reinforce confidence in the results. OBJECTIVE: To identify potential threats to SBS validity in SE and suggest ways to mitigate them. METHOD: Qualitative analysis of a dataset resulting from the aggregation of data from a quasi-systematic literature review, combined with ad hoc surveyed information regarding other science areas. RESULTS: The analysis of data extracted from 15 technical papers allowed the identification and classification of 28 different threats to validity concerned with SBS in SE, according to Cook and Campbell's categories. In addition, 12 verification and validation procedures applicable to SBS were also analyzed and organized according to their ability to detect these threats to validity. These results were used to make available an improved set of guidelines regarding the planning and reporting of SBS in SE. CONCLUSIONS: Simulation-based studies add different threats to validity when compared with traditional studies. They are not well observed, and it is therefore not easy to identify and mitigate all of them without explicit guidance, such as that depicted in this paper.

  11. Validity of photographs for food portion estimation in a rural West African setting.

    Science.gov (United States)

    Huybregts, L; Roberfroid, D; Lachat, C; Van Camp, J; Kolsteren, P

    2008-06-01

    To validate food photographs for food portion size estimation of frequently consumed dishes, to be used in a 24-hour recall food consumption study of pregnant women in a rural environment in Burkina Faso. This food intake study is part of an intervention evaluating the efficacy of prenatal micronutrient supplementation on birth outcomes. Women of childbearing age (15-45 years). A food photograph album containing four photographs of food portions per food item was compiled for eight selected food items. Subjects were presented two food items each in the morning and two in the afternoon. These foods were weighed to the exact weight of a food depicted in one of the photographs and were in the same receptacles. The next day, another fieldworker presented the food photographs to the subjects to test their ability to choose the correct photograph. The correct photograph out of the four proposed was chosen in 55% of 1028 estimations. For each food, the proportions of underestimating and overestimating participants were balanced, except for rice and couscous. At the group level, mean differences between served and estimated portion sizes were between -8.4% and 6.3%. Subjects who attended school were almost twice as likely to choose the correct photograph. The portion size served (small vs. largest sizes) had a significant influence on the portion estimation ability. The results from this study indicate that in a West African rural setting, food photographs can be a valuable tool for the quantification of food portion size at the group level.

  12. Trunk inclination estimate during the sprint start using an inertial measurement unit: a validation study.

    Science.gov (United States)

    Bergamini, Elena; Guillon, Pélagie; Camomilla, Valentina; Pillet, Hélène; Skalli, Wafa; Cappozzo, Aurelio

    2013-10-01

    The proper execution of the sprint start is crucial in determining the performance during a sprint race. In this respect, when moving from the crouch to the upright position, trunk kinematics is a key element. The purpose of this study was to validate the use of a trunk-mounted inertial measurement unit (IMU) in estimating the trunk inclination and angular velocity in the sagittal plane during the sprint start. In-laboratory sprint starts were performed by five sprinters. The local acceleration and angular velocity components provided by the IMU were processed using an adaptive Kalman filter. The accuracy of the IMU inclination estimate and its consistency with trunk inclination were assessed using reference stereophotogrammetric measurements. A Bland-Altman analysis, carried out using parameters (minimum, maximum, and mean values) extracted from the time histories of the estimated variables, and curve similarity analysis (correlation coefficient > 0.99, root mean square difference < 7 deg) indicated the agreement between reference and IMU estimates, opening a promising scenario for an accurate in-field use of IMUs for sprint start performance assessment.
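
    The sensor-fusion idea can be illustrated with a complementary filter, a simpler stand-in for the adaptive Kalman filter used in the paper: integrate the gyro for short-term accuracy and lean on the accelerometer-derived tilt to remove drift. All signals below are synthetic:

```python
import numpy as np

# Complementary filter fusing gyro and accelerometer-derived tilt for
# sagittal-plane trunk inclination. A simplified stand-in for the
# paper's adaptive Kalman filter; all signals are synthetic.
fs = 100.0                                            # IMU rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
theta = np.deg2rad(70.0) * (1.0 - np.exp(-3.0 * t))   # crouch-to-upright sweep

g1, g2 = np.random.default_rng(1), np.random.default_rng(2)
gyro = np.gradient(theta, 1.0 / fs) + g1.normal(0.0, 0.02, t.size)  # rad/s
acc_tilt = theta + g2.normal(0.0, 0.05, t.size)       # accel-derived tilt, rad

alpha = 0.98              # trust the gyro short-term, the accelerometer long-term
est = np.zeros_like(t)
for k in range(1, t.size):
    est[k] = alpha * (est[k - 1] + gyro[k] / fs) + (1 - alpha) * acc_tilt[k]
```

    The gyro term tracks the fast crouch-to-upright rotation; the small accelerometer weight (1 - alpha) continuously bleeds off integration drift.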

  13. Development and Statistical Validation of Spectrophotometric Methods for the Estimation of Nabumetone in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    A. R. Rote

    2010-01-01

    Full Text Available Three new simple, economic spectrophotometric methods were developed and validated for the estimation of nabumetone in bulk and tablet dosage form. The first method determines nabumetone at its absorption maximum, 330 nm; the second uses the area under the curve in the wavelength range of 326-334 nm; and the third uses first-order derivative spectra with a scaling factor of 4. Beer's law was obeyed in the concentration range of 10-30 μg/mL for all three methods. The correlation coefficients were found to be 0.9997, 0.9998 and 0.9998 for the absorption maximum, area under curve and first-order derivative methods, respectively. Results of the analysis were validated statistically and by performing recovery studies. The mean percent recoveries were found satisfactory for all three methods. The developed methods were also compared statistically using one-way ANOVA. The proposed methods have been successfully applied for the estimation of nabumetone in bulk and pharmaceutical tablet dosage form.
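
    The linearity claim (Beer's law over 10-30 μg/mL) corresponds to fitting a straight calibration line of absorbance against concentration and reading unknowns off it. A sketch with invented absorbance values (the reported r of 0.9997 refers to the authors' data, not these):

```python
import numpy as np

# Beer-Lambert calibration: absorbance is linear in concentration over
# the validated 10-30 ug/mL range. Absorbance values are invented for
# illustration.
conc = np.array([10.0, 15.0, 20.0, 25.0, 30.0])     # ug/mL
absorb = np.array([0.21, 0.31, 0.42, 0.52, 0.63])   # at 330 nm (synthetic)

slope, intercept = np.polyfit(conc, absorb, 1)      # calibration line
r = np.corrcoef(conc, absorb)[0, 1]                 # correlation coefficient

# Read an unknown sample's concentration off the calibration line
unknown_conc = (0.47 - intercept) / slope
```

    The correlation coefficient of this fit is the quantity the abstract reports per method; recovery studies then spike known amounts and check they are read back within acceptance limits.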

  14. Validation of an equation for estimating maximal oxygen consumption of nonexpert adult swimmers

    Directory of Open Access Journals (Sweden)

    Veronese da Costa A

    2013-01-01

    Full Text Available Adalberto Veronese da Costa,1,2 Manoel da Cunha Costa,3 Saulo Fernandes Melo de Oliveira,3 Fabíola Lima de Albuquerque,3 Fernando José de Sá Pereira Guimarães,3 Tiago Manuel Barbosa4; 1Department of Physical Education, Bioscience Laboratory of Human Kinetics, Rio Grande do Norte State University, Mossoró, Brazil; 2Sport Sciences Trás-os-Montes e Alto Douro University, Research Center in Sport, Health and Human Development, Vila Real, Portugal; 3Superior School of Physical Education, Human Performance Laboratory, Pernambuco State University, Recife, Brazil; 4National Institute of Education, Nanyang Technological University, Singapore. Objective: To validate an equation to estimate the maximal oxygen consumption (VO2max) of nonexpert adult swimmers. Methods: Participants were 22 nonexpert swimmers, male, aged between 18 and 30 years (age: 23.1 ± 3.59 years; body mass: 73.6 ± 7.39 kg; height 176.6 ± 5.53 cm; and body fat percentage: 15.9% ± 4.39%), divided into two subgroups: G1 – eleven swimmers for the VO2max oximetry and modeling of the equation; and G2 – eleven swimmers for application of the equation modeled on G1 and verification of their validation. The test used was the adapted Progressive Swim Test, in which there occurs an increase in the intensity of the swim every two laps. For normality and homogeneity of data, Shapiro-Wilk and Levene tests were used, with the descriptive values of the average and standard deviation. The statistical steps were: (1) reliability of the Progressive Swim Test – through the paired t-test, intraclass correlation coefficient (ICC), and the Pearson linear correlation (R) relative to the reproducibility, the coefficient of variation (CV), and standard error of measurement (SEM) for the absolute reproducibility; (2) in the model equation to estimate VO2max, a relative VO2 was established, and a stepwise multiple regression model was performed with G1 – so the variables used were analysis of variance regression (AR

  15. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Science.gov (United States)

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. 
(1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended.

  16. Validation of the Maslach Burnout Inventory-Human Services Survey for Estimating Burnout in Dental Students.

    Science.gov (United States)

    Montiel-Company, José María; Subirats-Roig, Cristian; Flores-Martí, Pau; Bellot-Arcís, Carlos; Almerich-Silla, José Manuel

    2016-11-01

    The aim of this study was to examine the validity and reliability of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) as a tool for assessing the prevalence and level of burnout in dental students in Spanish universities. The survey was adapted from English to Spanish. A sample of 533 dental students from 15 Spanish universities and a control group of 188 medical students self-administered the survey online, using the Google Drive service. The test-retest reliability or reproducibility showed an Intraclass Correlation Coefficient of 0.95. The internal consistency of the survey was 0.922. Testing the construct validity showed two components with an eigenvalue greater than 1.5, which explained 51.2% of the total variance. Factor I (36.6% of the variance) comprised the items that estimated emotional exhaustion and depersonalization. Factor II (14.6% of the variance) contained the items that estimated personal accomplishment. The cut-off point for the existence of burnout achieved a sensitivity of 92.2%, a specificity of 92.1%, and an area under the curve of 0.96. Comparison of the total dental students sample and the control group of medical students showed significantly higher burnout levels for the dental students (50.3% vs. 40.4%). In this study, the MBI-HSS was found to be viable, valid, and reliable for measuring burnout in dental students. Since the study also found that the dental students suffered from high levels of this syndrome, these results suggest the need for preventive burnout control programs.
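
    The cut-off performance figures above (sensitivity, specificity, AUC) can be reproduced mechanically on synthetic scores; the score distributions below are invented and only illustrate the computations, not the MBI-HSS data:

```python
import numpy as np

# Sensitivity, specificity and AUC for a score cut-off, the kind of
# figures reported above (92.2% / 92.1% / 0.96). Scores are synthetic.
rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(3.0, 1.0, 500),    # burnout group
                         rng.normal(0.0, 1.0, 500)])   # non-burnout group
labels = np.concatenate([np.ones(500), np.zeros(500)])

cutoff = 1.5
pred = scores >= cutoff
sens = float(np.mean(pred[labels == 1]))    # true-positive rate
spec = float(np.mean(~pred[labels == 0]))   # true-negative rate

# AUC via the rank (Mann-Whitney) formulation
ranks = np.empty(scores.size)
ranks[np.argsort(scores)] = np.arange(1, scores.size + 1)
auc = (ranks[labels == 1].sum() - 500 * 501 / 2) / (500 * 500)
```

    Sweeping the cutoff and recording (sensitivity, 1 - specificity) pairs traces the ROC curve whose area the study reports as 0.96.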

  17. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population.

    Science.gov (United States)

    Carvalho, Suzana Papile Maciel; Brito, Liz Magalhães; Paiva, Luiz Airton Saavedra de; Bicudo, Lucilene Arilho Ribeiro; Crosato, Edgard Michel; Oliveira, Rogério Nogueira de

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. 
(1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended, as demonstrated in this study.

  18. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Directory of Open Access Journals (Sweden)

    Suzana Papile Maciel Carvalho

    2013-07-01

    Full Text Available Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. OBJECTIVE: This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. MATERIAL AND METHODS: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. RESULTS: The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. CONCLUSION: It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended.
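
    The discriminant analysis underlying such methods can be sketched as a logistic regression on the two mandibular measurements. The simulated measurement distributions below are invented for illustration and are not the Brazilian sample data:

```python
import numpy as np

# Logistic-regression discriminant on two mandibular measurements
# (bigonial distance, ramus height), fitted by plain gradient descent.
# The group means and spreads are invented, not the study's data.
rng = np.random.default_rng(4)
n = 200
male = np.column_stack([rng.normal(100, 5, n), rng.normal(65, 5, n)])  # mm
female = np.column_stack([rng.normal(92, 5, n), rng.normal(58, 5, n)])
X = np.vstack([male, female])
y = np.concatenate([np.ones(n), np.zeros(n)])

Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize for stable training
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    w -= 0.5 * (Xs.T @ (p - y)) / y.size    # gradient of the logistic loss
    b -= 0.5 * float(np.mean(p - y))

p = 1 / (1 + np.exp(-(Xs @ w + b)))
accuracy = float(np.mean((p >= 0.5) == (y == 1)))
```

    Refitting the coefficients on a local sample is exactly the "statistical adjustment" the study performed: the discriminant form stays the same, but the weights are recalibrated to the target population.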

  19. Validation of the materials-process-product model (coal SNG). [Estimating method for comparing processes, changing assumptions and technology assessment

    Energy Technology Data Exchange (ETDEWEB)

    Albanese, A.; Bhagat, N.; Friend, L.; Lamontagne, J.; Pouder, R.; Vinjamuri, G.

    1980-03-01

    The use of coal as a source of high Btu gas is currently viewed as one possible means of supplementing dwindling natural gas supplies. While certain coal gasification processes have demonstrated technical feasibility, much uncertainty and inconsistency remain regarding the capital and operating costs of large scale coal conversion facilities; cost estimates may vary by as much as 50%. Studies conducted for the American Gas Association (AGA) and US Energy Research and Development Administration by C.F. Braun and Co. have defined technical specifications and cost guidelines for estimating costs of coal gasification technologies (AGA Guidelines). Based on the AGA Guidelines, Braun has also prepared cost estimates for selected coal gasification processes. Recent efforts by International Research and Technology Inc. (IR and T) have led to development of the Materials-Process-Product Model (MPPM), a comprehensive analytic tool for evaluation of processes and costs for coal gasification and other coal conversion technologies. This validation of the MPPM presents a comparison of the engineering and cost computation methodologies employed in the MPPM to those employed by Braun, and a comparison of MPPM results to Braun cost estimates. These comparisons indicate that the MPPM has the potential to be a valuable tool for assisting in the evaluation of coal gasification technologies.

  20. Are cannabis prevalence estimates comparable across countries and regions? A cross-cultural validation using search engine query data.

    Science.gov (United States)

    Steppan, Martin; Kraus, Ludwig; Piontek, Daniela; Siciliano, Valeria

    2013-01-01

    Prevalence estimation of cannabis use is usually based on self-report data. Although there is evidence on the reliability of this data source, its cross-cultural validity is still a major concern. External objective criteria are needed for this purpose. In this study, cannabis-related search engine query data are used as an external criterion. Data on cannabis use were taken from the 2007 European School Survey Project on Alcohol and Other Drugs (ESPAD). Provincial data came from three Italian nation-wide studies using the same methodology (2006-2008; ESPAD-Italia). Information on cannabis-related search engine query data was based on Google search volume indices (GSI). (1) Reliability analysis was conducted for GSI. (2) Latent measurement models of "true" cannabis prevalence were tested using perceived availability, web-based cannabis searches and self-reported prevalence as indicators. (3) Structure models were set up to test the influences of response tendencies and geographical position (latitude, longitude). In order to test the stability of the models, analyses were conducted on country level (Europe, US) and on provincial level in Italy. Cannabis-related GSI were found to be highly reliable and constant over time. The overall measurement model was highly significant in both data sets. On country level, no significant effects of response bias indicators and geographical position on perceived availability, web-based cannabis searches and self-reported prevalence were found. On provincial level, latitude had a significant positive effect on availability indicating that perceived availability of cannabis in northern Italy was higher than expected from the other indicators. Although GSI showed weaker associations with cannabis use than perceived availability, the findings underline the external validity and usefulness of search engine query data as external criteria. The findings suggest an acceptable relative comparability of national (provincial) prevalence

  1. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test

    NARCIS (Netherlands)

    Stuiver, Martijn M.; Kampshoff, Caroline S.; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J. M.; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M.

    2017-01-01

    Objective: To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (Vo2peak) and peak power output (Wpeak).

  2. Validation and refinement of prediction models to estimate exercise capacity in cancer survivors using the steep ramp test

    NARCIS (Netherlands)

    Stuiver, M.M.; Kampshoff, C.S.; Persoon, S.; Groen, W.; van Mechelen, W.; Chinapaw, M.J.M.; Brug, J.; Nollet, F.; Kersten, M.-J.; Schep, G.; Buffart, L.M.

    2017-01-01

    Objective To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (Vo2peak) and peak power output (Wpeak).

  3. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Science.gov (United States)

    Junkes, Monica C; Fraiz, Fabian C; Sardenberg, Fernanda; Lee, Jessica Y; Paiva, Saul M; Ferreira, Fernanda M

    2015-01-01

    The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.

  4. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Directory of Open Access Journals (Sweden)

    Monica C Junkes

    Full Text Available The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.

  5. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Directory of Open Access Journals (Sweden)

    Vesna Režić Dereani

    2010-09-01

    Full Text Available The aim of this research is to describe quality control procedures, procedures for validation and measurement uncertainty (MU) determination as an important element of quality assurance in a food microbiology laboratory, for both qualitative and quantitative types of analysis. Accreditation is conducted according to the standard ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests; it has recently been widely introduced in food microbiology laboratories in Croatia. In addition to the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are the measurement uncertainty procedures and the design of validation experiments. These procedures are not yet standardized, even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analyses are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions of both procedures are shown.

  6. Bootstrap-Based Inference for Cube Root Consistent Estimators

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi

    This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent.
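For orientation, the standard nonparametric bootstrap the note refers to looks like this for a simple statistic (the sample median). This generic resampling scheme is exactly what the note shows to be inconsistent for cube root consistent estimators, so the sketch illustrates the baseline procedure, not the authors' corrected method:

```python
import random
import statistics

def bootstrap_dist(sample, stat, n_boot=2000, seed=1):
    """Standard nonparametric bootstrap: resample with replacement from the
    data and re-evaluate the statistic, returning the sorted bootstrap
    distribution. (Inconsistent for cube-root-rate estimators such as the
    maximum score or Grenander estimators, per the note above.)"""
    rng = random.Random(seed)
    n = len(sample)
    return sorted(stat([rng.choice(sample) for _ in range(n)])
                  for _ in range(n_boot))

data = [2.1, 3.4, 1.9, 5.0, 4.2, 3.3, 2.8, 4.9, 3.1, 2.5]
boots = bootstrap_dist(data, statistics.median)
# Percentile 95% interval from the bootstrap distribution:
lo, hi = boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots))]
```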

  7. Blind Reverberation Time Estimation Based on Laplace Distribution

    OpenAIRE

    Jan, Tariqullah; Wang, Wenwu

    2012-01-01

    We propose an algorithm for the estimation of reverberation time (RT) from the reverberant speech signal by using a maximum likelihood (ML) estimator. Based on the analysis of an existing RT estimation method, which models the reverberation decay as a Gaussian random process modulated by a deterministic envelope, a Laplacian distribution based decay model is proposed in which an efficient procedure for locating free decay from reverberant speech is also incorporated. Then the RT is estimated ...

  8. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    Science.gov (United States)

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Validation of Walk Score® for Estimating Neighborhood Walkability: An Analysis of Four US Metropolitan Areas

    Directory of Open Access Journals (Sweden)

    Steven J. Melly

    2011-11-01

    Full Text Available Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability based on GIS (objective) indicators of neighborhood walkability with addresses from four US metropolitan areas with several street network buffer distances (i.e., 400-, 800-, and 1,600-meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5–11 years and their families participating in YMCA-administered, after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participant’s residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators such as density of retail destinations and intersection density (p < 0.05). The magnitude varied by the GIS indicator of neighborhood walkability. Correlations generally became stronger with a larger spatial scale, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure of estimating certain aspects of neighborhood walkability, particularly at the 1600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure of estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales.
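The Spearman correlations reported in this record can be computed with a short rank-correlation routine; the Walk Score and retail-density values below are made up for illustration, not taken from the study:

```python
def rank(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Illustrative data: Walk Scores vs. a GIS retail-density indicator.
walk_score = [34, 56, 72, 88, 91, 45]
retail_density = [1.1, 2.0, 3.4, 5.2, 6.0, 1.8]
rho = spearman(walk_score, retail_density)   # perfectly monotone -> 1.0
```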

  10. Validation of Walk Score® for Estimating Neighborhood Walkability: An Analysis of Four US Metropolitan Areas

    Science.gov (United States)

    Duncan, Dustin T.; Aldstadt, Jared; Whalen, John; Melly, Steven J.; Gortmaker, Steven L.

    2011-01-01

    Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability based on GIS (objective) indicators of neighborhood walkability with addresses from four US metropolitan areas with several street network buffer distances (i.e., 400-, 800-, and 1,600-meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5–11 years and their families participating in YMCA-administered, after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participant’s residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators such as density of retail destinations and intersection density (p < 0.05). The magnitude varied by the GIS indicator of neighborhood walkability. Correlations generally became stronger with a larger spatial scale, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure of estimating certain aspects of neighborhood walkability, particularly at the 1600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure of estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales. PMID:22163200

  11. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    Science.gov (United States)

    Wiles, Frederick

    2013-01-01

    The number of batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired, based on risk assessment and calculation of process capability.
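The process-capability calculation underlying this risk-based approach can be sketched as follows. The specification limits and assay values here are hypothetical, and the paper's actual run-count criterion is not reproduced:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process capability index: how many 3-sigma half-widths fit between
    the process mean and the nearer specification limit."""
    mu = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sd)

# Hypothetical assay results (% label claim) against 95-105% spec limits.
assay = [99.1, 100.2, 99.8, 100.5, 99.6, 100.1, 99.9, 100.3]
capability = cpk(assay, lsl=95.0, usl=105.0)
```

A high Cpk supports the conclusion that enough validation runs have been observed; in a risk-based design, the required number of runs grows as the demonstrated capability shrinks.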

  12. Validation of a Mexican food photograph album as a tool to visually estimate food amounts in adolescents

    National Research Council Canada - National Science Library

    Bernal-Orozco, M Fernanda; Vizmanos-Lamotte, Barbara; Rodríguez-Rocha, Norma P; Macedo-Ojeda, Gabriela; Orozco-Valerio, María; Rovillé-Sausse, Françoise; León-Estrada, Sandra; Márquez-Sandoval, Fabiola; Fernández-Ballart, Joan D

    2013-01-01

    The aim of the present study was to validate a food photograph album (FPA) as a tool to visually estimate food amounts, and to compare this estimation with that attained through the use of measuring cups (MC) and food models (FM)...

  13. Parameter extraction and estimation based on the PV panel outdoor ...

    African Journals Online (AJOL)

    This work presents a novel approach to predict the voltage-current (V-I) characteristics of a PV panel under varying weather conditions in order to estimate the PV parameters. Outdoor performance of the PV module (AP-PM-15) was measured several times. The experimental data obtained are validated and compared with the ...

  14. Estimation water vapor content using the mixing ratio method and validated with the ANFIS PWV model

    Science.gov (United States)

    Suparta, W.; Alhasa, K. M.; Singh, M. S. J.

    2017-05-01

    This study reported the comparison between the water vapor content computed from surface meteorological data (pressure, temperature, and relative humidity) and the precipitable water vapor (PWV) produced by an adaptive neuro-fuzzy inference system (ANFIS) PWV model for the Universiti Kebangsaan Malaysia Bangi (UKMB) station. The water vapor content was estimated with the mixing ratio method, using the surface meteorological data as the parameter inputs. The accuracy of the water vapor content was validated against PWV from the ANFIS PWV model for the period of 20-23 December 2016. The result showed that the water vapor content has a similar trend to the PWV produced by the ANFIS PWV model (r = 0.975 at the 99% confidence level). This indicates that the water vapor content obtained with the mixing ratio method agreed very well with the ANFIS PWV model. In addition, this study also found that the patterns of water vapor content and PWV are influenced mostly by the relative humidity.
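The mixing ratio method referenced above follows a standard formula: the actual vapor pressure e is obtained from relative humidity and a saturation vapor pressure approximation (the Magnus form is assumed here, since the record does not specify one), and the mixing ratio is w = 0.622·e/(p − e):

```python
import math

def saturation_vapor_pressure_hpa(t_c):
    """Magnus approximation for saturation vapor pressure (hPa), T in deg C."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def mixing_ratio_g_per_kg(p_hpa, t_c, rh_pct):
    """Water vapor mixing ratio (g/kg) from pressure, temperature, RH."""
    e = (rh_pct / 100.0) * saturation_vapor_pressure_hpa(t_c)  # actual vapor pressure
    return 1000.0 * 0.622 * e / (p_hpa - e)

# Illustrative humid-tropical conditions, roughly like the UKMB station:
w = mixing_ratio_g_per_kg(p_hpa=1005.0, t_c=28.0, rh_pct=85.0)
```

The strong dependence on relative humidity visible in the formula is consistent with the study's finding that RH dominates both the water vapor content and the PWV pattern.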

  15. Rapid validated HPTLC method for estimation of betulinic acid in Nelumbo nucifera (Nymphaeaceae) rhizome extract.

    Science.gov (United States)

    Mukherjee, Debajyoti; Kumar, N Satheesh; Khatua, Taraknath; Mukherjee, Pulok K

    2010-01-01

    Betulinic acid (a pentacyclic triterpenoid) is an important marker component present in Nelumbo nucifera Gaertn. rhizome. N. nucifera rhizome has several medicinal uses including hypoglycaemic, antidiarrhoeal, antimicrobial, diuretic, antipyretic and psychopharmacological activities. The objective was to establish a simple, sensitive, reliable, rapid and validated high-performance thin-layer chromatography method for estimation of betulinic acid in a hydro-alcoholic extract of N. nucifera Gaertn. rhizome. The separation was carried out on a thin-layer chromatography aluminium plate pre-coated with silica gel 60F(254), eluted with chloroform, methanol and formic acid (49 : 1 : 1 v/v). Post-chromatographic derivatisation was done with anisaldehyde-sulphuric acid reagent and densitometric scanning was performed using a Camag TLC scanner III at 420 nm. The system was found to produce a compact spot for betulinic acid (R(f) = 0.30). A good linear relationship between the concentrations (2-10 µg) and peak areas was obtained, with a correlation coefficient (r) of 0.99698. The limit of detection and limit of quantification of betulinic acid were found to be 0.4 and 2.30 µg per spot, respectively. The percentage of recovery was found to be 98.36%. The percentage relative standard deviations of intra-day and inter-day precisions were 0.82-0.394 and 0.85-0.341, respectively. This validated HPTLC method provides a new and powerful approach to estimate betulinic acid as a phytomarker in the extract. Copyright © 2010 John Wiley & Sons, Ltd.
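LOD and LOQ figures like those above are conventionally derived from a calibration line via the ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the response and S the slope. The record does not state which estimation route was used, so the σ and slope values below are purely illustrative:

```python
def lod_loq(sigma, slope):
    """ICH-style detection and quantification limits from a calibration line.

    sigma: residual standard deviation of the response
    slope: slope of the calibration curve (response per unit amount)
    """
    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    return lod, loq

# Hypothetical calibration parameters (peak-area units per microgram):
lod, loq = lod_loq(sigma=12.0, slope=100.0)
```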

  16. Validation of the Male Osteoporosis Risk Estimation Score (MORES) in a primary care setting.

    Science.gov (United States)

    Cass, Alvah R; Shepherd, Angela J

    2013-01-01

    Primary care physicians are positioned to promote early recognition and treatment of men at risk for osteoporosis-related fractures; however, efficient screening strategies are needed. This study was designed to validate the Male Osteoporosis Risk Estimation Score (MORES) for identifying men at increased risk of osteoporosis. This was a blinded analysis of the MORES, administered prospectively in a cross-sectional sample of men aged 60 years or older. Participants completed a research questionnaire at an outpatient visit and had a dual-energy X-ray absorptiometry (DXA) scan to assess bone density. Sensitivity, specificity, and area under the curve (AUC) were estimated for the MORES. Effectiveness was assessed by the number needed to screen (NNS) to prevent one additional major osteoporotic fracture. A total of 346 men completed the study. The mean age was 70.2 ± 6.9 years; 76% were non-Hispanic white. Fifteen men (4.3%) had osteoporosis of the hip. The operating characteristics were sensitivity 0.80 (95% confidence interval [CI], 0.52-0.96), specificity 0.70 (95% CI, 0.64-0.74), and AUC 0.82 (95% CI, 0.71-0.92). Screening with the MORES yielded an NNS to prevent one additional major osteoporotic fracture over 10 years of 259 (95% CI, 192-449), compared to 636 for universal screening with DXA. This study validated the MORES as an effective and efficient approach to identifying men at increased risk of osteoporosis who may benefit from a diagnostic DXA scan.
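The operating characteristics reported above follow directly from a 2x2 screening table. The counts below are approximately reconstructed from the record's summary figures (15 cases of osteoporosis among 346 men, sensitivity 0.80, specificity 0.70) and are for illustration only:

```python
def operating_characteristics(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 screening confusion table."""
    sensitivity = tp / (tp + fn)   # cases correctly flagged by MORES
    specificity = tn / (tn + fp)   # non-cases correctly passed over
    return sensitivity, specificity

# Approximate reconstruction: 15 cases (12 detected), 331 non-cases.
sens, spec = operating_characteristics(tp=12, fn=3, tn=232, fp=99)
```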

  17. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-09-20

    These are slides from a seminar given to the University of New Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6, along with covariance files for the nuclear data, to determine a baseline upper-subcritical-limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today’s computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.

  18. A field-validated approach using surveillance and genotyping data to estimate tuberculosis attributable to recent transmission in the United States.

    Science.gov (United States)

    France, Anne Marie; Grant, Juliana; Kammerer, J Steve; Navin, Thomas R

    2015-11-01

    Tuberculosis genotyping data are frequently used to estimate the proportion of tuberculosis cases in a population that are attributable to recent transmission (RT). Multiple factors influence genotype-based estimates of RT and limit the comparison of estimates over time and across geographic units. Additionally, methods used for these estimates have not been validated against field-based epidemiologic assessments of RT. Here we describe a novel genotype-based approach to estimation of RT based on the identification of plausible-source cases, which facilitates systematic comparisons over time and across geographic areas. We compared this and other genotype-based RT estimation approaches with the gold standard of field-based assessment of RT based on epidemiologic investigation in Arkansas, Maryland, and Massachusetts during 1996-2000. We calculated the sensitivity and specificity of each approach for epidemiologic evidence of RT and calculated the accuracy of each approach across a range of hypothetical RT prevalence rates plausible for the United States. The sensitivity, specificity, and accuracy of genotype-based RT estimates varied by approach. At an RT prevalence of 10%, accuracy ranged from 88.5% for state-based clustering to 94.4% with our novel approach. Our novel, field-validated approach allows for systematic assessments over time and across public health jurisdictions of varying geographic size, with an established level of accuracy. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
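The accuracy figures quoted above combine each approach's sensitivity and specificity with an assumed prevalence of recent transmission. A minimal sketch of that relationship, using a hypothetical sensitivity/specificity pair rather than the paper's field-validated values:

```python
def accuracy_at_prevalence(sens, spec, prev):
    """Overall classification accuracy of a binary RT classifier at a given
    true prevalence: correct positives plus correct negatives."""
    return sens * prev + spec * (1.0 - prev)

# Hypothetical classifier evaluated at the paper's 10% RT prevalence point:
acc = accuracy_at_prevalence(sens=0.90, spec=0.95, prev=0.10)  # -> 0.945
```

Because RT prevalence in the United States is low, the specificity term dominates, which is why accuracy comparisons across approaches are sensitive to the assumed prevalence range.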

  19. EEG-based Workload Estimation Across Affective Contexts

    Directory of Open Access Journals (Sweden)

    Christian Mühl

    2014-06-01

    Full Text Available Workload estimation from electroencephalographic (EEG) signals offers a highly sensitive tool to adapt the human-computer interaction to the user state. To create systems that work reliably in the complexity of the real world, robustness against contextual changes (e.g., mood) has to be achieved. To study the resilience of state-of-the-art EEG-based workload classification against stress, we devise a novel experimental protocol in which we manipulated the affective context (stressful/non-stressful) while the participant solved a task with two workload levels. We recorded self-ratings, behavior, and physiology from 24 participants to validate the protocol. We test the capability of different, subject-specific workload classifiers using either frequency-domain, time-domain, or both feature varieties to generalize across contexts. We show that the classifiers are able to transfer between affective contexts, though performance suffers independent of the used feature domain. However, cross-context training is a simple and powerful remedy allowing the extraction of features, in all studied feature varieties, that are more resilient to task-unrelated variations in signal characteristics. Especially for frequency-domain features, cross-context training leads to a performance comparable to within-context training and testing. We discuss the significance of this result for neurophysiology-based workload detection in particular and for the construction of reliable passive brain-computer interfaces in general.

  20. Development and Validation of Self Instructional Computer Based ...

    African Journals Online (AJOL)

    The study is on the development and validation of a self-instructional computer-based package for teaching social studies in senior primary schools. The study investigated the effect on intellectual development and study habits of senior primary school students where social studies was taught with and without the ...

  1. A Rasch-Based Validation of the Vocabulary Size Test

    Science.gov (United States)

    Beglar, David

    2010-01-01

    The primary purpose of this study was to provide preliminary validity evidence for a 140-item form of the Vocabulary Size Test, which is designed to measure written receptive knowledge of the first 14,000 words of English. Nineteen native speakers of English and 178 native speakers of Japanese participated in the study. Analyses based on the Rasch…

  2. Validation of transition analysis as a method of adult age estimation in a modern South African sample.

    Science.gov (United States)

    Jooste, N; L'Abbé, E N; Pretorius, S; Steyn, M

    2016-09-01

    The use of advanced statistical methods, such as transition analysis, has transformed adult age estimation into a systematically and statistically appropriate practice. The method developed by Boldsen and colleagues (2002) uses 36 features from the cranial sutures, pubic symphysis and auricular surface to calculate maximum likelihood point estimates and 95% confidence intervals, using the ADBOU computer software. However, when using the method in a geographically and contextually distinct sample, such as South Africa, accuracy and precision is of concern. This study aimed to test the repeatability, accuracy and precision of the transition analysis method, using the ADBOU computer software, on a South African sample. Age estimations were generated, for 149 black individuals from the Pretoria Bone Collection, using three individual components as well as different combinations of components and prior distributions (uniform and informative). The informative prior distributions represented both an archaeological and a forensic context. Cohen's kappa statistic uncovered some failings in the scoring procedure. While the accuracy compared favourably with existing methods, the method lacked satisfactory precision. Although combining the components improved accuracy and precision, removing the cranium from the combination was beneficial in some instances. The influence of population variation was observed in the scoring procedure, reference sample and the prior distributions. Validity may be improved for a South African sample by adding age-related components that have been developed on a relevant population. A prior distribution based on South African mortality rates might also be beneficial. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. The development and validation of new equations for estimating body fat percentage among Chinese men and women.

    Science.gov (United States)

    Liu, Xin; Sun, Qi; Sun, Liang; Zong, Geng; Lu, Ling; Liu, Gang; Rosner, Bernard; Ye, Xingwang; Li, Huaixing; Lin, Xu

    2015-05-14

    Equations based on simple anthropometric measurements to predict body fat percentage (BF%) are lacking for the Chinese population, in which the prevalence of obesity and related abnormalities is increasing. We aimed to develop and validate BF% equations in two independent population-based samples of Chinese men and women. The equations were developed among 960 Chinese Hans living in Shanghai (age 46.2 (SD 5.3) years; 36.7% male) using stepwise linear regression and were subsequently validated in 1150 Shanghai residents (58.7 (SD 6.0) years; 41.7% male; 99% Chinese Hans, 1% Chinese minorities). The associations of equation-derived BF% with changes in 6-year cardiometabolic outcomes and incident type 2 diabetes (T2D) were evaluated in a sub-cohort of 780 Chinese, compared with BF% measured by dual-energy X-ray absorptiometry (DXA; BF%-DXA). Sex-specific equations were established with age, BMI and waist circumference as independent variables. The BF% calculated using the new sex-specific equations (BF%-CSS) was in reasonable agreement with BF%-DXA (mean difference: 0.08 (2 SD 6.64)%, P = 0.606 in men; 0.45 (2 SD 6.88)% in women). These equations might be used as surrogates for DXA to estimate BF% among adult Chinese. More studies are needed to evaluate the application of our equations in different populations.
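The agreement statistics quoted above, a mean difference with 2 SD limits, follow the Bland-Altman pattern. A minimal sketch of computing such limits on synthetic paired BF% values (the bias and spread are invented, not the study's data):

```python
import random, statistics

random.seed(1)
# Synthetic paired BF% values: a reference method and an equation-based estimate.
ref = [random.gauss(28.0, 6.0) for _ in range(500)]   # "DXA" reference
eq = [r + random.gauss(0.1, 3.3) for r in ref]        # "equation" estimate

diffs = [e - r for e, r in zip(eq, ref)]
bias = statistics.mean(diffs)                         # mean difference
sd = statistics.stdev(diffs)
lower, upper = bias - 2.0 * sd, bias + 2.0 * sd       # the "(2 SD)" limits
coverage = sum(lower <= d <= upper for d in diffs) / len(diffs)
print(f"bias={bias:.2f} limits=({lower:.2f}, {upper:.2f}) coverage={coverage:.1%}")
```

For approximately normal differences, roughly 95% of paired differences should fall inside the 2 SD limits, which is what "reasonable agreement" summarizes here.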

  4. Development and validation of QuEChERS method for estimation of chlorantraniliprole residue in vegetables.

    Science.gov (United States)

    Singh, Balwinder; Kar, Abhijit; Mandal, Kousik; Kumar, Rajinder; Sahoo, Sanjay Kumar

    2012-12-01

    An easy, simple and efficient analytical method was standardized and validated for the estimation of residues of chlorantraniliprole in different vegetables comprising brinjal, cabbage, capsicum, cauliflower, okra, and tomato. The QuEChERS method was used for the extraction and cleanup of chlorantraniliprole residues from these vegetables. Final clear extracts in ethyl acetate were concentrated under vacuum and reconstituted in high-performance liquid chromatography (HPLC) grade acetonitrile, and residues were estimated using HPLC equipped with a PDA detector system and a C(18) column, and confirmed by liquid chromatography-mass spectrometry (LC-MS/MS) and high-performance thin-layer chromatography (HPTLC). HPLC grade acetonitrile:water (80:20, v/v) was used as the mobile phase at 0.4 mL/min. Chlorantraniliprole presented a distinct peak at a retention time of 9.82 min. Consistent recoveries ranging from 85% to 96% for chlorantraniliprole were observed when samples were spiked at 0.10, 0.25, 0.50, and 1.00 mg/kg levels. The limit of quantification of this method was determined to be 0.10 mg/kg. © 2012 Institute of Food Technologists®

  5. VALIDITY OF FIELD TESTS TO ESTIMATE CARDIORESPIRATORY FITNESS IN CHILDREN AND ADOLESCENTS: A SYSTEMATIC REVIEW

    Science.gov (United States)

    Batista, Mariana Biagi; Romanzini, Catiana Leila Possamai; Castro-Piñero, José; Ronque, Enio Ricardo Vaz

    2017-01-01

    ABSTRACT Objective: To systematically review the literature to verify the validity of field tests to evaluate cardiorespiratory fitness (CRF) in children and adolescents. Data sources: The electronic search was conducted in the databases Medline (PubMed), SPORTDiscus, Scopus and Web of Science, in addition to the Latin American databases LILACS and SciELO. The search comprised the period from the inception of each database until February 2015, in English and Portuguese. All stages of the process were performed in accordance with the PRISMA flow diagram. Data synthesis: After confirming the inclusion criteria, eligibility, and quality of the studies, 43 studies were analyzed in full; 38 obtained through the searches in the electronic databases, and 5 through private libraries and references from other articles. Of the total studies, only 13 were considered high quality according to the adopted criteria. The most commonly investigated test in the literature was the 20-meter shuttle run (SR-20 m), accounting for 23 studies, followed by tests of distances between 550 meters and 1 mile, in 9 studies, timed tests of 6, 9, and 12 minutes, also 9 studies, and finally bench protocols and new test proposals, represented in 7 studies. Conclusions: The SR-20 m test seems to be the most appropriate for evaluating the CRF of young people, with the Barnett equation recommended to estimate VO2 peak. As an alternative for evaluating CRF, the 1-mile test is indicated, with the equation proposed by Cureton for estimating VO2 peak. PMID:28977338

  6. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis.

    Science.gov (United States)

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús

    2014-01-01

    The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of sit-and-reach tests. Firstly, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but a low mean for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for hamstring extensibility estimation, but not for estimating lumbar extensibility.
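The core Hunter-Schmidt steps, an n-weighted mean correlation, an estimate of sampling-error variance, and correction for attenuation by measurement reliability, can be sketched with invented study data and reliabilities (these are not the meta-analysis's values):

```python
import math

# Invented study-level results: (sample size, observed correlation r).
studies = [(30, 0.55), (120, 0.62), (45, 0.48), (80, 0.70)]

n_total = sum(n for n, _ in studies)
r_bar = sum(n * r for n, r in studies) / n_total       # n-weighted mean correlation

# Observed variance of r across studies vs. expected sampling-error variance.
var_r = sum(n * (r - r_bar) ** 2 for n, r in studies) / n_total
n_avg = n_total / len(studies)
var_e = (1.0 - r_bar ** 2) ** 2 / (n_avg - 1.0)        # Hunter-Schmidt approximation
var_rho = max(var_r - var_e, 0.0)                      # residual "true" variance

# Correction for attenuation, given assumed reliabilities of the two measures.
rxx, ryy = 0.90, 0.85
r_corrected = r_bar / math.sqrt(rxx * ryy)
print(round(r_bar, 2), round(r_corrected, 2))
```

When the residual variance is near zero, the spread of observed correlations is explained by sampling error alone, which is the cue to look for moderators only when it is not.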

  8. Practical application of uncertainty-based validation assessment

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, M. C. (Mark C.); Hylok, J. E. (Jeffrey E.); Maupin, R. D. (Ryan D.); Rutherford, A. C. (Amanda C.)

    2004-01-01

    comparison between analytical and experimental data; (4) Selection of a comprehensive, but tenable set of parameters for uncertainty propagation; and (5) Limitations of modeling capabilities and the finite element method for approximating high frequency dynamic behavior of real systems. This paper illustrates these issues by describing the details of the validation assessment for an example system. The system considered is referred to as the 'threaded assembly'. It consists of a titanium mount to which a lower mass is attached by a tape joint, an upper mass is connected via bolted joints, and a pair of aluminum shells is attached via a complex threaded joint. The system is excited impulsively by an explosive load applied over a small area of the aluminum shells. The validation assessment of the threaded assembly is described systematically so that the reader can see the logic behind the process. The simulation model is described to provide context. The feature and parameter selection processes are discussed in detail because they determine not only a large measure of the efficacy of the process, but its cost as well. The choice of uncertainty propagation method for the simulation is covered in some detail and results are presented. Validation experiments are described and results are presented along with experimental uncertainties. Finally, simulation results are compared with experimental data, and conclusions about the validity of these results are drawn within the context of the estimated uncertainties.

  9. Validity in work-based assessment: expanding our horizons.

    Science.gov (United States)

    Govaerts, Marjan; van der Vleuten, Cees P M

    2013-12-01

    Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence that indicates that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that the meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings. Drawing from research in various professional fields we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to WBA. In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) as well as performance interpretations are to be seen as inherently contextualised, and can only be understood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act. We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. Theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the theoretical framework and conceptual tools to guide the validation process in the qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these

  10. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  11. Evaluation of satellite-based evapotranspiration estimates in China

    Science.gov (United States)

    Huang, Lei; Li, Zhe; Tang, Qiuhong; Zhang, Xuejun; Liu, Xingcai; Cui, Huijuan

    2017-04-01

    Accurate and continuous estimation of evapotranspiration (ET) is crucial for effective water resource management. We used the moderate resolution imaging spectroradiometer (MODIS) standard ET algorithm, forced by the MODIS land products and three-hourly solar radiation datasets, to estimate the daily actual evapotranspiration of China (ET_MOD) for the years 2001 to 2015. Point-scale validation at seven eddy covariance tower sites showed that ET_MOD estimates agreed with observations more closely for monthly and daily values than for instantaneous values. In comparisons with variable infiltration capacity hydrological model estimates at the major river basin and subbasin levels, ET_MOD exhibited a slight overestimation in northern China and underestimation in southern China. The mean annual ET_MOD estimates agreed favorably with the hydrological model, with coefficients of determination (R2) of 0.93 and 0.83 at the major river basin and subbasin scales, respectively. At the national scale, the spatiotemporal variations of ET_MOD estimates matched well with ET estimates from various sources. However, ET_MOD estimates were generally lower than the other estimates over the Tibetan Plateau. This underestimation may be attributed to the plateau climate, with low air temperature and a sparsely vegetated surface.
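Basin-scale agreement here is summarized with the coefficient of determination. A minimal sketch of that statistic on invented annual-ET pairs (the numbers are illustrative, not the study's basins):

```python
def r_squared(obs, mod):
    """Coefficient of determination of model values against observations."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - m) ** 2 for o, m in zip(obs, mod))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Invented annual ET totals (mm/yr) for five hypothetical basins.
vic_et = [450.0, 620.0, 380.0, 710.0, 530.0]   # hydrological-model "reference"
mod_et = [470.0, 600.0, 360.0, 730.0, 520.0]   # satellite-based estimate
r2 = r_squared(vic_et, mod_et)
print(round(r2, 3))
```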

  12. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    Energy Technology Data Exchange (ETDEWEB)

    Hines, J. Wesley [Univ. of Tennessee, Knoxville, TN (United States); Upadhyaya, Belle [Univ. of Tennessee, Knoxville, TN (United States); Sharp, Michael [Univ. of Tennessee, Knoxville, TN (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jeffries, Brien [Univ. of Tennessee, Knoxville, TN (United States); Nam, Alan [Univ. of Tennessee, Knoxville, TN (United States); Strong, Eric [Univ. of Tennessee, Knoxville, TN (United States); Tong, Matthew [Univ. of Tennessee, Knoxville, TN (United States); Welz, Zachary [Univ. of Tennessee, Knoxville, TN (United States); Barbieri, Federico [Univ. of Tennessee, Knoxville, TN (United States); Langford, Seth [Univ. of Tennessee, Knoxville, TN (United States); Meinweiser, Gregory [Univ. of Tennessee, Knoxville, TN (United States); Weeks, Matthew [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-06

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. 
The ultimate goal of prognostics is to provide an accurate assessment for

  13. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    Science.gov (United States)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  14. Synthesis and Validation of Vision Based Spacecraft Navigation

    DEFF Research Database (Denmark)

    Massaro, Alessandro Salvatore

    This dissertation targets spacecraft navigation by means of vision based sensors. The goal is to achieve autonomous, robust and efficient navigation through a multidisciplinary research and development effort, covering the fields of computer vision, electronics, optics and mechanics. The attention...... parameters of the camera system. In connection with the PRISMA experimental mission for rendezvous and docking and formation flight, DTU Space has implemented, flown and validated the Vision Based Sensor (VBS). This sensor has required development of novel techniques for calibration of the target optical model...... to verify algorithms for asteroid detection, installed on the Juno spacecraft on its way to Jupiter. Another important outcome of the R&D effort of this project has been the integration of a calibration and validation facility for the vision based sensors developed at DTU Space. The author's work has...

  15. Parameter estimation based synchronization for an epidemic model with application to tuberculosis in Cameroon

    Science.gov (United States)

    Bowong, Samuel; Kurths, Jurgen

    2010-10-01

    We propose a method based on synchronization to identify the parameters and to estimate the underlying variables of an epidemic model from real data. We suggest an adaptive synchronization method based on an observer approach, with an effective guidance parameter in the update-rule design derived only from real data. In order to validate the identifiability and estimation results, numerical simulations of a tuberculosis (TB) model using real data from the Center Region of Cameroon are performed to estimate the parameters and variables. This study shows that tools from the synchronization of nonlinear systems can help to deal with the parameter and state estimation problem in the field of epidemiology. We exploit the close link between mathematical modelling, structural identifiability analysis, synchronization, and parameter estimation to obtain biological insights into the system modelled.
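The observer idea can be illustrated on a toy SIR model: a copy of the model is driven by the synchronization error, while an adaptation law updates the unknown transmission rate. Everything below (model, gains, values) is an illustrative sketch, not the paper's TB model:

```python
# Observer-based adaptive parameter estimation on a simple SIR model.
beta_true, gamma = 0.5, 0.1      # "unknown" transmission rate, known recovery rate
dt, steps = 0.01, 20000

s, i = 0.99, 0.01                # true states, standing in for measured data
i_hat, beta_hat = i, 1.0         # observer state and initial parameter guess
k, g = 2.0, 50.0                 # observer gain and adaptation gain

for _ in range(steps):
    ds = -beta_true * s * i                             # "real" system generating data
    di = beta_true * s * i - gamma * i
    e = i - i_hat                                       # output (synchronization) error
    di_hat = beta_hat * s * i - gamma * i_hat + k * e   # observer copy of the model
    dbeta = g * s * i * e                               # Lyapunov-based adaptation law
    s, i = s + dt * ds, i + dt * di
    i_hat += dt * di_hat
    beta_hat += dt * dbeta

print(round(beta_hat, 2))
```

With the Lyapunov function V = e²/2 + (β − β̂)²/2g, the output error is guaranteed to decay, and the parameter estimate converges while the epidemic keeps the regressor s·i excited.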

  16. Parameter estimation based synchronization for an epidemic model with application to tuberculosis in Cameroon

    Energy Technology Data Exchange (ETDEWEB)

    Bowong, Samuel, E-mail: sbowong@gmail.co [Laboratory of Applied Mathematics, Department of Mathematics and Computer Science, Faculty of Science, University of Douala, P.O. Box 24157 Douala (Cameroon); Potsdam Institute for Climate Impact Research (PIK), Telegraphenberg A 31, 14412 Potsdam (Germany)]; Kurths, Jurgen [Potsdam Institute for Climate Impact Research (PIK), Telegraphenberg A 31, 14412 Potsdam (Germany); Department of Physics, Humboldt Universität zu Berlin, 12489 Berlin (Germany)]

    2010-10-04

    We propose a method based on synchronization to identify the parameters and to estimate the underlying variables of an epidemic model from real data. We suggest an adaptive synchronization method based on an observer approach, with an effective guidance parameter in the update-rule design derived only from real data. In order to validate the identifiability and estimation results, numerical simulations of a tuberculosis (TB) model using real data from the Center Region of Cameroon are performed to estimate the parameters and variables. This study shows that tools from the synchronization of nonlinear systems can help to deal with the parameter and state estimation problem in the field of epidemiology. We exploit the close link between mathematical modelling, structural identifiability analysis, synchronization, and parameter estimation to obtain biological insights into the system modelled.

  17. The observer-based synchronization and parameter estimation of a ...

    Indian Academy of Sciences (India)

    Observer-based synchronization and parameter estimation of chaotic systems has been an interesting and important issue in theory and various fields of application. In this paper first we investigate the observer-based synchronization of a class of chaotic systems, and then discuss its parameter estimation via a single ...

  18. A Bayesian compositional estimator for microbial taxonomy based on biomarkers

    NARCIS (Netherlands)

    Van den Meersche, K.; Middelburg, J.J.; Soetaert, K.E.R.

    2008-01-01

    Determination of microbial taxonomy based on lipid or pigment spectra requires use of a compositional estimator. We present a new approach based on Bayesian inference and an implementation in the open software platform R. The Bayesian Compositional Estimator (BCE) aims not only to obtain a maximum
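The underlying linear model, a biomarker spectrum equal to a signature matrix times a composition constrained to the simplex, can be sketched with a discretized posterior for two taxa. All numbers below are invented; this is not the BCE's MCMC implementation:

```python
import math, random

random.seed(0)
# Rows = taxa, columns = biomarkers (e.g. pigment fractions); values are invented.
A = [[0.8, 0.1, 0.1],
     [0.2, 0.5, 0.3]]
x_true = [0.7, 0.3]                  # true composition, constrained to sum to 1
sigma = 0.02                         # assumed Gaussian measurement noise
b = [A[0][m] * x_true[0] + A[1][m] * x_true[1] + random.gauss(0.0, sigma)
     for m in range(3)]

# With two taxa the simplex is one-dimensional: x = (p, 1 - p).
# Discretize it and apply Bayes' rule with a flat prior on p.
grid = [j / 1000 for j in range(1001)]

def log_lik(p):
    return sum(-(b[m] - (A[0][m] * p + A[1][m] * (1.0 - p))) ** 2
               / (2.0 * sigma ** 2) for m in range(3))

lls = [log_lik(p) for p in grid]
mx = max(lls)
w = [math.exp(l - mx) for l in lls]                  # unnormalized posterior weights
z = sum(w)
p_mean = sum(p * wi for p, wi in zip(grid, w)) / z   # posterior mean composition
print(round(p_mean, 2))
```

For more taxa the simplex is higher-dimensional and a sampler (as in the BCE's R implementation) replaces the grid, but the likelihood-times-prior structure is the same.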

  19. Wavelet-Variance-Based Estimation for Composite Stochastic Processes.

    Science.gov (United States)

    Guerrier, Stéphane; Skaloud, Jan; Stebler, Yannick; Victoria-Feser, Maria-Pia

    2013-09-01

    This article presents a new estimation method for the parameters of a time series model. We consider here composite Gaussian processes that are the sum of independent Gaussian processes which, in turn, explain an important aspect of the time series, as is the case in engineering and natural sciences. The proposed estimation method offers an alternative to classical likelihood-based estimation that is straightforward to implement and often the only feasible estimation method with complex models. The estimator is obtained by optimizing a criterion based on a standardized distance between the sample wavelet variance (WV) estimates and the model-based WV. Indeed, the WV provides a decomposition of the process variance across different scales, so that it contains information about different features of the stochastic model. We derive the asymptotic properties of the proposed estimator for inference and perform a simulation study comparing our estimator to the MLE and the LSE under different models. We also set sufficient conditions on composite models for our estimator to be consistent, which are easy to verify. We use the new estimator to estimate the stochastic error parameters of the sum of three first-order Gauss-Markov processes by means of a sample of over 800,000 observations issued from gyroscopes that compose inertial navigation systems. Supplementary materials for this article are available online.
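The matching of sample wavelet variances to model-implied ones can be illustrated with white noise, for which the Haar wavelet variance is proportional to the Allan variance and the model-implied Allan variance at averaging length m is sigma²/m. A sketch under those assumptions, not the authors' implementation:

```python
import random

random.seed(42)
sigma2_true = 1.0
x = [random.gauss(0.0, sigma2_true ** 0.5) for _ in range(100_000)]  # white noise

def allan_var(series, m):
    """Non-overlapping Allan variance at averaging length m."""
    k = len(series) // m
    means = [sum(series[j * m:(j + 1) * m]) / m for j in range(k)]
    diffs = [means[j + 1] - means[j] for j in range(k - 1)]
    return 0.5 * sum(d * d for d in diffs) / len(diffs)

scales = [1, 2, 4, 8, 16, 32]
sample_av = [allan_var(x, m) for m in scales]

# For white noise the model-implied Allan variance is sigma^2 / m, so the
# distance-matching step reduces to least squares in sigma^2 across scales.
sigma2_hat = (sum(a / m for a, m in zip(sample_av, scales))
              / sum(1.0 / m ** 2 for m in scales))
print(round(sigma2_hat, 2))
```

For a composite model, each component contributes its own scale signature to the model-implied variances and the fit becomes a multi-parameter optimization, but the criterion is the same.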

  20. Validation of equations to estimate body density of soccer players, category sub-20

    Directory of Open Access Journals (Sweden)

    Paulo Henrique Santos da Fonseca

    2003-12-01

    The purpose of this study was to validate equations developed from different populations for estimating the body density of soccer players from the south of Brazil, in order to determine their accuracy for this category of athletes. Thirty-one national and international equations were analyzed. The sample was composed of 25 male soccer players, aged 16 to 20 years, from the state of Rio Grande do Sul. They had trained in soccer for at least 1.5 years and practiced 5 times a week, 4 hours per session. For cross-validation, hydrostatic weighing was used as the reference standard and the criteria suggested by Lohman (1992) were applied: Student's t test for dependent samples, Pearson's linear correlation coefficient, analysis of standard deviations, and standard error of estimate. The equations analyzed tended to underestimate body density and, as such, are not valid for estimating body density in soccer players, producing errors when evaluating the body composition of individuals with the same characteristics as the sample studied here.

  1. Ethnic variation in validity of the estimated obesity prevalence using self-reported weight and height measurements

    Directory of Open Access Journals (Sweden)

    Verhoeff Arnoud P

    2011-05-01

    Abstract Background We examined ethnic differences in body mass index (BMI) levels based on self-reported versus measured body height and weight, and the validity of self-reports used to estimate the prevalence of obesity (BMI ≥ 30 kg/m²), in Turkish, Moroccan, and Dutch people in the Netherlands. Furthermore, we investigated whether BMI levels and the prevalence of obesity in Turkish and Moroccan people with incomplete self-reports (missing height or weight) differ from those with complete self-reports. Methods Data on self-reported and measured height and weight were collected in a population-based survey among 441 Dutch, 414 Turks and 344 Moroccans aged 18 to 69 years in Amsterdam, the Netherlands in 2004. BMI and obesity were calculated from self-reported and measured height and weight. Results The difference between measured and estimated BMI was larger in Turkish and Moroccan women than in Dutch women, which was explained by the higher BMI of the Turkish and Moroccan women. In men we found no ethnic differences between measured and estimated BMI. Sensitivity to detect obesity was low and specificity was high. In participants with available self-reported and measured height and weight, self-reports produced a similar underestimation of the obesity prevalence in all ethnic groups. However, many obese Turkish and Moroccan women had incomplete self-reports, missing height or weight, resulting in an additional underestimation of the prevalence of obesity. Among men (all ethnicities) and Dutch women, the availability of height or weight by self-report did not differ between obese and non-obese participants. Conclusions BMI based on self-reports is underestimated more by Turkish and Moroccan women than by Dutch women, which is explained by the higher BMI of Turkish and Moroccan women.
Further, in women, ethnic differences in the estimation of obesity prevalence based on self-reports do exist and are due to incomplete self-reports in obese Turkish and
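The sensitivity and specificity of self-reported obesity against measured status reduce to a 2×2 confusion table. A sketch on a tiny invented example (not the Amsterdam survey data):

```python
# Sensitivity = obese correctly self-reported; specificity = non-obese correctly
# self-reported. The ten paired classifications below are invented.
def sens_spec(measured, reported):
    tp = sum(m and r for m, r in zip(measured, reported))
    fn = sum(m and not r for m, r in zip(measured, reported))
    tn = sum(not m and not r for m, r in zip(measured, reported))
    fp = sum(not m and r for m, r in zip(measured, reported))
    return tp / (tp + fn), tn / (tn + fp)

measured = [True, True, True, True, False, False, False, False, False, False]
reported = [True, True, False, False, False, False, False, False, False, True]
sensitivity, specificity = sens_spec(measured, reported)
print(sensitivity, specificity)   # low sensitivity, high specificity
```

Incomplete self-reports concentrated among obese participants, as reported above, would remove true positives from the table and push the estimated prevalence down further.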

  2. A quantitative approach for sex estimation based on cranial morphology.

    Science.gov (United States)

    Nikita, Efthymia; Michopoulou, Efrossyni

    2017-12-19

    This paper proposes a method for the quantification of the shape of sexually dimorphic cranial traits, namely the glabella, mastoid process and external occipital protuberance. The proposed method was developed using 165 crania from the documented Athens Collection and tested on 20 Cretan crania. It is based on digital photographs of the lateral view of the cranium, drawing of the profile of three sexually dimorphic structures and calculation of variables that express the shape of these structures. The combinations of variables that provide optimum discrimination between sexes are identified by means of binary logistic regression and discriminant analysis. The best cross-validated results are obtained when variables from all three structures are combined and range from 75.8 to 85.1% and 81.1 to 94.6% for males and females, respectively. The success rate is 86.3-94.1% for males and 83.9-93.5% for females when half of the sample is used for training and the rest for prediction. Correct classification for the Cretan material based upon the standards developed for the Athens sample was 80-90% for the optimum combinations of discriminant variables. The proposed method provides an effective way to capture quantitatively the shape of sexually dimorphic cranial structures; it gives more accurate results relative to other existing methods and it does not require specialized equipment. Equations for sex estimation based on combinations of variables are provided, along with instructions on how to use the method and Excel macros for calculation of discriminant variables with automated implementation of the optimum equations. © 2017 Wiley Periodicals, Inc.
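The classification step, binary logistic regression on shape variables with half the sample used for training and half for prediction, can be sketched on invented data (this is not the published discriminant standard):

```python
import math, random

random.seed(7)
# Invented "shape" variables for two groups; males shifted up on both axes.
def sample(n, mu):
    return [[random.gauss(mu[0], 1.0), random.gauss(mu[1], 1.0)] for _ in range(n)]

X = sample(100, [1.5, 1.0]) + sample(100, [-1.5, -1.0])
y = [1] * 100 + [0] * 100                      # 1 = male, 0 = female
idx = list(range(200))
random.shuffle(idx)
train, test = idx[:100], idx[100:]             # half training, half prediction

w = [0.0, 0.0, 0.0]                            # intercept + two slopes
for _ in range(2000):                          # plain gradient descent
    grad = [0.0, 0.0, 0.0]
    for j in train:
        z = w[0] + w[1] * X[j][0] + w[2] * X[j][1]
        err = 1.0 / (1.0 + math.exp(-z)) - y[j]
        grad[0] += err
        grad[1] += err * X[j][0]
        grad[2] += err * X[j][1]
    w = [wi - 0.05 * gi / len(train) for wi, gi in zip(w, grad)]

def predict(j):
    return w[0] + w[1] * X[j][0] + w[2] * X[j][1] > 0.0

accuracy = sum(predict(j) == (y[j] == 1) for j in test) / len(test)
print(accuracy)
```

Cross-validated rates like the 86-94% reported above come from exactly this kind of held-out evaluation, with the fitted coefficients then frozen into scoring equations.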

  3. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis

    OpenAIRE

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús

    2014-01-01

    The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstrings and/or lumbar extensibility criterion measure...

  4. Access Based Cost Estimation for Beddown Analysis

    Science.gov (United States)

    2006-03-23

    everyone in Air Mobility Command Planning and Programs Requirements Division that contributed to this effort: especially Major Brad Buckman and...investigate available on-line sources of data and other existing databases. The necessary protocols and network access authorizations must be...System (C2IPS), and Standard Base Supply System (SBSS), etc. Figure 2 outlines the basic architecture and interface protocols . The system provides

  5. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic, complex behaviour. Studying such phenomena in the laboratory is costly and in most cases impossible; miniaturizing them within a model in order to simulate the real phenomena is therefore a reasonable, scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates users' growing interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and applied to a wider range of applications than a traditional simulation. A key challenge, however, is validation and verification: because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify such models with conventional validation methods, so finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  6. Land Surface Albedo Estimation from Chinese HJ Satellite Data Based on the Direct Estimation Approach

    Directory of Open Access Journals (Sweden)

    Tao He

    2015-05-01

    Monitoring surface albedo at medium-to-fine resolution (<100 m) has become increasingly important for medium-to-fine scale applications and coarse-resolution data evaluation. This paper presents a method for estimating surface albedo directly from top-of-atmosphere reflectance. This is the first attempt to derive surface albedo for both snow-free and snow-covered conditions from medium-resolution data with a single approach. We applied this method to the multispectral data from the wide-swath Chinese HuanJing (HJ) satellites at a spatial resolution of 30 m to demonstrate the feasibility of these data for surface albedo monitoring over rapidly changing surfaces. Validation against ground measurements shows that the method is capable of accurately estimating surface albedo over both snow-free and snow-covered surfaces, with an overall root mean square error (RMSE) of 0.030 and r-square (R2) of 0.947. The comparison between HJ albedo estimates and the Moderate Resolution Imaging Spectroradiometer (MODIS) albedo product suggests that the HJ data and proposed algorithm can generate robust albedo estimates over various land cover types, with an RMSE of 0.011–0.014. The accuracy of HJ albedo estimation improves as the view zenith angle increases, which further demonstrates the unique advantage of wide-swath satellite data in albedo estimation.
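The RMSE and R2 figures quoted above are standard agreement metrics. A minimal sketch of how they are computed, using invented albedo pairs rather than HJ retrievals or the study's ground measurements:

```python
def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def r_squared(pred, obs):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# Invented ground-truth vs retrieved albedo pairs, for illustration only.
obs  = [0.12, 0.15, 0.35, 0.60, 0.80]
pred = [0.13, 0.14, 0.33, 0.62, 0.78]
```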

  7. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  8. Validity of anthropometric equations for the estimation of body fat in older adults from southern Brazil

    Directory of Open Access Journals (Sweden)

    Cassiano Ricardo Rech

    2010-12-01

    http://dx.doi.org/10.5007/1980-0037.2010v12n1p1 The objective of this study was to analyze the cross-validity of anthropometric equations for the estimation of body fat (%F) in Brazilian elderly individuals. A total of 180 elderly individuals (120 women and 60 men) ranging in age from 60 to 81 years were studied. Height, body weight and body perimeters were measured. Seven equations based on anthropometric measures were tested using the cross-validation criteria suggested by Lohman. Bland-Altman plots were used to determine the agreement of the equations with the reference method, with %F estimated by dual-energy X-ray absorptiometry (DEXA) serving as a reference. The body mass index of the group studied ranged from 18.4 to 39.3 kg/m2. Mean %F was 23.1% in men and 37.3% in women (range: 6 to 51.4%). The results showed no difference (p>0.05) between the equations of Tran and Weltman and Deurenberg et al. and the DEXA measurement for men, with agreement of 68.2% (r=0.78) and 72.8% (r=0.74), respectively. For women, the equations of Tran and Weltman and Gonçalves did not differ from the DEXA measurement (p>0.05), with agreement of 66.0% (r=0.76) and 72.9% (r=0.75), respectively. In conclusion, the anthropometric equations proposed in the literature differ in their capacity to predict %F. Caution in using %F-predictive anthropometric equations in the elderly is advised.
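The Bland-Altman analysis used in studies like this reduces to a mean difference (bias) and 1.96 SD limits of agreement between the two methods. A minimal sketch with invented %F pairs, not the study's data:

```python
def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented paired %F values: reference method (DEXA) vs an equation.
dexa     = [30.1, 35.2, 28.4, 40.0, 33.3, 37.8]
equation = [31.0, 34.8, 29.5, 38.9, 34.0, 36.9]
bias, lo, hi = bland_altman(equation, dexa)
```

A bias near zero with narrow limits indicates agreement; a bias that grows with the mean of the pair would indicate proportional bias.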

  9. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry – BREALD-30

    Science.gov (United States)

    Junkes, Monica C.; Fraiz, Fabian C.; Sardenberg, Fernanda; Lee, Jessica Y.; Paiva, Saul M.; Ferreira, Fernanda M.

    2015-01-01

    Objective The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. Methods After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. Results The BREALD-30 demonstrated good internal reliability. Cronbach’s alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent’s perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent’s perception regarding his/her child's oral health remained significant in the multivariate analysis. Conclusion The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil. PMID:26158724
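Cronbach's alpha, the internal-reliability statistic reported for the BREALD-30, is computed from per-item variances and the variance of total scores. A minimal sketch; the item scores below are synthetic, not BREALD responses:

```python
def cronbach_alpha(items):
    """items: one inner list of respondent scores per questionnaire item."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Three synthetic binary items scored by five respondents.
items = [[1, 1, 0, 1, 1], [1, 1, 0, 1, 0], [1, 0, 0, 1, 1]]
alpha = cronbach_alpha(items)
```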

  10. Validation and Expected Error Estimation of Suomi-NNP VIIRS Aerosol Optical Thickness and Angstrom Exponent with AERONET

    Science.gov (United States)

    Huang, Jingfeng; Kondragunta, Shobha; Laszlo, Istvan; Liu, Hongqing; Remer, Lorraine A.; Zhang, Hai; Superczynski, Stephen; Ciren, Pubu; Holben, Brent N.; Petrenko, Maksym

    2016-01-01

    The new-generation polar-orbiting operational environmental sensor, the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite, provides critical daily global aerosol observations. As older satellite sensors age out, the VIIRS aerosol product will become the primary observational source for global assessments of aerosol emission and transport, aerosol meteorological and climatic effects, air quality monitoring, and public health. To prove their validity and to assess their maturity level, the VIIRS aerosol products were compared to the spatiotemporally matched Aerosol Robotic Network (AERONET) measurements. Over land, the VIIRS aerosol optical thickness (AOT) environmental data record (EDR) exhibits an overall global bias against AERONET of 0.0008, with a root-mean-square error (RMSE) of the biases of 0.12. Over ocean, the mean bias of the VIIRS AOT EDR is 0.02 with an RMSE of 0.06. The mean bias of the VIIRS Ocean Angstrom Exponent (AE) EDR is 0.12 with an RMSE of 0.57. The matchups between each product and its AERONET counterpart allow estimates of expected error in each case. Increased uncertainty in the VIIRS AOT and AE products is linked to specific regions, seasons, surface characteristics, and aerosol types, suggesting opportunities for future modifications as understanding of algorithm assumptions improves. Based on the assessment, the VIIRS AOT EDR over land reached Validated maturity beginning 23 January 2013; the AOT EDR and AE EDR over ocean reached Validated maturity beginning 2 May 2012, excluding the processing error period 15 October to 27 November 2012. These findings demonstrate the integrity and usefulness of the VIIRS aerosol products, which will transition from S-NPP to future polar-orbiting environmental satellites in the decades to come and become the standard global aerosol data set as the previous-generation missions come to an end.

  11. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

    Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.

  12. OWL-based reasoning methods for validating archetypes.

    Science.gov (United States)

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual model-based architecture in one modeling framework which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Daniel Mayorga-Vega

    2014-03-01

    The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstrings and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate population criterion-related validity of sit-and-reach tests. Firstly, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but a low mean for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for estimating hamstring extensibility, but not lumbar extensibility.
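The Hunter-Schmidt correction for measurement error applied in meta-analyses of this kind divides an observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch with illustrative numbers, not values from the paper:

```python
import math

def deattenuate(r_observed, rel_test, rel_criterion):
    """Corrected correlation rp = r / sqrt(rxx * ryy)."""
    return r_observed / math.sqrt(rel_test * rel_criterion)

# Illustrative only: an observed r of 0.55 with reliabilities 0.95 and 0.90.
rp = deattenuate(0.55, 0.95, 0.90)
```

Because reliabilities are at most 1, the corrected coefficient is always at least as large as the observed one.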

  14. Validation and Uncertainty Estimates for MODIS Collection 6 "Deep Blue" Aerosol Data

    Science.gov (United States)

    Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Jeong, M.-J.

    2013-01-01

    The "Deep Blue" aerosol optical depth (AOD) retrieval algorithm was introduced in Collection 5 of the Moderate Resolution Imaging Spectroradiometer (MODIS) product suite, and complemented the existing "Dark Target" land and ocean algorithms by retrieving AOD over bright arid land surfaces, such as deserts. The forthcoming Collection 6 of MODIS products will include a "second generation" Deep Blue algorithm, expanding coverage to all cloud-free and snow-free land surfaces. The Deep Blue dataset will also provide an estimate of the absolute uncertainty on AOD at 550 nm for each retrieval. This study describes the validation of Deep Blue Collection 6 AOD at 550 nm (Tau(sub M)) from MODIS Aqua against Aerosol Robotic Network (AERONET) data from 60 sites to quantify these uncertainties. The highest quality (denoted quality assurance flag value 3) data are shown to have an absolute uncertainty of approximately (0.086+0.56Tau(sub M))/AMF, where AMF is the geometric air mass factor. For a typical AMF of 2.8, this is approximately 0.03+0.20Tau(sub M), comparable in quality to other satellite AOD datasets. Regional variability of retrieval performance and comparisons against Collection 5 results are also discussed.
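The expected-error envelope quoted in the abstract is a direct formula in the retrieved AOD and the geometric air mass factor; evaluating it for a mid-range AOD:

```python
def expected_aod_error(tau, amf):
    """Deep Blue QA=3 expected absolute AOD error: (0.086 + 0.56*tau) / AMF."""
    return (0.086 + 0.56 * tau) / amf

# For the typical geometric air mass factor of 2.8 cited in the abstract,
# the envelope reduces to roughly 0.03 + 0.20*tau.
err = expected_aod_error(0.5, 2.8)  # about 0.13 for tau = 0.5
```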

  15. Estimating sleep from multisensory armband measurements: validity and reliability in teens.

    Science.gov (United States)

    Roane, Brandy M; Van Reen, Eliza; Hart, Chantelle N; Wing, Rena; Carskadon, Mary A

    2015-12-01

    Given the recognition that sleep may influence obesity risk, there is increasing interest in measuring sleep parameters within obesity studies. The goal of the current analyses was to determine whether the SenseWear(®) Pro3 Armband (armband), typically used to assess physical activity, is reliable at assessing sleep parameters. The armband was compared with the AMI Motionlogger(®) (actigraph), a validated activity monitor for sleep assessment, and with polysomnography, the gold standard for assessing sleep. Participants were 20 adolescents (mean age = 15.5 years) with a mean body mass index percentile of 63.7. All participants wore the armband and actigraph on their non-dominant arm while in-lab during a nocturnal polysomnographic recording (600 min). Epoch-by-epoch sleep/wake data and concordance of sleep parameters were examined. No significant sleep parameter differences were found between the armband and polysomnography; the actigraph tended to overestimate sleep and underestimate wake compared with polysomnography. Both devices showed high sleep sensitivity, but lower wake detection rates. Bland-Altman plots showed large individual differences in armband sleep parameter concordance rates. The armband did well estimating sleep overall, with group results more similar to polysomnography than the actigraph; however, the armband was less accurate at an individual level than the actigraph. © 2015 European Sleep Research Society.
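The epoch-by-epoch comparison behind the sleep-sensitivity and wake-detection figures can be sketched as follows; the ten-epoch strings are invented, not study data:

```python
def epoch_agreement(device, psg):
    """Sleep sensitivity and wake detection rate of a device vs PSG scoring.

    Each string scores consecutive epochs as 'S' (sleep) or 'W' (wake).
    """
    sleep_hits = sum(1 for d, p in zip(device, psg) if p == "S" and d == "S")
    wake_hits = sum(1 for d, p in zip(device, psg) if p == "W" and d == "W")
    return sleep_hits / psg.count("S"), wake_hits / psg.count("W")

psg = "SSSSWWSSSW"       # polysomnography reference scoring
device = "SSSSWSSSSS"    # a device that rarely scores wake
sens, wake = epoch_agreement(device, psg)
```

With this toy scoring the device catches every sleep epoch but only one of three wake epochs, mirroring the pattern the abstract reports: high sleep sensitivity, low wake detection.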

  16. Validation of the iPhone app using the force platform to estimate vertical jump height.

    Science.gov (United States)

    Carlos-Vivas, Jorge; Martin-Martinez, Juan P; Hernandez-Mocholi, Miguel A; Perez-Gomez, Jorge

    2016-09-22

    Vertical jump performance has been evaluated with several devices: force platforms, contact mats, Vertec, accelerometers, infrared cameras and high-velocity cameras; however, the force platform is considered the gold standard for measuring vertical jump height. The purpose of this study was to validate the iPhone app My Jump, which measures vertical jump height, by comparing it with two methods that use the force platform to estimate vertical jump height, namely vertical velocity at take-off and time in the air. A total of 40 sport sciences students (age 21.4 ± 1.9 years) completed five countermovement jumps (CMJs) over a force platform. Thus, 200 CMJ heights were evaluated from the vertical velocity at take-off and the time in the air using the force platform, and from the time in the air with the mobile application My Jump. The heights obtained were compared using the intraclass correlation coefficient (ICC). The correlation between the app and the force platform using the time in the air was perfect (ICC = 1.000). My Jump is therefore an appropriate method to evaluate vertical jump performance; however, vertical jump height is slightly overestimated compared with that of the force platform.
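Both My Jump and the force platform's time-in-air method rest on the same projectile-motion identity, h = g·t²/8, where t is the flight time:

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height(t_air):
    """Vertical jump height (m) from flight time (s): h = g * t^2 / 8."""
    return G * t_air ** 2 / 8.0

h = jump_height(0.5)  # a 0.5 s flight corresponds to roughly 0.31 m
```

The identity follows from symmetric free flight: the jumper rises for t/2 under gravity, so h = g(t/2)²/2 = g·t²/8.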

  17. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Owing to the lack of trained sonographers in remote areas, the ultrasound videos scanned by untrained persons often do not contain the information a physician requires. Compared with standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content, guiding the semi-skilled person to acquire representative data from the patient. Advances in smartphone technology allow demanding medical image processing to be performed on the smartphone itself. In this paper we have developed an application (app) for a smartphone which automatically detects the valid frames (with clear organ visibility) in an ultrasound video, ignores the invalid frames (with no organ visibility), and produces a compressed video. This is done by extracting GIST features from the region of interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.

  18. Evidence Based Validation of Indian Traditional Medicine – Way Forward

    Directory of Open Access Journals (Sweden)

    Pulok K Mukherjee

    2016-01-01

    Evidence-based validation of the ethno-pharmacological claims on traditional medicine (TM) is the need of the day for its globalization and reinforcement. Combined with the unique feature of identifying biomarkers that are highly conserved across species, this can offer an innovative approach to biomarker-driven drug discovery and development. TMs are an integral component of alternative health care systems. India has a rich wealth of TMs and the potential to accept the challenge of meeting the global demand for them. Ayurveda, Yoga, Unani, Siddha and Homeopathy (AYUSH) are the major healthcare systems in Indian traditional medicine. The plant species mentioned in the ancient texts of these systems may be explored with modern scientific approaches for better leads in healthcare. TMs are among the best sources of chemical diversity for finding new drugs and leads. Authentication and scientific validation of medicinal plants is a fundamental requirement of industry and other organizations dealing with herbal drugs. Quality control (QC) of botanicals, validated manufacturing processes, customer awareness and post-marketing surveillance are the key points that could ensure the quality, safety and efficacy of TM. For the globalization of TM, there is a need for harmonization with respect to its chemical and metabolite profiling, standardization, QC, scientific validation, documentation and regulatory aspects. Therefore, the utmost attention is necessary for the promotion and development of TM through global collaboration and coordination by national and international programmes.

  19. Algorithms for converting estimates of child malnutrition based on the NCHS reference into estimates based on the WHO Child Growth Standards

    Directory of Open Access Journals (Sweden)

    de Onis Mercedes

    2008-05-01

    Abstract Background: The child growth standards released by the World Health Organization (WHO) in 2006 have several technical advantages over the previous 1977 National Center for Health Statistics (NCHS)/WHO reference and are recommended for international comparisons and secular trend analysis of child malnutrition. To obtain comparable data over time, earlier surveys should be reanalyzed using the WHO standards; however, reanalysis is impossible for older surveys since the raw data are not available. This paper provides algorithms for converting estimates of child malnutrition based on the NCHS reference into estimates based on the WHO standards. Methods: Sixty-eight surveys from the WHO Global Database on Child Growth and Malnutrition were analyzed using the WHO standards to derive estimates of underweight, stunting, wasting and overweight. The prevalences based on the NCHS reference were taken directly from the database. National/regional estimates with a minimum sample size of 400 children were used to develop the algorithms. For each indicator, a simple linear regression model was fitted, using the logit of the WHO and NCHS estimates as, respectively, dependent and independent variables. The resulting algorithms were validated using a different set of surveys, on the basis of which the point estimate and 95% confidence interval (CI) of the predicted WHO prevalence were compared to the observed prevalence. Results: In total, 271 data points were used to develop the algorithms. The correlation coefficients (R2) were all greater than 0.90, indicating that most of the variability of the dependent variable is explained by the fitted model. The average difference between the predicted WHO estimate and the observed value was small. Conclusion: To obtain comparable data concerning child malnutrition, individual survey data should be analyzed using the WHO standards. When the raw data are not available, the algorithms presented here provide a highly accurate tool
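The conversion algorithm described is a linear model on the logit scale: transform the NCHS-based prevalence to log-odds, apply the fitted intercept and slope, and transform back. A hedged sketch in which the intercept and slope are invented placeholders (the paper fits one pair per indicator):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def nchs_to_who(prev_nchs, intercept, slope):
    """Convert a prevalence (proportion, 0-1) via the fitted logit regression."""
    return inv_logit(intercept + slope * logit(prev_nchs))

# Illustrative coefficients only, not the paper's fitted values.
who_est = nchs_to_who(0.25, intercept=0.2, slope=1.0)
```

Working on the logit scale keeps every predicted prevalence inside (0, 1), which a linear model on raw proportions would not guarantee.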

  20. Development and validation of spectrophotometric method for simultaneous estimation of paracetamol and lornoxicam in different dissolution media.

    Science.gov (United States)

    Patel, Dasharath M; Sardhara, Bhavesh M; Thumbadiya, Diglesh H; Patel, Chhagan N

    2012-07-01

    Paracetamol and lornoxicam in combined tablet dosage form are available in the market. This combination is used to treat inflammatory diseases of the joints, osteoarthritis and sciatica. Spectrophotometric and high performance liquid chromatography (HPLC) methods have been reported for their simultaneous estimation in tablet dosage form in a specific solvent. This paper presents a simple, accurate and reproducible spectrophotometric method for simultaneous determination of paracetamol and lornoxicam in tablet dosage form in different dissolution media, which is helpful during dissolution studies. A simple, sensitive, accurate and economical spectrophotometric method based on an absorption correction equation was developed for the simultaneous estimation of paracetamol and lornoxicam in tablet dosage form in different dissolution media at different pH. Paracetamol showed absorption maxima at 243 nm in 0.1N HCl and phosphate buffer pH 6.8, while lornoxicam showed absorption maxima at 374 nm in 0.1N HCl and phosphate buffer pH 6.8. Linearity was obtained in the concentration range of 4-12 μg/ml for paracetamol and 4-16 μg/ml for lornoxicam. The concentrations of the drugs were determined by the absorption correction equation method. The results of the analysis were validated statistically by recovery studies.

  1. Real-time estimation of projectile roll angle using magnetometers: in-lab experimental validation

    Science.gov (United States)

    Changey, S.; Pecheur, E.; Wey, P.; Sommer, E.

    2013-12-01

    Knowledge of the roll angle of a projectile is decisive for applying guidance and control laws. For example, the goal of ISL's project GSP (Guided Supersonic Projectile) is to change the flight path of an air-defence projectile in order to correct the aim error due to target manoeuvres. The originality of the concept is based on pyrotechnical actuators and on-board sensors which control the angular motion of the projectile. First of all, the control of the actuators requires precise knowledge of the roll angle of the projectile. To estimate it, two magnetometers are embedded in the projectile to measure the projection of the Earth's magnetic field along the radial axes of the projectile. An extended Kalman filter (EKF) is then used to compute the roll angle estimate. As the rolling frequency of the GSP is about 22 Hz, it was easy to test the navigation algorithm in the laboratory. In a previous paper [1], the in-lab demonstration of this concept showed that roll angle estimation was possible with an accuracy of about 1°. In this paper, the demonstration is extended to high-speed roll rates, up to 1000 Hz. Two magnetometers, a DSP (digital signal processor) and an LED (light-emitting diode) are rotated using a pneumatic motor; the DSP runs an EKF and a guidance algorithm to compute the trigger times of the LED. By using a high-speed camera, the accuracy of the method can be observed and improved.
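The measurement model behind such a roll estimator can be sketched without the EKF machinery: the two radial magnetometers see the transverse Earth-field component modulated by the roll angle, so a single-epoch estimate is an atan2 of the two readings (the filter in the paper refines this over time and through noise). The unit-field simulation below is illustrative only:

```python
import math

def roll_from_magnetometers(b_y, b_z):
    """Roll angle (rad) from the two radial Earth-field projections."""
    return math.atan2(b_y, b_z)

# Simulate a projectile rolled to 60 degrees in a unit transverse field:
# the two radial axes pick up sin(roll) and cos(roll) of that field.
true_roll = math.radians(60.0)
b_y, b_z = math.sin(true_roll), math.cos(true_roll)
est = roll_from_magnetometers(b_y, b_z)
```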

  2. Validation of a Dish-Based Semiquantitative Food Questionnaire in Rural Bangladesh

    Directory of Open Access Journals (Sweden)

    Pi-I. D. Lin

    2017-01-01

    A locally validated tool was needed to evaluate long-term dietary intake in rural Bangladesh. We assessed the validity of a 42-item dish-based semi-quantitative food frequency questionnaire (FFQ) using two 3-day food diaries (FDs). We selected a random subset of 47 families (190 participants) from a longitudinal arsenic biomonitoring study in Bangladesh to administer the FFQ. Two 3-day FDs were completed by the female head of each household and we used an adult male equivalent method to estimate the FD for the other participants. Food and nutrient intakes measured by FFQ and FD were compared using Pearson's and Spearman's correlations, paired t-test, percent difference, cross-classification, weighted kappa, and Bland–Altman analysis. Results showed good validity for total energy intake (paired t-test, p < 0.05; percent difference <10%), with no proportional bias (Bland–Altman correlation, p > 0.05). After energy adjustment and de-attenuation for within-person variation, macronutrient intakes had excellent correlations ranging from 0.55 to 0.70. Validity for micronutrients was mixed. High intraclass correlation coefficients (ICCs) were found for most nutrients between the two seasons, except vitamin A. This dish-based FFQ provided adequate validity to assess and rank long-term dietary intake in rural Bangladesh for most food groups and nutrients, and should be useful for studying diet-disease relationships.

  3. Performing Verification and Validation in Reuse-Based Software Engineering

    Science.gov (United States)

    Addy, Edward A.

    1999-01-01

    The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.

  4. Lightning stroke distance estimation from single station observation and validation with WWLLN data

    Directory of Open Access Journals (Sweden)

    V. Ramachandran

    2007-07-01

    Full Text Available A simple technique to estimate the distance d of lightning strikes with a single VLF electromagnetic wave receiver at a single station is described. The technique is based on the recording of oscillatory waveforms of the electric fields of sferics. Even though the process of estimating d using the waveform is a rather classical one, a novel and simple procedure for finding d is proposed in this paper. The procedure adopted provides two independent estimates of the distance of the stroke. The accuracy of measurements has been improved by employing high-speed (333 ns sampling rate) signal processing techniques. GPS time is used as the reference time, which enables us to compare the distances of the lightning strikes calculated by both methods with those calculated from the data obtained by the World-Wide Lightning Location Network (WWLLN), which uses a multi-station technique. The estimated distances of the 77 lightning strikes whose times correlated ranged from ~3000 to 16 250 km. For a subset of the strokes, the mean deviation of d from the distances calculated with the multi-station lightning location system was ~4.7%, while for all the strokes it was ~8.8%. One lightning stroke recorded by WWLLN, for which both the field pattern and the spectrogram of the sferic were recorded at the site, is analyzed in detail. The deviations in d calculated from the field pattern and from the arrival time of the sferic were 3.2% and 1.5%, respectively, compared to d calculated from the WWLLN location. FFT analysis of the waveform showed that only a narrow band of frequencies is received at the site, which is confirmed by the intensity of the corresponding sferic in the spectrogram.

  5. Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.

    Science.gov (United States)

    Xiao Yang; Jianjiang Feng; Jie Zhou

    2014-05-01

    The dictionary-based orientation field estimation approach has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Realizing that ridge orientations at different locations of a fingerprint have different characteristics, we propose a localized-dictionaries-based orientation field estimation algorithm, in which the noisy orientation patch output by a local estimation approach at a given location is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method markedly outperforms previous ones.

  6. Validation of the colorado psychiatry evidence-based medicine test.

    Science.gov (United States)

    Rothberg, Brian; Feinstein, Robert E; Guiton, Gretchen

    2013-09-01

    Evidence-based medicine (EBM) has become an important part of residency education, yet many EBM curricula lack a valid and standardized tool to identify learners' prior knowledge and assess progress. We developed an EBM examination in psychiatry to measure our effectiveness in teaching comprehensive EBM to residents. We developed a psychiatry EBM test using the validated EBM Fresno Test of Competence for family medicine. The test consists of case scenarios with open-ended questions. We also developed a scoring rubric and obtained reliability with multiple raters. Fifty-seven residents provided test data after completing 3, 6, 25, or 31 EBM sessions. The number of sessions for each resident was based on their length of training in our program. The examination had strong interrater reliability, internal reliability, and item discrimination. Many residents showed significant improvement on their examination scores when data were compared from tests taken before and after a sequence of teaching sessions. Also, a threshold for the level of expert on the examination was established using test data from 5 EBM teacher-experts. We successfully developed a valid and reliable EBM examination for use with psychiatry residents to measure essential EBM skills as part of a larger project to encourage EBM practice for residents in routine patient care. The test provides information on residents' knowledge in EBM from entry level concepts through expert performance. It can be used to place incoming residents in appropriate levels of an EBM curriculum and to monitor the effectiveness of EBM instruction.

  7. Validation of an Innovative Satellite-Based UV Dosimeter

    Science.gov (United States)

    Morelli, Marco; Masini, Andrea; Simeone, Emilio; Khazova, Marina

    2016-08-01

    We present an innovative satellite-based UV (ultraviolet) radiation dosimeter with a mobile app interface that has been validated using both ground-based measurements and an in-vivo assessment of the erythemal effects on volunteers with controlled exposure to solar radiation. Both validations showed that the satellite-based UV dosimeter has the accuracy and reliability needed for health-related applications. The app built around this satellite-based UV dosimeter also includes related functionalities such as provision of a safe sun-exposure time updated in real time and a visual/sound alert at the end of the exposure. The app will be launched on the global market by siHealth Ltd in May 2016 under the name "HappySun", available for both Android and iOS devices (more info on http://www.happysun.co.uk). Extensive R&D activities are ongoing to further improve the satellite-based UV dosimeter's accuracy.

  8. Artificial Neural Network Based State Estimators Integrated into Kalmtool

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Ravn, Ole; Poulsen, Niels Kjølstad

    2012-01-01

    In this paper we present a toolbox enabling easy evaluation and comparison of different filtering algorithms. The toolbox is called Kalmtool and is a set of MATLAB tools for state estimation of nonlinear systems. The toolbox now contains functions for Artificial Neural Network Based State Estimation...

  9. Validation of equations using anthropometric and bioelectrical impedance for estimating body composition of the elderly

    Directory of Open Access Journals (Sweden)

    Cassiano Ricardo Rech

    2006-08-01

    Full Text Available The growth of the elderly population has increased the need to study aging-related issues, and in this context the analysis of morphological alterations occurring with age has been discussed thoroughly. Evidence indicates that there is little information on valid methods for estimating the body composition of senior citizens in Brazil. Therefore, the objective of this study was to cross-validate equations using either anthropometric or bioelectrical impedance (BIA) data for the estimation of body fat (%BF) and fat-free mass (FFM) in a sample of older individuals from Florianópolis-SC, with dual-energy X-ray absorptiometry (DEXA) as the criterion measurement. The group was composed of 180 subjects (60 men and 120 women) who participated in four community groups for the elderly and were systematically randomly selected via a telephone interview, with ages ranging from 60 to 81 years. Stature, body mass, body circumferences, skinfold thicknesses, reactance, and resistance were measured in the morning at the Sports Center of the Federal University of Santa Catarina. The DEXA evaluation was performed in the afternoon at the Diagnosis Center through Image in Florianópolis-SC. Twenty anthropometric and 8 BIA equations were analyzed for cross-validation. For the equations that estimate body density, the equation of Siri (1961) and the equation adapted by Deurenberg et al. (1989) were used for conversion into %BF. The analyses were performed with the statistical package SPSS, version 11.5, with the level of significance set at 5%. The cross-validation criteria suggested by Lohman (1992) and the graphic dispersion analyses in relation to the mean proposed by Bland and Altman (1986) were used. The group presented body mass index (BMI) values between 18.4 kg.m-2 and 39.3 kg.m-2. The mean %BF was 23.1% (sd = 5.8) for men and 37.3% (sd = 6.9) for women, varying from 6% to 51.4%. There were no differences among the estimates of the equations
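The Siri (1961) equation cited above is a fixed two-compartment formula that converts an estimated body density into percent body fat; a minimal sketch:

```python
def siri_percent_fat(body_density):
    """Siri (1961) two-compartment conversion of body density (g/cm^3)
    into percent body fat: %BF = 495 / Db - 450."""
    return 495.0 / body_density - 450.0

# A body density of 1.0500 g/cm^3 corresponds to roughly 21.4% fat.
pf = siri_percent_fat(1.0500)
```

Anthropometric equations that predict body density rather than %BF directly are cross-validated by passing their output through a conversion of this kind before comparing against the DEXA criterion.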

  10. [Comparison of methods for estimating soybean chlorophyll content based on visual/near infrared reflection spectra].

    Science.gov (United States)

    Tang, Xu-Guang; Song, Kai-Shan; Liu, Dian-Wei; Wang, Zong-Ming; Zhang, Bai; Du, Jia; Zeng, Li-Hong; Jiang, Guang-Jia; Wang, Yuan-Dong

    2011-02-01

    The estimation of crop chlorophyll content can provide technical support for precision agriculture. Canopy spectral reflectance was simulated for different chlorophyll levels using radiative transfer models. Then, with multi-period measured hyperspectral data and the corresponding chlorophyll contents, and after extracting six wavelet energy coefficients from the responsive bands, an evaluation of soybean chlorophyll content retrieval methods was conducted using multiple linear regression, a BP neural network, an RBF neural network, and the PLS method, and the estimation performance of the four methods was compared. The results showed that the methods based on wavelet analysis have an ideal effect on chlorophyll content estimation. The R2 values of the validated models for multiple linear regression, the BP neural network, the RBF neural network, and the PLS method were 0.634, 0.715, 0.873 and 0.776, respectively. PLS based on a Gaussian kernel function and the RBF neural network method performed better, with higher precision, and could estimate chlorophyll content stably.
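Wavelet energy coefficients of the kind described can be computed from a reflectance spectrum as the energy of the detail coefficients at each decomposition level. The abstract does not specify the wavelet, so a Haar decomposition is assumed here purely for illustration:

```python
def haar_level(signal):
    """One level of a Haar wavelet transform: returns (approximation,
    detail) coefficients of an even-length sequence."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energies(signal, levels):
    """Energy of the detail coefficients at each decomposition level;
    such energies can serve as compact regression features."""
    energies = []
    for _ in range(levels):
        signal, detail = haar_level(signal)
        energies.append(sum(d * d for d in detail))
    return energies

# A flat reflectance spectrum carries no detail energy at any level.
flat_energies = wavelet_energies([0.5] * 16, 3)
```

Features like these would then be fed into the regression or neural network models compared in the study.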

  11. Clinical validation of a genetic model to estimate the risk of developing choroidal neovascular age-related macular degeneration

    Directory of Open Access Journals (Sweden)

    Hageman Gregory S

    2011-07-01

    Full Text Available Abstract Predictive tests for estimating the risk of developing late-stage neovascular age-related macular degeneration (AMD) are subject to unique challenges. AMD prevalence increases with age, clinical phenotypes are heterogeneous and control collections are prone to high false-negative rates, as many control subjects are likely to develop disease with advancing age. Risk prediction tests have been presented previously, using up to ten genetic markers and a range of self-reported non-genetic variables such as body mass index (BMI) and smoking history. In order to maximise the accuracy of prediction for mainstream genetic testing, we sought to derive a test comparable in performance to earlier testing models but based purely on genetic markers, which are static through life and not subject to misreporting. We report a multicentre assessment of a larger panel of single nucleotide polymorphisms (SNPs) than previously analysed, to further improve the classification performance of a predictive test to estimate the risk of developing choroidal neovascular (CNV) disease. We developed a predictive model based solely on genetic markers and avoided inclusion of self-reported variables (e.g. smoking history) or non-static factors (BMI, education status) that might otherwise introduce inaccuracies in calculating individual risk estimates. We describe the performance of a test panel comprising 13 SNPs genotyped across a consolidated collection of four patient cohorts obtained from academic centres deemed appropriate for pooling. We report on predictive effect sizes and their classification performance. By incorporating multiple cohorts of homogeneous ethnic origin, we obtained >80 per cent power to detect differences in the genetic variants observed between cases and controls. We focused our study on CNV, a subtype of advanced AMD associated with a severe and potentially treatable form of the disease. 
Lastly, we followed a two-stage strategy involving both test model

  12. Validation Issues of a Space-based Methane Lidar

    Science.gov (United States)

    Kiemle, C.; Fix, A.; Ehret, G.; Flamant, P.

    2014-12-01

    Space-based lidar missions targeting greenhouse gases are expected to close observational gaps, e.g., over subarctic permafrost and tropical wetlands, where in-situ and passive remote sensing techniques have difficulties. In the frame of a joint climate monitoring initiative, a "Methane Remote Lidar Mission" (MERLIN) was proposed by the German and French space agencies DLR and CNES. MERLIN is now in Phase B, in which all mission components are planned in detail. Launch is foreseen in 2019. The instrument is an integrated path differential absorption (IPDA) lidar which, installed on a low earth orbit platform provided by CNES, uses the surface backscatter to measure the atmospheric methane column. The globally observed concentration gradients will primarily help inverse numerical models to better infer regional methane fluxes. The lidar signals are able to travel through optically thin cloud and aerosol layers without producing a bias, and MERLIN's small field of view, of order 100 m, is expected to provide observations in broken cloud environments, often encountered in the tropics. As IPDA is a novel technique, calibration and validation will be essential. It is foreseen to validate MERLIN by under-flying the satellite with another IPDA lidar, CHARM-F, and a passive remote sensor, both airborne. However, active and passive remote sensors have different pressure- and temperature-dependent measurement sensitivities (weighting functions) and different fields of view, and, on board an aircraft, they do not sample the total methane column. Furthermore, since the methane profile is not constant, its column depends on the height of the boundary layer and of the tropopause. We investigate the impact of these issues on the expected validation accuracy, and we examine whether the ground-based Total Carbon Column Observing Network (TCCON) may be useful for validation, too. 
Finally, validation opportunities are dependent on the location and size of cloud-free regions, since clouds with

  13. MODEL-BASED VALIDATION AND VERIFICATION OF ANOMALIES IN LEGISLATION

    Directory of Open Access Journals (Sweden)

    Vjeran Strahonja

    2006-12-01

    Full Text Available An anomaly in legislation is the absence of completeness, consistency and other desirable properties, caused by various semantic, syntactic or pragmatic factors. In general, the detection of anomalies in legislation comprises validation and verification. The basic idea of the research presented in this paper is to model legislation by capturing the domain knowledge of legislation and specifying it in a generic way using the commonly agreed and understandable modelling concepts of the Unified Modelling Language (UML). Models of legislation make it possible to understand the system better, support the detection of anomalies and help to improve the quality of legislation through validation and verification. By implementing a model-based approach, the object of validation and verification moves from the legislation to its model. The business domain of legislation has two distinct aspects: a structural or static aspect (functionality, business data, etc.) and a behavioural or dynamic aspect (states, transitions, activities, sequences, etc.). Because anomalies can occur on two different levels, the level of the model or the level of the legislation itself, a framework for validation and verification of legal regulation and its model is discussed. The presented framework covers several significant types of semantic and syntactic anomalies. Some ideas for the assessment of pragmatic anomalies of models were found in the field of software quality metrics; thus pragmatic features and attributes can be determined that are relevant for the evaluation of models. Based on analogous standards for the evaluation of software, a qualitative and quantitative scale can be applied to determine the value of a given feature for a specific model.

  14. Estimation of incidences of infectious diseases based on antibody measurements

    DEFF Research Database (Denmark)

    Simonsen, J; Mølbak, K; Falkenhorst, G

    2009-01-01

    Owing to under-ascertainment it is difficult if not impossible to determine the incidence of a given disease based on cases notified to routine public health surveillance. This is especially true for diseases that are often present in mild forms as for example diarrhoea caused by foodborne...... it was possible to estimate the time since last infection for each individual in the cross-sectional study. These time estimates were then converted into incidence estimates. Information about the incidence of Salmonella infections in Denmark was obtained by using blood samples from 1780 persons. The estimated...

  15. Prevalence Estimation and Validation of New Instruments in Psychiatric Research: An Application of Latent Class Analysis and Sensitivity Analysis

    Science.gov (United States)

    Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.

    2009-01-01

    Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…

  16. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    The global stereo matching algorithms are of high accuracy for the estimation of disparity map, but the time-consuming in the optimization process still faces a curse, especially for the image pairs with high resolution and large baseline setting. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for the global stereo matching is proposed to estimate the disparity map of rectified stereo images in this paper. The projective geometry in a parallel binocular stereo vision is investigated to reveal a relationship between two disparities at each pixel in the rectified stereo images with different baselines, which can be used to quickly obtain a predicted disparity map in a long baseline setting estimated by that in the small one. Then, the drastically reduced disparity ranges at each pixel under a long baseline setting can be determined by the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into the graph cuts with expansion moves to estimate the precise disparity map, which can greatly save the cost of computing without loss of accuracy in the stereo matching, especially for the dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.
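The baseline relationship exploited above follows from the rectified-stereo projection model, in which disparity is d = f * B / Z for focal length f, baseline B and depth Z, so disparities under two baselines scale in proportion to the baselines. A minimal sketch; the search margin is an assumed tuning parameter, not a value from the paper:

```python
def predict_disparity(d_short, baseline_short, baseline_long):
    """In a rectified parallel stereo rig, d = f * B / Z, so the disparity
    under a long baseline is d_short scaled by the baseline ratio."""
    return d_short * (baseline_long / baseline_short)

def disparity_range(d_short, baseline_short, baseline_long, margin):
    """Reduced per-pixel search range around the predicted long-baseline
    disparity; the margin absorbs prediction error."""
    d_pred = predict_disparity(d_short, baseline_short, baseline_long)
    return d_pred - margin, d_pred + margin

# A disparity of 10 px under a 0.1 m baseline predicts ~30 px under 0.3 m.
d_lo, d_hi = disparity_range(10.0, 0.1, 0.3, 2.0)
```

Restricting the graph-cut optimization to such per-pixel ranges is what yields the reported speed-up without changing the matched disparities.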

  17. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    Science.gov (United States)

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
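Keystroke-level modeling of the kind validated above predicts task execution time as the sum of elementary operator durations along the critical path. A minimal sketch; the operator durations below are illustrative placeholders, not the older-adult parameters estimated in the study:

```python
# Illustrative keystroke-level model (KLM) operator durations in ms.
# These values are assumptions for demonstration only.
OPERATOR_MS = {
    "K": 280,   # keystroke
    "P": 1100,  # point/aim at a target
    "M": 1350,  # mental preparation
}

def predict_task_ms(operators, table=OPERATOR_MS):
    """Predicted execution time for a sequence of KLM operators."""
    return sum(table[op] for op in operators)

# e.g. dialing one digit: mentally prepare, aim at the key, press it.
t = predict_task_ms(["M", "P", "K"])
```

Age-specific models differ only in the parameter table, which is how the authors compare younger and older adult predictions against observed keystroke-level performance.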

  18. Ontology-based validation and identification of regulatory phenotypes

    KAUST Repository

    Kulmanov, Maxat

    2018-01-31

    Motivation: Function annotations of gene products, and phenotype annotations of genotypes, provide valuable information about molecular mechanisms that can be utilized by computational methods to identify functional and phenotypic relatedness, improve our understanding of disease and pathobiology, and lead to discovery of drug targets. Identifying functions and phenotypes commonly requires experiments which are time-consuming and expensive to carry out; creating the annotations additionally requires a curator to make an assertion based on reported evidence. Support for validating the mutual consistency of function and phenotype annotations, as well as a computational method to predict phenotypes from function annotations, would greatly improve the utility of function annotations. Results: We developed a novel ontology-based method to validate the mutual consistency of function and phenotype annotations. We apply our method to mouse and human annotations, and identify several inconsistencies that can be resolved to improve overall annotation quality. Our method can also be applied to the rule-based prediction of phenotypes from functions. We show that the predicted phenotypes can be utilized for identification of protein-protein interactions and gene-disease associations. Based on experimental functional annotations, we predict phenotypes for 1,986 genes in mouse and 7,301 genes in human for which no experimental phenotypes have yet been determined.

  19. Head pose estimation algorithm based on deep learning

    Science.gov (United States)

    Cao, Yuanming; Liu, Yijun

    2017-05-01

    Head pose estimation has been widely used in the fields of artificial intelligence, pattern recognition, intelligent human-computer interaction and so on. A good head pose estimation algorithm should deal robustly with light, noise, identity, occlusion and other factors, but so far how to improve the accuracy and robustness of pose estimation remains a major challenge in the field of computer vision. A method based on deep learning for pose estimation is presented. Deep learning has a strong learning ability: it can extract high-level features from the input image through a series of non-linear operations and then classify the input image using the extracted features. Such features differ markedly across poses while remaining robust to light, identity, occlusion and other factors. The proposed head pose estimation method is evaluated on the CAS-PEAL dataset. Experimental results show that this method effectively improves the accuracy of pose estimation.

  20. Combining LIDAR estimates of aboveground biomass and Landsat estimates of stand age for spatially extensive validation of modeled forest productivity.

    Science.gov (United States)

    M.A. Lefsky; D.P. Turner; M. Guzy; W.B. Cohen

    2005-01-01

    Extensive estimates of forest productivity are required to understand the relationships between shifting land use, changing climate and carbon storage and fluxes. Aboveground net primary production of wood (NPPAw) is a major component of total NPP and of net ecosystem production (NEP). Remote sensing of NPP and NPPAw is...

  1. Flood quantiles estimation based on theoretically derived distributions: regional analysis in Southern Italy

    Directory of Open Access Journals (Sweden)

    V. Iacobellis

    2011-03-01

    Full Text Available A regional probabilistic model for the estimation of medium-high return period flood quantiles is presented. The model is based on the use of theoretically derived probability distributions of annual maximum flood peaks (DDF). The general model is called TCIF (Two-Component IF model) and encompasses two different threshold mechanisms associated with ordinary and extraordinary events, respectively. Based on at-site calibration of this model for 33 gauged sites in Southern Italy, a regional analysis is performed, obtaining satisfactory results for the estimation of flood quantiles for return periods of technical interest and thus suggesting the use of the proposed methodology for application to ungauged basins. The model is validated by using a jack-knife cross-validation technique taking all river basins into consideration.

  2. Flood quantiles estimation based on theoretically derived distributions: regional analysis in Southern Italy

    Science.gov (United States)

    Iacobellis, V.; Gioia, A.; Manfreda, S.; Fiorentino, M.

    2011-03-01

    A regional probabilistic model for the estimation of medium-high return period flood quantiles is presented. The model is based on the use of theoretically derived probability distributions of annual maximum flood peaks (DDF). The general model is called TCIF (Two-Component IF model) and encompasses two different threshold mechanisms associated with ordinary and extraordinary events, respectively. Based on at-site calibration of this model for 33 gauged sites in Southern Italy, a regional analysis is performed obtaining satisfactory results for the estimation of flood quantiles for return periods of technical interest, thus suggesting the use of the proposed methodology for the application to ungauged basins. The model is validated by using a jack-knife cross-validation technique taking all river basins into consideration.
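The jack-knife cross-validation described above refits the regional model with each basin held out in turn and scores the prediction for that basin. A minimal sketch; the toy "regional model" (predicting the mean quantile of the training basins) and the data are purely illustrative:

```python
def jackknife_errors(basins, regional_model):
    """Leave-one-out (jack-knife) validation: refit the regional model
    without each basin in turn and score the held-out basin."""
    errors = []
    for i, held_out in enumerate(basins):
        training = basins[:i] + basins[i + 1:]
        predicted = regional_model(training, held_out)
        errors.append(predicted - held_out["quantile"])
    return errors

# Toy "regional model": predict the mean quantile of the training basins.
def mean_model(training, held_out):
    return sum(b["quantile"] for b in training) / len(training)

basins = [{"quantile": q} for q in (100.0, 120.0, 110.0)]
errs = jackknife_errors(basins, mean_model)
```

Because every basin is treated as ungauged exactly once, the resulting error distribution indicates how the regional TCIF model would perform at genuinely ungauged sites.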

  3. Process-based Cost Estimation for Ramjet/Scramjet Engines

    Science.gov (United States)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically, the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk; rather than depending on weight as an input, it actually estimates weight along with cost and schedule.
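The analytic hierarchy process mentioned above derives criterion weights from a matrix of pairwise importance comparisons, conventionally via its principal eigenvector. A minimal power-iteration sketch; the example matrix is hypothetical:

```python
def ahp_priorities(pairwise, iterations=50):
    """Principal-eigenvector priority weights for an AHP pairwise
    comparison matrix, computed by simple power iteration."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

# "Cost-risk is twice as important as schedule" gives weights near [2/3, 1/3].
weights = ahp_priorities([[1.0, 2.0], [0.5, 1.0]])
```

The resulting weights let design alternatives be ranked on a blend of cost-risk, performance and schedule criteria rather than on cost alone.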

  4. Validation of a novel risk estimation tool for predicting low bone density in Caucasian and African American men veterans.

    Science.gov (United States)

    Zimering, Mark B; Shin, John J; Shah, Jyoti; Wininger, Eric; Engelhart, Charles

    2007-01-01

    Osteoporosis in men is a frequently missed diagnosis. We developed an additive risk index, Mscore (male, "simple calculated osteoporosis risk estimation"), based on bone mineral density (BMD) at the femoral neck (FN) in 639 ambulatory older male veterans. Mscore was derived from the nearest whole-number ratio among regression coefficients for 5 variables independently associated with osteoporosis. Mscore = [2 x (patient age in decades) - (weight in lb/10) + 4 if gastrectomy + 4 if emphysema + 3 if two or more prior fractures + 14]. The age and weight variable scores are truncated to integers (i.e., 7 if 75 yr, 18 if 185 lb). Increased risk is reflected in higher Mscore values. We validated Mscore in 197 Caucasian male patients (mean age, 69 yr): values of 9 or higher had 88% sensitivity, 57% specificity, and an area under the curve (AUC) of 0.84 for predicting osteoporosis at the FN (population prevalence, 11%). Mscore values ranged from -9 to 20, allowing us to define low (<9), intermediate (9-13), and high (>13) risk categories. Two percent of low-risk men had osteoporosis; 36% and 55% of high-risk men had osteoporosis and osteopenia, respectively. In younger African American (n=134) male veterans (mean age, 61 yr), age and weight were the only variables independently predictive of FN BMD. A reduced Mscore(age-weight) (the age and weight variable scores + 14) at a cutoff threshold of 9 predicted osteoporosis in African American men (population prevalence, 3%) with a sensitivity of 100%, a specificity of 73%, and an AUC of 0.99. Finally, we compared Mscore with another validated osteoporosis self-assessment tool (OST). OST at a cutoff threshold of 4 or Mscore(age-weight) at a cutoff threshold of 9 performed similarly in both of our populations of Caucasian and African American men. In conclusion, a validated Mscore index with 5 variables was only slightly more robust for predicting osteoporosis in older Caucasian men than 2 (independently derived) risk indices based on age and weight. 
Mscore(age-weight) or OST is easy to
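The additive index quoted in the abstract can be transcribed directly; the function and argument names below are illustrative, not from the paper:

```python
def mscore(age_years, weight_lb, gastrectomy=False, emphysema=False,
           two_or_more_fractures=False):
    """Additive Mscore risk index as stated in the abstract; the age and
    weight terms are truncated to integers (7 if 75 yr, 18 if 185 lb)."""
    score = 2 * int(age_years / 10) - int(weight_lb / 10) + 14
    if gastrectomy:
        score += 4
    if emphysema:
        score += 4
    if two_or_more_fractures:
        score += 3
    return score

# A 75-year-old, 185-lb man with no comorbidities scores 2*7 - 18 + 14 = 10,
# above the reported cutoff of 9.
s = mscore(75, 185)
```

Dropping the comorbidity terms recovers the reduced Mscore(age-weight) variant used for the African American cohort.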

  5. Voxel Based Morphometry in Optical Coherence Tomography: Validation & Core Findings.

    Science.gov (United States)

    Antony, Bhavna J; Chen, Min; Carass, Aaron; Jedynak, Bruno M; Al-Louzi, Omar; Solomon, Sharon D; Saidha, Shiv; Calabresi, Peter A; Prince, Jerry L

    2016-02-27

    Optical coherence tomography (OCT) of the human retina is now becoming established as an important modality for the detection and tracking of various ocular diseases. Voxel-based morphometry (VBM) is a long-standing neuroimaging analysis technique that allows for the exploration of regional differences in the brain. Limited work has been done on developing registration-based methods for OCT, which has hampered the advancement of VBM analyses in OCT-based population studies. Following on from our recent development of an OCT registration method, we explore the potential benefits of VBM analysis in cohorts of healthy controls (HCs) and multiple sclerosis (MS) patients. Specifically, we validate the stability of VBM analysis in two pools of HCs, showing no significant difference between the two populations. Additionally, we present a retrospective study of age- and sex-matched HCs and relapsing-remitting MS patients, demonstrating results consistent with the reported literature while providing insight into the retinal changes associated with this MS subtype.

  6. Community-Based Validation of the Social Phobia Screener (SOPHS).

    Science.gov (United States)

    Batterham, Philip J; Mackinnon, Andrew J; Christensen, Helen

    2017-10-01

    There is a need for brief, accurate screening scales for social anxiety disorder to enable better identification of the disorder in research and clinical settings. A five-item social anxiety screener, the Social Phobia Screener (SOPHS), was developed to address this need. The screener was validated in two samples: (a) 12,292 Australian young adults screened for a clinical trial, including 1,687 participants who completed a phone-based clinical interview and (b) 4,214 population-based Australian adults recruited online. The SOPHS (78% sensitivity, 72% specificity) was found to have comparable screening performance to the Social Phobia Inventory (77% sensitivity, 71% specificity) and Mini-Social Phobia Inventory (74% sensitivity, 73% specificity) relative to clinical criteria in the trial sample. In the population-based sample, the SOPHS was also accurate (95% sensitivity, 73% specificity) in identifying Diagnostic and Statistical Manual of Mental Disorders-Fifth edition social anxiety disorder. The SOPHS is a valid and reliable screener for social anxiety that is freely available for use in research and clinical settings.

  7. Validation of GPU based TomoTherapy dose calculation engine.

    Science.gov (United States)

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphic processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architectural difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. Before the commercial release of the GPU dose engine, its accuracy therefore had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two dose engines. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with absolute point dose measurement with an ion chamber and with film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantoms and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine: the majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster-based dose engine without degradation in dose accuracy.
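
    The Γ(1%, 1 mm) criterion above combines a dose-difference tolerance with a distance-to-agreement tolerance; a point passes when Γ ≤ 1. A simplified 1-D, globally normalized sketch of the gamma index (not the dose engine's actual implementation; the profiles below are made up for illustration):

```python
import math

def gamma_1d(ref, evalu, spacing, dose_tol, dist_tol):
    """1-D gamma index of an evaluated dose profile against a reference.

    For each evaluated point: the minimum combined dose-difference /
    distance-to-agreement metric over all reference points.
    dose_tol is a fraction of the maximum reference dose (global
    normalization); dist_tol and spacing are in mm.
    """
    max_ref = max(ref)
    gammas = []
    for i, de in enumerate(evalu):
        best = float('inf')
        for j, dr in enumerate(ref):
            dd = (de - dr) / (dose_tol * max_ref)  # dose term
            dx = (i - j) * spacing / dist_tol      # distance term
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

ref = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]
shifted = ref[1:] + [0.0]   # evaluated profile shifted by one 0.5 mm sample
g = gamma_1d(ref, shifted, spacing=0.5, dose_tol=0.01, dist_tol=1.0)
print(all(v <= 1.0 for v in g))  # True: a 0.5 mm shift passes 1%/1 mm
```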

  8. HOTELLING'S T2 CONTROL CHARTS BASED ON ROBUST ESTIMATORS

    Directory of Open Access Journals (Sweden)

    SERGIO YÁÑEZ

    2010-01-01

    Full Text Available Under the presence of multivariate outliers in a Phase I analysis of a historical data set, the T2 control chart based on the usual sample mean vector and sample variance-covariance matrix performs poorly. Several alternative estimators have been proposed. Among them, estimators based on the minimum volume ellipsoid (MVE) and the minimum covariance determinant (MCD) are powerful in detecting a reasonable number of outliers. In this paper we propose a T2 control chart using the biweight S estimators for the location and dispersion parameters when monitoring multivariate individual observations. Simulation studies show that this method outperforms the T2 control chart based on MVE estimators for a small number of observations.
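
    For reference, the statistic plotted by such charts is T2_i = (x_i - m)' S^{-1} (x_i - m) for a location estimate m and scatter estimate S. A sketch using the classical sample estimators, with robust estimates (MVE, MCD, or the biweight S estimators the paper proposes) left as plug-in arguments:

```python
import numpy as np

def hotelling_t2(X, location=None, scatter=None):
    """T2 statistic for each row of X, given location/scatter estimates.

    Defaults to the classical sample mean and covariance (the poorly
    performing Phase I choice discussed above); robust estimates can be
    supplied via the optional arguments.
    """
    X = np.asarray(X, dtype=float)
    mu = np.mean(X, axis=0) if location is None else np.asarray(location)
    S = np.cov(X, rowvar=False) if scatter is None else np.asarray(scatter)
    diff = X - mu
    # row-wise quadratic form diff_i' S^-1 diff_i
    return np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff)

# five inlying observations plus one gross outlier
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.], [0.5, 0.5], [10., 10.]])
t2 = hotelling_t2(X)
print(int(np.argmax(t2)))  # 5: the outlier gets the largest T2
```

    With the classical estimators the T2 values sum to (n - 1) * p, a handy sanity check; robust estimators deliberately break this identity so that outliers stand out more sharply.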

  9. Development and validation of a method to estimate the potential wind erosion risk in Germany

    Science.gov (United States)

    Funk, Roger; Deumlich, Detlef; Völker, Lidia

    2017-04-01

    The introduction of the Cross Compliance (CC) regulations for soil protection resulted in the demand for a nationwide classification of the wind erosion risk on agricultural areas in Germany. A spatially highly resolved method was needed, based on uniform data sets and validation principles, which provides a fair and equivalent procedure for all affected farmers. A GIS procedure was developed which derives the site-specific wind erosion risk from the main influencing factors: soil texture, wind velocity, wind direction and landscape structure, following the German standard DIN 19706. The procedure enables different approaches in the Federal States and comparable classification results. Here, we present the approach of the Federal State of Brandenburg. In the first step a complete soil data map was composed in a grid size of 10 x 10 m. Data were taken from 1.) the Soil quality Appraisal (scale 1:10.000), 2.) the Medium-scale Soil Mapping (MMK, 1:25.000), 3.) extrapolation of the MMK, 4.) new Soil quality Appraisal (new areas after coal mining). Based on the texture and carbon content, the wind erosion susceptibility was divided into 6 classes. This map was combined with data of the annual average wind velocity, resulting in an increase of the risk classes for wind velocities > 5 m s-1 and a decrease for lower wind velocities. The frequency of wind from each direction was used as a weighting factor and multiplied with the numerical values of the shadowed cells. Depending on the distance to the landscape element, the shadowing effect was combined with the risk classes. The results show that the wind erosion risk is clearly reduced by integrating landscape structures into the risk assessment. After the renewed classification for the entire Federal State, about 60% of the area in the highest, and 40% in the medium risk classes changed into lower classes. The area of the highest potential risk class decreased from 40% to 17% in relation to the total area. A validation of this approach was made by data of the

  10. Web-based Food Behaviour Questionnaire: validation with grades six to eight students.

    Science.gov (United States)

    Hanning, Rhona M; Royall, Dawna; Toews, Jenn E; Blashill, Lindsay; Wegener, Jessica; Driezen, Pete

    2009-01-01

    The web-based Food Behaviour Questionnaire (FBQ) includes a 24-hour diet recall, a food frequency questionnaire, and questions addressing knowledge, attitudes, intentions, and food-related behaviours. The survey has been revised since it was developed and initially validated. The current study was designed to obtain qualitative feedback and to validate the FBQ diet recall. "Think aloud" techniques were used in cognitive interviews with dietitian experts (n=11) and grade six students (n=21). Multi-ethnic students (n=201) in grades six to eight at urban southern Ontario schools completed the FBQ and, subsequently, one-on-one diet recall interviews with trained dietitians. Food group and nutrient intakes were compared. Users provided positive feedback on the FBQ. Suggestions included adding more foods, more photos for portion estimation, and online student feedback. Energy and nutrient intakes were positively correlated between the FBQ and dietitian interviews, overall and by gender and grade (all p<0.001). Intraclass correlation coefficients were ≥0.5 for energy and macronutrients, although the web-based survey underestimated energy (-10.5%) and carbohydrate (-15.6%) intakes (p<0.05). Underestimation of rice and pasta portions on the web accounted for 50% of this discrepancy. The FBQ is valid, relative to 24-hour recall interviews, for dietary assessment in diverse populations of Ontario children in grades six to eight.

  11. Model Stature Estimation formula for Adult Male Nigerians based on ...

    African Journals Online (AJOL)

    r2 = 0.502 P>0.001) and the equation that best predicted stature was a quadratic equation based on the length of this bone. Conclusion: A model stature prediction formula is hereby presented for validation in indigenous adult male Nigerians.

  12. GPM Ground Validation Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS) IFloodS V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS)...

  13. Adaptive Kalman filtering based internal temperature estimation with an equivalent electrical network thermal model for hard-cased batteries

    Science.gov (United States)

    Dai, Haifeng; Zhu, Letao; Zhu, Jiangong; Wei, Xuezhe; Sun, Zechang

    2015-10-01

    The accurate monitoring of battery cell temperature is indispensable to the design of a battery thermal management system. To obtain the internal temperature of a battery cell online, an adaptive temperature estimation method based on Kalman filtering and an equivalent time-variant electrical network thermal (EENT) model is proposed. The EENT model uses electrical components to simulate the battery thermodynamics, and the model parameters are obtained with a least squares algorithm. With a discrete state-space description of the EENT model, a Kalman filtering (KF) based internal temperature estimator is developed. Moreover, considering the possibly time-varying external heat exchange coefficient, a joint Kalman filtering (JKF) based estimator is designed to simultaneously estimate the internal temperature and the external thermal resistance. Several experiments using hard-cased LiFePO4 cells with embedded temperature sensors were conducted to validate the proposed method. Validation results show that the EENT model expresses the battery thermodynamics well, the KF-based temperature estimator tracks the real central temperature accurately even with a poor initialization, and the JKF-based estimator can simultaneously estimate both the central temperature and the external thermal resistance precisely. The maximum estimation errors of the KF- and JKF-based estimators are less than 1.8 °C and 1 °C, respectively.
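
    The filtering idea can be illustrated with a one-node lumped thermal model, C dT/dt = Q - (T - T_amb)/R, in place of the paper's full EENT network; all parameter and noise values below are assumptions for illustration only:

```python
import random

C, R, T_amb, dt = 500.0, 2.0, 25.0, 1.0   # assumed thermal parameters
a = 1.0 - dt / (R * C)                    # discrete state transition
b = dt / C                                # heat-input gain
q_var, r_var = 0.01, 0.25                 # process / measurement noise

def kalman_temperature(measurements, heat, T0=0.0, P0=100.0):
    """Scalar Kalman filter: deliberately poor initial guess T0."""
    T, P, est = T0, P0, []
    for z, Q in zip(measurements, heat):
        # predict with the thermal model
        T = a * T + b * Q + (1 - a) * T_amb
        P = a * P * a + q_var
        # correct with the measured temperature z
        K = P / (P + r_var)
        T = T + K * (z - T)
        P = (1 - K) * P
        est.append(T)
    return est

# simulate a noisy "true" cell temperature under constant heating
random.seed(0)
true_T, heat, meas = 30.0, [], []
for _ in range(200):
    Q = 5.0
    true_T = a * true_T + b * Q + (1 - a) * T_amb + random.gauss(0, 0.1)
    heat.append(Q)
    meas.append(true_T + random.gauss(0, 0.5))

est = kalman_temperature(meas, heat, T0=0.0)
# the estimate recovers despite the 30-degree initialization error
```

    The paper's JKF variant augments the state with the external thermal resistance so both are estimated jointly; the scalar structure above stays the same.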

  14. Estimation of computer-based display complexity in digitalized MCR of NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Inseok; Kim, Chang Hwoi; Jung, Wondea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Hyuong Ju [KEPCO NF, Daejeon (Korea, Republic of)

    2016-10-15

    By adopting new human system interfaces (HSIs) based on computer technologies, the operation environment of main control rooms (MCRs) in nuclear power plants (NPPs) has changed considerably. This change also raises unexpected human factors issues, such as a high level of information density in a limited display area and degraded operator situation awareness due to the complexity of the HSI. In this light, it is necessary to develop a quantitative method to evaluate display complexity and resolve the corresponding human factors issues. Accordingly, the objectives of this study are to develop a quantitative measure of the complexity of computer-based displays and to validate the proposed measure by comparing the estimated complexity with the subjects' performance times on the corresponding displays. Such a measure was proposed and validated in this way. It is expected that the proposed method will be helpful for HSI design (in particular, screen/display design) and for the quantification of performance shaping factors related to HSI/display complexity.

  15. Interpreting Overdiagnosis Estimates in Population-based Mammography Screening

    Science.gov (United States)

    de Gelder, Rianne; Heijnsdijk, Eveline A. M.; van Ravesteyn, Nicolien T.; Fracheboud, Jacques; Draisma, Gerrit; de Koning, Harry J.

    2011-01-01

    Estimates of overdiagnosis in mammography screening range from 1% to 54%. This review explains such variations using the gradual implementation of mammography screening in the Netherlands as an example. Breast cancer incidence without screening was predicted with a micro-simulation model. Observed breast cancer incidence (including ductal carcinoma in situ and invasive breast cancer) was modeled and compared with the predicted incidence without screening during various phases of screening program implementation. Overdiagnosis was calculated as the difference between the modeled number of breast cancers with screening and the predicted number without screening. Estimating overdiagnosis annually between 1990 and 2006 illustrated the importance of the time at which overdiagnosis is measured. Overdiagnosis was also calculated using several estimators identified from the literature. The estimated overdiagnosis rate peaked during the implementation phase of screening, at 11.4% of all predicted cancers in women aged 0–100 years in the absence of screening. At steady-state screening, in 2006, this estimate had decreased to 2.8%. When different estimators were used, the overdiagnosis rate in 2006 ranged from 3.6% (screening age or older) to 9.7% (screening age only). The authors concluded that the estimated overdiagnosis rate in 2006 could vary by a factor of 3.5 when different denominators were used. Calculations based on earlier screening program phases may overestimate overdiagnosis by a factor of 4. Sufficient follow-up and agreement regarding the chosen estimator are needed to obtain reliable estimates. PMID:21709144
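
    The denominator effect described above is purely arithmetic: the same absolute number of overdiagnosed cancers yields very different rates depending on the reference population. With hypothetical counts (not the study's data):

```python
# Hypothetical counts, for illustration only:
observed_with_screening = 1030   # modeled cancers, screening in place
predicted_no_screening  = 1000   # predicted without screening, all ages
predicted_screen_age    = 300    # predicted without screening, screening ages

# overdiagnosis = modeled-with-screening minus predicted-without-screening
overdiagnosed = observed_with_screening - predicted_no_screening  # 30 cancers

rate_all_ages   = overdiagnosed / predicted_no_screening  # denominator: all ages
rate_screen_age = overdiagnosed / predicted_screen_age    # denominator: screen ages
print(round(rate_all_ages, 3), round(rate_screen_age, 3))  # 0.03 0.1
```

    The same 30 excess cancers read as 3% or 10% overdiagnosis, mirroring the roughly 3.5-fold spread between estimators reported above.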

  16. Robust Foot Clearance Estimation Based on the Integration of Foot-Mounted IMU Acceleration Data.

    Science.gov (United States)

    Benoussaad, Mourad; Sijobert, Benoît; Mombaur, Katja; Coste, Christine Azevedo

    2015-12-23

    This paper introduces a method for the robust estimation of foot clearance during walking, using a single inertial measurement unit (IMU) placed on the subject's foot. The proposed solution is based on double integration and drift cancellation of foot acceleration signals. The method is insensitive to misalignment of IMU axes with respect to foot axes. Details are provided regarding calibration and signal processing procedures. Experimental validation was performed on 10 healthy subjects under three walking conditions: normal, fast and with obstacles. Foot clearance estimation results were compared to measurements from an optical motion capture system. The mean error between them is significantly less than 15 % under the various walking conditions.
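
    The core of the method, double integration with drift cancellation between zero-velocity instants, can be sketched as follows (a toy vertical-axis example; the paper's full pipeline also handles IMU-to-foot misalignment and calibration):

```python
def stride_vertical_trajectory(acc, dt):
    """Double-integrate vertical acceleration over one stride.

    Assumes zero velocity and zero height at both ends of the stride
    (foot flat); the linear velocity drift accumulated by the first
    integration is removed before integrating again.
    """
    n = len(acc)
    vel = [0.0]
    for a in acc[1:]:                      # first integration: velocity
        vel.append(vel[-1] + a * dt)
    drift = vel[-1]                        # residual end velocity = drift
    vel = [v - drift * i / (n - 1) for i, v in enumerate(vel)]
    pos = [0.0]
    for v in vel[1:]:                      # second integration: height
        pos.append(pos[-1] + v * dt)
    return pos

# toy stride: up, down, up acceleration plus a constant sensor bias
dt = 0.01
acc = [2.0] * 25 + [-2.0] * 50 + [2.0] * 25
acc = [a + 0.3 for a in acc]               # bias that would otherwise drift
traj = stride_vertical_trajectory(acc, dt)
# the foot rises and returns close to the ground despite the bias
```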

  17. Position Estimation for Switched Reluctance Motor Based on the Single Threshold Angle

    Science.gov (United States)

    Zhang, Lei; Li, Pang; Yu, Yue

    2017-05-01

    This paper presents a position estimation model for the switched reluctance motor based on a single threshold angle. Exploiting the relationship between inductance and rotor position, the position is estimated by comparing the real-time dynamic flux linkage with the flux linkage at the threshold-angle position (7.5° threshold angle, 12/8 SRM). The sensorless model is built in Matlab/Simulink; simulations are carried out under both steady-state and transient conditions, verifying the validity and feasibility of the method.
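
    The comparison can be sketched as: integrate the phase flux linkage from voltage and current, psi = ∫(v - R·i) dt, and flag the instant it reaches a stored flux-versus-current curve for the threshold angle. The resistance value and the lookup table below are assumptions for illustration, not the paper's data:

```python
R_PHASE = 0.5  # assumed phase resistance, ohms

# assumed lookup: flux linkage (Wb) at the threshold angle vs current (A)
THRESHOLD_FLUX = {1.0: 0.02, 2.0: 0.04, 3.0: 0.055}

def threshold_flux(i):
    """Piecewise-linear interpolation of the threshold-angle flux curve."""
    pts = sorted(THRESHOLD_FLUX.items())
    if i <= pts[0][0]:
        return pts[0][1]
    for (i0, f0), (i1, f1) in zip(pts, pts[1:]):
        if i <= i1:
            return f0 + (f1 - f0) * (i - i0) / (i1 - i0)
    return pts[-1][1]

def detect_threshold_crossing(v, i, dt):
    """Return the sample index at which the integrated flux linkage
    reaches the threshold-angle flux for the present current, else None."""
    psi = 0.0
    for k, (vk, ik) in enumerate(zip(v, i)):
        psi += (vk - R_PHASE * ik) * dt   # flux linkage integration
        if psi >= threshold_flux(ik):
            return k                      # rotor at the threshold angle
    return None
```

    Each detected crossing gives one absolute position fix per electrical cycle; speed and position between fixes are obtained by extrapolation.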

  18. Construct validation of judgement-based assessments of medical trainees' competency in the workplace using a "Kanesian" approach to validation

    NARCIS (Netherlands)

    McGill, D.A.; Vleuten, C.P.M. van der; Clarke, M.J.

    2015-01-01

    BACKGROUND: Evaluations of clinical assessments that use judgement-based methods have frequently shown them to have sub-optimal reliability and internal validity evidence for their interpretation and intended use. The aim of this study was to enhance that validity evidence by an evaluation of the

  19. IMPROVING EMISSIONS ESTIMATES WITH COMPUTATIONAL INTELLIGENCE, DATABASE EXPANSION, AND COMPREHENSIVE VALIDATION

    Science.gov (United States)

    The report discusses an EPA investigation of techniques to improve methods for estimating volatile organic compound (VOC) emissions from area sources. Using the automobile refinishing industry for a detailed area source case study, an emission estimation method is being developed...

  20. A Transverse Oscillation Approach for Estimation of Three-Dimensional Velocity Vectors, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev

    2014-01-01

    The 3-D transverse oscillation method is investigated by estimating 3-D velocities in an experimental flow-rig system. Measurements of the synthesized transverse oscillating fields are presented as well. The method employs a 2-D transducer; decouples the velocity estimation; and estimates the axial,...

  1. Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology

    Science.gov (United States)

    García-Barberena, Javier; Ubani, Nora

    2016-05-01

    The present work describes the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters for simulating the model were properly fixed and the simulation of a whole year performed. The results obtained for a complete year of simulation showed very good agreement for the gross and net total electricity production: the estimations for these magnitudes show a 1.47% and 2.02% bias, respectively. The results proved that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.

  2. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
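
    The abstract mentions Maximum Likelihood and Least Square estimation of the Weibull fatigue-life parameters. A self-contained sketch of the maximum-likelihood route on synthetic lifetimes (the iteration scheme and the sample are illustrative, not the paper's):

```python
import math
import random

def fit_weibull_mle(x, iters=200):
    """Maximum-likelihood Weibull fit (shape k, scale lam) via Newton
    iteration on the profile likelihood equation for the shape."""
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)
    k = 1.0
    for _ in range(iters):
        xk = [v ** k for v in x]
        s1 = sum(xk)
        s2 = sum(vk * lv for vk, lv in zip(xk, logs))
        g = s2 / s1 - 1.0 / k - mean_log           # score equation
        s3 = sum(vk * lv * lv for vk, lv in zip(xk, logs))
        dg = (s3 * s1 - s2 * s2) / (s1 * s1) + 1.0 / (k * k)
        k -= g / dg                                # Newton step
    lam = (sum(v ** k for v in x) / len(x)) ** (1.0 / k)
    return k, lam

random.seed(1)
# synthetic "fatigue lives": Weibull(shape=2.0, scale=1000) via inverse CDF
sample = [1000.0 * (-math.log(1.0 - random.random())) ** 0.5
          for _ in range(2000)]
k_hat, lam_hat = fit_weibull_mle(sample)  # close to shape 2.0, scale 1000
```

    The scale parameter lam is the characteristic life (63.2% failure quantile), which is the quantity the abstract relates to the accumulated damage reaching unity.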

  3. Response-Based Estimation of Sea State Parameters

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerical generated time series, and the study shows that filtering has an influence...... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements it is seen that the wave estimations based on closedform expressions exhibit a reasonable energy content, but the distribution of energy...

  4. Validation of Underwater Sensor Package Using Feature Based SLAM

    Directory of Open Access Journals (Sweden)

    Christopher Cain

    2016-03-01

    Full Text Available Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles that would prevent the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for use on low-cost underwater unmanned vehicles. In this paper we propose a sensor package composed of a downward facing camera, which is used to perform feature tracking based visual odometry, and a custom vision-based two dimensional rangefinder that can be used on low cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized particle filter based approach, to validate the sensor package.

  5. Model-based approach for elevator performance estimation

    Science.gov (United States)

    Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.

    2016-02-01

    In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach for monitoring employs the Kalman filter as an observer. A Kalman observer estimates the elevator car acceleration, which determines the elevator ride quality, based solely on the machine control signature and the encoder signal. Five elevator key performance indicators are then calculated based on the estimated car acceleration. The proposed procedure is experimentally evaluated by comparing the key performance indicators calculated from the estimated car acceleration with the values obtained from actual acceleration measurements in a test bench. Finally, the proposed procedure is compared with a sliding mode observer.

  6. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than the data-driven approaches. Position estimation mean squared errors, under both simulated and experimental conditions, were more than twice as large for the optimization-based approaches as for the data-driven ones.
The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  7. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 status from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized, and 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. The area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)

  8. Estimating genetic correlations based on phenotypic data: a ...

    Indian Academy of Sciences (India)

    [Zintzaras E. 2011 Estimating genetic correlations based on phenotypic data: a simulation-based method. J. Genet. 90, 51–58]. Introduction. The evolutionary response to selection is a function of the genetic covariance between characters as well as environmental associations between them (Young and Weiler 1960;.

  9. Estimating Driving Performance Based on EEG Spectrum Analysis

    Directory of Open Access Journals (Sweden)

    Jung Tzyy-Ping

    2005-01-01

    Full Text Available The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by the driver's drowsiness behind the steering wheel have a high fatality rate because of the marked decline in the driver's perception, recognition, and vehicle control abilities while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log subband power spectrum, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrate that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.

  10. Estimating Driving Performance Based on EEG Spectrum Analysis

    Science.gov (United States)

    Lin, Chin-Teng; Wu, Ruei-Cheng; Jung, Tzyy-Ping; Liang, Sheng-Fu; Huang, Teng-Yi

    2005-12-01

    The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by the driver's drowsiness behind the steering wheel have a high fatality rate because of the marked decline in the driver's perception, recognition, and vehicle control abilities while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log subband power spectrum, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrate that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.

  11. A Kalman-based Fundamental Frequency Estimation Algorithm

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    Fundamental frequency estimation is an important task in speech and audio analysis. Harmonic model-based methods typically have superior estimation accuracy. However, such methods usually assume that the fundamental frequency and amplitudes are stationary over a short time frame. In this paper......, we propose a Kalman filter-based fundamental frequency estimation algorithm using the harmonic model, where the fundamental frequency and amplitudes can be truly nonstationary by modeling their time variations as first-order Markov chains. The Kalman observation equation is derived from the harmonic...... model and formulated as a compact nonlinear matrix form, which is further used to derive an extended Kalman filter. Detailed and continuous fundamental frequency and amplitude estimates for speech, the sustained vowel /a/ and solo musical tones with vibrato are demonstrated....

  12. Comparing the standards of one metabolic equivalent of task in accurately estimating physical activity energy expenditure based on acceleration.

    Science.gov (United States)

    Kim, Dohyun; Lee, Jongshill; Park, Hoon Ki; Jang, Dong Pyo; Song, Soohwa; Cho, Baek Hwan; Jung, Yoo-Suk; Park, Rae-Woong; Joo, Nam-Seok; Kim, In Young

    2017-07-01

    The purpose of the study is to analyse how the standard of resting metabolic rate (RMR) affects estimation of the metabolic equivalent of task (MET) using an accelerometer. In order to investigate the effect on estimation according to intensity of activity, comparisons were conducted between 3.5 ml O2 · kg-1 · min-1 and individually measured resting VO2 as the standard of 1 MET. MET was estimated by linear regression equations that were derived through five-fold cross-validation using the two types of MET values and accelerations; the accuracy of estimation was analysed through cross-validation, Bland-Altman plots, and a one-way ANOVA test. There were no significant differences in the RMS error after cross-validation. However, the individual RMR-based estimations showed mean differences of up to 0.5 METs in modified Bland-Altman plots relative to the 3.5 ml O2 · kg-1 · min-1 standard. Finally, the results of the ANOVA test indicated that the individual RMR-based estimations showed fewer significant differences between the reference and estimated values at each intensity of activity. In conclusion, the RMR standard is a factor that affects accurate estimation of METs by acceleration; therefore, the RMR requires individual specification when it is used for estimation of METs using an accelerometer.
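
    The 1-MET standard enters the computation as a simple divisor, which is why its choice matters; a minimal illustration (the VO2 values are made up):

```python
def mets(activity_vo2, rmr=3.5):
    """METs: activity oxygen uptake (ml O2/kg/min) divided by the 1-MET
    reference. rmr defaults to the conventional 3.5 standard but can be
    an individually measured resting VO2, as the study compares."""
    return activity_vo2 / rmr

activity_vo2 = 14.0                           # ml O2/kg/min during activity
print(mets(activity_vo2))                     # 4.0 with the 3.5 standard
print(round(mets(activity_vo2, rmr=2.8), 2))  # 5.0 with a lower measured RMR
```

    The same measured uptake shifts by a full MET between the two standards here, larger than the 0.5-MET mean differences reported above.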

  13. Validation of HOAPS- and ERA-Interim precipitation estimates over the ocean

    Science.gov (United States)

    Bumke, Karl; Schröder, Marc; Fennig, Karsten

    2014-05-01

    Although precipitation is one of the key parameters of the global hydrological cycle, there are still large gaps in the global observation networks, especially over the oceans. However, progress in satellite technology has made it possible to retrieve global data sets from space, including precipitation. Levizzani et al. (2007) showed that precipitation over the oceans can be derived with sufficient accuracy from passive microwave radiometry. Advances in analysis techniques have also improved our knowledge of global precipitation. On the other hand, e.g. Andersson et al. (2011) and Pfeifroth et al. (2012) pointed out that even state-of-the-art satellite retrievals and reanalysis data sets still disagree on global and regional precipitation amounts, patterns, variability and temporal behavior when compared to observations. This creates the need for a validation study over data-sparse areas. Within this study, a validation of HOAPS-3.0 (Hamburg Ocean Atmosphere Parameters and fluxes from Satellite Data) based precipitation at pixel-level resolution and of ERA-Interim reanalysis data for 1995-1997 is performed, mainly over the Atlantic Ocean, using information from ship rain gauges and optical disdrometers mounted on board research vessels. The satellite and ERA-Interim data are compared to the in situ measurements using the nearest-neighbor approach. It must therefore be ensured that both observations are related to each other, which can be determined from the decorrelation lengths in space and time. A total of 658 precipitation events, including 127 snow events, are at our disposal. The statistical analysis follows the recommendations given by the World Meteorological Organization (WMO) for dichotomous or binary forecasts (WWRP/WGNE: http://www.cawcr.gov.au/projects/verification/#Methods_for_dichotomous_forecasts). Based on contingency tables, a number of statistical parameters such as the accuracy, the bias, the false alarm rate, success ratio or
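    The contingency-table scores named in the abstract can be sketched as follows; the definitions follow the standard WMO conventions for dichotomous forecasts, and the counts are illustrative:

```python
def verification_scores(hits, false_alarms, misses, correct_negatives):
    """Standard WMO dichotomous (yes/no) verification scores computed
    from a 2x2 contingency table of predicted vs. observed events."""
    total = hits + false_alarms + misses + correct_negatives
    return {
        # fraction of all cases classified correctly
        "accuracy": (hits + correct_negatives) / total,
        # ratio of predicted to observed events (1 = unbiased)
        "bias": (hits + false_alarms) / (hits + misses),
        # fraction of observed non-events wrongly flagged
        "false_alarm_rate": false_alarms / (false_alarms + correct_negatives),
        # fraction of predicted events that were observed
        "success_ratio": hits / (hits + false_alarms),
    }

# Illustrative counts, not the study's results:
scores = verification_scores(hits=40, false_alarms=10,
                             misses=20, correct_negatives=30)
print(scores)
```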

  14. Validation of a Mexican food photograph album as a tool to visually estimate food amounts in adolescents.

    Science.gov (United States)

    Bernal-Orozco, M Fernanda; Vizmanos-Lamotte, Barbara; Rodríguez-Rocha, Norma P; Macedo-Ojeda, Gabriela; Orozco-Valerio, María; Rovillé-Sausse, Françoise; León-Estrada, Sandra; Márquez-Sandoval, Fabiola; Fernández-Ballart, Joan D

    2013-03-14

    The aim of the present study was to validate a food photograph album (FPA) as a tool to visually estimate food amounts, and to compare this estimation with that attained through the use of measuring cups (MC) and food models (FM). We tested 163 foods over fifteen sessions (thirty subjects/session; 10-12 foods presented in two portion sizes, 20-24 plates/session). In each session, subjects estimated food amounts with the assistance of FPA, MC and FM. We compared (by portion and method) the mean estimated weight and the mean real weight. We also compared the percentage error estimation for each portion, and the mean food percentage error estimation between methods. In addition, we determined the percentage error estimation of each method. We included 463 adolescents from three public high schools (mean age 17·1 (sd 1·2) years, 61·8 % females). All foods were assessed using FPA, 53·4 % of foods were assessed using MC, and FM was used for 18·4 % of foods. The mean estimated weight with all methods was statistically different compared with the mean real weight for almost all foods. However, a lower percentage error estimation was observed using FPA (2·3 v. 56·9 % for MC and 325 % for FM, P< 0·001). Also, when analysing error rate ranges between methods, there were more observations (P< 0·001) with estimation errors higher than 40 % with the MC (56·1 %), than with the FPA (27·5 %) and FM (44·9 %). In conclusion, although differences between estimated and real weight were statistically significant for almost all foods, comparisons between methods showed FPA to be the most accurate tool for estimating food amounts.
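    The percentage error estimation used to compare the three methods reduces to a simple relative error per portion; a minimal sketch with illustrative weights:

```python
def percent_error(estimated_g, real_g):
    """Percentage estimation error of a visually estimated portion
    weight relative to its real (weighed) value."""
    return abs(estimated_g - real_g) / real_g * 100.0

# Hypothetical portion: real weight 120 g, estimated at 126 g
print(percent_error(126.0, 120.0))  # 5% error
```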

  15. Fast LCMV-based Methods for Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll

    2013-01-01

    as such either the classic time domain averaging covariance matrix estimator, or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity...... with several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can...

  16. A Kalman-based Fundamental Frequency Estimation Algorithm

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    Fundamental frequency estimation is an important task in speech and audio analysis. Harmonic model-based methods typically have superior estimation accuracy. However, such methods usually assume that the fundamental frequency and amplitudes are stationary over a short time frame. In this paper...... model and formulated as a compact nonlinear matrix form, which is further used to derive an extended Kalman filter. Detailed and continuous fundamental frequency and amplitude estimates for speech, the sustained vowel /a/ and solo musical tones with vibrato are demonstrated....

  17. Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis

    Science.gov (United States)

    Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel

    2013-01-01

    This article proposes a virtual sensor for piecewise linear systems based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, it presents an active-mode detector for the case where the commutation sequences of each linear subsystem are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow their outputs to be estimated. A methodology for testing the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to show the obtained results. PMID:23447007

  18. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models gradually become indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering the dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on the connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experiment results prove the effectiveness of the travel time estimation method.

  19. Time of arrival based location estimation for cooperative relay networks

    KAUST Repository

    Çelebi, Hasari Burak

    2010-09-01

    In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive Cramer-Rao lower bound (CRLB) for the location estimates using the relay network. The analysis is extended to obtain average CRLB considering the signal fluctuations in both relay and direct links. The effects of the channel fading of both relay and direct links and amplification factor and location of the relay node on average CRLB are investigated. Simulation results show that the channel fading of both relay and direct links and amplification factor and location of relay node affect the accuracy of TOA based location estimation. ©2010 IEEE.
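    For context, the classic single-link TOA ranging bound (not the paper's relay extension, which is derived in the paper itself) relates achievable accuracy to effective signal bandwidth and SNR; a sketch under that standard formula:

```python
import math

def toa_range_crlb_std(snr_linear, beta_hz, c=3e8):
    """Lower bound (standard deviation, metres) on TOA-based range
    estimation over a single direct link:
        sqrt(CRLB) = c / (2*sqrt(2)*pi*beta*sqrt(SNR)),
    where beta is the effective signal bandwidth in Hz. This is the
    textbook single-path result; relay links modify it as analysed
    in the paper."""
    return c / (2 * math.sqrt(2) * math.pi * beta_hz * math.sqrt(snr_linear))

# 20 dB SNR (=100 linear) and 1 MHz effective bandwidth:
print(toa_range_crlb_std(100, 1e6))  # ~3.4 m lower bound
```

    Doubling the bandwidth or quadrupling the SNR halves the bound, which is why both fading on the links and the relay amplification factor affect accuracy.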

  20. Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles.

    Science.gov (United States)

    Cortés, Camilo; Unzueta, Luis; de Los Reyes-Guzmán, Ana; Ruiz, Oscar E; Flórez, Julián

    2016-01-01

    In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, method accuracy is adequate for RAR.

  1. Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles

    Directory of Open Access Journals (Sweden)

    Camilo Cortés

    2016-01-01

    Full Text Available In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, method accuracy is adequate for RAR.

  2. On the Relationships between Sum Score Based Estimation and Joint Maximum Likelihood Estimation

    Science.gov (United States)

    del Pino, Guido; San Martin, Ernesto; Gonzalez, Jorge; De Boeck, Paul

    2008-01-01

    This paper analyzes the sum score based (SSB) formulation of the Rasch model, where items and sum scores of persons are considered as factors in a logit model. After reviewing the evolution leading to the equality between their maximum likelihood estimates, the SSB model is then discussed from the point of view of pseudo-likelihood and of…

  3. A novel SURE-based criterion for parametric PSF estimation.

    Science.gov (United States)

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSF, involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  4. Pros, Cons, and Alternatives to Weight Based Cost Estimating

    Science.gov (United States)

    Joyner, Claude R.; Lauriem, Jonathan R.; Levack, Daniel H.; Zapata, Edgar

    2011-01-01

    Many cost estimating tools use weight as a major parameter in projecting the cost. This is often combined with modifying factors such as complexity, technical maturity of design, environment of operation, etc. to increase the fidelity of the estimate. For a set of conceptual designs, all meeting the same requirements, increased weight can be a major driver in increased cost. However, once a design is fixed, increased weight generally decreases cost, while decreased weight generally increases cost - and the relationship is not linear. Alternative approaches to estimating cost without using weight (except perhaps for materials costs) have been attempted to try to produce a tool usable throughout the design process - from concept studies through development. This paper will address the pros and cons of using weight based models for cost estimating, using liquid rocket engines as the example. It will then examine approaches that minimize the impact of weight based cost estimating. The Rocket Engine Cost Model (RECM) is an attribute based model developed internally by Pratt & Whitney Rocketdyne for NASA. RECM will be presented primarily to show a successful method to use design and programmatic parameters instead of weight to estimate both design and development costs and production costs. An operations model developed by KSC, the Launch and Landing Effects Ground Operations model (LLEGO), will also be discussed.

  5. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    Directory of Open Access Journals (Sweden)

    S. Chandra Sekhar

    2004-09-01

    Full Text Available We address the problem of estimating the instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratios (SNR).
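    The core idea behind zero-crossing-based frequency estimation, before the polynomial modeling and adaptive windowing described above, can be sketched for a constant-frequency sinusoid (a deliberate simplification of the paper's method):

```python
import math

def zero_crossing_frequency(signal, fs):
    """Estimate the average frequency of a real sinusoid from its
    zero-crossing count: a sinusoid crosses zero twice per cycle,
    so f ~= crossings / (2 * duration)."""
    crossings = sum(
        1 for a, b in zip(signal, signal[1:])
        if a * b < 0 or (a == 0 and b != 0)
    )
    duration = (len(signal) - 1) / fs
    return crossings / (2.0 * duration)

# One second of a 100 Hz tone sampled at 8 kHz:
fs = 8000.0
tone = [math.sin(2 * math.pi * 100 * n / fs) for n in range(8000)]
print(zero_crossing_frequency(tone, fs))  # close to 100 Hz
```

    For a time-varying IF, this average is computed over short windows, and the paper's contribution is choosing the window length adaptively to trade bias against variance.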

  6. Comparison between Human and Bite-Based Methods of Estimating Caloric Intake.

    Science.gov (United States)

    Salley, James N; Hoover, Adam W; Wilson, Michael L; Muth, Eric R

    2016-10-01

    Current methods of self-monitoring kilocalorie intake outside of laboratory/clinical settings suffer from a systematic underreporting bias. Recent efforts to make kilocalorie information available have improved these methods somewhat, but it may be possible to derive an objective and more accurate measure of kilocalorie intake from bite count. This study sought to develop and examine the accuracy of an individualized bite-based measure of kilocalorie intake and to compare that measure to participant estimates of kilocalorie intake. It was hypothesized that kilocalorie information would improve human estimates of kilocalorie intake over those with no information, but a bite-based estimate of kilocalorie intake would still outperform human estimates. Two-hundred eighty participants were allowed to eat ad libitum in a cafeteria setting. Their bite count and kilocalorie intake were measured. After completion of the meal, participants estimated how many kilocalories they consumed, some with the aid of a menu containing kilocalorie information and some without. Using a train and test method for predictive model development, participants were randomly divided into one of two groups: one for model development (training group) and one for model validation (test group). Multiple regression was used to determine whether height, weight, age, sex, and waist-to-hip ratio could predict an individual's mean kilocalories per bite for the training sample. The model was then validated with the test group, and the model-predicted kilocalorie intake was compared with human-estimated kilocalorie intake. Only age and sex significantly predicted mean kilocalories per bite, but all variables were retained for the test group. The bite-based measure of kilocalorie intake outperformed human estimates with and without kilocalorie information. Bite count might serve as an easily measured, objective proxy for kilocalorie intake. 
A tool that can monitor bite count may be a powerful assistant to
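    A bite-based intake estimate of the kind described multiplies a per-person mean kilocalories-per-bite value, predicted from demographics, by the observed bite count. The coefficients below are illustrative placeholders, not the study's fitted values (the study found only age and sex to be significant predictors):

```python
def predicted_intake_kcal(bite_count, age_years, is_male,
                          b0=6.0, b_age=0.04, b_male=2.0):
    """Bite-based kilocalorie estimate: predict an individual's mean
    kilocalories per bite from age and sex via a linear model
    (hypothetical coefficients), then scale by bite count."""
    kcal_per_bite = b0 + b_age * age_years + b_male * (1 if is_male else 0)
    return kcal_per_bite * bite_count

# Hypothetical 25-year-old male who took 50 bites:
print(predicted_intake_kcal(bite_count=50, age_years=25, is_male=True))
```

    The appeal of the approach is that bite count is objectively measurable, sidestepping the systematic underreporting bias of self-estimates.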

  7. Snow Water Equivalent estimation based on satellite observation

    Science.gov (United States)

    Macchiavello, G.; Pesce, F.; Boni, G.; Gabellani, S.

    2009-09-01

    The availability and analysis of remotely sensed images is a powerful tool for monitoring the extent and type of snow cover over terrain where in situ measurements are often difficult. Information on snow is fundamental for monitoring and forecasting the available water, above all in mid-latitude regions such as the Mediterranean, where snowmelt may cause floods. The requirements of hydrological models and the daily acquisitions of MODIS (Moderate Resolution Imaging Spectroradiometer) led, in previous research activities, to the development of a method to automatically map snow cover from multi-spectral images. However, the major hydrological parameter of the snow pack is the Snow Water Equivalent (SWE), which represents a direct measure of the water stored in the basin. This work therefore focused on the daily estimation of SWE from MODIS images. This aim is complex when based only on optical data, and no solution is reported in the literature, since no direct relation between the MODIS spectral information and SWE can be extracted. A new method, respecting the physics of snow, was therefore defined and developed. Since the snow water equivalent is the product of three factors, snow density, snow depth and snow covered area, the proposed approach treats each of these physical quantities separately. Snow density is a function of snow age, so a new method to evaluate snow age was devised: a module simulates snow age from albedo information, using an age counter, updated by new-snow information, that runs from zero accumulation to the end of the melting season. The depth of the snow pack can be retrieved by adopting a relation between vegetation and snow depth distributions. This computes the snow height distribution from the relation between snow cover fraction and the

  8. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    Directory of Open Access Journals (Sweden)

    Huashan Li

    2014-01-01

    Full Text Available Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on the Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China.
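    The underlying Hargreaves-Samani formulation estimates global radiation from the daily temperature range and extraterrestrial radiation. A sketch of the basic HS form (not the paper's new modification) with a commonly cited inland coefficient:

```python
import math

def hargreaves_samani_radiation(t_max, t_min, ra, k_rs=0.16):
    """Hargreaves-Samani estimate of global solar radiation:
        Rs = k_rs * sqrt(Tmax - Tmin) * Ra,
    where Ra is extraterrestrial radiation (Rs is returned in the
    same units as Ra, e.g. MJ m-2 day-1) and k_rs is an empirical
    coefficient, typically ~0.16 inland and ~0.19 for coastal sites."""
    return k_rs * math.sqrt(t_max - t_min) * ra

# Hypothetical month: Tmax 28 C, Tmin 16 C, Ra 38 MJ m-2 day-1
print(hargreaves_samani_radiation(28.0, 16.0, 38.0))  # ~21.1 MJ m-2 day-1
```

    The modifications compared in the paper adjust how the temperature range enters this relation; the appeal of the whole family is that only air temperature records are needed.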

  9. Is Earth-based scaling a valid procedure for calculating heat flows for Mars?

    Science.gov (United States)

    Ruiz, Javier; Williams, Jean-Pierre; Dohm, James M.; Fernández, Carlos; López, Valle

    2013-09-01

    Heat flow is a very important parameter for constraining the thermal evolution of a planetary body. Several procedures for calculating heat flows for Mars from geophysical or geological proxies have been used, which are valid for the time when the structures used as indicators were formed. The more common procedures are based on estimates of lithospheric strength (the effective elastic thickness of the lithosphere or the depth to the brittle-ductile transition). On the other hand, several works by Kargel and co-workers have estimated martian heat flows by scaling the present-day terrestrial heat flow to Mars, but the so-obtained values are much higher than those deduced from lithospheric strength. In order to explain the discrepancy, a recent paper by Rodriguez et al. (Rodriguez, J.A.P., Kargel, J.S., Tanaka, K.L., Crown, D.A., Berman, D.C., Fairén, A.G., Baker, V.R., Furfaro, R., Candelaria, P., Sasaki, S. [2011]. Icarus 213, 150-194) criticized the heat flow calculations for ancient Mars presented by Ruiz et al. (Ruiz, J., Williams, J.-P., Dohm, J.M., Fernández, C., López, V. [2009]. Icarus 207, 631-637) and other studies calculating ancient martian heat flows from lithospheric strength estimates, and cast doubt on the validity of the results obtained by these works. Here however we demonstrate that the discrepancy is due to computational and conceptual errors made by Kargel and co-workers, and we conclude that the scaling from terrestrial heat flow values is not a valid procedure for estimating reliable heat flows for Mars.

  10. A Channelization-Based DOA Estimation Method for Wideband Signals.

    Science.gov (United States)

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-07-04

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.

  11. A Channelization-Based DOA Estimation Method for Wideband Signals

    Science.gov (United States)

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  12. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    Full Text Available In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.

  13. Perceptually Valid Facial Expressions for Character-Based Applications

    Directory of Open Access Journals (Sweden)

    Ali Arya

    2009-01-01

    Full Text Available This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aiming at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions to dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”

  14. In Vivo Validation of a Blood Vector Velocity Estimator with MR Angiography

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten

    2009-01-01

    Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound beam direction. This implies that a Doppler angle under examination close to 90° results in unreliable information about the true blood direction and blood velocity. The novel method...... transverse oscillation (TO), which combines estimates of the axial and the transverse velocity components in the scan plane, makes it possible to estimate the vector velocity of the blood regardless of the Doppler angle. The present study evaluates the TO method with magnetic resonance phase contrast...

  15. In vivo validation of a blood vector velocity estimator with MR angiography

    DEFF Research Database (Denmark)

    Hansen, K.L.; Udesen, J.; Thomsen, C.

    2009-01-01

    Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound beam direction. This implies that a Doppler angle under examination close to 90 degrees results in unreliable information about the true blood direction and blood velocity. The novel...... method transverse oscillation (TO), which combines estimates of the axial and the transverse velocity components in the scan plane, makes it possible to estimate the vector velocity of the blood regardless of the Doppler angle. The present study evaluates the TO method with magnetic resonance phase...

  16. Validation of Nutrient Intake Estimates Derived Using a Semi-Quantitative FFQ against 3 Day Diet Records in the Baltimore Longitudinal Study of Aging.

    Science.gov (United States)

    Talegawkar, S A; Tanaka, T; Maras, J E; Ferrucci, L; Tucker, K L

    2015-12-01

    To examine the relative validity of a multicultural FFQ used to derive nutrient intake estimates in a community-dwelling cohort of younger and older men and women, compared with estimates derived from 3 day (3d) diet records during the same time frame. Cross-sectional analyses. The Baltimore Longitudinal Study of Aging (BLSA), conducted in the Baltimore, MD and District of Columbia areas. A subset (n=468, aged 26 to 95 years (y), 47% female, 65% non-Hispanic white) from the BLSA, with complete data for nutrient estimates from a FFQ and 3d diet records. Pearson's correlation coefficients (energy adjusted and de-attenuated) for intakes of energy and 26 nutrients estimated from the FFQ and the mean of 3d diet records were calculated in a cross-sectional analysis. Rankings of individuals based on the FFQ for various nutrient intakes were compared to corresponding rankings based on the average of the 3d diet records. Bland-Altman plots were examined for a visual representation of agreement between the two assessment methods. All analyses were stratified by sex and age (above and below 65 y). Median nutrient intake estimates tended to be higher from the FFQ than from the average of the 3d diet records. Energy-adjusted and de-attenuated correlations between FFQ intake estimates and records ranged from 0.23 (sodium intake in men) to 0.81 (alcohol intake in women). The FFQ classified more than 70 percent of participants into either the same or an adjacent quartile category for all nutrients examined. Bland-Altman plots demonstrated good agreement between the assessment methods for most nutrients. This FFQ provides reasonably valid estimates of the dietary intakes of younger and older participants of the BLSA.
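The Bland-Altman agreement check mentioned above is straightforward to reproduce. The sketch below (with purely synthetic intake values, not BLSA data) computes the mean bias and 95% limits of agreement between two dietary assessment methods:

```python
import numpy as np

def bland_altman(ffq, records):
    """Bland-Altman statistics: mean bias and 95% limits of agreement
    between two measurement methods (here FFQ vs. averaged diet records)."""
    ffq = np.asarray(ffq, dtype=float)
    records = np.asarray(records, dtype=float)
    diff = ffq - records                 # per-subject method difference
    mean = (ffq + records) / 2.0         # per-subject mean (x-axis of the plot)
    bias = diff.mean()                   # systematic offset between methods
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return mean, diff, bias, loa

# Illustrative (synthetic) intakes, mg/day: the FFQ overestimates by ~10%
rng = np.random.default_rng(0)
true = rng.uniform(500, 1500, 100)
ffq = true * 1.1 + rng.normal(0, 50, 100)
rec = true + rng.normal(0, 50, 100)
_, _, bias, (lo, hi) = bland_altman(ffq, rec)
print(bias, lo, hi)
```

Plotting `mean` against `diff`, with horizontal lines at `bias` and the two limits, gives the standard Bland-Altman plot used in the study.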

  17. HydrogeoEstimatorXL: an Excel-based tool for estimating hydraulic gradient magnitude and direction

    Science.gov (United States)

    Devlin, J. F.; Schillig, P. C.

    2017-05-01

    HydrogeoEstimatorXL is a free software tool for the interpretation of flow systems based on spatial hydrogeological field data from multi-well networks. It runs on the familiar Excel spreadsheet platform. The program accepts well location coordinates and hydraulic head data, and returns an analysis of the area's flow system in two dimensions based on (1) a single best-fit plane of the potentiometric surface and (2) three-point estimators, i.e., well triplets assumed to bound planar sections of the potentiometric surface. The software produces graphical outputs including histograms of hydraulic gradient magnitude and direction, groundwater velocity (based on site-average hydraulic properties), as well as mapped renditions of the estimator triangles and the velocity vectors associated with them. Within the software, a transect can be defined and the mass discharge of a groundwater contaminant crossing the transect can be estimated. This kind of analysis is helpful in gaining an overview of a site's hydrogeology, for problem definition, and as a review tool to check the reasonableness of other, independent calculations.
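The three-point estimator idea (a planar potentiometric surface fit through a well triplet) can be sketched as follows; the azimuth convention and function name are illustrative, not necessarily those of HydrogeoEstimatorXL:

```python
import numpy as np

def three_point_gradient(wells):
    """Hydraulic gradient from a three-well triplet, assuming the
    potentiometric surface is planar over the triangle.
    wells: list of (x, y, head) tuples (x east, y north, head in m).
    Returns (magnitude, azimuth_deg), azimuth measured clockwise from
    north toward decreasing head (the flow direction)."""
    (x1, y1, h1), (x2, y2, h2), (x3, y3, h3) = wells
    # Fit the plane h = a*x + b*y + c through the three points.
    A = np.array([[x1, y1, 1.0], [x2, y2, 1.0], [x3, y3, 1.0]])
    a, b, _ = np.linalg.solve(A, np.array([h1, h2, h3]))
    magnitude = np.hypot(a, b)
    # Flow is down-gradient, i.e. along -(a, b).
    azimuth = np.degrees(np.arctan2(-a, -b)) % 360.0
    return magnitude, azimuth

# Head falls 1 m per 100 m toward the east: gradient 0.01, flow azimuth 90°
mag, az = three_point_gradient([(0, 0, 10.0), (100, 0, 9.0), (0, 100, 10.0)])
print(mag, az)
```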

  18. Facing the estimation of effective population size based on molecular markers: comparison of estimators

    DEFF Research Database (Denmark)

    Jimenez Mena, Belen; Verrier, Etienne; Hospital, Frederic

    We performed a simulation study of several estimators of the effective population size (Ne): NeH, an estimator based on the rate of decrease in heterozygosity; NeT, an estimator based on the temporal method; and NeLD, a linkage disequilibrium-based method. We first focused on NeH, which presented an increase in the variability of values over time. The distance from the mean and the median to the true Ne increased over time too. This was caused by the fixation of alleles through time due to genetic drift and the changes in the distribution of allele frequencies. We then compared the three estimators of Ne under scenarios of 3 and 20 bi-allelic loci. Increasing the number of loci largely improved the performance of NeT and NeLD. We highlight the value of NeT and NeLD when large numbers of bi-allelic loci are available, as is nowadays the case for SNP markers.
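For reference, the drift expectation underlying an NeH-type estimator can be sketched as follows, assuming the standard Wright-Fisher decay H_t = H_0(1 - 1/(2Ne))^t; the study's exact implementation may differ:

```python
import numpy as np

def ne_from_heterozygosity(h0, ht, t):
    """Effective population size from the decline in expected heterozygosity,
    inverting H_t = H_0 * (1 - 1/(2*Ne))**t:
        Ne = 1 / (2 * (1 - (H_t/H_0)**(1/t)))."""
    ratio = (ht / h0) ** (1.0 / t)
    return 1.0 / (2.0 * (1.0 - ratio))

# Sanity check: generate the expected decline for Ne = 50 over 20 generations,
# then verify the estimator recovers the true value.
ne_true, t, h0 = 50.0, 20, 0.5
ht = h0 * (1.0 - 1.0 / (2.0 * ne_true)) ** t
ne_hat = ne_from_heterozygosity(h0, ht, t)
print(ne_hat)
```

In a real simulation, sampling noise in the observed heterozygosities is what drives the growing variability of NeH that the study reports.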

  19. Human Joint Angle Estimation with Inertial Sensors and Validation with A Robot Arm.

    Science.gov (United States)

    El-Gohary, Mahmoud; McNames, James

    2015-07-01

    Traditionally, human movement has been captured primarily by motion capture systems. These systems are costly, require fixed cameras in a controlled environment, and suffer from occlusion. Recently, the availability of low-cost wearable inertial sensors containing accelerometers, gyroscopes, and magnetometers has provided an alternative means to overcome the limitations of motion capture systems. Wearable inertial sensors can be used anywhere, cannot be occluded, and are low cost. Several groups have described algorithms for tracking human joint angles. We previously described a novel approach based on a kinematic arm model and the Unscented Kalman Filter (UKF). Our proposed method used a minimal sensor configuration, with one sensor on each segment. This paper reports significant improvements in both the algorithm and its assessment. The new model incorporates gyroscope and accelerometer random drift models, imposes physical constraints on the range of motion for each joint, and uses zero-velocity updates to mitigate the effect of sensor drift. A high-precision industrial robot arm precisely quantifies the performance of the tracker during slow, normal, and fast movements over continuous 15-min recording durations. The agreement between the estimated angles from our algorithm and the high-precision robot arm reference was excellent. On average, the tracker attained an RMS angle error of about 3° for all six angles. The UKF performed slightly better than the more common Extended Kalman Filter.
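The zero-velocity-update idea can be illustrated with a deliberately simplified 1-D tracker: integrate the gyro rate, and re-estimate the gyro bias whenever the recent rates stay below a rest threshold. This is a sketch under strong assumptions (constant bias, threshold-based rest detection), not the paper's UKF with a kinematic arm model:

```python
import numpy as np

def track_angle(gyro, dt, rest_thresh=0.02, rest_len=50):
    """Integrate a 1-D rate-gyro signal into an angle, re-estimating the
    gyro bias whenever a zero-velocity interval is detected.
    gyro: measured angular rate (rad/s); dt: sample period (s)."""
    bias, angle, out = 0.0, 0.0, []
    for k in range(len(gyro)):
        window = gyro[max(0, k - rest_len):k + 1]
        # If the recent rates are all tiny, assume the joint is at rest and
        # attribute the residual mean rate to sensor bias.
        if k >= rest_len and np.all(np.abs(window) < rest_thresh):
            bias = window.mean()
        angle += (gyro[k] - bias) * dt
        out.append(angle)
    return np.array(out)

# Demo: rest, a 1-rad swing (1 rad/s for 1 s), rest again, with a
# constant +0.01 rad/s gyro bias added to every sample.
dt = 0.01
rate = np.zeros(500)
rate[200:300] = 1.0
angles = track_angle(rate + 0.01, dt)
print(angles[-1])
```

Without the bias re-estimation the same recording would drift by 0.01 rad/s for the full 5 s; with it, the residual error comes only from the samples before the first rest interval is detected.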

  20. Parametric validations of analytical lifetime estimates for radiation belt electron diffusion by whistler waves

    Directory of Open Access Journals (Sweden)

    A. V. Artemyev

    2013-04-01

    Full Text Available The lifetimes of electrons trapped in Earth's radiation belts can be calculated from quasi-linear pitch-angle diffusion by whistler-mode waves, provided that their frequency spectrum is broad enough and/or their average amplitude is not too large. Extensive comparisons between improved analytical lifetime estimates and full numerical calculations have been performed in a broad parameter range representative of a large part of the magnetosphere, from L ~ 2 to 6. The effects of observed very oblique whistler waves are taken into account in both numerical and analytical calculations. Analytical lifetimes (and pitch-angle diffusion coefficients) are found to be in good agreement with full numerical calculations based on CRRES and Cluster hiss and lightning-generated wave measurements inside the plasmasphere, and on Cluster lower-band chorus wave measurements in the outer belt, for electron energies ranging from 100 keV to 5 MeV. Comparisons with lifetimes recently obtained from electron flux measurements on SAMPEX, SCATHA, SAC-C and DEMETER also show reasonable agreement.

  1. DEVELOPMENT AND VALIDATION OF RP-HPLC METHOD FOR SIMULTANEOUS ESTIMATION OF IVERMECTIN AND CLORSULON IN IVERCAM INJECTION

    OpenAIRE

    Vegad Kunjal L*, Paranjape Dipty B., Shah Dhwani A., Patel Ekta D., Patel Yogesh K., Patel Kaushik R.

    2017-01-01

    A precise, simple, accurate and selective method was developed and validated for the estimation of Ivermectin and Clorsulon in Ivercam injection. A reversed-phase high performance liquid chromatographic (RP-HPLC) method was developed for routine quantification of Ivermectin and Clorsulon in laboratory-prepared mixtures as well as in the combined dosage form. Chromatographic separation was achieved on a BDS hypersil C18 column (5μ, 250 x 4.6 mm) utilizing a mobile phase of a filtered and degassed mixture of 60 phos...

  2. Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.

    Science.gov (United States)

    Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R

    2017-06-01

    Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.

  3. Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM

    Science.gov (United States)

    Sheng, Hanlin; Zhang, Tianhong

    2017-08-01

    In view of the need for a highly precise and reliable thrust estimator to achieve direct thrust control of aircraft engines, a GSA-LSSVM-based thrust estimator design is proposed, built on support vector regression (SVR), the least squares support vector machine (LSSVM), and a new optimization algorithm, the gravitational search algorithm (GSA), through integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and yields a model with better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfills the need for direct thrust control of aircraft engines.
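For reference, the LSSVM regression core reduces to a single linear system. The sketch below fixes the RBF hyperparameters instead of tuning them with GSA (the search step is omitted), and all names and values are illustrative:

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=0.8):
    """Least-squares SVM regression with an RBF kernel, solved in closed form:
        [ 0      1^T        ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma   ] [ alpha ] = [ y ]
    gamma and sigma are the hyperparameters that GSA would tune in the paper;
    here they are simply fixed."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    def predict(Xq):
        d2q = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2q / (2 * sigma ** 2)) @ alpha + b

    return predict

# Toy regression target standing in for a thrust map
X = np.linspace(0, 6, 40).reshape(-1, 1)
y = np.sin(X).ravel()
predict = lssvm_fit(X, y)
err = np.max(np.abs(predict(X) - y))
print(err)
```

A metaheuristic such as GSA (or PSO, as in the paper's comparison) would wrap this fit in an outer loop that searches over `gamma` and `sigma` to minimize validation error.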

  4. Validation of hospital register-based diagnosis of Parkinson's disease

    DEFF Research Database (Denmark)

    Wermuth, Lene; Lassen, Christina Funch; Himmerslev, Liselotte

    2012-01-01

    Denmark has a long-standing tradition of maintaining one of the world's largest specialized health-science register databases, the National Hospital Register (NHR). To estimate the prevalence and incidence of diseases, the correctness of the recorded diagnoses is critical. Parkinson's disease...... (PD) is a neurodegenerative disorder, and only 75-80% of patients with parkinsonism will have idiopathic PD (iPD). It is necessary to follow patients in order to determine whether some of them will develop other neurodegenerative diseases and a one-time-only diagnostic code for iPD reported in the register......

  5. Training Based Channel Estimation for Multitaper GFDM System

    Directory of Open Access Journals (Sweden)

    Shravan Kumar Bandari

    2017-01-01

    Full Text Available Recent activities in the cellular network world clearly show the need to design new physical layer waveforms in order to meet future wireless requirements. Generalized Frequency Division Multiplexing (GFDM) is one of the leading candidates for 5G, and one of its key features is the use of circular pulse shaping of subcarriers to remove prototype filter transients. Due to the nonorthogonal nature of the conventional GFDM system, inherent interference will adversely affect channel estimation. With Discrete Prolate Spheroidal Sequences (DPSSs), or multitapers, as prototype filters, an improved orthogonal GFDM system can be developed. In this work, we investigate channel estimation methods for multitaper GFDM (MGFDM) systems with and without the Discrete Fourier Transform (DFT). Simulation results are presented using Least Squares (LS) and Minimum Mean Square Error (MMSE) channel estimation (CE) methods. DFT-based CE methods provide better estimates of the channel, but at additional computational cost.
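The LS and DFT-based CE steps can be sketched for a generic pilot block as follows; the parameter values and simple noise model are illustrative, and GFDM-specific interference is not modeled:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, snr_db = 64, 4, 10            # subcarriers, channel taps, pilot SNR

# Random L-tap channel (unit average power) and its frequency response
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
H = np.fft.fft(h, N)

# QPSK pilots and the received pilot block
X = np.exp(1j * 2 * np.pi * rng.integers(0, 4, N) / 4)
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(10 ** (-snr_db / 10) / 2)
Y = H * X + noise

H_ls = Y / X                         # least-squares estimate, one value per carrier

# DFT-based refinement: transform to the time domain, keep only the first L
# taps (the channel's delay spread), zero the rest, and transform back. This
# discards the noise that falls outside the channel's support.
g = np.fft.ifft(H_ls)
g[L:] = 0.0
H_dft = np.fft.fft(g)

mse_ls = np.mean(np.abs(H_ls - H) ** 2)
mse_dft = np.mean(np.abs(H_dft - H) ** 2)
print(mse_ls, mse_dft)
```

Truncating to L of N taps keeps only the fraction L/N of the noise power, which is exactly the "better estimates at additional cost" trade-off the abstract describes.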

  6. Dynamic Mode Decomposition based on Kalman Filter for Parameter Estimation

    Science.gov (United States)

    Shibata, Hisaichi; Nonomura, Taku; Takaki, Ryoji

    2017-11-01

    With the development of computational fluid dynamics, large-scale data can now be obtained. In order to model physical phenomena from such data, it is necessary to extract features of the flow field. Dynamic mode decomposition (DMD) is a method which meets this requirement. DMD can compute the dominant eigenmodes of a flow field by approximating the system matrix. From this point of view, DMD can be considered parameter estimation of the system matrix. To estimate these parameters, we propose a novel method based on the Kalman filter. Our numerical experiments indicate that the proposed method can estimate the parameters more accurately than standard DMD methods. With this method, it is also possible to improve the parameter estimation accuracy if the characteristics of the noise acting on the system are given.
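Standard SVD-based DMD, the baseline that the proposed Kalman-filter method is compared against, can be sketched as:

```python
import numpy as np

def dmd_eigs(X, Y, r):
    """Exact DMD eigenvalues of the best-fit linear operator with Y ≈ A X,
    where X = [x_0 .. x_{m-1}] and Y = [x_1 .. x_m] are snapshot matrices
    (plain SVD-based DMD; no Kalman filtering)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(Atilde)

# Snapshots from a known linear system with eigenvalues 0.9 and 0.5
A = np.array([[0.9, 0.0], [0.0, 0.5]])
states = [np.array([1.0, 1.0])]
for _ in range(20):
    states.append(A @ states[-1])
S = np.array(states).T                      # 2 x 21 snapshot matrix
lam = np.sort(dmd_eigs(S[:, :-1], S[:, 1:], r=2).real)
print(lam)
```

On noise-free snapshots this recovers the system eigenvalues exactly; the paper's contribution is replacing this one-shot least-squares fit with a Kalman-filter update that accounts for measurement noise.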

  7. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, among which recharge is of high relevance. Since the amount of extractable aquifer water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is strongly affected by water loss mechanisms, the most significant of which is actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the impact of ETa on groundwater recharge. The objective of this study was to evaluate how recharge estimates were affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of the SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable. As such, the results could not serve for calibrating root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and in a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent the recharge estimate in the first layer. The highest estimated recharge flux was for autumn

  8. Iowa Model of Evidence-Based Practice: Revisions and Validation.

    Science.gov (United States)

    Buckwalter, Kathleen C; Cullen, Laura; Hanrahan, Kirsten; Kleiber, Charmaine; McCarthy, Ann Marie; Rakel, Barbara; Steelman, Victoria; Tripp-Reimer, Toni; Tucker, Sharon

    2017-06-01

    The Iowa Model is a widely used framework for the implementation of evidence-based practice (EBP). Changes in health care (e.g., emergence of implementation science, emphasis on patient engagement) prompted the re-evaluation, revision, and validation of the model. A systematic multi-step process was used to capture information from the literature and user feedback via an electronic survey and live work groups. The Iowa Model Collaborative critically assessed and synthesized information and recommendations before revising the model. Survey participants (n = 431) had requested access to the Model between 2001 and 2013. Eighty-eight percent (n = 379) of participants reported using the Iowa Model and identified the most problematic steps as: topic priority, critique, pilot, and institute change. Users provided 587 comments with rich contextual rationale and insightful suggestions. The revised model was then evaluated by participants (n = 299) of the 22nd National EBP Conference in 2015. They validated the model as a practical tool for the EBP process across diverse settings. Specific changes in the model are discussed. This user-driven revision differs from other frameworks in that it links practice changes within the system. Major model changes are expansion of piloting, implementation, patient engagement, and sustaining change. The Iowa Model-Revised remains an application-oriented guide for the EBP process. Intended users are point-of-care clinicians who ask questions and seek a systematic, EBP approach to promote excellence in health care. © 2017 University of Iowa Hospitals and Clinics, Worldviews on Evidence-Based Nursing © 2017 Sigma Theta Tau International.

  9. Validation of an efficient visual method for estimating leaf area index ...

    African Journals Online (AJOL)

    This study aimed to evaluate the accuracy and applicability of a visual method for estimating LAI in clonal Eucalyptus grandis × E. urophylla plantations and to compare it with hemispherical photography, ceptometer and LAI-2000® estimates. Destructive sampling for direct determination of the actual LAI was performed in ...

  10. Edge-Based Defocus Blur Estimation With Adaptive Scale Selection.

    Science.gov (United States)

    Karaali, Ali; Jung, Claudio Rosito

    2018-03-01

    Objects that do not lie at the focal distance of a digital camera generate defocused regions in the captured image. This paper presents a new edge-based method for spatially varying defocus blur estimation using a single image based on reblurred gradient magnitudes. The proposed approach initially computes a scale-consistent edge map of the input image and selects a local reblurring scale aiming to cope with noise, edge mis-localization, and interfering edges. An initial blur estimate is computed at the detected scale-consistent edge points and a novel connected edge filter is proposed to smooth the sparse blur map based on pixel connectivity within detected edge contours. Finally, a fast guided filter is used to propagate the sparse blur map through the whole image. Experimental results show that the proposed approach presents a very good compromise between estimation error and running time when compared with the state-of-the-art methods. We also explore our blur estimation method in the context of image deblurring, and show that metrics typically used to evaluate blur estimation may not correlate as expected with the visual quality of the deblurred image.
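The reblurred-gradient idea can be illustrated in one dimension: for a Gaussian-blurred step edge, the ratio R of gradient magnitudes before and after reblurring with a known σ_r gives σ = σ_r/√(R² − 1). This sketches the principle only, not the paper's full pipeline with adaptive scale selection and connected edge filtering:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def blur_at_edge(profile, sigma_r):
    """Estimate the defocus blur sigma at a step edge from the ratio of
    gradient magnitudes before and after reblurring with a known sigma_r.
    Assumes a Gaussian blur kernel and an isolated ideal step edge."""
    g1 = np.max(np.abs(np.gradient(profile)))
    g2 = np.max(np.abs(np.gradient(gaussian_filter1d(profile, sigma_r))))
    R = g1 / g2                        # R > 1: reblurring weakens the edge
    return sigma_r / np.sqrt(R ** 2 - 1.0)

# Synthetic step edge blurred with a known sigma = 4.0
x = np.arange(400, dtype=float)
edge = gaussian_filter1d((x > 200).astype(float), 4.0)
sigma_est = blur_at_edge(edge, sigma_r=2.0)
print(sigma_est)
```

The estimate deviates slightly from 4.0 because of discrete sampling of the gradient, which is one reason edge-based methods need the noise- and scale-handling machinery the abstract describes.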

  11. Bias adjustment of satellite-based precipitation estimation using gauge observations: A case study in Chile

    Science.gov (United States)

    Yang, Zhongwen; Hsu, Kuolin; Sorooshian, Soroosh; Xu, Xinyi; Braithwaite, Dan; Verbist, Koen M. J.

    2016-04-01

    Satellite-based precipitation estimates (SPEs) are promising alternative precipitation data for climatic and hydrological applications, especially for regions where ground-based observations are limited. However, existing satellite-based rainfall estimations are subject to systematic biases. This study aims to adjust the biases in the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) rainfall data over Chile, using gauge observations as reference. A novel bias adjustment framework, termed QM-GW, is proposed based on the nonparametric quantile mapping approach and a Gaussian weighting interpolation scheme. The PERSIANN-CCS precipitation estimates (daily, 0.04°×0.04°) over Chile are adjusted for the period of 2009-2014. The historical data (satellite and gauge) for 2009-2013 are used to calibrate the methodology; nonparametric cumulative distribution functions of satellite and gauge observations are estimated at every 1°×1° box region. One year (2014) of gauge data was used for validation. The results show that the biases of the PERSIANN-CCS precipitation data are effectively reduced. The spatial patterns of adjusted satellite rainfall show high consistency to the gauge observations, with reduced root-mean-square errors and mean biases. The systematic biases of the PERSIANN-CCS precipitation time series, at both monthly and daily scales, are removed. The extended validation also verifies that the proposed approach can be applied to adjust SPEs into the future, without further need for ground-based measurements. This study serves as a valuable reference for the bias adjustment of existing SPEs using gauge observations worldwide.
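A minimal version of the nonparametric quantile-mapping step (without the Gaussian-weighting spatial interpolation that completes QM-GW) can be sketched as:

```python
import numpy as np

def quantile_map(sat, sat_ref, gauge_ref):
    """Nonparametric quantile mapping: replace each satellite value with the
    gauge value at the same empirical quantile, using calibration-period
    samples sat_ref and gauge_ref."""
    sat_sorted = np.sort(sat_ref)
    gauge_sorted = np.sort(gauge_ref)
    # Empirical CDF of the satellite calibration sample at each input value
    q = np.searchsorted(sat_sorted, sat, side="right") / len(sat_sorted)
    return np.interp(q, np.linspace(0.0, 1.0, len(gauge_sorted)), gauge_sorted)

# Synthetic example: a systematically biased "satellite" record corrected
# against the "gauge" distribution (same data used for calibration here).
rng = np.random.default_rng(2)
gauge = rng.gamma(2.0, 5.0, 5000)          # stand-in rainfall distribution
sat = gauge * 0.6 + 1.0                    # monotone systematic bias
adjusted = quantile_map(sat, sat, gauge)
print(abs(adjusted.mean() - gauge.mean()))
```

Because the mapping is built from calibration-period CDFs, it can be applied to future satellite data without further gauge measurements, which is the property the extended validation in the study verifies.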

  12. Estimating haplotype relative risks on human survival in population-based association studies.

    Science.gov (United States)

    Tan, Qihua; Christiansen, Lene; Bathum, Lise; Zhao, Jing Hua; Yashin, Anatoli I; Vaupel, James W; Christensen, Kaare; Kruse, Torben A

    2005-01-01

    Association-based linkage disequilibrium (LD) mapping is an increasingly important tool for localizing genes that show potential influence on human aging and longevity. As haplotypes contain more LD information than single markers, a haplotype-based LD approach can have increased power in detecting associations as well as increased robustness in statistical testing. In this paper, we develop a new statistical model to estimate haplotype relative risks (HRRs) on human survival using unphased multilocus genotype data from unrelated individuals in cross-sectional studies. Based on the proportional hazard assumption, the model can estimate haplotype risk and frequency parameters, incorporate observed covariates, assess interactions between haplotypes and the covariates, and investigate the modes of gene function. By introducing population survival information available from population statistics, we are able to develop a procedure that carries out the parameter estimation using a nonparametric baseline hazard function and estimates sex-specific HRRs to infer gene-sex interaction. We also evaluate the haplotype effects on human survival while taking into account individual heterogeneity in the unobserved genetic and nongenetic factors or frailty by introducing the gamma-distributed frailty into the survival function. After model validation by computer simulation, we apply our method to an empirical data set to measure haplotype effects on human survival and to estimate haplotype frequencies at birth and over the observed ages. Results from both simulation and model application indicate that our survival analysis model is an efficient method for inferring haplotype effects on human survival in population-based association studies.
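For reference, the marginal survival function of a proportional-hazards model with a gamma-distributed frailty (mean 1, variance σ²), on which the model described above builds, takes the standard textbook form (a sketch, not necessarily the paper's exact parameterization):

```latex
S(t \mid x) = \left[\, 1 + \sigma^{2}\, \Lambda_{0}(t)\, e^{\beta' x} \,\right]^{-1/\sigma^{2}},
\qquad
\Lambda_{0}(t) = \int_{0}^{t} \lambda_{0}(u)\, \mathrm{d}u ,
```

where λ₀ is the nonparametric baseline hazard; letting σ² → 0 recovers the ordinary Cox survival function exp(−Λ₀(t) e^{β'x}).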

  13. Wind estimation based on thermal soaring of birds.

    Science.gov (United States)

    Weinzierl, Rolf; Bohrer, Gil; Kranstauber, Bart; Fiedler, Wolfgang; Wikelski, Martin; Flack, Andrea

    2016-12-01

    The flight performance of birds is strongly affected by the dynamic state of the atmosphere at the birds' locations. Studies of flight and its impact on the movement ecology of birds must consider the wind to help us understand aerodynamics and bird flight strategies. Here, we introduce a systematic approach to evaluate wind speed and direction from the high-frequency GPS recordings from bird-borne tags during thermalling flight. Our method assumes that a fixed horizontal mean wind speed during a short (18 seconds, 19 GPS fixes) flight segment with a constant turn angle along a closed loop, characteristic of thermalling flight, will generate a fixed drift for each subsequent location. We use a maximum-likelihood approach to estimate that drift and to determine the wind and airspeeds at the birds' flight locations. We also provide error estimates for these GPS-derived wind speed estimates. We validate our approach by comparing its wind estimates with the mid-resolution weather reanalysis data from ECMWF, and by examining independent wind estimates from pairs of birds in a large dataset of GPS-tagged migrating storks that were flying in close proximity. Our approach provides accurate and unbiased observations of wind speed and additional detailed information on vertical winds and uplift structure. These precise measurements are otherwise rare and hard to obtain and will broaden our understanding of atmospheric conditions, flight aerodynamics, and bird flight strategies. With an increasing number of GPS-tracked animals, we may soon be able to use birds to inform us about the atmosphere they are flying through and thus improve future ecological and environmental studies.
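The intuition behind the drift estimate can be sketched in a few lines: over one closed thermalling circle the bird's air-velocity vector averages out, so the mean GPS-derived ground velocity approximates the wind. This is a crude stand-in for the maximum-likelihood fit described above:

```python
import numpy as np

def wind_from_loop(vx, vy):
    """Estimate horizontal wind from GPS-derived ground velocities sampled
    over one closed thermalling circle. Ground velocity = air velocity +
    wind, and the air-velocity vector sums to ~zero over a full turn."""
    return np.mean(vx), np.mean(vy)

# Synthetic loop: 19 fixes over one full circle at 12 m/s airspeed,
# drifting in a 3 m/s easterly wind.
t = np.linspace(0, 2 * np.pi, 19, endpoint=False)
vx = 12 * np.cos(t) + 3.0   # east component of ground velocity
vy = 12 * np.sin(t) + 0.0   # north component
print(wind_from_loop(vx, vy))
```

The maximum-likelihood formulation in the study additionally handles incomplete loops, varying turn rates, and provides the error estimates mentioned above.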

  14. Application for developing countries: Estimating trip attraction in urban zones based on centrality

    Directory of Open Access Journals (Sweden)

    Amila Jayasinghe

    2017-10-01

    Full Text Available This paper introduces a network centrality-based method to estimate the volume of trip attraction in traffic analysis zones. Usually, trip attraction volumes are estimated based on land use characteristics. However, the execution of land use-based trip attraction models is severely constrained by the lack of updated land use data in developing countries. The proposed method used network centrality-based explanatory variables: "connectivity", "local integration" and "global integration". Space syntax tools were used to compute the centrality of road segments, and a GIS-based kernel density estimation method was used to transform the computed road segment-based centrality values into traffic analysis zones. Trip attraction values exhibited significantly high correlations with connectivity, global integration and local integration values. The study developed and validated a model to estimate trip attraction using connectivity, local integration and global integration values as endogenous variables, with an accepted level of accuracy (R² > 0.75). The proposed approach requires minimal data and is easily executed using a geographic information system. The study recommends the proposed method as a practical tool for transport planners and engineers, especially those who work in developing countries where updated land use data is unavailable.

  15. Validation of MODIS albedo products with high resolution albedo estimates from FORMOSAT-2

    OpenAIRE

    Courault, Dominique; Olioso, Albert; Weiss, Marie; Marloie, Olivier; Baret, Frédéric; Hagolle, Olivier; Gallego-Elvira, Belen

    2013-01-01

    Among MODIS products (freely available to the scientific community since 2001), the albedo data (MCD43B3) are 16-day composites at 1 km spatial resolution, widely used for various applications in climate models, but they still remain difficult to validate. The objective of this study is to propose a method to validate these products with high spatial and temporal resolution data. 31 FORMOSAT-2 images acquired over a small region in South-Eastern France at 8 m spatial resolution were aggre...

  16. Linear Frequency Estimation Technique for Reducing Frequency Based Signals.

    Science.gov (United States)

    Woodbridge, Jonathan; Bui, Alex; Sarrafzadeh, Majid

    2010-06-01

    This paper presents a linear frequency estimation (LFE) technique for data reduction of frequency-based signals. LFE converts a signal to the frequency domain by utilizing the Fourier transform and estimates both the real and imaginary parts with a series of vectors much smaller than the original signal size. The estimation is accomplished by selecting optimal points from the frequency domain and interpolating data between these points with a first-order approximation. The difficulty of such a problem lies in determining which points are most significant. LFE is unique in that it is generic to a wide variety of frequency-based signals such as electromyography (EMG), voice, and electrocardiography (ECG). The only requirement is that the spectral coefficients are spatially correlated. This paper presents the algorithm and results from both EMG and voice data. We conclude the paper with a description of how this method can be applied to pattern recognition, signal indexing, and compression.
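A toy version of the LFE idea, with a fixed uniform point selection standing in for the optimal selection the paper describes, can be sketched as:

```python
import numpy as np

def lfe_compress(signal, keep_every=4):
    """Sketch of linear frequency estimation: FFT the signal, keep only a
    subset of spectral points, and linearly interpolate the real and
    imaginary parts between them (first-order approximation)."""
    F = np.fft.rfft(signal)
    idx = np.arange(0, len(F), keep_every)
    if idx[-1] != len(F) - 1:
        idx = np.append(idx, len(F) - 1)       # always keep the endpoint
    allk = np.arange(len(F))
    re = np.interp(allk, idx, F.real[idx])     # first-order interpolation
    im = np.interp(allk, idx, F.imag[idx])
    return np.fft.irfft(re + 1j * im, n=len(signal)), len(idx)

# A smooth-spectrum test signal: a Gaussian pulse centered at sample 0
# (spatially correlated spectral coefficients, the stated requirement)
n = 256
k = np.arange(n)
x = np.exp(-0.5 * (np.minimum(k, n - k) / 5.0) ** 2)
rec, kept = lfe_compress(x)
err = np.linalg.norm(rec - x) / np.linalg.norm(x)
print(kept, err)
```

Here 33 spectral points stand in for 129 rfft coefficients; the paper's contribution is choosing those points optimally rather than on a uniform grid.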

  17. Reproducibility of intensity-based estimates of lung ventilation

    Science.gov (United States)

    Du, Kaifang; Bayouth, John E.; Ding, Kai; Christensen, Gary E.; Cao, Kunlin; Reinhardt, Joseph M.

    2013-01-01

    Purpose: Lung function depends on lung expansion and contraction during the respiratory cycle. Respiratory-gated CT imaging and image registration can be used to estimate the regional lung volume change by observing CT voxel density changes during inspiration or expiration. In this study, the authors examine the reproducibility of intensity-based estimates of lung tissue expansion and contraction in three mechanically ventilated sheep and ten spontaneously breathing humans. The intensity-based estimates are compared to the estimates of lung function derived from the image registration deformation field. Methods: A 4DCT data set was acquired for a cohort of spontaneously breathing humans and anesthetized, mechanically ventilated sheep. For each subject, two 4DCT scans were performed with a short time interval between acquisitions. From each 4DCT data set, an image pair consisting of a volume reconstructed near end inspiration and a volume reconstructed near end exhalation was selected. The end inspiration and end exhalation images were registered using a tissue-volume-preserving deformable registration algorithm. The CT density change in the registered image pair was used to compute the intensity-based specific air volume change (SAC) and the intensity-based Jacobian (IJAC), while the transformation-based Jacobian (TJAC) was computed directly from the image registration deformation field. IJAC is introduced to make the intensity-based and transformation-based methods comparable, since SAC and the Jacobian may not be associated with the same physiological phenomenon and have different units. Scan-to-scan variations in respiratory effort were corrected using a global scaling factor for normalization. A gamma index metric was introduced to quantify voxel-by-voxel reproducibility, considering both differences in ventilation and the distance between matching voxels. The authors also tested how different CT prefiltering levels affected intensity-based ventilation reproducibility.
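The transformation-based Jacobian (TJAC) measure is simple to sketch in 2-D from a displacement field; J > 1 indicates local expansion and J < 1 contraction (a sketch, not the authors' 3-D implementation):

```python
import numpy as np

def jacobian_map(ux, uy, spacing=1.0):
    """Jacobian determinant of a 2-D displacement field (ux, uy):
        J = det(I + grad u).
    np.gradient returns derivatives along axis 0 (y) then axis 1 (x)."""
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    return (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx

# Uniform 10% expansion in both directions: J = 1.1 * 1.1 = 1.21 everywhere
y, x = np.mgrid[0:32, 0:32].astype(float)
J = jacobian_map(0.1 * x, 0.1 * y)
print(J.mean())
```

In the study the analogous 3-D determinant is computed from the deformable registration's displacement field and compared voxel-by-voxel against the intensity-based IJAC.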

  18. Validation databases for simulation models: aboveground biomass and net primary productive, (NPP) estimation using eastwide FIA data

    Science.gov (United States)

    Jennifer C. Jenkins; Richard A. Birdsey

    2000-01-01

    As interest grows in the role of forest growth in the carbon cycle, and as simulation models are applied to predict future forest productivity at large spatial scales, the need for reliable and field-based data for evaluation of model estimates is clear. We created estimates of potential forest biomass and annual aboveground production for the Chesapeake Bay watershed...

  19. Validity of Robot-Based Assessments of Upper Extremity Function.

    Science.gov (United States)

    McKenzie, Alison; Dodakian, Lucy; See, Jill; Le, Vu; Quinlan, Erin Burke; Bridgford, Claire; Head, Daniel; Han, Vy L; Cramer, Steven C

    2017-10-01

    To examine the validity of 5 robot-based assessments of arm motor function poststroke. Cross-sectional study. Outpatient clinical research center. Volunteer sample of participants (N=40; age >18y; 3-6mo poststroke) with arm motor deficits that had reached a stable plateau. Not applicable. Clinical standards included the arm motor domain of the Fugl-Meyer Assessment (FMA) and 5 secondary motor outcomes: hand/wrist subsection of the arm motor domain of the FMA, Action Research Arm Test, Box and Block test (BBT), hand motor subscale of the Stroke Impact Scale Version 2.0, and Barthel Index. Robot-based assessments included wrist targeting, finger targeting, finger movement speed, reaction time, and a robotic version of the BBT. Anatomical measures included percent injury to the corticospinal tract (CST) and extent of injury of the hand region of the primary motor cortex obtained from magnetic resonance imaging. Participants had moderate to severe impairment (arm motor domain of the FMA scores, 35.6±14.4; range, 13.5-60). Performance on the robot-based tests, including speed (r=.82), correlated significantly with the clinical standards. The robotic version of the BBT correlated significantly with the clinical BBT but was less prone to floor effects. Robot-based assessments were comparable to the arm motor domain of the FMA score in relation to percent CST injury and superior in relation to extent of injury to the hand region of the primary motor cortex. The present findings support using a battery of robot-based methods for assessing upper extremity motor function in participants with chronic stroke. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  20. Indirect estimation of VO2max in athletes by ACSM's equation: valid or not?

    Science.gov (United States)

    Koutlianos, N; Dimitros, E; Metaxas, T; Cansiz, M; Deligiannis, As; Kouidi, E

    2013-04-01

    The purpose of this study was to assess the indirect calculation of VO2max using ACSM's equation for the Bruce protocol in athletes of different sports, to compare it with directly measured values, and secondly to develop regression models predicting VO2max in athletes. Fifty-five male athletes of national and international level (mean age 28.3 ± 5.6 yrs) performed a graded exercise test with direct measurement of VO2 through an ergospirometric device. Moreover, 3 equations were used for the indirect calculation of VO2max: a) VO2max = (0.2 · Speed) + (0.9 · Speed · Grade) + 3.5 (ACSM running equation), b) a regression analysis model using the enter method and c) a stepwise method based on the measured VO2 data. Age, BMI, speed, grade and exercise time were used as independent variables. Regression analysis using the enter method yielded the equation (R=.64, standard error of estimation [SEE] = 6.11): VO2max (ml·kg(-1)·min(-1)) = 58.443 - (0.215 · age) - (0.632 · BMI) - (68.639 · grade) + (1.579 · time), while the stepwise method (R = .61, SEE = 6.18) led to: VO2max (ml·kg(-1)·min(-1)) = 33.971 - (0.291 · age) + (1.481 · time). The calculated values of VO2max from these regression models did not differ significantly from the measured VO2max (p>.05). On the contrary, VO2max calculated from ACSM's running equation was significantly higher than the actually measured value, by 14.6%, indicating that the equation is not valid for estimating VO2max in athletes aged 18-37 years using the Bruce protocol. Only the regression models were correlated moderately with the actually measured values of VO2max.
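
    The three prediction equations quoted above can be written out directly. In the sketch below, the inputs follow the usual ACSM conventions (speed in m·min⁻¹, grade as a fraction, age in years, BMI in kg·m⁻², time in minutes); the example athlete's values are hypothetical:

```python
def acsm_running_vo2max(speed_m_min, grade_frac):
    # ACSM running equation: VO2 = 0.2*speed + 0.9*speed*grade + 3.5
    return 0.2 * speed_m_min + 0.9 * speed_m_min * grade_frac + 3.5

def enter_model_vo2max(age, bmi, grade_frac, time_min):
    # Enter-method regression model reported in the study (R=.64, SEE=6.11)
    return 58.443 - 0.215 * age - 0.632 * bmi - 68.639 * grade_frac + 1.579 * time_min

def stepwise_model_vo2max(age, time_min):
    # Stepwise regression model reported in the study (R=.61, SEE=6.18)
    return 33.971 - 0.291 * age + 1.481 * time_min

# Hypothetical athlete: 28 y, BMI 23, final Bruce stage grade 16%, 15 min test
vo2_enter = enter_model_vo2max(28, 23, 0.16, 15)
vo2_step = stepwise_model_vo2max(28, 15)
```

The study's conclusion implies the first function overestimates VO2max in athletes, while the two fitted models track the measured values more closely.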

  2. Estimating genetic correlations based on phenotypic data: a ...

    Indian Academy of Sciences (India)

    Knowledge of genetic correlations is essential to understand the joint evolution of traits through correlated responses to selection, a difficult and seldom very precise task even with easy-to-breed species. Here, a simulation-based method to estimate genetic correlations and genetic covariances that relies only on ...

  3. TGV-based flow estimation for 4D leukocyte transmigration

    NARCIS (Netherlands)

    Frerking, L.; Burger, M.; Vestweber, D.; Brune, Christoph; Louis, Alfred K.; Arridge, Simon; Rundell, Bill

    2014-01-01

    The aim of this paper is to track transmigrating leukocytes via TGV flow estimation. Recent results have shown the advantages of the nonlinear and higher order terms of TGV regularizers, especially in static models for denoising and medical reconstruction. We present TGV-based models for flow

  4. Islanding detection scheme based on adaptive identifier signal estimation method.

    Science.gov (United States)

    Bakhshi, M; Noroozian, R; Gharehpetian, G B

    2017-11-01

    This paper proposes a novel passive anti-islanding method for both inverter- and synchronous machine-based distributed generation (DG) units. Unfortunately, when the active/reactive power mismatches are near zero, the majority of passive anti-islanding methods cannot correctly detect the islanding situation. This study introduces a new islanding detection method based on exponentially damped signal estimation. The proposed method uses an adaptive identifier to estimate the frequency deviation of the point of common coupling (PCC) link as a target signal, and can detect the islanding condition with near-zero active power imbalance. The main advantage of the adaptive identifier method over other signal estimation methods is its small sampling window. In this paper, the adaptive-identifier-based islanding detection method introduces a new detection index, termed the decision signal, obtained by estimating the oscillation frequency of the PCC frequency, and can properly detect islanding conditions. In islanding conditions, the oscillation frequency of the PCC frequency reaches zero, so setting a threshold for the decision signal is straightforward. Non-islanding transient events that can cause a significant deviation in the PCC frequency are considered in the simulations. These events include different types of faults, load changes, capacitor bank switching, and motor starting. Further, for islanding events, the capability of the proposed islanding detection method is verified with near-zero active power mismatches. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Parameter Extraction and Estimation Based on the PV Panel Outdoor ...

    African Journals Online (AJOL)


    PV panel under varying weather conditions to estimate the PV parameters. Outdoor ... panel parameters. The majority of the methods are based on measurements of the I-V characteristic of the panel (Jack et al., 2015). The main aspect of PV simulation that requires attention ... incident solar irradiance, the cell temperature, ...

  6. Estimate of Water Residence Times in Tudor Creek, Kenya Based ...

    African Journals Online (AJOL)

    However, even though the observed salinity gradient in the creek appeared consistent with dry and rain periods, estimates of river runoffs were not good enough to calculate water exchange, based on salt conservation. Runoff in general was also too small to give reliable rating curves (correlation between rainfall and river ...

  7. Stereo-Vision-Based Relative Pose Estimation for the Rendezvous and Docking of Noncooperative Satellites

    Directory of Open Access Journals (Sweden)

    Feng Yu

    2014-01-01

    Autonomous on-orbit servicing is expected to play an important role in future space activities. Acquiring the relative pose information and inertial parameters of a target is one of the key technologies for autonomous capturing. In this paper, an estimation method of relative pose based on stereo vision is presented for the final phase of the rendezvous and docking of noncooperative satellites. The proposed estimation method utilizes a sparse stereo vision algorithm instead of a dense stereo algorithm. The method consists of three parts: (1) body frame reestablishment, which establishes the body-fixed frame for the target satellite using the natural features on the surface and measures the relative attitude based on TRIAD and QUEST; (2) translational parameter estimation, which designs a standard Kalman filter to estimate the translational states and the location of the mass center; (3) rotational parameter estimation, which designs an extended Kalman filter and an unscented Kalman filter, respectively, to estimate the rotational states and all the moment-of-inertia ratios. Compared to the dense stereo algorithm, the proposed method can avoid degeneracy when the target has a high degree of axial symmetry and reduces the number of sensors. The validity of the proposed method is verified by numerical simulations.

  8. Estimating Reliability of Disturbances in Satellite Time Series Data Based on Statistical Analysis

    Science.gov (United States)

    Zhou, Z.-G.; Tang, P.; Zhou, M.

    2016-06-01

    Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is of importance for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most of the present methods only label the detection results with "Change/No change", while few methods focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps. (1) Segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST). (2) Forecasting and detecting disturbances in new time series data. (3) Estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite image time series with an estimation error of less than 5% and an overall accuracy of up to 90%.
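
    Step (3) above checks each new observation against prediction intervals at standard confidence levels. A toy illustration of that idea follows; the Gaussian z-scores are standard, the residual values are hypothetical, and the real method forecasts with a BFAST season-trend model rather than a constant:

```python
import statistics

def disturbance_reliability(history_residuals, new_residual):
    """Toy version of the CI/CL idea: given residuals of the fitted history
    model, report the largest standard confidence level whose prediction
    interval the new residual still falls outside of (0.0 = no disturbance)."""
    sd = statistics.stdev(history_residuals)
    # z-scores for common two-sided confidence levels
    levels = [(0.90, 1.645), (0.95, 1.960), (0.99, 2.576)]
    reliability = 0.0
    for cl, z in levels:
        if abs(new_residual) > z * sd:
            reliability = cl
    return reliability

residuals = [-1.0, 1.0, -1.0, 1.0]        # hypothetical history-model residuals
disturbance_reliability(residuals, 3.2)   # → 0.99 with these residuals
```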

  9. Kalman Filter-Based Hybrid Indoor Position Estimation Technique in Bluetooth Networks

    Directory of Open Access Journals (Sweden)

    Fazli Subhan

    2013-01-01

    This paper presents an extended Kalman filter-based hybrid indoor position estimation technique based on the integration of the fingerprinting and trilateration approaches. In this paper, the Euclidean distance formula is used for the first time instead of a radio propagation model to convert the received signal to distance estimates. This technique combines the features of fingerprinting and trilateration in a simple and robust way. The proposed hybrid technique works in two stages. In the first stage, it uses an online phase of fingerprinting and calculates the nearest neighbors (NN) of the target node, while in the second stage it uses the trilateration approach to estimate the coordinates without the use of a radio propagation model. The distance between the calculated NN and the detected access points (APs) is estimated using the Euclidean distance formula. Thus, the distances between the NN and APs provide the radii for the trilateration approach, and the position estimation accuracy is better than that of the lateration approach. A Kalman filter is used to further enhance the accuracy of the estimated position. Simulation and experimental results validate the performance of the proposed hybrid technique, which improves accuracy by up to 53.64% and 25.58% compared to the lateration and fingerprinting approaches, respectively.
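
    In the second stage described above, the radii from the matched fingerprint to the access points feed a standard trilateration step. A minimal sketch of that step for three APs in the plane follows; the AP positions and radii are made-up values, and the paper's Kalman filtering stage is omitted:

```python
def trilaterate(aps, radii):
    """Solve for (x, y) from three circle equations (xi, yi, ri) by
    subtracting the first equation to linearize, then solving the 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = aps
    r1, r2, r3 = radii
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # assumes APs are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

est = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                  [2 ** 0.5, 10 ** 0.5, 10 ** 0.5])
# → approximately (1.0, 1.0): the node sits at (1, 1) in this toy layout
```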

  10. Robustifying Correspondence Based 6D Object Pose Estimation

    DEFF Research Database (Denmark)

    Hietanen, Antti; Halme, Jussi; Buch, Anders Glent

    2017-01-01

    We propose two methods to robustify point correspondence based 6D object pose estimation. The first method, curvature filtering, is based on the assumption that low curvature regions provide false matches, and removing points in these regions improves robustness. The second method, region pruning, ... For the experiments, we evaluated three correspondence selection methods, Geometric Consistency (GC) [1], Hough Grouping (HG) [2] and Search of Inliers (SI) [3], and report systematic improvements for their robustified versions with two distinct datasets.

  11. Design of Attribute Control Chart Based on Regression Estimator

    OpenAIRE

    Nadia Mushtaq; Dr. Muhammad Aslam; Jaffer Hussaian

    2017-01-01

    This paper presents a statistical control chart for nonconforming units in quality control. In many situations the Shewhart control charts for nonconforming units may not be suitable or cannot be used, as for many processes the assumptions of the binomial distribution may not hold or may provide an inadequate model. In this study we propose a new control chart based on a regression estimator of the proportion using a single auxiliary variable, namely the Pr chart, and compared its ...

  12. Vision-based stress estimation model for steel frame structures with rigid links

    Science.gov (United States)

    Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan

    2017-07-01

    This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), which is a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid link ranges in the frame structure are identified. The radius of curvature of the structural member to be monitored is calculated using the estimated deformed shape and is employed to estimate stress. Using the MCS in the presented model, the safety of a structure can be assessed without attaching strain gauges. In addition, because the stress is directly extracted from the radius of curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can consider the influence of the stiffness of the connections and supports on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests on a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
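
    The core geometric step above, recovering a radius of curvature from the measured deformed shape and converting it to bending stress via the flexure relation sigma = E·c/rho, can be sketched as follows. Three points stand in for the fitted deformed shape, and the modulus E and section distance c are hypothetical values, not from the paper:

```python
import math

def radius_from_three_points(p1, p2, p3):
    """Circumradius of the circle through three points on the deformed shape
    (R = abc / 4A, with A from Heron's formula)."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return a * b * c / (4 * area)

def bending_stress(E, c, rho):
    # Flexure formula: sigma = E * c / rho
    return E * c / rho

# Three measured points (m) lying on a circle of radius 100 m
rho = radius_from_three_points((100.0, 0.0), (0.0, 100.0), (-100.0, 0.0))
sigma = bending_stress(210e9, 0.1, rho)   # steel E, 0.1 m from neutral axis
```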

  13. Estimation of low back moments from video analysis: A validation study

    NARCIS (Netherlands)

    Coenen, P.; Kingma, I.; Boot, C.R.L.; Faber, G.S.; Xu, X.; Bongers, P.M.; Dieën, J.H. van

    2011-01-01

    This study aimed to develop, compare and validate two versions of a video analysis method for the assessment of low back moments during occupational lifting tasks, since epidemiological studies and ergonomic practice need relatively cheap and easily applicable methods to assess low back loads.

  14. Validation and scale dependencies of the triangle method for the evaporative fraction estimation over heterogeneous areas

    DEFF Research Database (Denmark)

    de Tomás, Alberto; Nieto, Héctor; Guzinski, Radoslaw

    2014-01-01

    been validated against ground measurements obtained with a scintillometer on a winter crop field during 2010–2011. When working with large spatial windows, removing areas with different topographic characteristics (altitude and slope) improved the performance of the methods. In addition, replacing...

  15. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    Science.gov (United States)

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...

  16. Development and Validation of a Novel Acute Myeloid Leukemia-Composite Model to Estimate Risks of Mortality.

    Science.gov (United States)

    Sorror, Mohamed L; Storer, Barry E; Fathi, Amir T; Gerds, Aaron T; Medeiros, Bruno C; Shami, Paul; Brunner, Andrew M; Sekeres, Mikkael A; Mukherjee, Sudipto; Peña, Esteban; Elsawy, Mahmoud; Wardyn, Shylo; Whitten, Jennifer; Moore, Rachelle; Becker, Pamela S; McCune, Jeannine S; Appelbaum, Frederick R; Estey, Elihu H

    2017-09-07

    To our knowledge, this multicenter analysis is the first to test and validate (1) the prognostic impact of comorbidities on 1-year mortality after initial therapy of acute myeloid leukemia (AML) and (2) a novel, risk-stratifying composite model incorporating comorbidities, age, and cytogenetic and molecular risks. To accurately estimate risks of mortality by developing and validating a composite model that combines the most significant patient-specific and AML-specific features. This is a retrospective cohort study. A series of comorbidities, including those already incorporated into the hematopoietic cell transplantation-comorbidity index (HCT-CI), were evaluated. Patients were randomly divided into a training set (n = 733) and a validation set (n = 367). In the training set, covariates associated with 1-year overall mortality at a significance level of P academic institutions specialized in treating AML; 605 (55%) were male, and 495 (45%) were female. In the validation set, the original HCT-CI had better C statistic and AUC estimates compared with the AML comorbidity index for prediction of 1-year mortality. Augmenting the original HCT-CI with 3 independently significant comorbidities, hypoalbuminemia, thrombocytopenia, and high lactate dehydrogenase level, yielded a better C statistic of 0.66 and AUC of 0.69 for 1-year mortality. A composite model comprising augmented HCT-CI, age, and cytogenetic/molecular risks had even better predictive estimates of 0.72 and 0.76, respectively. In this cohort study, comorbidities influenced 1-year survival of patients with AML, and comorbidities are best captured by an augmented HCT-CI. The augmented HCT-CI, age, and cytogenetic/molecular risks could be combined into an AML composite model that could guide treatment decision-making and trial design in AML. Studying physical, cognitive, and social health might further clarify the prognostic role of aging. Targeting comorbidities with interventions alongside specific

  17. Validation of temperature methods for the estimation of pre-appearance interval in carrion insects.

    Science.gov (United States)

    Matuszewski, Szymon; Mądra-Bielewicz, Anna

    2016-03-01

    The pre-appearance interval (PAI) is the interval preceding the appearance of an insect taxon on a cadaver. It decreases with an increase in temperature in several forensically relevant insects. Therefore, forensic entomologists have developed temperature methods for the estimation of PAI. In the current study these methods were tested in the case of adult and larval Necrodes littoralis (Coleoptera: Silphidae), adult and larval Creophilus maxillosus (Coleoptera: Staphylinidae), adult Necrobia rufipes (Coleoptera: Cleridae), adult Saprinus semistriatus (Coleoptera: Histeridae) and adult Stearibia nigriceps (Diptera: Piophilidae). Moreover, factors affecting the accuracy of estimation and techniques for the approximation and correction of predictor temperature were studied using the results of a multi-year pig carcass study. It was demonstrated that temperature methods outperform conventional methods. The accuracy of estimation was strongly related to the quality of the temperature model for PAI and the quality of the temperature data used for the estimation. Models for the larval stage performed better than models for the adult stage. Mean temperature for the average seasonal PAI was a good initial approximation of predictor temperature. Moreover, iterative estimation of PAI was found to effectively correct predictor temperature, although some pitfalls were identified in this respect. Implications for the estimation of PAI are discussed.
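
    The iterative correction of predictor temperature mentioned above can be illustrated schematically: predict PAI from an initial temperature guess, recompute the mean temperature over the predicted interval, and repeat until the estimate stabilizes. Everything concrete below (the linear PAI model, the daily temperatures) is a hypothetical placeholder, not a model from the study:

```python
# Hypothetical linear PAI model: days as a function of temperature (deg C)
model = lambda t: 20.0 - 0.5 * t

daily = [22, 21, 20, 19, 18, 20, 22, 21, 20, 19]  # hypothetical daily means, day 0..9

def mean_temp_over(t_app, window_days):
    """Mean daily temperature over the `window_days` preceding day `t_app`."""
    start = max(0, int(t_app - window_days))
    segment = daily[start:int(t_app) + 1]
    return sum(segment) / len(segment)

def estimate_pai(model, mean_temp_over, t_appearance, max_iter=20, tol=0.01):
    """Iterative PAI estimation: predict PAI, recompute the mean temperature
    over the predicted interval, and repeat until the estimate stabilizes."""
    pai = model(mean_temp_over(t_appearance, 1.0))  # initial 1-day window
    for _ in range(max_iter):
        new_pai = model(mean_temp_over(t_appearance, pai))
        if abs(new_pai - pai) < tol:
            break
        pai = new_pai
    return pai

pai = estimate_pai(model, mean_temp_over, 9)   # → about 9.9 days here
```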

  18. Conditions for Valid Empirical Estimates of Cancer Overdiagnosis in Randomized Trials and Population Studies.

    Science.gov (United States)

    Gulati, Roman; Feuer, Eric J; Etzioni, Ruth

    2016-07-15

    Cancer overdiagnosis is frequently estimated using the excess incidence in a screened group relative to that in an unscreened group. However, conditions for unbiased estimation are poorly understood. We developed a mathematical framework to project the effects of screening on the incidence of relevant cancers, that is, cancers that would present clinically without screening. Screening advances the date of diagnosis for a fraction of preclinical relevant cancers. Which diagnoses are advanced, and by how much, depends on the preclinical detectable period, test sensitivity, and screening patterns. Using the model, we projected incidence in common trial designs and population settings and compared excess incidence with true overdiagnosis. In trials with no control arm screening, unbiased estimates are available using cumulative incidence if the screen arm stops screening and using annual incidence if the screen arm continues screening. In both designs, unbiased estimation requires waiting until screening stabilizes plus the maximum preclinical period. In continued-screen trials and population settings, excess cumulative incidence is persistently biased. We investigated this bias in published estimates from the European Randomized Study of Screening for Prostate Cancer after 9-13 years. In conclusion, no trial or population setting automatically permits unbiased estimation of overdiagnosis; sufficient follow-up and appropriate analysis remain crucial. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Understanding Rasch and item response theory models: applications to the estimation and validation of interval latent trait measures from responses to rating scale questionnaires.

    Science.gov (United States)

    Massof, Robert W

    2011-02-01

    Modern psychometric theory is now routinely used in clinical vision research, as well as other areas of health research, to measure latent health states on continuous interval scales from responses to self-report rating scale questionnaires. Two competing theories are commonly employed: Rasch theory and item response theory. Because the field is currently in transition from using traditional scoring algorithms based on classical test theory to using the more modern approaches, this article offers a tutorial review of Rasch theory and item response theory and of the analytical methods employed by the two theories to estimate and validate measures.

  20. EEG-Based User Reaction Time Estimation Using Riemannian Geometry Features.

    Science.gov (United States)

    Wu, Dongrui; Lance, Brent J; Lawhern, Vernon J; Gordon, Stephen; Jung, Tzyy-Ping; Lin, Chin-Teng

    2017-11-01

    Riemannian geometry has been successfully used in many brain-computer interface (BCI) classification problems and demonstrated superior performance. In this paper, for the first time, it is applied to BCI regression problems, an important category of BCI applications. More specifically, we propose a new feature extraction approach for electroencephalogram (EEG)-based BCI regression problems: a spatial filter is first used to increase the signal quality of the EEG trials and also to reduce the dimensionality of the covariance matrices, and then Riemannian tangent space features are extracted. We validate the performance of the proposed approach in reaction time estimation from EEG signals measured in a large-scale sustained-attention psychomotor vigilance task, and show that compared with the traditional powerband features, the tangent space features can reduce the root mean square estimation error by 4.30%-8.30%, and increase the estimation correlation coefficient by 6.59%-11.13%.
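
    The tangent space features named above are the standard Riemannian mapping S = logm(C_ref^(-1/2) · C · C_ref^(-1/2)), vectorized with a sqrt(2) weight on the off-diagonal entries. A self-contained 2x2 sketch follows; real EEG covariance matrices are much larger, and the reference C_ref is usually a mean of the training covariances rather than the identity used in the example:

```python
import math

def sym2x2_apply(f, M):
    """Apply a scalar function f to a symmetric 2x2 matrix via its
    eigendecomposition M = l1*P + l2*(I - P)."""
    a, b, c = M[0][0], M[0][1], M[1][1]
    mean = (a + c) / 2.0
    d = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    if d < 1e-12:                              # (near-)repeated eigenvalue
        return [[f(mean), 0.0], [0.0, f(mean)]]
    l1, l2 = mean + d, mean - d
    p00, p01 = (a - l2) / (2 * d), b / (2 * d)  # P = (M - l2*I) / (l1 - l2)
    p11 = (c - l2) / (2 * d)
    f1, f2 = f(l1), f(l2)
    return [[f1 * p00 + f2 * (1 - p00), (f1 - f2) * p01],
            [(f1 - f2) * p01, f1 * p11 + f2 * (1 - p11)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tangent_features(C, C_ref):
    """Feature vector (s00, sqrt(2)*s01, s11) of the tangent-space image of C."""
    inv_sqrt = sym2x2_apply(lambda x: 1.0 / math.sqrt(x), C_ref)
    S = sym2x2_apply(math.log, mat_mul(mat_mul(inv_sqrt, C), inv_sqrt))
    return [S[0][0], math.sqrt(2) * S[0][1], S[1][1]]
```

When C equals the reference, the features are all zero, which is the geometric analogue of centering the data.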

  1. Estimation of tool wear during CNC milling using neural network-based sensor fusion

    Science.gov (United States)

    Ghosh, N.; Ravi, Y. B.; Patra, A.; Mukhopadhyay, S.; Paul, S.; Mohanty, A. R.; Chattopadhyay, A. B.

    2007-01-01

    Cutting tool wear degrades product quality in manufacturing processes. Monitoring the tool wear value online is therefore needed to prevent degradation of machining quality. Unfortunately, there is no direct way of measuring tool wear online; one therefore has to adopt an indirect method wherein the tool wear is estimated from several sensors measuring related process variables. In this work, a neural network-based sensor fusion model has been developed for tool condition monitoring (TCM). Features extracted from a number of machining zone signals, namely cutting forces, spindle vibration, spindle current, and sound pressure level, have been fused to estimate the average flank wear of the main cutting edge. Novel strategies such as signal level segmentation for temporal registration, feature space filtering, outlier removal, and estimation space filtering have been proposed. The proposed approach has been validated by both laboratory and industrial implementations.
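
    As a toy stand-in for the sensor-fusion idea above, the sketch below trains a single linear neuron by stochastic gradient descent (rather than the paper's full neural network) to map two hypothetical normalized sensor features, say force RMS and vibration RMS, to a flank-wear estimate; the feature and wear values are fabricated for illustration:

```python
def train_fusion(features, wear, lr=0.3, epochs=20000):
    """Fit one linear neuron (weights + bias) with per-sample gradient steps,
    fusing multi-sensor features into a single flank-wear estimate."""
    n = len(features[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in zip(features, wear):
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = y - t
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical normalized features and measured flank wear (mm) for four cuts
feats = [[0.1, 0.2], [0.4, 0.1], [0.5, 0.5], [0.9, 0.7]]
wear = [0.12, 0.19, 0.30, 0.46]
w, b = train_fusion(feats, wear)
```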

  2. A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations

    Directory of Open Access Journals (Sweden)

    Aranzazu Berbey Alvarez

    2015-04-01

    Passenger flow modeling and station dwelling time estimation are significant elements of railway mass transit planning, but system operators usually have limited information with which to model the passenger flow. In this paper, an artificial intelligence technique known as fuzzy logic is applied to the estimation of the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers’ preferences to assign a level of congestion to each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.

  3. Dictionary-based fiber orientation estimation with improved spatial consistency.

    Science.gov (United States)

    Ye, Chuyang; Prince, Jerry L

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that

  4. Validation of an ICT system supporting Competence-Based Education

    Directory of Open Access Journals (Sweden)

    Nadia Mana

    2015-10-01

    This paper aims to present the evaluation of eSchooling, an ICT system supporting competence-based education. The eSchooling team ran an experiment involving ten high schools throughout an entire school year, so as to cover all the teaching stages (from activity planning to the final student evaluations). In order to guarantee objectivity and independence, external experts performed the monitoring, validation and evaluation of the experimental results. These experts analysed different aspects: the logs of the eSchooling system, the response of a user community, the logs of the interactive e-book system and the outputs of user focus groups. The main indication derived from the analysis of the collected data is to involve entire groups of teachers of the same class, rather than isolated ones, when evaluating such educational tools. Another suggestion is to move in the direction of a tighter integration with other ICT tools, such as the electronic board for recording activity, even when it is not competence-based.

  5. Research on Bridge Sensor Validation Based on Correlation in Cluster

    Directory of Open Access Journals (Sweden)

    Huang Xiaowei

    2016-01-01

    Full Text Available In order to avoid false alarms and missed alarms caused by sensor malfunction or failure, it is critical to diagnose faults and analyze failures of the sensor measuring systems in major infrastructures. Based on real-time bridge monitoring and a study of the correlation probability distribution between the multiple sensors used in the fault diagnosis system, a clustering algorithm based on k-medoids is proposed, dividing sensors of the same type into k clusters. The value of k is optimized by a specially designed evaluation function. Building on a further study of the correlation of sensors within the same cluster, this paper presents a definition of sensor validity and a corresponding calculation algorithm. The algorithm is applied to the analysis of sensor data from an actual health monitoring system. The results reveal that the algorithm can not only accurately measure the degree of failure and locate malfunctions in the time domain, but also quantitatively evaluate the performance of sensors and eliminate diagnosis errors caused by the failure of the reference sensor.
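A minimal sketch of the clustering step, assuming sensor readings as rows of a matrix and 1 - |correlation| as the distance. The k-medoids variant below uses a deterministic farthest-first initialization for simplicity, and the paper's evaluation function for choosing k is not reproduced:

```python
import numpy as np

def sensor_distance(X):
    """Distance between sensors (rows of X): 1 - |Pearson correlation|."""
    return 1.0 - np.abs(np.corrcoef(X))

def k_medoids(D, k, n_iter=100):
    """Alternating k-medoids on a precomputed distance matrix D, with a
    greedy farthest-first initialization: assign each point to its
    nearest medoid, then move each medoid to the cluster member with the
    smallest total within-cluster distance."""
    medoids = [0]
    for _ in range(k - 1):                      # farthest-first seeding
        medoids.append(int(np.argmax(D[:, medoids].min(axis=1))))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1)
```

Sensors landing far from every medoid of their own type would then be flagged for validity analysis within their cluster.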

  6. Fast, moment-based estimation methods for delay network tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, Earl Christophre [Los Alamos National Laboratory; Michailidis, George [U OF MICHIGAN; Nair, Vijayan N [U OF MICHIGAN

    2008-01-01

    Consider the delay network tomography problem where the goal is to estimate distributions of delays at the link-level using data on end-to-end delays. These measurements are obtained using probes that are injected at nodes located on the periphery of the network and sent to other nodes also located on the periphery. Much of the previous literature deals with discrete delay distributions by discretizing the data into small bins. This paper considers more general models with a focus on computationally efficient estimation. The moment-based schemes presented here are designed to function well for larger networks and for applications like monitoring that require speedy solutions.
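As a minimal illustration of the moment-based idea, consider the classical two-receiver tree: probes to receivers 1 and 2 traverse one shared link, so under independent link delays the shared link's delay variance equals the covariance of the end-to-end delays. A sketch under that assumption (the paper's schemes generalize this to larger networks and richer delay models):

```python
import numpy as np

def moment_link_variances(y1, y2):
    """Two-leaf tree: end-to-end delays y1 = d0 + d1 and y2 = d0 + d2
    with mutually independent link delays d0, d1, d2. Moment matching:
        var(d0) = cov(y1, y2)
        var(d1) = var(y1) - cov(y1, y2)
        var(d2) = var(y2) - cov(y1, y2)."""
    c = np.cov(y1, y2)
    shared = c[0, 1]
    return shared, c[0, 0] - shared, c[1, 1] - shared
```

No binning or discretization is needed, which is what makes such moment-based schemes fast for large networks.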

  7. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...... are treated as special components; they have platform dependent overhead values for the qualities and composition functions, defining how qualities are computed from the values of connected components. The approach is exemplified with a prototype estimation tool applied to an OFDM-decoding module modelled...

  8. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require an up-to-date base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of various models of high-power synchronous generators operating in a power system.
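Only the fitting loop is sketched below, with a hypothetical damped-oscillation model standing in for the synchronous-generator equations and plain Gauss-Newton standing in for the paper's hybrid minimization algorithm:

```python
import numpy as np

def model(p, t):
    """Hypothetical stand-in for the generator model: a damped
    oscillation with decay a and angular frequency w. The actual method
    integrates the nonlinear synchronous-generator equations."""
    a, w = p
    return np.exp(-a * t) * np.cos(w * t)

def fit_waveform(t, measured, p0, n_iter=100, eps=1e-7):
    """Minimize the mean-square deviation between the measured waveform
    and the model response by Gauss-Newton iterations with a
    finite-difference Jacobian."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        r = model(p, t) - measured                 # residual vector
        J = np.empty((t.size, p.size))             # Jacobian d r / d p
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = eps
            J[:, i] = (model(p + dp, t) - model(p, t)) / eps
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]
    return p
```

In practice the measured waveform would first pass through the filter system described in the paper; here the loop is shown on clean data only.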

  9. Size-based estimation of the status of fish stocks: simulation analysis and comparison with age-based estimations

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders

    Estimation of the status of fish stocks is important for sustainable management, but data limitations and data quality hinder this task. The commonly used age-based approaches require information about individual age, which is costly and relatively inaccurate to obtain. In contrast, the size of organisms...... is linked to physiology more directly than age is, and can be measured more easily and at lower cost. In this work we used a single-species size-based model to estimate the fishing mortality (F) and the status of the stock, quantified by the ratio F/Fmsy between the actual fishing mortality and the fishing mortality...... which leads to the maximum sustainable yield. A simulation analysis was done to investigate the sensitivity of the estimation and its improvement when stock-specific life history information is available. To evaluate our approach with real observations, data-rich fish stocks, like the North Sea cod...

  10. Kalman-Filter-Based State Estimation for System Information Exchange in a Multi-bus Islanded Microgrid

    DEFF Research Database (Denmark)

    Wang, Yanbo; Tian, Yanjun; Wang, Xiongfei

    2014-01-01

    State monitoring and analysis of distribution systems has become an urgent issue, and state estimation serves as an important tool to deal with it. In this paper, a Kalman-Filter-based state estimation method for a multi-bus islanded microgrid is presented. First, an overall small signal model...... with consideration of voltage performance and load characteristics is developed. Then, a Kalman-Filter-based state estimation method is proposed to estimate system information instead of using communication facilities, where the estimator of each DG unit can dynamically obtain information of all the DG units as well...... as network voltages just from its own local voltage and current. The proposed estimation method is able to provide accurate state information to support system operation without any communication facilities. Simulation and experimental results are given for validating the proposed small signal model and state......

  11. Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations.

    Science.gov (United States)

    Kopka, Ryszard

    2017-12-22

    In this paper, new results on using only voltage measurements at the supercapacitor terminals to estimate the accumulated energy are presented. For this purpose, a study based on the application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. The parameter estimates of the models are then used to assess the amount of energy accumulated in the supercapacitor. The obtained results are compared with the energy determined experimentally by measuring the voltage and current at the supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. The very high consistency between the estimated and experimental results fully confirms the suitability of the proposed approach, and thus the applicability of fractional calculus to the modelling of supercapacitor energy storage.

  13. Event-based state estimation a stochastic perspective

    CERN Document Server

    Shi, Dawei; Chen, Tongwen

    2016-01-01

    This book explores event-based estimation problems. It shows how several stochastic approaches are developed to maintain estimation performance when sensors perform their updates at slower rates only when needed. The self-contained presentation makes this book suitable for readers with no more than a basic knowledge of probability analysis, matrix algebra and linear systems. The introduction and literature review provide information, while the main content deals with estimation problems from four distinct angles in a stochastic setting, using numerous illustrative examples and comparisons. The text elucidates both theoretical developments and their applications, and is rounded out by a review of open problems. This book is a valuable resource for researchers and students who wish to expand their knowledge and work in the area of event-triggered systems. At the same time, engineers and practitioners in industrial process control will benefit from the event-triggering technique that reduces communication costs ...

  14. The EWMA control chart based on robust scale estimators

    Directory of Open Access Journals (Sweden)

    Nadia Saeed

    2016-12-01

    Full Text Available The exponentially weighted moving average (EWMA) chart is very popular in statistical process control for detecting small shifts in the process mean and variance. This chart performs well under the assumption of normality, but when data violate that assumption, robust approaches are needed. We developed EWMA charts based on the different robust scale estimators available in the literature and compared the performance of these charts by calculating the expected number of out-of-control points and the expected width under non-symmetric distributions (i.e., gamma and exponential). Simulation studies were carried out for this purpose, and the results showed that, among the six robust estimators, the chart based on the estimator Q_n performed relatively well for non-normal processes, in terms of its shorter expected width and greater number of expected out-of-control points, which shows its sensitivity in detecting out-of-control signals.
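A sketch of the two ingredients, assuming the Rousseeuw-Croux form of Q_n (finite-sample correction factors omitted) and the asymptotic EWMA control limits:

```python
import numpy as np

def qn_scale(x):
    """Rousseeuw-Croux Q_n scale estimator: 2.2219 times the k-th order
    statistic of the pairwise absolute differences, k = C(h, 2) with
    h = n//2 + 1. Finite-sample correction factors are omitted."""
    x = np.asarray(x, dtype=float)
    n = x.size
    diffs = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return 2.2219 * np.sort(diffs)[k - 1]

def ewma(series, lam=0.2, z0=None):
    """EWMA statistic z_t = lam * x_t + (1 - lam) * z_{t-1}."""
    z = series[0] if z0 is None else z0
    out = []
    for x in series:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return np.array(out)

def ewma_limits(target, sigma, lam=0.2, L=3.0):
    """Asymptotic EWMA control limits: target +/- L*sigma*sqrt(lam/(2-lam)).
    Plugging a robust sigma (e.g. qn_scale) in place of the sample
    standard deviation gives the robust chart studied in the paper."""
    half = L * sigma * np.sqrt(lam / (2.0 - lam))
    return target - half, target + half
```

Because Q_n depends only on the bulk of the pairwise differences, contaminated or heavy-tailed data inflate the limits far less than the classical standard deviation would.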

  15. Spacecraft Formation Orbit Estimation Using WLPS-Based Localization

    Directory of Open Access Journals (Sweden)

    Shu Ting Goh

    2011-01-01

    Full Text Available This paper studies the implementation of a novel wireless local positioning system (WLPS) for spacecraft formation flying to maintain high-performance spacecraft relative and absolute position estimation. A WLPS equipped with antenna arrays allows each spacecraft to measure the relative range and coordinate angle(s) of other spacecraft located in its coverage area. The dynamic base station and the transponder of the WLPS enable spacecraft to localize each other in the formation. Because the signal travels a round trip in the WLPS, and because of the high spacecraft velocities, the signal transmission time delay reduces the localization performance. This work studies spacecraft formation position estimation performance assuming that only the WLPS is available onboard. The feasibility of estimating the absolute spacecraft position using only a one-dimensional antenna array is also investigated. The effect of including GPS measurements in addition to the WLPS is studied and compared to a GPS standalone system.

  16. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
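For intuition only, a generic single-hop version of RSS localization: invert a log-distance path-loss model to get ranges, then solve the range equations by Gauss-Newton least squares. The path-loss parameters `p0` and `n` are assumptions, and the paper's use of the relayed signal and its CRLB analysis are not reproduced:

```python
import numpy as np

def rss_to_distance(p_rx, p0=-40.0, n=2.0):
    """Invert the log-distance path-loss model
    P_rx = p0 - 10*n*log10(d), so d = 10**((p0 - P_rx) / (10*n)).
    p0 (dBm at 1 m) and the path-loss exponent n are assumed known."""
    return 10.0 ** ((p0 - p_rx) / (10.0 * n))

def locate(anchors, dists, n_iter=100):
    """Gauss-Newton least-squares localization of a source from ranges
    to known anchor positions (rows of `anchors`)."""
    x = anchors.mean(axis=0)                 # start at the anchor centroid
    for _ in range(n_iter):
        diff = x - anchors
        r = np.linalg.norm(diff, axis=1)     # predicted ranges
        J = diff / r[:, None]                # Jacobian of ranges w.r.t. x
        x = x - np.linalg.lstsq(J, r - dists, rcond=None)[0]
    return x
```

The x and y coordinates are estimated jointly here; the CRLB expressions in the paper bound the variance of each coordinate separately.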

  17. Upper Bound Performance Estimation for Copper Based Broadband Access

    DEFF Research Database (Denmark)

    Jensen, Michael; Gutierrez Lopez, Jose Manuel

    2012-01-01

    Around 70% of all broadband connections in the European Union are carried over copper, and the scenario is unlikely to change in the next few years, as carriers still believe in the profitability of their copper infrastructure. In this paper we show how to estimate the performance upper bound of c...... to define the limitations of copper-based broadband access. A case study in a municipality in Denmark shows how the network dimension estimated to be able to provide video conference services to the majority of the population might be too high to be implemented in reality.

  18. In-vivo validation of fast spectral velocity estimation techniques – preliminary results

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2008-01-01

    Spectral Doppler is a common way to estimate blood velocities in medical ultrasound (US). The standard way of estimating spectrograms is by using Welch's method (WM). WM is dependent on a long observation window (OW) (about 100 transmissions) to produce spectrograms with sufficient spectral...... resolution and contrast. Two adaptive filterbank methods have been suggested to circumvent this problem: the Blood spectral Power Capon method (BPC) and the Blood Amplitude and Phase Estimation method (BAPES). Previously, simulations and flow rig experiments have indicated that the two adaptive methods can...... was scanned using the experimental ultrasound scanner RASMUS and a B-K Medical 5 MHz linear array transducer with an angle of insonation not exceeding 60deg. All 280 spectrograms were then randomised and presented to a radiologist blinded for method and OW for visual evaluation: useful or not useful. WMbw...

  19. Classifier-based latency estimation: a novel way to estimate and predict BCI accuracy

    Science.gov (United States)

    Thompson, David E.; Warschausky, Seth; Huggins, Jane E.

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) that detect event-related potentials (ERPs) rely on classification schemes that are vulnerable to latency jitter, a phenomenon known to occur with ERPs such as the P300 response. The objective of this work was to investigate the role that latency jitter plays in BCI classification. Approach. We developed a novel method, classifier-based latency estimation (CBLE), based on a generalization of Woody filtering. The technique works by presenting time-shifted copies of the data to the classifier and using the time shift that corresponds to the maximal classifier score. Main results. The variance of the CBLE estimates correlates significantly with BCI accuracy. The method is relatively classifier-independent, and the results were confirmed on two linear classifiers. Significance. The results suggest that latency jitter may be an important cause of poor BCI performance, and methods that correct for latency jitter may improve that performance. CBLE can also be used to decrease the amount of data needed for accuracy estimation, allowing research on effects with shorter timescales.
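The core of CBLE can be sketched in a few lines: score time-shifted copies of an epoch with the trained classifier and keep the shift that maximizes the score. Any scoring function can be plugged in; the classifier itself is assumed to be trained already:

```python
import numpy as np

def cble(epoch, score, shifts):
    """Classifier-based latency estimation: apply the classifier's
    scoring function to time-shifted copies of the epoch (last axis is
    time) and return the shift giving the maximal score. The negated
    best shift is the latency estimate; the variance of these estimates
    over trials is the jitter measure studied in the paper."""
    scores = [score(np.roll(epoch, s, axis=-1)) for s in shifts]
    return shifts[int(np.argmax(scores))]
```

Shifting by the estimated latency before averaging or classification is exactly the Woody-filter idea that CBLE generalizes.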

  20. On Estimating Force-freeness Based on Observed Magnetograms

    Science.gov (United States)

    Zhang, X. M.; Zhang, M.; Su, J. T.

    2017-01-01

    It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on discussing the effect of limited FOV or instrument sensitivity, our calculation shows that just measurement error alone can significantly influence the results of estimates of force-freeness, due to the fact that measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of a formula for estimating force-freeness, would result in wrong judgments of the force-freeness: a truly force-free field may be mistakenly estimated as being non-force-free and a truly non-force-free field may be estimated as being force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and also suggests that the true photospheric magnetic field may be further away from being force-free than it currently appears to be.

  1. MRI-based intelligence quotient (IQ) estimation with sparse learning.

    Science.gov (United States)

    Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model that jointly considers both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned at different sites with different scanning parameters and protocols, which introduces large variability across the datasets. To address this issue, we design a two-step procedure: 1) identifying the likely scanning site for each testing subject, and 2) estimating the testing subject's IQ using a specific estimator designed for that scanning site. We performed two experiments to test the performance of our method, using MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we used a multi-kernel Support Vector Regression (SVR) to estimate IQ values and obtained an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true and estimated IQs. In the second experiment, we used a single-kernel SVR for IQ estimation and achieved an average correlation coefficient of 0.684 and an average root mean square error of 9.166. These results show the effectiveness of using imaging data for IQ prediction, which, to our knowledge, is rarely done in the field.

  2. Marker-Based Estimation of Heritability in Immortal Populations

    Science.gov (United States)

    Kruijer, Willem; Boer, Martin P.; Malosetti, Marcos; Flood, Pádraic J.; Engel, Bas; Kooke, Rik; Keurentjes, Joost J. B.; van Eeuwijk, Fred A.

    2015-01-01

    Heritability is a central parameter in quantitative genetics, from both an evolutionary and a breeding perspective. For plant traits heritability is traditionally estimated by comparing within- and between-genotype variability. This approach estimates broad-sense heritability and does not account for different genetic relatedness. With the availability of high-density markers there is growing interest in marker-based estimates of narrow-sense heritability, using mixed models in which genetic relatedness is estimated from genetic markers. Such estimates have received much attention in human genetics but are rarely reported for plant traits. A major obstacle is that current methodology and software assume a single phenotypic value per genotype, hence requiring genotypic means. An alternative that we propose here is to use mixed models at the individual plant or plot level. Using statistical arguments, simulations, and real data we investigate the feasibility of both approaches and how these affect genomic prediction with the best linear unbiased predictor and genome-wide association studies. Heritability estimates obtained from genotypic means had very large standard errors and were sometimes biologically unrealistic. Mixed models at the individual plant or plot level produced more realistic estimates, and for simulated traits standard errors were up to 13 times smaller. Genomic prediction was also improved by using these mixed models, with up to a 49% increase in accuracy. For genome-wide association studies on simulated traits, the use of individual plant data gave almost no increase in power. The new methodology is applicable to any complex trait where multiple replicates of individual genotypes can be scored. This includes important agronomic crops, as well as bacteria and fungi. PMID:25527288

  3. Traditional waveform based spike sorting yields biased rate code estimates.

    Science.gov (United States)

    Ventura, Valérie

    2009-04-28

    Much of neuroscience has to do with relating neural activity and behavior or environment. One common measure of this relationship is the firing rate of neurons as a function of behavioral or environmental parameters, often called tuning functions and receptive fields. Firing rates are estimated from the spike trains of neurons recorded by electrodes implanted in the brain. Individual neurons' spike trains are not typically readily available, because the signal collected at an electrode is often a mixture of activities from different neurons and noise. Extracting individual neurons' spike trains from voltage signals, known as spike sorting, is one of the most important data analysis problems in neuroscience, because it must be undertaken prior to any analysis of neurophysiological data in which more than one neuron is believed to be recorded on a single electrode. All current spike-sorting methods consist of clustering the characteristic spike waveforms of neurons. The sequence of first spike sorting based on waveforms, then estimating tuning functions, has long been the accepted way to proceed. Here, we argue that the covariates that modulate tuning functions also contain information about spike identities, and that if tuning information is ignored during spike sorting, the resulting tuning function estimates are biased and inconsistent, unless spikes can be classified with perfect accuracy. This means, for example, that the commonly used peristimulus time histogram is a biased estimate of the firing rate of a neuron that is not perfectly isolated. We further argue that the correct conceptual way to view the problem is to note that spike sorting provides information about rate estimation and vice versa, so that the two should be considered simultaneously rather than sequentially. Indeed, we show that when spike sorting and tuning-curve estimation are performed in parallel, unbiased estimates of tuning curves can be recovered even from

  4. Validity of Standing Posture Eight-electrode Bioelectrical Impedance to Estimate Body Composition in Taiwanese Elderly

    Directory of Open Access Journals (Sweden)

    Ling-Chun Lee

    2014-09-01

    Conclusion: The results of this study showed that the impedance index and the LST of the whole body, upper limbs, and lower limbs derived from the DXA findings were highly correlated. The LST and BF% estimated by BIA8 for the whole body and the various body segments were highly correlated with the corresponding DXA results; however, the BC-418 overestimates the participants' appendicular LST and underestimates whole-body BF%. Therefore, caution is needed when interpreting estimates of appendicular LST and whole-body BF% in elderly adults.

  5. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    Science.gov (United States)

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented, using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more widely disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxae of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both validity and reliability, a target sample consisting of two series of adult ossa coxae of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. These results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimates corresponds to the traditional sectioning point used in osteological studies. DSP2 replaces the former version, which should no longer be used. DSP2 is a robust, reliable, and user-friendly technique for sexing the adult os coxae. © 2017 Wiley Periodicals, Inc.
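A simplified sketch of the underlying idea: a two-class pooled-covariance LDA with equal priors and a 0.95 posterior threshold below which the estimate is reported as indeterminate. The real DSP2 tool is trained on up to ten measured os coxae dimensions; this toy version accepts any feature vectors:

```python
import numpy as np

def fit_lda(X, y):
    """Two-class LDA with pooled covariance and equal priors (a
    simplified stand-in for the DSP2 discriminant)."""
    classes = sorted(set(y))
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    Xc = np.vstack([X[y == c] - means[i] for i, c in enumerate(classes)])
    icov = np.linalg.inv(Xc.T @ Xc / (len(X) - len(classes)))
    return classes, means, icov

def estimate_sex(x, model, threshold=0.95):
    """Return the class whose posterior probability exceeds the
    threshold, or None ('indeterminate') otherwise."""
    classes, means, icov = model
    logp = np.array([-0.5 * (x - m) @ icov @ (x - m) for m in means])
    post = np.exp(logp - logp.max())
    post /= post.sum()
    i = int(np.argmax(post))
    return classes[i] if post[i] >= threshold else None
```

Reporting "indeterminate" below the threshold is what keeps the accepted estimates above the 95% reliability level described in the abstract.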

  6. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation

    Science.gov (United States)

    Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.

    2016-06-01

    Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed in order to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aircraft Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and the in-situ measurements were correlated with the UAV data acquisitions. The correlation aimed at investigating the optimal flight conditions and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm is developed to estimate geometric tree parameters from the 3D points: stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually, which allows for an evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.

  7. UAV-BASED AUTOMATIC TREE GROWTH MEASUREMENT FOR BIOMASS ESTIMATION

    Directory of Open Access Journals (Sweden)

    M. Karpina

    2016-06-01

    Full Text Available Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed in order to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aircraft Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and the in-situ measurements were correlated with the UAV data acquisitions. The correlation aimed at investigating the optimal flight conditions and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm is developed to estimate geometric tree parameters from the 3D points: stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually, which allows for an evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
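The height-extraction step can be sketched as follows, assuming stem positions have already been detected: collect the cloud points inside a horizontal cylinder around each stem and take the vertical span (highest point minus local ground). This is an illustrative simplification of the cross-section procedure described in the abstract:

```python
import numpy as np

def tree_heights(points, stem_xy, radius=0.5):
    """Given a dense 3D point cloud (N x 3 array of x, y, z) and a list
    of detected stem positions, estimate each tree height as the
    vertical span of the points within a horizontal radius of the stem
    (highest point minus the local ground point)."""
    heights = []
    for sx, sy in stem_xy:
        d = np.hypot(points[:, 0] - sx, points[:, 1] - sy)
        z = points[d < radius, 2]
        heights.append(z.max() - z.min() if z.size else np.nan)
    return np.array(heights)
```

With weekly acquisitions, differencing these heights over time gives the growth estimate that is compared against the manual reference measurements.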

  8. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
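A minimal sketch of the sliding-window idea for one-dimensional class sequences: the center label of each window is replaced by the class minimizing the summed proximity-matrix distance to all labels in the window. The classes may be purely nominal, since only the proximity matrix `P` defines their relationships:

```python
import numpy as np

def proximity_correct(labels, P, win=3):
    """Sliding-window relabeling: for each window, replace the center
    label with the class c minimizing sum_j P[c, labels[j]] over the
    window. P is a (num_classes x num_classes) proximity matrix; no
    ordering of the classes is assumed."""
    half = win // 2
    out = labels.copy()
    for i in range(half, len(labels) - half):
        window = labels[i - half:i + half + 1]
        costs = P[:, window].sum(axis=1)   # total proximity per candidate class
        out[i] = int(np.argmin(costs))
    return out
```

With a 0/1 proximity matrix this reduces to majority voting; a perceptually derived matrix (as in the key-finding case study) weights confusions between "close" classes less heavily.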

  9. The Validity of Value-Added Estimates from Low-Stakes Testing Contexts: The Impact of Change in Test-Taking Motivation and Test Consequences

    Science.gov (United States)

    Finney, Sara J.; Sundre, Donna L.; Swain, Matthew S.; Williams, Laura M.

    2016-01-01

    Accountability mandates often prompt assessment of student learning gains (e.g., value-added estimates) via achievement tests. The validity of these estimates has been questioned when performance on tests is low stakes for students. To assess the effects of motivation on value-added estimates, we assigned students to one of three test consequence…

  10. Using the Microsoft Kinect for patient size estimation and radiation dose normalization: proof of concept and initial validation.

    Science.gov (United States)

    Cook, Tessa S; Couch, Gregory; Couch, Timothy J; Kim, Woojin; Boonn, William W

    2013-08-01

    Monitoring patients' imaging-related radiation is currently a hot topic, but there are many obstacles to accurate, patient-specific dose estimation. While some, such as easier access to dose data and parameters, have been overcome, the challenge remains as to how accurately these dose estimates reflect the actual dose received by the patient. The main parameter that is often not considered is patient size. There are many surrogates (weight, body mass index, effective diameter), but none of these truly reflect the three-dimensional "size" of an individual. In this work, we present and evaluate a novel approach to estimating patient volume using the Microsoft Kinect™, a combination RGB camera-infrared depth sensor device. The goal of using this device is to generate a three-dimensional estimate of patient size, in order to more effectively model the dimensions of the anatomy of interest and not only enable better normalization of dose estimates but also promote more patient-specific protocoling of future CT examinations. Preliminary testing and validation of this system reveals good correlation when individuals are standing upright with their arms by their sides, but demonstrates some variation with arm position. Further evaluation and testing are necessary with multiple patient positions and in both adult and pediatric patients. Correlation with other patient size metrics will also be helpful, as the ideal measure of patient "size" may in fact be a combination of existing metrics and newly developed techniques.

  11. Statistical properties of Fourier-based time-lag estimates

    Science.gov (United States)

    Epitropakis, A.; Papadakis, I. E.

    2016-06-01

    Context. The study of X-ray time-lag spectra in active galactic nuclei (AGN) is currently an active research area, since it has the potential to illuminate the physics and geometry of the innermost region (i.e. close to the putative super-massive black hole) in these objects. To obtain reliable information from these studies, the statistical properties of time-lags estimated from data must be known as accurately as possible. Aims: We investigated the statistical properties of Fourier-based time-lag estimates (i.e. based on the cross-periodogram), using evenly sampled time series with no missing points. Our aim is to provide practical "guidelines" on estimating time-lags that are minimally biased (i.e. whose mean is close to their intrinsic value) and have known errors. Methods: Our investigation is based on both analytical work and extensive numerical simulations. The latter consisted of generating artificial time series with various signal-to-noise ratios and sampling patterns/durations similar to those offered by AGN observations with present and past X-ray satellites. We also considered a range of different model time-lag spectra commonly assumed in X-ray analyses of compact accreting systems. Results: Discrete sampling, binning and finite light curve duration cause the mean of the time-lag estimates to have a smaller magnitude than their intrinsic values. Smoothing (i.e. binning over consecutive frequencies) of the cross-periodogram can add extra bias at low frequencies. The use of light curves with low signal-to-noise ratio reduces the intrinsic coherence, and can introduce a bias to the sample coherence, time-lag estimates, and their predicted error. Conclusions: Our results have direct implications for X-ray time-lag studies in AGN, but can also be applied to similar studies in other research fields. We find that: a) time-lags should be estimated at frequencies lower than ≈ 1/2 the Nyquist frequency to minimise the effects of discrete binning of the
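The cross-periodogram time-lag estimate that the record analyses can be sketched for an idealised case (evenly sampled, noise-free series, a known delay): the phase of X·conj(Y) at each frequency, divided by 2πf, gives the lag. This is a minimal noise-free illustration, not the paper's full estimator with smoothing and error bars.

```python
import numpy as np

def time_lag(x, y, dt=1.0):
    """Return (freqs, lags): the phase of the cross-periodogram
    X * conj(Y), converted to a time lag at each positive frequency."""
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    phase = np.angle(X * np.conj(Y))
    lags = np.zeros_like(freqs)
    nz = freqs > 0                      # skip the DC bin
    lags[nz] = phase[nz] / (2 * np.pi * freqs[nz])
    return freqs, lags

# y is x delayed by 4 samples; at the signal frequency the estimator
# recovers the delay (the sine fits an integer number of cycles, so
# there is no spectral leakage).
n, f0, delay = 1000, 0.05, 4.0
t = np.arange(n)
x = np.sin(2 * np.pi * f0 * t)
y = np.sin(2 * np.pi * f0 * (t - delay))
freqs, lags = time_lag(x, y)
k = np.argmin(np.abs(freqs - f0))
```

Note the phase-wrapping limit implicit in `np.angle`: lags larger than half a period at a given frequency alias, one reason the record recommends care at high frequencies.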

  12. Neural Net Gains Estimation Based on an Equivalent Model

    Directory of Open Access Journals (Sweden)

    Karen Alicia Aguilar Cruz

    2016-01-01

    Full Text Available A model of an Equivalent Artificial Neural Net (EANN) describes the gain set, viewed as parameters in a layer, and this consideration is a reproducible process applicable to a neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first considers fuzzy inference combined with the traditional Kalman filter, obtaining the equivalent model and estimating, in a fuzzy sense, the gain matrix A and the proper gain K in the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and a recursive description of the gain estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting, from the Knowledge Base (KB), the factors based on the functional error and the reference signal built with the past information of the system.

  13. MODIS Based Estimation of Forest Aboveground Biomass in China.

    Science.gov (United States)

    Yin, Guodong; Zhang, Yuan; Sun, Yan; Wang, Tao; Zeng, Zhenzhong; Piao, Shilong

    2015-01-01

    Accurate estimation of forest biomass C stock is essential to understand carbon cycles. However, current estimates of Chinese forest biomass are mostly based on inventory-based timber volumes and empirical conversion factors at the provincial scale, which could introduce large uncertainties in forest biomass estimation. Here we provide a data-driven estimate of Chinese forest aboveground biomass from 2001 to 2013 at a spatial resolution of 1 km by integrating a recently reviewed plot-level ground-measured forest aboveground biomass database with geospatial information from the 1-km Moderate-Resolution Imaging Spectroradiometer (MODIS) dataset in a machine learning algorithm (the model tree ensemble, MTE). We show that Chinese forest aboveground biomass is 8.56 Pg C, which is mainly contributed by evergreen needle-leaf forests and deciduous broadleaf forests. The mean forest aboveground biomass density is 56.1 Mg C ha-1, with high values observed in temperate humid regions. The responses of forest aboveground biomass density to mean annual temperature are closely tied to water conditions; that is, negative responses dominate regions with mean annual precipitation less than 1300 mm y-1 and positive responses prevail in regions with mean annual precipitation higher than 2800 mm y-1. During the 2000s, the forests in China sequestered C at a rate of 61.9 Tg C y-1, and this C sink is mainly distributed in north China and may be attributed to warming climate, rising CO2 concentration, N deposition, and growth of young forests.

  14. MODIS Based Estimation of Forest Aboveground Biomass in China.

    Directory of Open Access Journals (Sweden)

    Guodong Yin

    Full Text Available Accurate estimation of forest biomass C stock is essential to understand carbon cycles. However, current estimates of Chinese forest biomass are mostly based on inventory-based timber volumes and empirical conversion factors at the provincial scale, which could introduce large uncertainties in forest biomass estimation. Here we provide a data-driven estimate of Chinese forest aboveground biomass from 2001 to 2013 at a spatial resolution of 1 km by integrating a recently reviewed plot-level ground-measured forest aboveground biomass database with geospatial information from the 1-km Moderate-Resolution Imaging Spectroradiometer (MODIS) dataset in a machine learning algorithm (the model tree ensemble, MTE). We show that Chinese forest aboveground biomass is 8.56 Pg C, which is mainly contributed by evergreen needle-leaf forests and deciduous broadleaf forests. The mean forest aboveground biomass density is 56.1 Mg C ha-1, with high values observed in temperate humid regions. The responses of forest aboveground biomass density to mean annual temperature are closely tied to water conditions; that is, negative responses dominate regions with mean annual precipitation less than 1300 mm y-1 and positive responses prevail in regions with mean annual precipitation higher than 2800 mm y-1. During the 2000s, the forests in China sequestered C at a rate of 61.9 Tg C y-1, and this C sink is mainly distributed in north China and may be attributed to warming climate, rising CO2 concentration, N deposition, and growth of young forests.

  15. Observer Based Fault Detection and Moisture Estimating in Coal Mill

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Mataji, Babak

    2008-01-01

    In this paper an observer-based method for detecting faults and estimating moisture content in the coal in coal mills is presented. Handling of faults and operation under special conditions, such as high moisture content in the coal, are of growing importance due to the increasing requirements to the general performance of power plants. Detection of faults and moisture content estimation are consequently of high interest in the handling of the problems caused by faults and moisture content. The coal flow out of the mill is the obvious variable to monitor when detecting non-intended drops in the coal flow out of the coal mill. However, this variable is not measurable. Another estimated variable is the moisture content, which is only "measurable" during steady-state operations of the coal mill. Instead, this paper suggests a method where these unknown variables are estimated based on a simple energy…

  16. Vital signs and estimated blood loss in patients with major trauma: testing the validity of the ATLS classification of hypovolaemic shock.

    Science.gov (United States)

    Guly, H R; Bouamra, O; Spiers, M; Dark, P; Coats, T; Lecky, F E

    2011-05-01

    The Advanced Trauma Life Support (ATLS) system classifies the severity of shock. The aim of this study is to test the validity of this classification. Admission physiology, injury and outcome variables from adult injured patients presenting to hospitals in England and Wales between 1989 and 2007, and stored on the Trauma Audit and Research Network (TARN) database, were studied. For each patient, the blood loss was estimated and patients were divided into four groups based on the estimated blood loss corresponding to the ATLS classes of shock. The median and interquartile ranges (IQR) of the heart rate (HR), systolic blood pressure (SBP), respiratory rate (RR) and Glasgow Coma Score (GCS) were calculated for each group. The median HR rose from 82 beats per minute (BPM) in estimated class 1 shock to 95 BPM in estimated class 4 shock. The median SBP fell from 135 mm Hg to 120 mm Hg. There was no significant change in RR or GCS. With increasing estimated blood loss there is a trend to increasing heart rate and a reduction in SBP, but not to the degree suggested by the ATLS classification of shock. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
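The study's per-class summary (median and IQR of a vital sign within each estimated-blood-loss class) can be sketched as follows. The numbers below are synthetic stand-ins, not TARN data; only the analysis pattern mirrors the record.

```python
import numpy as np

# Synthetic cohort: shock class 1..4 and a heart rate that drifts
# upward slightly with class, as the study reports (illustrative only).
rng = np.random.default_rng(0)
shock_class = rng.integers(1, 5, size=2000)
heart_rate = 80 + 4 * shock_class + rng.normal(0, 10, size=2000)

# Median and interquartile range of HR per estimated shock class.
summary = {}
for c in range(1, 5):
    hr = heart_rate[shock_class == c]
    q1, med, q3 = np.percentile(hr, [25, 50, 75])
    summary[c] = (med, q1, q3)
```

The gentle drift in the synthetic medians mimics the study's finding: a trend with class, but far weaker than the textbook ATLS thresholds would suggest.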

  17. Validation of an image analysis method for estimating coating thickness on pellets

    DEFF Research Database (Denmark)

    Larsen, C.C.; Sonnergaard, Jørn; Bertelsen, P.

    2003-01-01

    …of controlling and monitoring the illumination technique utilised. Calibration of the image analysis equipment was of the highest importance. Using pellets with a high degree of sphericity and narrow size distribution, it was sufficient to use 1000 pellets to estimate the mean pellet size and the coating…

  18. Sulphur levels in saliva as an estimation of sulphur status in cattle: a validation study

    NARCIS (Netherlands)

    Dermauw, V.; Froidmont, E.; Dijkstra, J.; Boever, de J.L.; Vyverman, W.; Debeer, A.E.; Janssens, G.P.J.

    2012-01-01

    Effective assessment of sulphur (S) status in cattle is important for optimal health, yet remains difficult. Rumen fluid S concentrations are preferred, but difficult to sample under practical conditions. This study aimed to evaluate salivary S concentration as an estimator of S status in cattle.

  19. Validation of the Ejike-Ijeh equations for the estimation of body fat ...

    African Journals Online (AJOL)

    The Ejike-Ijeh equations for the estimation of body fat percentage make it possible for the body fat content of individuals and populations to be determined without the use of costly equipment. However, because the equations were derived using data from a young-adult (18-29 years old) Nigerian population, it is important ...

  20. Validation of the urine column measurement as an estimation of the intra-abdominal pressure.

    NARCIS (Netherlands)

    Steeg, H.J.J. van der; Akkeren, J.P. van; Houterman, S.; Roumen, R.M.H.

    2009-01-01

    OBJECTIVE: To evaluate the efficacy of the urine column (UC) measurement compared to the intra-vesicular pressure (IVP) measurement as an estimation of intra-abdominal pressure (IAP) in patients with IAP up to 30 mmHg. METHODS: Fifteen patients undergoing a laparoscopic cholecystectomy were studied.

  1. Toward On-line Parameter Estimation of Concentric Tube Robots Using a Mechanics-based Kinematic Model.

    Science.gov (United States)

    Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E; Park, Frank Chongwoo

    2016-10-01

    Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments.

  2. Sex estimation in a contemporary Turkish population based on CT scans of the calcaneus.

    Science.gov (United States)

    Ekizoglu, Oguzhan; Inci, Ercan; Palabiyik, Figen Bakirtas; Can, Ismail Ozgur; Er, Ali; Bozdag, Mustafa; Kacmaz, Ismail Eralp; Kranioti, Elena F

    2017-10-01

    Building a reliable biological profile from decomposed remains depends heavily on the accurate estimation of sex. A variety of methods based on every single skeletal element have been developed over the years for different populations, employing both osteological and virtual methods. The latter seem to be a reasonable alternative in countries lacking osteological reference collections. The current study used 3D virtual models of calcanei from CT scans of living adults to develop a sex estimation method for a contemporary Turkish population. Four hundred and twenty-eight calcanei CT scans were analysed. The sample was divided into two subsamples: an original (N=348) and a validation sample (N=80) with similar distributions of males and females. Nine classical measurements were taken using the 3D models of the calcanei, and two different statistical methods (discriminant function analysis and binary logistic regression) were used. Classification accuracy ranged from 82% to 98% for the validation sample and was consistently high using either of the two methods. Sex bias seems to be lower for most of the logistic regression equations compared to the discriminant functions. These results, however, need further testing to be verified. Based on the results of this study we recommend the use of both methods for sex estimation from the measurements of the calcaneus bone in a Turkish population. Copyright © 2017 Elsevier B.V. All rights reserved.
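One of the two statistical methods named above, discriminant function analysis, can be sketched for the two-class case with a Fisher linear discriminant. The data below are synthetic and the measurement means are illustrative, not the study's Turkish reference values.

```python
import numpy as np

# Synthetic calcaneal measurements (e.g. length, width in mm) for two
# sex groups; means/spreads are illustrative assumptions.
rng = np.random.default_rng(1)
males = rng.normal([80.0, 42.0], 3.0, size=(200, 2))
females = rng.normal([72.0, 37.0], 3.0, size=(200, 2))

X = np.vstack([males, females])
y = np.array([1] * 200 + [0] * 200)          # 1 = male, 0 = female

# Fisher linear discriminant: w ∝ Sw^-1 (m1 - m0), cutoff at the
# midpoint of the projected group means.
m1, m0 = males.mean(axis=0), females.mean(axis=0)
Sw = np.cov(males.T) * 199 + np.cov(females.T) * 199   # within-group scatter
w = np.linalg.solve(Sw, m1 - m0)
cutoff = w @ (m1 + m0) / 2
pred = (X @ w > cutoff).astype(int)
accuracy = (pred == y).mean()
```

With well-separated groups this toy discriminant classifies most cases correctly, the same sectioning-point logic behind the study's published functions.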

  3. Data Based Parameter Estimation Method for Circular-scanning SAR Imaging

    Directory of Open Access Journals (Sweden)

    Chen Gong-bo

    2013-06-01

    Full Text Available The circular-scanning Synthetic Aperture Radar (SAR) is a novel working mode, and its image quality is closely related to the accuracy of the imaging parameters, especially considering the inaccuracy of the real speed of the motion. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocities of the radar platform and the scanning angle of the radar antenna is proposed in this paper. By referring to the basic conception of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first improved. The optimal parameter approximation based on the least-squares criterion is then realized by solving the equations derived from the data processing. The simulation results verify the validity of the proposed scheme.
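The least-squares idea borrowed from Doppler navigation can be sketched as follows: each antenna look direction u gives a Doppler shift f_d = (2/λ) v·u, linear in the unknown platform velocity v, so several looks around the scan yield an overdetermined system. Geometry, wavelength, and noise level here are illustrative assumptions, not the paper's.

```python
import numpy as np

lam = 0.03                                  # radar wavelength [m], assumed
v_true = np.array([120.0, 5.0, -2.0])       # platform velocity [m/s], assumed

# Unit look directions around one circular scan at a fixed depression angle.
angles = np.deg2rad(np.arange(0, 360, 30))
depression = np.deg2rad(20.0)
U = np.column_stack([
    np.cos(depression) * np.cos(angles),
    np.cos(depression) * np.sin(angles),
    -np.sin(depression) * np.ones_like(angles),
])

# Measured Doppler shifts: f_d = (2/lam) U v, plus measurement noise.
f_d = (2 / lam) * U @ v_true
f_d += np.random.default_rng(2).normal(0, 5.0, f_d.shape)

# Least-squares recovery of the velocity vector.
A = (2 / lam) * U
v_est, *_ = np.linalg.lstsq(A, f_d, rcond=None)
```

The same linear structure extends to estimating the scanning angle offset by augmenting the parameter vector, which is the spirit of the paper's formulation.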

  4. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs.

    Science.gov (United States)

    McCoy, A B; Wright, A; Krousel-Wood, M; Thomas, E J; McCoy, J A; Sittig, D F

    2015-01-01

    Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes.
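The link-frequency/link-ratio pipeline and the recall/precision evaluation can be sketched on toy data. This is our reading of the approach with illustrative thresholds and pairs, not the published definitions or the study's data.

```python
from collections import Counter

# Each record pairs a problem with a medication entered during routine
# care (toy data; drug-problem pairs are illustrative).
records = [
    ("hypertension", "lisinopril"), ("hypertension", "lisinopril"),
    ("hypertension", "atorvastatin"), ("diabetes", "metformin"),
    ("diabetes", "metformin"), ("diabetes", "lisinopril"),
]

pair_freq = Counter(records)                       # link frequency
med_freq = Counter(med for _, med in records)

# Link ratio: the fraction of a medication's links going to a problem.
link_ratio = {pair: n / med_freq[pair[1]] for pair, n in pair_freq.items()}

# Pairs whose ratio clears a review-derived threshold enter the KB.
threshold = 0.5
knowledge_base = {pair for pair, r in link_ratio.items() if r >= threshold}

# Evaluate against a gold standard of known indications.
gold = {("hypertension", "lisinopril"), ("diabetes", "metformin"),
        ("hypertension", "amlodipine")}
tp = len(knowledge_base & gold)
recall = tp / len(gold)
precision = tp / len(knowledge_base)
```

Note how the ratio filters out incidental co-occurrences (here, diabetes–lisinopril), which is what lets the crowdsourced signal approximate clinician-curated indications.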

  5. A transverse oscillation approach for estimation of three-dimensional velocity vectors, part II: experimental validation.

    Science.gov (United States)

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev; Rasmussen, Morten Fischer; Jensen, Jørgen Arendt

    2014-10-01

    The 3-D transverse oscillation method is investigated by estimating 3-D velocities in an experimental flow-rig system. Measurements of the synthesized transverse oscillating fields are presented as well. The method employs a 2-D transducer; decouples the velocity estimation; and estimates the axial, transverse, and elevation velocity components simultaneously. Data are acquired using a research ultrasound scanner. The velocity measurements are conducted with steady flow in sixteen different directions. For a specific flow direction with [α, β] = [45, 15]°, the mean estimated velocity vector at the center of the vessel is (v(x), v(y), v(z)) = (33.8, 34.5, 15.2) ± (4.6, 5.0, 0.6) cm/s, where the expected velocity is (34.2, 34.2, 13.0) cm/s. The velocity magnitude is 50.6 ± 5.2 cm/s with a bias of 0.7 cm/s. The flow angles α and β are estimated as 45.6 ± 4.9° and 17.6 ± 1.0°. Subsequently, the precision and accuracy are calculated over the entire velocity profiles. On average for all directions, the relative mean bias of the velocity magnitude is -0.08%. For α and β, the mean biases are -0.2° and -1.5°. The relative standard deviation of the velocity magnitude ranges from 8 to 16%. For the flow angles, the ranges of the mean angular deviations are 5° to 16° and 0.7° to 8°.
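As a sanity check on the numbers reported above, the speed and flow angles can be recomputed from the mean velocity components, assuming α is the in-plane angle atan2(v_y, v_x) and β the elevation angle out of that plane (our assumed conventions, which reproduce the abstract's 45.6° and 17.6°).

```python
import math

# Mean estimated components from the abstract [cm/s].
vx, vy, vz = 33.8, 34.5, 15.2

speed = math.sqrt(vx**2 + vy**2 + vz**2)                 # ~50.6 cm/s
alpha = math.degrees(math.atan2(vy, vx))                 # in-plane angle
beta = math.degrees(math.atan2(vz, math.hypot(vx, vy)))  # elevation angle
```

The recomputed values land within the reported uncertainties, supporting the angle conventions assumed here.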

  6. Monte-Carlo-based phase retardation estimator for polarization sensitive optical coherence tomography

    Science.gov (United States)

    Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki

    2011-08-01

    A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.
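The distribution-transform idea can be sketched generically: simulate the biased mean of a noisy, folded phase-retardation measurement on a grid of true values, then invert that simulated curve to correct raw sample means. The folding-plus-Gaussian noise model below is a generic stand-in, not the PS-OCT noise model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def fold(x):
    """Fold angles into [0, pi], where retardation is defined."""
    x = np.mod(x, 2 * np.pi)
    return np.where(x > np.pi, 2 * np.pi - x, x)

# Monte-Carlo calibration: mean measured retardation per true value.
sigma, n_mc = 0.3, 20000                       # assumed noise level
grid = np.linspace(0, np.pi, 200)
mc_mean = np.array(
    [fold(d + rng.normal(0, sigma, n_mc)).mean() for d in grid])
mc_mean = np.maximum.accumulate(mc_mean)       # guard against MC jitter

def correct(raw_mean):
    """Invert the simulated bias curve to recover the true value."""
    return np.interp(raw_mean, mc_mean, grid)

# Near zero retardation the folding biases the raw mean upward;
# the transform pulls the estimate back toward the true value.
true_delta = 0.2
raw = fold(true_delta + rng.normal(0, sigma, 5000)).mean()
corrected = correct(raw)
```

The key property exploited here, as in the paper, is that the simulated mean-versus-truth curve is monotonic, so it can be inverted numerically.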

  7. Development, Validation and Application of a Novel Method for Estimating the Thermal Conductance of Critical Interfaces in the Jaws of the LHC Collimation System

    CERN Document Server

    Leitao, I V

    2013-01-01

    The motivation for this project arises from the difficulty in quantifying the manufacturing quality of critical interfaces in the water cooled jaws of the TCTP and TCSP (Target Collimator Tertiary Pickup and Target Collimator Secondary Pickup) collimators. These interfaces play a decisive role in the transfer of heat deposited by the beam towards the cooling system avoiding excessive deformation of the collimator. Therefore, it was necessary to develop a non-destructive method that provides an estimation of the thermal conductance during the acceptance test of the TCTP and TCSP jaws. The method is based on experimental measurements of temperature evolution and numerical simulations. By matching experimental and numerical results it is possible to estimate the thermal conductance in several sections of the jaw. A simplified experimental installation was built to validate the method, then a fully automatic Test-Bench was developed and built for the future acceptance of the TCTP/TCSP jaws which will be manufactu...

  8. Gait Phase Estimation Based on Noncontact Capacitive Sensing and Adaptive Oscillators.

    Science.gov (United States)

    Zheng, Enhao; Manca, Silvia; Yan, Tingfang; Parri, Andrea; Vitiello, Nicola; Wang, Qining

    2017-10-01

    This paper presents a novel strategy aiming to acquire an accurate and walking-speed-adaptive estimation of the gait phase through noncontact capacitive sensing and adaptive oscillators (AOs). The capacitive sensing system is designed with two sensing cuffs that measure leg muscle shape changes during walking. The system can be worn over clothing, freeing the skin from contact with electrodes. In order to track the capacitance signals, the gait phase estimator is designed around the AO dynamic system, owing to its ability to synchronize with quasi-periodic signals. After implementing the whole system, we first evaluated the offline estimation performance in experiments with 12 healthy subjects walking on a treadmill at changing speeds. The strategy achieved an accurate and consistent gait phase estimation with only one channel of capacitance signal. The average root-mean-square errors in one stride were 0.19 rad (3.0% of one gait cycle) for constant walking speeds and 0.31 rad (4.9% of one gait cycle) for speed transitions, even after the subjects rewore the sensing cuffs. We then validated our strategy in a real-time gait phase estimation task with three subjects walking at changing speeds. Our study indicates that the strategy based on capacitive sensing and AOs is a promising alternative for the control of exoskeletons/orthoses.
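The synchronization property that the estimator relies on can be sketched with a single adaptive Hopf oscillator (Righetti-style frequency adaptation): driven by a quasi-periodic input, the oscillator's frequency drifts toward the input's frequency, so its internal phase tracks the gait phase. Parameters and the sinusoidal "teaching" signal below are illustrative, not the paper's multi-oscillator design.

```python
import math

gamma, mu, eps = 8.0, 1.0, 2.0   # attractor and coupling gains (assumed)
w = 25.0                         # initial frequency guess [rad/s]
w_in = 30.0                      # frequency of the input signal [rad/s]
x, y = 1.0, 0.0                  # oscillator state on the unit-cycle attractor
dt, steps = 1e-3, 200_000        # Euler integration over 200 s

for k in range(steps):
    t = k * dt
    F = math.sin(w_in * t)                     # quasi-periodic input
    r2 = x * x + y * y
    r = math.sqrt(r2)
    dx = gamma * (mu - r2) * x - w * y + eps * F
    dy = gamma * (mu - r2) * y + w * x
    dw = -eps * F * y / r                      # frequency adaptation rule
    x += dx * dt
    y += dy * dt
    w += dw * dt
```

After adaptation, the oscillator phase atan2(y, x) advances in lockstep with the input, which is what allows a single capacitance channel to drive a continuous gait-phase estimate.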

  9. Validation of a novel smartphone accelerometer-based knee goniometer.

    Science.gov (United States)

    Ockendon, Matthew; Gilbert, Robin E

    2012-09-01

    Loss of full knee extension following anterior cruciate ligament surgery has been shown to impair knee function. However, there can be significant difficulty in accurately and reproducibly measuring a fixed flexion of the knee. We studied the interobserver and intraobserver reliabilities of a novel smartphone accelerometer-based knee goniometer and compared it with a long-armed conventional goniometer for the assessment of fixed flexion knee deformity. Five healthy male volunteers (age range 30 to 40 years) were studied. Measurements of knee flexion angle were made with a telescopic-armed goniometer (Lafayette Instrument, Lafayette, IN) and compared with measurements using the smartphone (iPhone 3GS, Apple Inc., Cupertino, CA) knee goniometer using a novel trigonometric technique based on tibial inclination. Bland-Altman analysis of validity and reliability, including statistical analysis of correlation by Pearson's method, was undertaken. The iPhone goniometer had an interobserver correlation (r) of 0.994, compared with 0.952 for the Lafayette. The intraobserver correlation was r = 0.982 for the iPhone (compared with 0.927). The datasets from the two instruments correlate closely (r = 0.947), are proportional, and have a mean difference of only -0.4° (SD 3.86°). The Lafayette goniometer had an intraobserver reliability of ±9.6° and an interobserver reliability of ±8.4°. By comparison, the iPhone had an interobserver reliability of ±2.7° and an intraobserver reliability of ±4.6°. We found the iPhone goniometer to be a reliable tool for the measurement of subtle knee flexion in the clinic setting.
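The Bland-Altman analysis used above can be sketched on synthetic paired measurements: the method summarises agreement between two instruments by the mean difference (bias) and the 95% limits of agreement. The angles and error magnitudes below are illustrative, not the study's data.

```python
import numpy as np

# Synthetic paired knee-flexion angles from two goniometers measuring
# the same knees (illustrative noise levels and a small -0.4 deg bias).
rng = np.random.default_rng(4)
true_angle = rng.uniform(0, 20, 30)
device_a = true_angle + rng.normal(0, 2.0, 30)          # conventional
device_b = true_angle - 0.4 + rng.normal(0, 2.0, 30)    # smartphone

diff = device_b - device_a
bias = diff.mean()                     # mean difference between devices
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits
```

Plotting `diff` against the pairwise means, with horizontal lines at `bias` and the two limits, gives the conventional Bland-Altman plot.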

  10. ANALYTICAL METHOD DEVELOPMENT AND VALIDATION FOR SIMULTANEOUS ESTIMATION OF SIMVASTATIN AND SITAGLIPTIN

    OpenAIRE

    Yaddanapudi Mrudula Devi; R. Karthikeyan; Punttaguntla Sreenivasa Babu

    2013-01-01

    A simple, specific, accurate, rapid, and inexpensive isocratic Reversed-Phase High-Performance Liquid Chromatography (RP-HPLC) method was developed and validated for the quantitative determination of Simvastatin and Sitagliptin in pharmaceutical tablet dosage forms. The RP-HPLC method was developed using an Inertsil ODS-3 C18 (75 mm × 4.6 mm, 5 µm) short column on a Shimadzu LC-20AT Prominence Liquid Chromatograph. The mobile phase was composed of 0.05 M ammonium acetate:ACN (60:40). The flow rate was set to ...

  11. Vehicle Sideslip Angle Estimation Based on Hybrid Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jing Li

    2016-01-01

    Full Text Available Vehicle sideslip angle is essential for active safety control systems. This paper presents a new hybrid Kalman filter to estimate vehicle sideslip angle based on a 3-DoF nonlinear vehicle dynamic model combined with the Magic Formula tire model. The hybrid Kalman filter is realized by combining a square-root cubature Kalman filter (SCKF), which has quick convergence and numerical stability, with a square-root cubature based receding horizon Kalman FIR filter (SCRHKF), which has robustness against model uncertainty and temporary noise. Moreover, the SCKF and SCRHKF work in parallel, and the estimation outputs of the two filters are merged by an interacting multiple model (IMM) approach. Experimental results show the accuracy and robustness of the hybrid Kalman filter.

  12. Biomarker-based prognosis in hepatocellular carcinoma: validation and extension of the BALAD model.

    Science.gov (United States)

    Fox, R; Berhane, S; Teng, M; Cox, T; Tada, T; Toyoda, H; Kumada, T; Kagebayashi, C; Satomura, S; Johnson, P J

    2014-04-15

    The Japanese 'BALAD' model offers the first objective, biomarker-based tool for assessment of prognosis in hepatocellular carcinoma, but it relies on dichotomisation of the constituent data, has not been externally validated, and cannot be applied to individual patients. In this Japanese/UK collaboration, we replicated the original BALAD model on a UK cohort and then built a new model, BALAD-2, on the original raw Japanese data using variables in their continuous form. Regression analyses using flexible parametric models with fractional polynomials enabled fitting of appropriate baseline hazard functions and the functional form of covariates. The resulting models were validated in the respective cohorts to measure predictive performance. The key prognostic features were confirmed to be bilirubin and albumin together with the serological cancer biomarkers AFP-L3, AFP, and DCP. With appropriate recalibration, the model offered clinically relevant discrimination of prognosis in both the Japanese and UK data sets and accurately predicted patient-level survival. The original BALAD model has been validated in an international setting. The refined BALAD-2 model permits estimation of patient-level survival in UK and Japanese cohorts.

  13. Validity of the WHO VAW study instrument for estimating gender-based violence against women

    Directory of Open Access Journals (Sweden)

    Lilia Blima Schraiber

    2010-08-01

    …less favorable outcomes, with the exception of suicide attempts in São Paulo. CONCLUSIONS: The instrument was shown to be adequate for estimating gender-based violence against women perpetrated by intimate partners and can be used in studies on this subject. It has high internal consistency and a capacity to discriminate between different forms of violence (psychological, physical and sexual) perpetrated in different social contexts. The instrument also characterizes the female victim and her relationship with the aggressor, thereby facilitating gender analysis.

  14. Adaptive algorithm for mobile user positioning based on environment estimation

    Directory of Open Access Journals (Sweden)

    Grujović Darko

    2014-01-01

    Full Text Available This paper analyzes the challenges of realizing an infrastructure-independent and low-cost positioning method in cellular networks based on the RSS (Received Signal Strength) parameter, an auxiliary timing parameter, and environment estimation. The proposed algorithm has been evaluated using field measurements collected from a GSM (Global System for Mobile Communications) network, but it is technology independent and can also be applied in UMTS (Universal Mobile Telecommunications System) and LTE (Long-Term Evolution) networks.
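The RSS ranging step underlying such positioning methods can be sketched with a log-distance path-loss model: RSS maps to a distance estimate once the reference power and path-loss exponent are known. The reference values and exponent below are illustrative assumptions; in practice the exponent is exactly what the algorithm's environment estimation adapts (e.g. urban versus rural).

```python
import math

def rss_to_distance(rss_dbm, p0_dbm=-40.0, d0_m=1.0, n=3.0):
    """Invert the log-distance model P(d) = P0 - 10*n*log10(d/d0).

    p0_dbm: reference RSS at distance d0_m; n: path-loss exponent
    (all assumed values, chosen per estimated environment)."""
    return d0_m * 10 ** ((p0_dbm - rss_dbm) / (10 * n))

# A 30 dB drop below the 1 m reference with n = 3 implies 10 m range.
d = rss_to_distance(-70.0)
```

Distances from several base stations, combined with the auxiliary timing parameter, can then feed a standard multilateration solver.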

  15. Monocular Vision- and IMU-Based System for Prosthesis Pose Estimation During Total Hip Replacement Surgery.

    Science.gov (United States)

    Su, Shaojie; Zhou, Yixin; Wang, Zhihua; Chen, Hong

    2017-06-01

    As the average age of the population increases worldwide, so does the number of total hip replacement surgeries. Total hip replacement, however, often involves a risk of dislocation and prosthetic impingement. To minimize the risk after surgery, we propose an instrumented hip prosthesis that estimates the relative pose between prostheses intraoperatively and ensures the placement of prostheses within a safe zone. We model the hip prosthesis as a ball-and-socket joint with four degrees of freedom (DOFs): 3-DOF rotation and 1-DOF translation. We mount a camera and an inertial measurement unit (IMU) inside the hollow ball, or "femoral head prosthesis," while printing customized patterns on the internal surface of the socket, or "acetabular cup." Since the sensors are rigidly fixed to the femoral head prosthesis, measuring its motions poses a sensor ego-motion estimation problem. By matching feature points in images of the reference patterns, we propose a monocular vision-based method with a relative error of less than 7% in the 3-DOF rotation and 8% in the 1-DOF translation. Further, to reduce system power consumption, we apply the IMU, with its data fused by an extended Kalman filter, to replace the camera in the 3-DOF rotation estimation, which yields a less than 4.8% relative error and a 21.6% decrease in power consumption. Experimental results show that the best approach to prosthesis pose estimation is a combination of monocular vision-based translation estimation and IMU-based rotation estimation, and we have verified the feasibility and validity of this system in prosthesis pose estimation.

  16. A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.

    Science.gov (United States)

    Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye

    2017-11-01

    Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy was somewhat lower one day after model construction but was relatively stable from one day to six months after construction. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2 mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
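The regression step this record describes can be illustrated with a minimal sketch: a multivariate linear regression mapping per-beat features to systolic BP, fitted by least squares. The features, their values, and the generating model below are synthetic stand-ins, not the study's 14 ECG/PPG indicators, its genetic-algorithm feature selection, or its data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-beat features (hypothetical stand-ins): pulse transit
# time, heart rate, and PPG amplitude.
n_beats = 500
ptt = rng.normal(0.25, 0.03, n_beats)   # seconds
hr = rng.normal(70, 8, n_beats)         # beats/min
amp = rng.normal(1.0, 0.15, n_beats)    # arbitrary units

# Simulated systolic BP with an assumed dependence on the features.
sbp = 180 - 220 * ptt + 0.3 * hr + 5 * amp + rng.normal(0, 2, n_beats)

# Multivariate linear regression via least squares (one of the two model
# families used in the study; the other was support vector regression).
X = np.column_stack([np.ones(n_beats), ptt, hr, amp])
coef, *_ = np.linalg.lstsq(X, sbp, rcond=None)

pred = X @ coef
me = np.mean(pred - sbp)           # mean error of the fit
sd = np.std(pred - sbp)            # standard deviation of the error
r = np.corrcoef(pred, sbp)[0, 1]   # correlation coefficient
```

In practice each subject would get their own feature subset and model, as the abstract describes; this sketch fits a single pooled model for brevity.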

  17. Development and Validation of HPTLC method for the estimation of Sitagliptin Phosphate and Simvastatin in bulk and Marketed Formulation

    OpenAIRE

    Rathod Sonali; Patil Pallavi; Chopade Vittal

    2012-01-01

    This work describes the development and validation of an HPTLC method for the estimation of sitagliptin phosphate and simvastatin in bulk and marketed formulation. The method employs precoated silica gel 60 F254 (0.2 mm thickness) on aluminium sheets and a mobile phase of chloroform:methanol in the ratio of 8:2 v/v, with chamber saturation for 20 min at room temperature. The developing chamber was run up to 8 cm. The Rf values were found to be 0.13 and 0.75 for sitagliptin phosphate and simvastatin respecti...

  18. Validating novel air pollution sensors to improve exposure estimates for epidemiological analyses and citizen science.

    Science.gov (United States)

    Jerrett, Michael; Donaire-Gonzalez, David; Popoola, Olalekan; Jones, Roderic; Cohen, Ronald C; Almanza, Estela; de Nazelle, Audrey; Mead, Iq; Carrasco-Turigas, Glòria; Cole-Hunter, Tom; Triguero-Mas, Margarita; Seto, Edmund; Nieuwenhuijsen, Mark

    2017-10-01

    Low-cost, personal air pollution sensors may reduce exposure measurement errors in epidemiological investigations and contribute to citizen science initiatives. Here we assess the validity of a low-cost personal air pollution sensor. Study participants were drawn from two ongoing epidemiological projects in Barcelona, Spain. Participants repeatedly wore the pollution sensor, which measured carbon monoxide (CO), nitric oxide (NO), and nitrogen dioxide (NO2). We also compared personal sensor measurements to those from more expensive instruments. Our personal sensors had moderate to high correlations with government monitors over averaging times of 1-h and 30-min epochs (r ~ 0.38-0.8) for NO and CO, but had low to moderate correlations for NO2 (~0.04-0.67). Correlations between the personal sensors and more expensive research instruments were higher than with the government monitors. The sensors were able to detect high and low air pollution levels in agreement with expectations (e.g., high levels on or near busy roadways and lower levels in background residential areas and parks). Our findings suggest that the low-cost, personal sensors have potential to reduce exposure measurement error in epidemiological studies and provide valid data for citizen science studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. A Validated RP-HPLC Method for the Estimation of Pizotifen in Pharmaceutical Dosage Form

    Directory of Open Access Journals (Sweden)

    M. V. Basaveswara Rao

    2012-01-01

    A simple, selective, linear, precise, and accurate RP-HPLC method was developed and validated for rapid assay of Pizotifen in pharmaceutical dosage form. Isocratic elution at a flow rate of 1.0 mL/min was employed on a Chromosil C18 (250 mm × 4.6 mm, 5 μm) column at ambient temperature. The mobile phase consisted of methanol:acetonitrile in the ratio of 10:90 v/v. The UV detection wavelength was 230 nm, and a 20 μL sample was injected. The retention time for Pizotifen was 2.019 min. The percent RSD for accuracy of the method was found to be 0.2603%. The method was validated as per ICH guidelines and can be successfully applied for routine, rapid, and reliable determination of Pizotifen in pharmaceutical dosage form.

  20. Gradient HPLC method development and validation for Simultaneous estimation of Rosiglitazone and Gliclazide.

    Directory of Open Access Journals (Sweden)

    Uttam Singh Baghel

    2012-10-01

    Objective: The aim of the present work was to develop a gradient RP-HPLC method for simultaneous analysis of rosiglitazone and gliclazide in a tablet dosage form. Method: The chromatographic system was optimized using a Hypersil C18 (250 mm × 4.6 mm, 5 μm) column with potassium dihydrogen phosphate (pH 7.0) and acetonitrile in the ratio of 60:40 as mobile phase, at a flow rate of 1.0 ml/min. Detection was carried out at 225 nm by an SPD-20A Prominence UV/Vis detector. Result: Rosiglitazone and gliclazide were eluted with retention times of 17.36 and 7.06 min, respectively. The Beer-Lambert law was obeyed over the concentration ranges of 5 to 70 μg/ml and 2 to 12 μg/ml for rosiglitazone and gliclazide, respectively. Conclusion: The high recovery and low coefficients of variation confirm the suitability of the method for simultaneous analysis of both drugs in a tablet dosage form. Statistical analysis proves that the method is sensitive and significant for the analysis of rosiglitazone and gliclazide in pure and pharmaceutical dosage forms without any interference from the excipients. The method was validated in accordance with ICH guidelines; validation revealed the method is specific, rapid, accurate, precise, reliable, and reproducible.

  1. Estimations of water balance after validating and administering the water balance questionnaire in pregnant women.

    Science.gov (United States)

    Malisova, Olga; Protopappas, Athanasios; Nyktari, Anastasia; Bountziouka, Vassiliki; Antsaklis, Aristides; Zampelas, Antonis; Kapsokefalou, Maria

    2014-05-01

    Dehydration during pregnancy may be harmful for the mother and fetus; thus our objective was to understand whether pregnant women balance water intake and loss. The Water Balance Questionnaire (WBQ) was modified to reflect pregnancy (WBQ-P). Validation was performed using 3-day diaries (n = 60) and hydration indices in urine (osmolality, specific gravity, pH and color; n = 40). The WBQ-P was found valid according to the Kendall τ-b coefficient of agreement. The WBQ-P was administered to 95, 100 and 97 women in the first, second and third trimester, respectively, in Greece. Median (IQR) water balance, intake and loss were, respectively, 203 (-577, 971), 2917 (2187, 3544) and 2658 (2078, 3391) ml/day; these did not differ among the trimesters or between pregnant and non-pregnant women. However, more pregnant women fell in the higher quartiles of the water balance distribution. No differences in sources of water intake were identified, except that women in the third trimester had lower water intake from beverages.

  2. Validity of Weight Estimation Models in Pigs Reared under Different Management Conditions

    Directory of Open Access Journals (Sweden)

    Marvelous Sungirai

    2014-01-01

    A study was carried out to determine the relationship between linear body measurements and live weight in Landrace and Large White pigs reared under different management conditions in Zimbabwe. Data were collected for body length, heart girth, and live weight in 358 pigs reared under intensive commercial conditions. Stepwise multiple linear regression was used to develop a model from a random selection of 202 pig records. The model showed that age, body length, and heart girth were useful predictors of live weight in these pigs, with significantly high positive correlations observed. The model was internally validated using the records of the remaining 156 pigs, and there was a significantly high positive correlation between the actual and predicted weights. The model was then externally validated using 40 market-age pigs reared under communal conditions, and there was a significantly low positive correlation between the actual and predicted weights. The results of the study show that while linear measurements can be useful in predicting pig weights, the appropriateness of the model is also influenced by the management of the pigs. Models can only be applied to pigs reared under similar conditions of management.
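The fit-then-validate procedure this record describes can be sketched as a least-squares regression of weight on age, body length, and heart girth, trained on one random subset and checked on a held-out subset. All values below are synthetic and illustrative, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pig records (hypothetical values; the study's predictors were
# age, body length, and heart girth).
n = 358
age = rng.uniform(60, 200, n)      # days
length = rng.uniform(70, 130, n)   # cm
girth = rng.uniform(60, 120, n)    # cm
weight = -80 + 0.15 * age + 0.6 * length + 0.9 * girth + rng.normal(0, 4, n)

# Fit on a random subset and validate internally on the rest,
# mirroring the study's 202/156 record split.
idx = rng.permutation(n)
train, test = idx[:202], idx[202:]
X = np.column_stack([np.ones(n), age, length, girth])
coef, *_ = np.linalg.lstsq(X[train], weight[train], rcond=None)

pred = X[test] @ coef
r = np.corrcoef(pred, weight[test])[0, 1]  # actual vs predicted correlation
```

The study's external-validation step is the same prediction applied to animals from a different management system, where the correlation dropped.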

  3. Volumetric breast density estimation from full-field digital mammograms: a validation study.

    Directory of Open Access Journals (Sweden)

    Albert Gubern-Mérida

    OBJECTIVES: To objectively evaluate automatic volumetric breast density assessment in Full-Field Digital Mammograms (FFDM) using measurements obtained from breast Magnetic Resonance Imaging (MRI). MATERIAL AND METHODS: A commercially available method for volumetric breast density estimation on FFDM is evaluated by comparing volume estimates obtained from 186 FFDM exams, including mediolateral oblique (MLO) and cranial-caudal (CC) views, to objective reference standard measurements obtained from MRI. RESULTS: Volumetric measurements obtained from FFDM show high correlation with MRI data. Pearson's correlation coefficients of 0.93, 0.97 and 0.85 were obtained for volumetric breast density, breast volume and fibroglandular tissue volume, respectively. CONCLUSIONS: Accurate volumetric breast density assessment is feasible in Full-Field Digital Mammograms and has potential to be used in objective breast cancer risk models and personalized screening.

  4. A Validated RP-HPLC Method for Simultaneous Estimation of Cefixime and Cloxacillin in Tablets

    Directory of Open Access Journals (Sweden)

    G. Rathinavel

    2008-01-01

    This paper presents an RP-HPLC method for the simultaneous estimation of cefixime and cloxacillin in tablets. The separation was carried out on a C18 column (5 μm, 25 cm × 4.6 mm i.d.) using phosphate buffer (pH 5.0), acetonitrile and methanol in the ratio 80:17:3 as mobile phase at a flow rate of 2 mL/min. The detection wavelength was fixed at 225 nm. The retention times of cefixime and cloxacillin were found to be 5.657 and 6.200 min, respectively. The developed method is rapid and sensitive and can be used for the estimation of this drug combination in tablets.

  5. An accurate algorithm for estimation of coal reserves based on support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Deng, X.; Liu, W.; Wang, R. [Wuhan University, Wuhan (China). School of Geology and Geomatics

    2008-09-15

    To address the limitations of present methods of estimating coal reserves, an accurate algorithm is presented based on the support vector machine model. By building a coal thickness and bulk density model from drilling data and eliminating outer points according to the relation between points and polygons, coal reserves were accurately calculated by summing the reserves over all cells of a small grid. Two examples for different types of coal mine are given and three-dimensional mineral distribution maps are plotted. The examples validate the reliability and advantages of the proposed method. 9 refs., 1 fig., 1 tab.

  6. Nonparametric signal processing validation in T-wave alternans detection and estimation.

    Science.gov (United States)

    Goya-Esteban, R; Barquero-Pérez, O; Blanco-Velasco, M; Caamaño-Fernández, A J; García-Alberola, A; Rojo-Álvarez, J L

    2014-04-01

    Although a number of methods have been proposed for T-Wave Alternans (TWA) detection and estimation, their performance strongly depends on their signal processing stages and on their free parameters tuning. The dependence of the system quality with respect to the main signal processing stages in TWA algorithms has not yet been studied. This study seeks to optimize the final performance of the system by successive comparisons of pairs of TWA analysis systems, with one single processing difference between them. For this purpose, a set of decision statistics are proposed to evaluate the performance, and a nonparametric hypothesis test (from Bootstrap resampling) is used to make systematic decisions. Both the temporal method (TM) and the spectral method (SM) are analyzed in this study. The experiments were carried out in two datasets: first, in semisynthetic signals with artificial alternant waves and added noise; second, in two public Holter databases with different documented risk of sudden cardiac death. For semisynthetic signals (SNR = 15 dB), after the optimization procedure, a reduction of 34.0% (TM) and 5.2% (SM) of the power of TWA amplitude estimation errors was achieved, and the power of error probability was reduced by 74.7% (SM). For Holter databases, appropriate tuning of several processing blocks, led to a larger intergroup separation between the two populations for TWA amplitude estimation. Our proposal can be used as a systematic procedure for signal processing block optimization in TWA algorithmic implementations.

  7. Validity of rapid estimation of erythrocyte volume in the diagnosis of polycythemia vera

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, S.; Roedbro, P.

    1989-01-01

    In the diagnosis of polycythemia vera, estimation of erythrocyte volume (EV) from plasma volume (PV) and venous hematocrit (Hct_v) is usually thought inadvisable, because the ratio of whole-body hematocrit to venous hematocrit (the f ratio) is higher in patients with splenomegaly than in normal subjects, and varies considerably between individuals. We determined the mean f ratio in 232 consecutive patients suspected of polycythemia vera (mean f = 0.967; SD 0.048) and used it with each patient's PV and Hct_v to calculate an estimated normalised EV_n. With measured EV as a reference value, EV_n was investigated as a diagnostic test. By means of two cut-off levels the EV_n values could be divided into EV_n elevated, EV_n not elevated (both with high predictive values), and an EV_n borderline group. The size of the borderline EV_n group ranged from 5% to 46% depending on the position of the cut-off levels, i.e. on the efficiency demanded from the diagnostic test. EV can safely and rapidly be estimated from PV and Hct_v, if the mean f ratio is determined from the relevant population, and if the results in an easily definable borderline range of EV_n values are supplemented by direct EV determination.
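The rapid estimate this record describes follows from the definitions: whole-body hematocrit is approximated as the population-mean f ratio times the venous hematocrit, and since hematocrit is the red-cell fraction of blood volume, EV = PV × Hct_body / (1 − Hct_body). A minimal sketch (the patient values are hypothetical):

```python
# Mean f ratio reported for the study population.
F_MEAN = 0.967

def estimated_ev(plasma_volume_ml, venous_hct, f_ratio=F_MEAN):
    """Estimate erythrocyte volume (ml) from plasma volume and venous Hct.

    Whole-body hematocrit is approximated as f_ratio * venous_hct;
    EV then follows from EV / (EV + PV) = whole-body hematocrit.
    """
    body_hct = f_ratio * venous_hct
    return plasma_volume_ml * body_hct / (1.0 - body_hct)

# Example (hypothetical patient): PV = 3000 ml, venous Hct = 0.45.
ev = estimated_ev(3000, 0.45)   # roughly 2311 ml
```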

  8. A multimodal detection model of dolphins to estimate abundance validated by field experiments.

    Science.gov (United States)

    Akamatsu, Tomonari; Ura, Tamaki; Sugimatsu, Harumi; Bahl, Rajendar; Behera, Sandeep; Panda, Sudarsan; Khan, Muntaz; Kar, S K; Kar, C S; Kimura, Satoko; Sasaki-Yamamoto, Yukiko

    2013-09-01

    Abundance estimation of marine mammals requires matching detections of an animal or a group of animals by two independent means. A multimodal detection model using visual and acoustic cues (surfacing and phonation) that enables abundance estimation of dolphins is proposed. The method does not require a specific time window for matching the cues of both means when applying the mark-recapture method. The proposed model was evaluated using data obtained in field observations of Ganges River dolphins and Irrawaddy dolphins, as examples of dispersed and condensed distributions of animals, respectively. The acoustic detection probability was approximately 80%, 20% higher than that of visual detection for both species, regardless of the distribution of the animals at the present study sites. The abundance estimates of Ganges River dolphins and Irrawaddy dolphins agreed fairly well with the numbers reported in previous monitoring studies. The detection probability for a single animal was smaller than that for larger cluster sizes, as predicted by the model and confirmed by field data. However, dense groups of Irrawaddy dolphins showed differences in the cluster sizes observed by visual and acoustic methods; the lower detection probability of single clusters of this species seemed to be caused by its clumped distribution.
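The mark-recapture idea behind such dual-mode surveys can be sketched with the classical two-sample estimator: treating visual and acoustic detections as two independent "captures", the Chapman-corrected Lincoln-Petersen formula gives an abundance estimate. This is the textbook estimator, not the record's full multimodal model, and the counts below are hypothetical.

```python
def chapman_estimate(n_visual, n_acoustic, n_both):
    """Chapman-corrected Lincoln-Petersen abundance estimate from two
    independent detection methods; n_both counts matched detections."""
    return (n_visual + 1) * (n_acoustic + 1) / (n_both + 1) - 1

# Hypothetical counts: 40 visual detections, 64 acoustic, 32 by both.
n_hat = chapman_estimate(40, 64, 32)   # roughly 80 animals
```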

  9. Natural forest biomass estimation based on plantation information using PALSAR data.

    Directory of Open Access Journals (Sweden)

    Ram Avtar

    Forests play a vital role in terrestrial carbon cycling; therefore, monitoring forest biomass at local to global scales has become a challenging issue in the context of climate change. In this study, we investigated the backscattering properties of Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) data in cashew and rubber plantation areas of Cambodia. The PALSAR backscattering coefficient (σ0) had different responses in the two plantation types because of differences in biophysical parameters. The PALSAR σ0 showed a higher correlation with field-based measurements and lower saturation in cashew plants compared with rubber plants. Multiple linear regression (MLR) models relating field-based biomass of cashew (C-MLR) and rubber (R-MLR) plants to PALSAR σ0 were created. These MLR models were used to estimate natural forest biomass in Cambodia. The cashew plant-based MLR model (C-MLR) produced better results than the rubber plant-based MLR model (R-MLR). The C-MLR-estimated natural forest biomass was validated using forest inventory data for natural forests in Cambodia. The validation results showed a strong correlation (R2 = 0.64) between C-MLR-estimated natural forest biomass and field-based biomass, with RMSE = 23.2 Mg/ha in deciduous forests. In high-biomass regions, such as dense evergreen forests, the model had a weaker correlation because of the high biomass and the multiple-story tree structure of evergreen forests, which caused saturation of the PALSAR signal.

  10. Ratio-based estimators for a change point in persistence.

    Science.gov (United States)

    Halunga, Andreea G; Osborn, Denise R

    2012-11-01

    We study estimation of the date of change in persistence, from I(0) to I(1) or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from I(0) to I(1). A Monte Carlo study confirms the large-sample downward bias and also finds substantial biases in moderate-sized samples, partly due to properties at the end points of the search interval.

  11. Estimation of Sideslip Angle Based on Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Yupeng Huang

    2017-01-01

    The sideslip angle plays an extremely important role in vehicle stability control, but in production cars it cannot be obtained directly from a sensor because of sensor cost; it must therefore be estimated indirectly from other vehicle motion parameters, so an estimation algorithm with real-time performance and accuracy is critical. The traditional estimation method based on the Kalman filter algorithm is correct in the vehicle's linear control region; however, on low-adhesion roads, vehicles have obvious nonlinear characteristics. In this paper, an extended Kalman filtering algorithm is put forward in consideration of the nonlinear characteristics of the tire and is verified by joint CarSim and Simulink simulation, including double-lane-change maneuvers on wet cement and on ice and snow roads. To test and verify the effect of the extended Kalman filtering estimation algorithm, a real vehicle test was carried out on a limit test field. The experimental results show that the accuracy of the vehicle sideslip angle acquired by the extended Kalman filtering algorithm is obviously higher than that acquired by Kalman filtering in the nonlinear region.
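The estimation machinery this record relies on is the extended Kalman filter: linearize the nonlinear state-transition and measurement functions with Jacobians, then run the usual predict/update cycle. The sketch below shows that skeleton on a toy nonlinear system; it is NOT the paper's vehicle/tire model, and the dynamics, noise levels, and measurement function are placeholders.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF predict/update cycle with Jacobian callables F and H."""
    # Predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update
    H_k = H(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H_k @ P_pred @ H_k.T + R            # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Toy nonlinear system: the first state integrates the second, and we
# observe the first state through a square-root sensor.
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.95 * x[1]])
F = lambda x: np.array([[1.0, 0.1], [0.0, 0.95]])
h = lambda x: np.array([np.sqrt(abs(x[0]) + 1.0)])
H = lambda x: np.array([[0.5 / np.sqrt(abs(x[0]) + 1.0), 0.0]])

Q = np.eye(2) * 1e-4   # process noise covariance (assumed)
R = np.eye(1) * 1e-2   # measurement noise covariance (assumed)
x, P = np.array([0.0, 1.0]), np.eye(2)

rng = np.random.default_rng(2)
true_x = np.array([0.0, 1.0])
for _ in range(50):
    true_x = f(true_x)
    z = h(true_x) + rng.normal(0, 0.1, 1)
    x, P = ekf_step(x, P, z, f, F, h, H, Q, R)
```

In the sideslip application, f would be the nonlinear vehicle model and h the available sensor channels (e.g. yaw rate, lateral acceleration).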

  12. Spacecraft Angular Velocity Estimation Algorithm Based on Orientation Quaternion Measurements

    Directory of Open Access Journals (Sweden)

    M. V. Li

    2016-01-01

    A spacecraft (SC) mission involves providing the appropriate orientation and stabilization of the associated axes in space. One of the main sources of information for the attitude control system is the angular rate sensor blocks, and one way to improve the reliability of the system is to provide a backup of the control algorithms in case of failure of these blocks. To solve the problem of estimating the SC angular velocity vector in the inertial coordinate system with a lack of information from the angular rate sensors, the use of orientation data from the star sensors is proposed, available at each clock cycle of the onboard digital computer. Equations in quaternions are used to describe the kinematics of rotary motion, and their approximate solution is used to estimate the angular velocity vector. Methods of modal control and multi-dimensional decomposition of a control object are used to solve the problem of observation and identification of the angular rates. These methods enabled us to synthesize the SC angular velocity vector estimation algorithm and obtain the equations that relate the error quaternion to the calculated estimate of the angular velocity. Mathematical modeling was carried out to test the algorithm: cases of different initial conditions were simulated, and the time between orientation quaternion measurements and the angular velocity of the model were varied. The algorithm was compared with a more accurate algorithm built on more complete equations. Graphs of the difference in angular velocity estimation depending on the number of iterations are presented, the difference being calculated from the results of the synthesized algorithm and the algorithm based on more accurate equations. Graphs of the error distribution for angular velocity estimation with changing initial conditions are also presented, and standard deviations of the estimation errors are calculated. 
    The synthesized algorithm is inferior in accuracy assessment to
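The kinematic identity underlying such quaternion-based rate estimation can be sketched directly: for unit attitude quaternions q_k and q_{k+1} separated by dt, the body angular velocity is approximately (2/dt) times the vector part of q_k⁻¹ ⊗ q_{k+1}. This is the small-angle finite-difference form, a minimal sketch rather than the record's full modal-control observer.

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def q_conj(q):
    """Conjugate (= inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def angular_velocity(q_prev, q_next, dt):
    """Small-angle body rate estimate from two successive attitudes."""
    dq = q_mul(q_conj(q_prev), q_next)
    return 2.0 * dq[1:] / dt

# Check against a known rotation: spin about z at 0.5 rad/s for dt = 0.01 s.
dt, wz = 0.01, 0.5
q0 = np.array([1.0, 0.0, 0.0, 0.0])
half = wz * dt / 2.0
q1 = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])
w_est = angular_velocity(q0, q1, dt)   # approximately [0, 0, 0.5]
```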

  13. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications

    Science.gov (United States)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and deteriorate communication performance heavily, so traditional compressive sensing-based channel estimation algorithms cannot be employed in such situations. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least-squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
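The LS baseline this record compares against can be sketched in a few lines: at pilot subcarriers, dividing the received symbols by the known transmitted pilots yields a per-subcarrier channel estimate. The channel taps, pilot pattern, and noise level below are synthetic assumptions; the record's ZCC-based parametric method is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sub = 64  # number of pilot subcarriers

# Known unit-magnitude pilot symbols (assumed pattern).
pilots = np.exp(1j * np.pi / 4) * np.ones(n_sub)

# Synthetic 3-tap multipath channel and its frequency response.
h = np.array([1.0, 0.5, 0.25], dtype=complex)
H_true = np.fft.fft(h, n_sub)

# Received pilots = channel response times pilots plus complex noise.
noise = rng.normal(0, 0.01, n_sub) + 1j * rng.normal(0, 0.01, n_sub)
received = H_true * pilots + noise

H_ls = received / pilots                      # LS estimate per subcarrier
mse = np.mean(np.abs(H_ls - H_true) ** 2)     # estimation error power
```

A parametric method instead estimates the few delay/gain parameters of the channel, which is what gives it an edge when the per-subcarrier LS estimate is noise-limited.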

  14. Validation of air-displacement plethysmography for estimation of body fat mass in healthy elderly subjects.

    Science.gov (United States)

    Bosy-Westphal, A; Mast, M; Eichhorn, C; Becker, C; Kutzner, D; Heller, M; Müller, M J

    2003-08-01

    Air-displacement plethysmography (ADP) is a non-invasive method for body composition analysis that divides the body into fat-free mass (FFM) and fat mass (FM) (a two-compartment model, 2C). It places low demands on subject performance and is therefore most convenient in the elderly. The aim was to validate ADP against dual-energy X-ray absorptiometry (DEXA) and to compare it to a four-compartment model of body composition (4C: fat mass, total body water, bone mineral content and residual mass) in the elderly. Body composition was assessed by ADP, DEXA and bioelectrical impedance analysis (BIA) in 26 healthy elderly subjects (15 women, 11 men) aged 60-82 years. Despite a high correlation of %FM assessed by ADP and DEXA, we observed significant differences between the results of these methods for both sexes (2.5 +/- 3.4%; bias +/- SD). Deviations of %FM(ADP) from %FM(DEXA) were dependent on the bone mineral content (BMC(DEXA)) fraction of FFM: a low BMC(DEXA) was related to an overestimation of DEXA-derived %FM by ADP. There was a systematic bias between results from ADP and the 4C model; 76% of its variance was explained by the assumption of a fixed density of FFM. 96% of the variance in the density of FFM was explained by water content and only 4% by the BMC(DEXA) fraction of FFM. When compared to the 4C model, overestimation of %FM(ADP) increases with increasing water fraction of FFM. Although there is a tendency for overestimation of %FM(ADP), ADP is a valid method for body composition measurement in the elderly. The bias in %FM(ADP) is mainly related to the water content of FFM and indicates that a correction factor for TBW may improve the accuracy of ADP measurements in the elderly.
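The "bias +/- SD" comparison this record reports is the standard Bland-Altman analysis of paired measurements: compute per-subject differences, then their mean (bias), SD, and 95% limits of agreement. The %FM values below are made up for illustration, not the study data.

```python
import numpy as np

# Paired %FM estimates from two methods (hypothetical six subjects).
fm_dexa = np.array([25.0, 30.0, 35.0, 28.0, 40.0, 33.0])
fm_adp = np.array([28.1, 31.5, 38.0, 30.2, 43.9, 34.8])

diff = fm_adp - fm_dexa
bias = diff.mean()                            # systematic offset
sd = diff.std(ddof=1)                         # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement
```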

  15. Development and Validation of Liquid Chromatographic Method for Estimation of Naringin in Nanoformulation

    Directory of Open Access Journals (Sweden)

    Kranti P. Musmade

    2014-01-01

    A simple, precise, accurate, rapid, and sensitive reverse-phase high performance liquid chromatography (RP-HPLC) method with UV detection has been developed and validated for quantification of naringin (NAR) in a novel pharmaceutical formulation. NAR is a polyphenolic flavonoid present in most citrus plants and has a variety of pharmacological activities. Method optimization was carried out by considering various parameters such as the effect of pH and column. The analyte was separated on a C18 (250.0 × 4.6 mm, 5 μm) column at ambient temperature under isocratic conditions using phosphate buffer pH 3.5:acetonitrile (75:25% v/v) as mobile phase pumped at a flow rate of 1.0 mL/min. UV detection was carried out at 282 nm. The developed method was validated according to ICH guideline Q2(R1). The method was found to be precise and accurate on statistical evaluation, with a linearity range of 0.1 to 20.0 μg/mL for NAR. The intra- and interday precision studies showed good reproducibility, with coefficients of variation (CV) less than 1.0%. The mean recovery of NAR was found to be 99.33 ± 0.16%. The proposed method was found to be highly accurate, sensitive, and robust, and was successfully employed for the routine analysis of the compound in the developed novel nanopharmaceuticals. The presence of excipients did not show any interference in the determination of NAR, indicating method specificity.

  16. Nursing Activity Score for estimating nursing care need in intensive care units: findings from a face and content validity study.

    Science.gov (United States)

    Palese, Alvisa; Comisso, Irene; Burra, Monica; DiTaranto, Pier Paolo; Peressoni, Luca; Mattiussi, Elisa; Lucchini, Alberto

    2016-05-01

    To re-evaluate the face and content validity of the Nursing Activity Score currently adopted for evaluating the activities that best describe workloads in intensive care units and their weights in describing average nursing time consumption. The Nursing Activity Score calculates the amount of nursing time that each patient will require over the next 24 hours. It has been widely used around the world since its first validation in 2003; however, no re-evaluation of its validity with regard to the advancements achieved in intensive care unit nursing care has been documented to date. A research project was undertaken from 2012 to 2015, aimed at critically evaluating and validating this tool in the current context of Italian intensive care unit nursing care. The 23 items were translated forward and backward into the Italian language, then a panel of 10 experts in critical care evaluated the face validity. Content validity was evaluated through focus groups involving seven critical care expert registered nurses. The Nursing Activity Score instrument was considered not fully adequate to measure current intensive care unit nursing activities, and its weightings were considered not fully adequate to score average nursing time consumption. From the content validity process, a lack of adequacy emerged with respect to the concept of nursing care underpinning the tool, the interventions included, its capability to predict the nursing resources needed, the advancements achieved in intensive care unit nurses' roles and competences, and the contextual factors that may influence consumption of nursing time. Development of the Nursing Activity Score tool, both conceptually and in its structure, in view of the innovations that have occurred in the context of intensive care units, is necessary to continue to have a common tool that helps clinicians and managers to capture accurately and compare the nursing care required by patients in critical care settings. There is a need to

  17. VALIDITY OF A COMMERCIAL LINEAR ENCODER TO ESTIMATE BENCH PRESS 1 RM FROM THE FORCE-VELOCITY RELATIONSHIP

    Directory of Open Access Journals (Sweden)

    Laurent Bosquet

    2010-09-01

    Full Text Available The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate bench press 1 repetition maximum (1RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1RM test and a force-velocity test using a bench press lifting task in random order. Mean 1RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1RM estimated by the Musclelab's software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1RM were very highly correlated (r = 0.93, p < 0.001) but largely different (bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1RM. It was concluded that 1RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level.
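
A minimal sketch of the underlying idea: fit a linear load-velocity relationship from submaximal sets and extrapolate it to a minimal velocity threshold. The threshold value and the example data below are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def estimate_1rm(loads_kg, mean_velocities, v_min=0.17):
    """Estimate bench-press 1RM by linear extrapolation of the
    load-velocity relationship to a minimal velocity threshold.
    v_min (m/s) is an assumed threshold, not a value from the study."""
    slope, intercept = np.polyfit(mean_velocities, loads_kg, 1)
    return slope * v_min + intercept

# Illustrative submaximal sets at increasing load
loads = np.array([20.0, 30.0, 40.0, 50.0])
vels = np.array([1.30, 1.05, 0.80, 0.55])
one_rm = estimate_1rm(loads, vels)
```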

  18. Validating Satellite Radar Altimetry Estimates of Antarctic sea ice Thickness Using the ASPeCt Data set

    Science.gov (United States)

    Giles, K. A.; Laxon, S. W.; Worby, T.

    2006-12-01

    Measurements of sea ice freeboard from spaceborne radar altimeters have been used to calculate Arctic sea ice thickness on a basin-wide scale during the winter. The same technique has the potential to be used in the Antarctic. The technique used to convert freeboard to thickness assumes hydrostatic equilibrium and uses estimates of snow depth and density and water and ice density from climatology. The nature of the Arctic climate means that the sea ice has a positive freeboard and that it becomes entirely snow free during the summer months, which simplifies the analysis of the radar return from the sea ice. However, in the Antarctic the situation may be more complicated, with negative ice freeboards and flooded and refrozen snow resulting in inaccurate estimates of sea ice freeboard and therefore ice thickness. We present, for the first time, a comparison of estimates of Antarctic sea ice thickness calculated from satellite radar altimetry measurements of sea ice freeboard with ship observations of sea ice thickness from the ASPeCt data set. We describe both the satellite and ship-borne estimates of Antarctic sea ice thickness, the method used to compare the two data sets, and the outcome of the validation. We also assess the future potential of satellite radar altimetry to provide sea ice thickness in the Antarctic.
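
The freeboard-to-thickness conversion under hydrostatic equilibrium can be sketched as follows. Balancing the weight of ice plus snow against displaced water gives T = (ρw·F + ρs·hs)/(ρw − ρi); the density values below are illustrative climatological numbers, not those used in the study.

```python
def ice_thickness(freeboard_m, snow_depth_m,
                  rho_water=1024.0, rho_ice=915.0, rho_snow=320.0):
    """Convert radar-derived ice freeboard (top of ice above sea level)
    to total ice thickness assuming hydrostatic equilibrium.
    Densities (kg/m^3) are illustrative climatological values."""
    return (rho_water * freeboard_m + rho_snow * snow_depth_m) / (rho_water - rho_ice)
```

Note how sensitive the result is to the assumed densities and snow depth, which is exactly why flooded or refrozen snow in the Antarctic complicates the retrieval.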

  19. Validation of Smartphone Based Retinal Photography for Diabetic Retinopathy Screening.

    Directory of Open Access Journals (Sweden)

    Ramachandran Rajalakshmi

    Full Text Available To evaluate the sensitivity and specificity of the "fundus on phone" (FOP) camera, a smartphone based retinal imaging system, as a screening tool for diabetic retinopathy (DR) detection and DR severity in comparison with 7-standard field digital retinal photography. Single-site, prospective, comparative, instrument validation study. 301 patients (602 eyes) with type 2 diabetes underwent standard seven-field digital fundus photography with both a Carl Zeiss fundus camera and the indigenous FOP at a tertiary care diabetes centre in South India. Grading of DR was performed by two independent retina specialists using the modified Early Treatment of Diabetic Retinopathy Study grading system. Sight threatening DR (STDR) was defined by the presence of proliferative DR (PDR) or diabetic macular edema. The sensitivity, specificity and image quality were assessed. The mean age of the participants was 53.5 ± 9.6 years and mean duration of diabetes 12.5 ± 7.3 years. The Zeiss camera showed that 43.9% had non-proliferative DR (NPDR) and 15.3% had PDR, while the FOP camera showed that 40.2% had NPDR and 15.3% had PDR. The sensitivity and specificity for detecting any DR by FOP were 92.7% (95% CI 87.8-96.1) and 98.4% (95% CI 94.3-99.8), respectively, and the kappa (ĸ) agreement was 0.90 (95% CI 0.85-0.95, p < 0.001), while for STDR the sensitivity was 87.9% (95% CI 83.2-92.9), specificity 94.9% (95% CI 89.7-98.2) and ĸ agreement 0.80 (95% CI 0.71-0.89, p < 0.001), compared to conventional photography. Retinal photography using the FOP camera is effective for screening and diagnosis of DR and STDR with high sensitivity and specificity and has substantial agreement with conventional retinal photography.

  20. Validation of Smartphone Based Retinal Photography for Diabetic Retinopathy Screening.

    Science.gov (United States)

    Rajalakshmi, Ramachandran; Arulmalar, Subramanian; Usha, Manoharan; Prathiba, Vijayaraghavan; Kareemuddin, Khaji Syed; Anjana, Ranjit Mohan; Mohan, Viswanathan

    2015-01-01

    To evaluate the sensitivity and specificity of the "fundus on phone" (FOP) camera, a smartphone based retinal imaging system, as a screening tool for diabetic retinopathy (DR) detection and DR severity in comparison with 7-standard field digital retinal photography. Single-site, prospective, comparative, instrument validation study. 301 patients (602 eyes) with type 2 diabetes underwent standard seven-field digital fundus photography with both a Carl Zeiss fundus camera and the indigenous FOP at a tertiary care diabetes centre in South India. Grading of DR was performed by two independent retina specialists using the modified Early Treatment of Diabetic Retinopathy Study grading system. Sight threatening DR (STDR) was defined by the presence of proliferative DR (PDR) or diabetic macular edema. The sensitivity, specificity and image quality were assessed. The mean age of the participants was 53.5 ± 9.6 years and mean duration of diabetes 12.5 ± 7.3 years. The Zeiss camera showed that 43.9% had non-proliferative DR (NPDR) and 15.3% had PDR, while the FOP camera showed that 40.2% had NPDR and 15.3% had PDR. The sensitivity and specificity for detecting any DR by FOP were 92.7% (95% CI 87.8-96.1) and 98.4% (95% CI 94.3-99.8), respectively, and the kappa (ĸ) agreement was 0.90 (95% CI 0.85-0.95, p < 0.001), while for STDR the sensitivity was 87.9% (95% CI 83.2-92.9), specificity 94.9% (95% CI 89.7-98.2) and ĸ agreement 0.80 (95% CI 0.71-0.89, p < 0.001), compared to conventional photography. Retinal photography using the FOP camera is effective for screening and diagnosis of DR and STDR with high sensitivity and specificity and has substantial agreement with conventional retinal photography.
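
The sensitivity, specificity and kappa agreement reported above all derive from a 2x2 screening table; a minimal sketch of those computations (the counts in the test below are illustrative, not the study's data):

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 table
    (true/false positives and negatives against the reference grading)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                     # detected among diseased
    spec = tn / (tn + fp)                     # cleared among healthy
    po = (tp + tn) / n                        # observed agreement
    # chance agreement: P(both say yes) + P(both say no)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa
```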

  1. Finite element model validation of bridge based on structural health monitoring—Part II: Uncertainty propagation and model validation

    Directory of Open Access Journals (Sweden)

    Xiaosong Lin

    2015-08-01

    Full Text Available Because of uncertainties involved in modeling, construction, and measurement systems, the assessment of FE model validity must be conducted based on stochastic measurements to provide designers with confidence for further applications. In this study, based on the updated model using response surface methodology, a practical model validation methodology via uncertainty propagation is presented. Several criteria of testing/analysis correlation are introduced, and the sources of model and testing uncertainties are also discussed. After that, the Monte Carlo stochastic finite element (FE) method is employed to perform the uncertainty quantification and propagation. The proposed methodology is illustrated with the examination of the validity of a large-span prestressed concrete continuous rigid frame bridge monitored under operational conditions. It can be concluded that the calculated frequencies and vibration modes of the updated FE model of Xiabaishi Bridge are consistent with the measured ones. The relative errors of each frequency are all less than 3.7%, the overlap ratio indexes of each frequency are all more than 75%, and the MAC values of each calculated vibration mode are all more than 90%. The model of Xiabaishi Bridge is valid in the whole operation space, including the experimental design space, with a confidence level above 95%. The validated FE model of Xiabaishi Bridge reflects the current condition of the bridge, and can be used as a basis for bridge health monitoring, damage identification and safety assessment.
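
The MAC values cited above come from the Modal Assurance Criterion, which scores the correlation between a measured and a computed mode shape (1 is perfect agreement, 0 none). A minimal sketch:

```python
import numpy as np

def mac(phi_test, phi_fe):
    """Modal Assurance Criterion between a measured mode shape
    (phi_test) and an FE-computed one (phi_fe)."""
    num = np.dot(phi_test, phi_fe) ** 2
    return num / (np.dot(phi_test, phi_test) * np.dot(phi_fe, phi_fe))
```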

  2. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J. [Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS foundation Trust, Sutton, London SM2 5PT (United Kingdom)

    2016-01-15

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking
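
The α–β filter used above as a state observer can be sketched in one dimension as follows; the gains and time step are illustrative, not the tuned values from the paper.

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """One-dimensional alpha-beta filter: predict position from the
    current velocity estimate, then correct both position and
    velocity using the measurement residual. Gains are illustrative."""
    x, v = measurements[0], 0.0
    filtered = []
    for z in measurements:
        x_pred = x + v * dt          # predict position
        r = z - x_pred               # innovation (residual)
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        filtered.append(x)
    return filtered
```

In the paper's method the filter output is combined with a similarity threshold, so measurements whose template-match score is too low are effectively rejected rather than trusted.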

  3. The construct validity and predictive validity of a self-efficacy measure for student teachers in competence-based education

    NARCIS (Netherlands)

    Prof.Dr. Filip Dochy; Dr. Johan Braeken; Dr. Mart van Dinther; Prof.Dr. Mien Segers

    2013-01-01

    This study intends to investigate the validity of a self-efficacy measure which is developed for predictive and diagnostic purposes concerning student teachers in competence-based education. CFA results delivered converging evidence for the multidimensionality of the student teacher self-efficacy

  4. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, have the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  5. Marker-based estimation of genetic parameters in genomics.

    Directory of Open Access Journals (Sweden)

    Zhiqiu Hu

    Full Text Available Linear mixed model (LMM analysis has been recently used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While the LMM analysis is computationally less intensive than the Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on the least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match well with the precision of estimation by the LMM methods for data sets with large sample sizes. Its major advantage is that with larger and larger samples, it continues to work with the increasing precision of estimation while the commonly used LMM methods are no longer able to work under our current typical computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative particularly when analyzing 'big' genomic data sets.

  6. Improved Goldstein Interferogram Filter Based on Local Fringe Frequency Estimation.

    Science.gov (United States)

    Feng, Qingqing; Xu, Huaping; Wu, Zhefeng; You, Yanan; Liu, Wei; Ge, Shiqi

    2016-11-23

    The quality of an interferogram, which is limited by various phase noise sources, greatly affects further InSAR processing steps, such as phase unwrapping. For interferometric SAR (InSAR) geophysical measurements, such as height or displacement, phase filtering is therefore an essential step. In this work, an improved Goldstein interferogram filter is proposed to suppress the phase noise while preserving the fringe edges. First, an adaptive filtering step, performed before frequency estimation, is employed to improve the estimation accuracy. Subsequently, to preserve the fringe characteristics, the estimated fringe frequency in each fixed filtering patch is removed from the original noisy phase. Then, the residual phase is smoothed based on the modified Goldstein filter, with its parameter alpha dependent on both the coherence map and the residual phase frequency. Finally, the filtered residual phase and the removed fringe frequency are combined to generate the filtered interferogram, with the loss of signal minimized while reducing the noise level. The effectiveness of the proposed method is verified by experimental results based on both simulated and real data.
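
The classic Goldstein filtering of a single patch, which the paper builds on, can be sketched as below. This reproduces only the core spectral weighting; the paper's adaptive choice of alpha from coherence and residual fringe frequency, and the usual smoothing of the spectrum magnitude, are omitted.

```python
import numpy as np

def goldstein_patch(patch, alpha=0.5):
    """Goldstein filtering of one complex interferogram patch:
    weight the 2-D spectrum by its own magnitude raised to alpha
    (alpha = 0 leaves the patch unchanged; larger alpha smooths more).
    Magnitude smoothing is omitted here for brevity."""
    spec = np.fft.fft2(patch)
    weight = np.abs(spec) ** alpha
    return np.fft.ifft2(weight * spec)
```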

  7. Optimal estimation of spectral reflectance based on metamerism

    Science.gov (United States)

    Chou, Tzren-Ru; Lin, Wei-Ju

    2012-01-01

    In this paper, we propose an accurate estimation method for the spectral reflectance of objects captured in an image. The spectral reflectance is modeled as a linear combination of three basis spectra for the R, G, and B colors respectively, named the spectral reflective bases of objects, which are acquired by solving a linear system based on the principle of color metamerism. Experiments were performed to evaluate the accuracy of the estimated spectral reflectance. The average mean square error over the 24 colors of the Macbeth checker between the simulated and the measured reflectance is 0.0866, and the maximum is 0.310. In addition, the average color difference of the 24 colors is less than 1.5 under the D65 illuminant. Thirteen colors have color difference values of less than 1, and another eight colors have values between 1 and 2. Only three colors show relatively large differences, of 2.558, 4.130 and 2.569, for colors No. 2, No. 13, and No. 18 of the Macbeth checker respectively. Furthermore, the computational cost of this spectral estimation is very low, making it suitable for many practical real-time applications.

  8. A Geometrical-Based Model for Cochannel Interference Analysis and Capacity Estimation of CDMA Cellular Systems

    Directory of Open Access Journals (Sweden)

    Konstantinos B. Baltzis

    2008-10-01

    Full Text Available A common assumption in cellular communications is the circular-cell approximation. In this paper, an alternative analysis based on the hexagonal shape of the cells is presented. A geometrical-based stochastic model is proposed to describe the angle of arrival of the interfering signals in the reverse link of a cellular system. Explicit closed form expressions are derived, and simulations performed exhibit the characteristics and validate the accuracy of the proposed model. Applications in the capacity estimation of WCDMA cellular networks are presented. Dependence of system capacity of the sectorization of the cells and the base station antenna radiation pattern is explored. Comparisons with data in literature validate the accuracy of the proposed model. The degree of error of the hexagonal and the circular-cell approaches has been investigated indicating the validity of the proposed model. Results have also shown that, in many cases, the two approaches give similar results when the radius of the circle equals to the hexagon inradius. A brief discussion on how the proposed technique may be applied to broadband access networks is finally made.

  9. A Geometrical-Based Model for Cochannel Interference Analysis and Capacity Estimation of CDMA Cellular Systems

    Directory of Open Access Journals (Sweden)

    Baltzis KonstantinosB

    2008-01-01

    Full Text Available Abstract A common assumption in cellular communications is the circular-cell approximation. In this paper, an alternative analysis based on the hexagonal shape of the cells is presented. A geometrical-based stochastic model is proposed to describe the angle of arrival of the interfering signals in the reverse link of a cellular system. Explicit closed form expressions are derived, and simulations performed exhibit the characteristics and validate the accuracy of the proposed model. Applications in the capacity estimation of WCDMA cellular networks are presented. Dependence of system capacity of the sectorization of the cells and the base station antenna radiation pattern is explored. Comparisons with data in literature validate the accuracy of the proposed model. The degree of error of the hexagonal and the circular-cell approaches has been investigated indicating the validity of the proposed model. Results have also shown that, in many cases, the two approaches give similar results when the radius of the circle equals to the hexagon inradius. A brief discussion on how the proposed technique may be applied to broadband access networks is finally made.

  10. Small Launch Vehicle Trade Space Definition: Development of a Zero Level Mass Estimation Tool with Trajectory Validation

    Science.gov (United States)

    Waters, Eric D.

    2013-01-01

    Recent high-level interest in the capability of small launch vehicles has placed significant demand on determining the trade space these vehicles occupy. This has led to the development of a zero level analysis tool that can quickly determine the minimum expected vehicle gross liftoff weight (GLOW) in terms of vehicle stage specific impulse (Isp) and propellant mass fraction (pmf) for any given payload value. Utilizing extensive Earth-to-orbit trajectory experience, the total delta-v the vehicle must achieve, including relevant loss terms, can be estimated. This foresight into expected losses allows for more specific assumptions relating to the initial estimates of thrust-to-weight values for each stage. This tool was further validated against a trajectory model, in this case the Program to Optimize Simulated Trajectories (POST), to determine if the initial sizing delta-v was adequate to meet payload expectations. Presented here is a description of how the tool is set up and the approach the analyst must take when using it. Expected outputs, which depend on the type of small launch vehicle being sized, are also displayed. The method of validation is discussed, as well as where the sizing tool fits into the vehicle design process.
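
A zero-level GLOW estimate for a single stage from the rocket equation, given stage Isp and pmf, might look like the following sketch. This is an illustration of the sizing idea only, not the NASA tool itself; the delta-v input would already include the loss terms mentioned above.

```python
import math

def stage_glow(payload_kg, dv_ms, isp_s, pmf, g0=9.80665):
    """Minimum single-stage GLOW from the rocket equation.
    pmf = propellant mass / (propellant + structure mass).
    Illustrative sketch, not the zero-level tool from the paper."""
    mr = math.exp(dv_ms / (g0 * isp_s))          # required mass ratio
    k = (mr - 1.0) * (1.0 - pmf) / pmf           # structure growth factor
    if k >= 1.0:
        raise ValueError("stage cannot achieve this delta-v")
    mp = (mr - 1.0) * payload_kg / (1.0 - k)     # propellant mass
    ms = mp * (1.0 - pmf) / pmf                  # structure mass
    return payload_kg + ms + mp
```

The `k >= 1` check captures the physical limit where structure mass grows faster than the rocket equation can compensate, i.e. the payload cannot reach orbit with a single stage of that Isp and pmf.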

  11. Development and Validation of RP-HPLC Method for Simultaneous Estimation of Aspirin and Esomeprazole Magnesium in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    Dipali Patel

    2013-01-01

    Full Text Available A simple, specific, precise, and accurate reversed-phase HPLC method was developed and validated for simultaneous estimation of aspirin and esomeprazole magnesium in tablet dosage forms. The separation was achieved on a HyperChrom ODS-BP C18 column (200 mm × 4.6 mm; 5.0 μm) using acetonitrile : methanol : 0.05 M phosphate buffer at pH 3, adjusted with orthophosphoric acid (25:25:50, v/v), as eluent, at a flow rate of 1 mL/min. Detection was carried out at a wavelength of 230 nm. The retention times of aspirin and esomeprazole magnesium were 4.29 min and 6.09 min, respectively. Linearity was established over the concentration ranges of 10–70 μg/mL and 10–30 μg/mL, with correlation coefficients (r²) of 0.9986 and 0.9973 for aspirin and esomeprazole magnesium, respectively. The mean recoveries were found to be in the ranges of 99.80–100.57% and 99.70–100.83% for aspirin and esomeprazole magnesium, respectively. The proposed method has been validated as per ICH guidelines and successfully applied to the estimation of aspirin and esomeprazole magnesium in their combined tablet dosage form.

  12. Validation of phantom-based harmonization for patient harmonization.

    Science.gov (United States)

    Panetta, Joseph V; Daube-Witherspoon, Margaret E; Karp, Joel S

    2017-07-01

    To improve the precision of multicenter clinical trials, several efforts are underway to determine scanner-specific parameters for harmonization using standardized phantom measurements. The goal of this study was to test the correspondence between quantification in phantom and patient images and validate the use of phantoms for harmonization of patient images. The National Electrical Manufacturers' Association image quality phantom with hot spheres was scanned on two time-of-flight PET scanners. Whole-body [18 F]-fluorodeoxyglucose (FDG)-PET scans were acquired of subjects on the same systems. List-mode events from spheres (diam.: 10-28 mm) measured in air on each scanner were embedded into the phantom and subject list-mode data from each scanner to create lesions with known uptake with respect to the local background in the phantom and each subject's liver and lung regions, as a proxy to characterize true lesion quantification. Images were analyzed using the contrast recovery coefficient (CRC) typically used in phantom studies and serving as a surrogate for the standardized uptake value used clinically. Postreconstruction filtering (resolution recovery and Gaussian smoothing) was applied to determine if the effect on the phantom images translates equivalently to subject images. Three postfiltering strategies were selected to harmonize the CRCmean or CRCmax values between the two scanners based on the phantom measurements and then applied to the subject images. Both the average CRCmean and CRCmax values for lesions embedded in the lung and liver in four subjects (BMI range 25-38) agreed to within 5% with the CRC values for lesions embedded in the phantom for all lesion sizes. In addition, the relative changes in CRCmean and CRCmax resulting from the application of the postfilters on the subject and phantom images were consistent within measurement uncertainty. Further, the root mean squared percent difference (RMSpd ) between CRC values on the two scanners
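
The contrast recovery coefficient (CRC) used throughout this study can be sketched as the NEMA-style ratio of measured to true contrast for a hot sphere; the example values in the test are illustrative:

```python
def contrast_recovery(hot_mean, background_mean, true_ratio):
    """NEMA-style contrast recovery coefficient for a hot sphere:
    the fraction of the known true sphere-to-background contrast
    that is recovered in the reconstructed image (1.0 = perfect)."""
    return (hot_mean / background_mean - 1.0) / (true_ratio - 1.0)
```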

  13. Model validation and error estimation of tsunami runup using high resolution data in Sadeng Port, Gunungkidul, Yogyakarta

    Science.gov (United States)

    Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo

    2017-07-01

    A tsunami model using high resolution geometric data is indispensable in tsunami mitigation efforts, especially in tsunami prone areas, as such data are one of the factors that affect the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami originating from the seismic gap. This paper discusses validation and error estimation of a tsunami model created using high resolution geometric data in Sadeng Port. The tsunami model is validated against the wave height of the 2006 Pangandaran tsunami recorded by the tide gauge of Sadeng, and is then used for tsunami numerical modeling with earthquake-tsunami parameters derived from the seismic gap. The validation results using a t-test (Student's) show that the tsunami heights from the model and the observations at the tide gauge of Sadeng are statistically equal at the 95% confidence level; the RMSE and NRMSE values are 0.428 m and 22.12%, while the difference in tsunami wave travel time is 12 minutes.
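
The RMSE/NRMSE comparison can be sketched as below. Normalizing by the observed range is an assumption on our part; the paper does not state which normalization it uses.

```python
import math

def rmse_nrmse(model, observed):
    """RMSE between modelled and observed wave heights, and NRMSE
    normalized by the observed range (normalization is an assumption)."""
    n = len(model)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, observed)) / n)
    nrmse = rmse / (max(observed) - min(observed))
    return rmse, nrmse
```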

  14. Left ventricular strain and its pattern estimated from cine CMR and validation with DENSE.

    Science.gov (United States)

    Gao, Hao; Allan, Andrew; McComb, Christie; Luo, Xiaoyu; Berry, Colin

    2014-07-07

    Measurement of local strain provides insight into the biomechanical significance of viable myocardium. We attempted to estimate myocardial strain from cine cardiovascular magnetic resonance (CMR) images by using a b-spline deformable image registration method. Three healthy volunteers and 41 patients with either recent or chronic myocardial infarction (MI) were studied at 1.5 Tesla with both cine and DENSE CMR. Regional circumferential and radial left ventricular strains were estimated from cine and DENSE acquisitions. In all healthy volunteers, there was no difference in peak circumferential strain (-0.18 ± 0.04 versus -0.18 ± 0.03, p = 0.76) between cine and DENSE CMR; however, peak radial strain was overestimated from cine (0.84 ± 0.37 versus 0.49 ± 0.2). Strain patterns from cine were similar to the patterns from DENSE, including the strain evolution related to recovery time and strain patterns related to MI scar extent. Furthermore, cine-derived strain disclosed different strain patterns in MI and non-MI regions, and in regions with transmural and non-transmural MI, as DENSE did. Although there were large variations in radial strain measurements from cine CMR images, useful circumferential strain information can be obtained from routine clinical CMR imaging. Cine strain analysis has the potential to improve the diagnostic yield from routine CMR imaging in clinical practice.

  15. Non-invasive estimation of venous admixture: validation of a new formula.

    Science.gov (United States)

    Hope, D A; Jenkins, B J; Willis, N; Maddock, H; Mapleson, W W

    1995-05-01

    We have developed a computer program that estimates venous admixture (intra-pulmonary shunt) from four measurements: haemoglobin concentration, end-tidal carbon dioxide tension (PE'CO2), fractional inspired oxygen concentration (FIO2) and pulse oximetry (SpO2). The formula was tested on patients in an intensive therapy unit by using it to estimate shunt while shunt was measured simultaneously by a standard, invasive method. A total of 101 measurements were made in 29 patients. After correcting the systematic errors in the assumed differences between PE'CO2 and arterial PCO2, and between SpO2 and co-oximetrically measured SaO2, and correcting for a trend in the arteriovenous oxygen content difference (C(a-v)O2) with shunt, the bias of the non-invasive minus invasive shunt differences was negligible, with no significant dependence on shunt. The limits of agreement were then ±16% shunt overall (±13% within patients). When SaO2 was used instead of SpO2, the limits were ±11% (±8% within patients).
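
The invasive reference against which the program was tested is the classic venous admixture (shunt) equation; a minimal sketch, with variable names of our choosing:

```python
def shunt_fraction(cc_o2, ca_o2, cv_o2):
    """Classic venous-admixture equation: Qs/Qt from end-capillary
    (cc_o2), arterial (ca_o2) and mixed-venous (cv_o2) oxygen
    contents, all in the same units (e.g. mL/dL)."""
    return (cc_o2 - ca_o2) / (cc_o2 - cv_o2)
```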

  16. Non-intrusive Load Disaggregation Based on Kernel Density Estimation

    Science.gov (United States)

    Sen, Wang; Dongsheng, Yang; Chuchen, Guo; Shengxian, Du

    2017-05-01

    Aiming at the problem of the high cost and difficult implementation of high-frequency non-intrusive load decomposition methods, this paper proposes a new method based on kernel density estimation (KDE) for low-frequency NILM (non-intrusive load monitoring). The method first establishes power reference models for electrical loads under different working conditions and possible appliance combinations; probability distributions are then calculated as appliance features by kernel density estimation. After that, the target power data are segmented by step changes, whose distributions are compared with the reference models, and the most similar reference model is chosen as the decomposition result. The proposed approach was tested with data from the GREEND public data set and showed better performance in terms of energy disaggregation accuracy compared with many traditional NILM approaches, achieving more than 93% accuracy in simulation.
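
The matching idea, choosing the appliance whose kernel density estimate best explains an observed power step, can be sketched as below. The bandwidth and the sample data in the test are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def kde(samples, bandwidth):
    """Return a Gaussian kernel density estimate as a callable pdf."""
    samples = np.asarray(samples, dtype=float)
    def pdf(x):
        z = (np.asarray(x, dtype=float) - samples[:, None]) / bandwidth
        return np.exp(-0.5 * z ** 2).sum(axis=0) / (
            len(samples) * bandwidth * np.sqrt(2.0 * np.pi))
    return pdf

def classify_step(step_watts, appliance_samples, bandwidth=10.0):
    """Assign an observed power step change to the appliance whose
    KDE gives it the highest density (simplified matching sketch)."""
    return max(appliance_samples,
               key=lambda name: kde(appliance_samples[name],
                                    bandwidth)([step_watts])[0])
```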

  17. The yield estimation of semiconductor products based on truncated samples

    Directory of Open Access Journals (Sweden)

    Gu K.

    2013-01-01

    Full Text Available Product yield reflects potential product quality and reliability: high yield corresponds to good quality and high reliability. Yet consumers usually cannot know the actual yield of the products they purchase. Generally, the products that consumers get from suppliers are all eligible. Since the quality characteristic of the eligible products is confined within the specifications, the observations of the quality characteristic follow a truncated normal distribution. Based on maximum likelihood estimation, this paper proposes an algorithm for calculating the parameters of the full Gaussian distribution before truncation from truncated data and for estimating product yield. The confidence interval of the yield result is derived, and the effect of sample size on the precision of the calculation result is also analyzed. Finally, the effectiveness of this algorithm is verified with an actual instance.
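
A sketch of the truncated-sample MLE idea: recover the untruncated Gaussian parameters by maximizing the truncated likelihood over the in-spec window, then report the in-spec probability as yield. The optimizer choice and starting values are ours, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_truncated_normal(x, lower, upper):
    """MLE of the untruncated mean/std from samples observed only
    inside [lower, upper]; the estimated yield is the in-spec
    probability under the fitted full Gaussian."""
    x = np.asarray(x, dtype=float)

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)  # log-parametrize to keep sigma > 0
        mass = norm.cdf(upper, mu, sigma) - norm.cdf(lower, mu, sigma)
        # truncated log-likelihood: full logpdf minus log of in-window mass
        return -(norm.logpdf(x, mu, sigma).sum() - len(x) * np.log(mass))

    res = minimize(nll, x0=[x.mean(), np.log(x.std() + 1e-6)],
                   method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    yield_est = norm.cdf(upper, mu, sigma) - norm.cdf(lower, mu, sigma)
    return mu, sigma, yield_est
```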

  18. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  19. Estimating spacecraft attitude based on in-orbit sensor measurements

    DEFF Research Database (Denmark)

    Jakobsen, Britt; Lyn-Knudsen, Kevin; Mølgaard, Mathias

    2014-01-01

    of 2014/15. To better evaluate the performance of the payload, it is desirable to couple the payload data with the satellite's orientation. With AAUSAT3 already in orbit, it is possible to collect data directly from space in order to evaluate the performance of the attitude estimation. An extended Kalman filter (EKF) is used for quaternion-based attitude estimation. A Simulink simulation environment developed for AAUSAT3, containing a "truth model" of the satellite and the orbit environment, is used to test the performance of the EKF. The performance is tested using different sensor noise parameters obtained both...... whether the algorithm can be tuned solely on Earth or whether an in-orbit tuning/update is needed. Generally, sensor noise variances are larger in the in-orbit measurements than in those obtained on ground. From Monte Carlo simulations with varying settings of the satellite inertia and initial time......
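The quaternion-based prediction stage of such an EKF can be illustrated in isolation. This is a generic first-order quaternion propagation sketch in the scalar-last convention, not AAUSAT3's flight code; the spin rate and time step are made up:

```python
import numpy as np

def omega_matrix(w):
    """4x4 matrix such that q_dot = 0.5 * Omega(w) @ q (scalar-last quaternion)."""
    wx, wy, wz = w
    return np.array([
        [0.0,   wz,  -wy,  wx],
        [-wz,  0.0,   wx,  wy],
        [ wy,  -wx,  0.0,  wz],
        [-wx,  -wy,  -wz, 0.0],
    ])

def predict(q, w, dt):
    """First-order propagation of the attitude quaternion (the EKF prediction
    step for the attitude state), renormalized to stay on the unit sphere."""
    q = q + 0.5 * dt * omega_matrix(w) @ q
    return q / np.linalg.norm(q)

q = np.array([0.0, 0.0, 0.0, 1.0])   # identity attitude, scalar-last
w = np.array([0.0, 0.0, 0.1])        # constant 0.1 rad/s spin about body z
for _ in range(100):                 # propagate 10 s with dt = 0.1 s
    q = predict(q, w, dt=0.1)
print(q)  # ~[0, 0, sin(0.5), cos(0.5)] for a 1 rad rotation about z
```

A full EKF would propagate the error covariance alongside this state prediction and correct both with the sensor measurements whose noise variances the abstract discusses.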

  20. Vce-based methods for temperature estimation of high power IGBT modules during power cycling - A comparison

    DEFF Research Database (Denmark)

    Amoiridis, Anastasios; Anurag, Anup; Ghimire, Pramod

    2015-01-01

    Temperature estimation is of great importance for the performance and reliability of IGBT power modules in converter operation as well as in active power cycling tests. The temperature is commonly estimated through thermo-sensitive electrical parameters such as the forward voltage drop (Vce) of the chip. This experimental work evaluates the validity and accuracy of two Vce-based methods applied to high power IGBT modules during power cycling tests. The first method estimates the chip temperature when a low sense current is applied, and the second when the normal load current is present. Finally, a correction factor...
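The low-sense-current method relies on an offline Vce(T) calibration that is later inverted during the power cycling test. A minimal sketch of such a calibration and its inversion follows; all numbers are illustrative assumptions, not measurements from this work (a real module is calibrated on a temperature-controlled heatsink):

```python
import numpy as np

# Hypothetical calibration points: junction temperature set by a controlled
# heatsink, Vce read at a low sense current (real IGBT chips show a roughly
# linear Vce(T) on the order of -2 mV/K at typical sense currents)
T_cal   = np.array([25.0, 50.0, 75.0, 100.0, 125.0])     # degrees C
Vce_cal = np.array([0.520, 0.468, 0.417, 0.365, 0.314])  # volts

# Linear TSEP calibration Vce = a*T + b, inverted for temperature estimation
a, b = np.polyfit(T_cal, Vce_cal, 1)

def junction_temperature(vce):
    """Estimate the chip temperature from a Vce reading at the sense current."""
    return (vce - b) / a

print(junction_temperature(0.40))  # ~83 degrees C for this calibration
```

The load-current method works on the same principle but must additionally account for the current-dependent part of the voltage drop, which is where a correction factor such as the one the abstract mentions comes in.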

  1. Drone based estimation of actual evapotranspiration over different forest types

    Science.gov (United States)

    Marzahn, Philip; Gampe, David; Castro, Saulo; Vega-Araya, Mauricio; Sanchez-Azofeifa, Arturo; Ludwig, Ralf

    2017-04-01

    Actual evapotranspiration (Eta) plays an important role in surface-atmosphere interactions. Traditionally, Eta is measured by means of lysimeters, eddy-covariance systems or fiber optics, providing estimates that are spatially restricted to a footprint ranging from a few square meters up to several hectares. In the past, several methods have been developed to derive Eta from multi-spectral remote sensing data using thermal and VIS/NIR satellite imagery of the land surface. While such approaches are justified at coarser scales, they do not provide Eta information at the fine-resolution plant level over large areas, which is mandatory for the detection of water stress or tree mortality. In this study, we present a comparison of a drone-based assessment of Eta with eddy-covariance measurements over two different forest types: a deciduous forest in Alberta, Canada and a tropical dry forest in Costa Rica. Drone-based estimates of Eta were calculated by applying the Triangle-Method proposed by Jiang and Islam (1999), which estimates Eta by means of the Normalized Difference Vegetation Index (NDVI) and land surface temperature (LST), here provided by two camera systems (MicaSense RedEdge, FLIR TAU2 640) flown simultaneously on an octocopter. Results indicate a high transferability of the original approach, developed for coarse- to medium-resolution satellite imagery, to the high-resolution drone data, with a deviation in Eta estimates of 10% compared to the eddy-covariance measurements. In addition, the spatial footprint of the eddy-covariance measurement can be detected with this approach, which reveals the spatial heterogeneity of Eta caused by the distribution of different trees and understory vegetation.
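The core of the Triangle-Method is an interpolation of each pixel between the warm and cold edges of the NDVI/LST scatter. A minimal sketch on synthetic rasters follows; the binning scheme, edge definitions, and the phi_max value are simplified assumptions, and the study's drone processing chain is not reproduced:

```python
import numpy as np

def triangle_evaporative_fraction(ndvi, lst, phi_max=1.26):
    """Interpolate each pixel between the warm edge (phi = 0) and the cold
    edge (phi = phi_max) of the NDVI/LST scatter, per NDVI bin."""
    t_min = lst.min()                        # global cold edge
    phi = np.full(lst.shape, np.nan)
    edges = np.linspace(ndvi.min(), ndvi.max(), 11)
    idx = np.clip(np.digitize(ndvi, edges) - 1, 0, 9)
    for k in range(10):
        m = idx == k
        if not m.any():
            continue
        t_max = lst[m].max()                 # warm edge of this NDVI bin
        phi[m] = phi_max * (t_max - lst[m]) / max(t_max - t_min, 1e-6)
    return phi

# Synthetic scene: sparsely vegetated pixels run hotter
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.1, 0.9, (100, 100))
lst = 320.0 - 25.0 * ndvi + rng.normal(0.0, 1.0, ndvi.shape)   # kelvin

phi = triangle_evaporative_fraction(ndvi, lst)
print(np.nanmean(phi))
# Eta then follows as phi * (Rn - G) * delta / (delta + gamma)
# once net radiation and soil heat flux inputs are available.
```

Densely vegetated, cooler pixels end up with a higher evaporative fraction than sparse, hot ones, which is the qualitative behavior the method exploits.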

  2. A History-based Estimation for LHCb job requirements

    Science.gov (United States)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for given tasks. The more and the better the job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, such as the expected runtime, is defined beforehand by the Production Manager in the best case, or fixed to arbitrary default values otherwise. LHCb's Workload Management System provides no mechanism that automates the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. This presents a major problem particularly in the context of multicore jobs, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimates given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for moving to multicore jobs is the reduction of the overall memory footprint; therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented, including a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm is developed that makes history-based predictions. The aim is to learn over time how job runtime and memory evolve under changes in experiment conditions and software versions. It is shown that the estimates can be notably improved if experiment conditions are taken into account.
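The idea of a history-based estimator can be sketched with a toy regression: features of past jobs are fitted against their observed runtimes and then used to predict new jobs. Everything below is synthetic; the real feature set (event counts, benchmark scores, software versions and their correlations) is the subject of the paper's workload analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Synthetic job history (hypothetical features, not LHCb's actual schema)
events  = rng.integers(1_000, 100_000, n).astype(float)
power   = rng.uniform(5.0, 25.0, n)             # CPU benchmark score of the node
version = rng.integers(0, 3, n).astype(float)   # software release index

# "True" runtime: per-event cost depends on the release, scaled by CPU power
runtime = events * (2.0 + 0.3 * version) / power + rng.normal(0.0, 50.0, n)

# History-based estimator: least squares on engineered features
X = np.column_stack([events / power, version * events / power, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, runtime, rcond=None)

def predict_runtime(events, power, version):
    """Predicted runtime requirement for a new job with the given features."""
    return coef[0] * events / power + coef[1] * version * events / power + coef[2]

print(predict_runtime(50_000, 10.0, 1))  # close to 50_000 * 2.3 / 10 = 11_500 s
```

Retraining such a model on a rolling window of recent jobs is one simple way to track the drift in experiment conditions and software versions that the abstract highlights.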

  3. A power function method for estimating base flow.

    Science.gov (United States)

    Lott, Darline A; Stewart, Mark T

    2013-01-01

    Analytical base flow separation techniques are often used to determine the base flow contribution to total stream flow. Most analytical methods derive base flow from discharge records alone, without using basin-specific variables other than basin area. This paper derives a power function of the form aQ^b + cQ for estimating base flow: an analytical method calibrated against an integrated basin variable, specific conductance, that relates base flow to total discharge and is consistent with the observed mathematical behavior of dissolved solids in stream flow with varying discharge. The method is uncomplicated, reproducible, and applicable to hydrograph separation in basins with limited specific conductance data. The power function relationship between base flow and discharge holds over a wide range of basin areas. It replicates base flow determined by mass balance methods better than analytical methods such as filters or smoothing routines that are not calibrated to natural tracers or to empirical basin- and gauge-specific variables. It can also be used with discharge during periods without specific conductance values, including separating base flow from quick flow for single events. However, it may overestimate base flow during very high flow events. Application of the geochemical mass balance and power function base flow separation methods to stream flow and specific conductance records from multiple gauges in the same basin suggests that analytical base flow separation methods must be calibrated at each gauge. Using average values of the coefficients introduces a potentially significant and unknown error in base flow as compared with mass balance methods. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
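The calibration described above can be sketched as follows: a two-component conductance mass balance supplies a reference base flow, and the aQ^b + cQ coefficients are fitted to it. The end-member conductances, the synthetic record, and the fitting strategy (grid search over b with linear least squares for a and c) are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def mass_balance_baseflow(q, sc, sc_bf=300.0, sc_ro=30.0):
    """Two-component conductance mass balance: base flow fraction from the
    stream's specific conductance between runoff and base-flow end members."""
    frac = np.clip((sc - sc_ro) / (sc_bf - sc_ro), 0.0, 1.0)
    return frac * q

def fit_power_function(q, bf):
    """Calibrate BF = a*Q**b + c*Q: grid-search b, linear least squares for a, c."""
    best = None
    for b in np.linspace(0.1, 1.0, 91):
        X = np.column_stack([q ** b, q])
        ac, *_ = np.linalg.lstsq(X, bf, rcond=None)
        sse = ((X @ ac - bf) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, ac[0], b, ac[1])
    return best[1], best[2], best[3]

# Synthetic daily record: conductance dilutes as discharge rises
rng = np.random.default_rng(3)
q = rng.lognormal(2.0, 0.8, 365)              # discharge
sc = 30.0 + 270.0 / (1.0 + 0.15 * q)          # specific conductance, uS/cm
bf = mass_balance_baseflow(q, sc)

a, b, c = fit_power_function(q, bf)
print(a, b, c)
```

Once calibrated at a gauge, the fitted coefficients can be applied to discharge-only periods, which is the practical advantage the abstract emphasizes.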

  4. Estimation and Validation of Land Surface Temperatures from Chinese Second-Generation Polar-Orbit FY-3A VIRR Data

    Directory of Open Access Journals (Sweden)

    Bo-Hui Tang

    2015-03-01

    Full Text Available This work estimated and validated the land surface temperature (LST from thermal-infrared Channels 4 (10.8 µm and 5 (12.0 µm of the Visible and Infrared Radiometer (VIRR onboard the second-generation Chinese polar-orbiting FengYun-3A (FY-3A meteorological satellite. The LST, mean emissivity and atmospheric water vapor content (WVC were divided into several tractable sub-ranges with little overlap to improve the fitting accuracy. The experimental results showed that the root mean square errors (RMSEs were proportional to the viewing zenith angles (VZAs and WVC. The RMSEs were below 1.0 K for VZA sub-ranges less than 30° or for VZA sub-ranges less than 60° and WVC less than 3.5 g/cm2, provided that the land sur
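The retrieval has the shape of a generalized split-window algorithm. A sketch of that form follows; the paper fits real coefficients separately for each VZA, water vapor content, and LST sub-range from radiative-transfer simulations, so the coefficients below are purely illustrative placeholders:

```python
def split_window_lst(t4, t5, emis4, emis5,
                     b=(-0.4, 1.0, 0.17, -0.4, 2.0, 55.0, -130.0)):
    """Generalized split-window form for a 10.8/12.0 um channel pair:
       LST = b0 + (b1 + b2*(1-e)/e + b3*de/e^2) * (T4 + T5)/2
                + (b4 + b5*(1-e)/e + b6*de/e^2) * (T4 - T5)/2
    where e is the mean emissivity and de the channel emissivity difference.
    The b coefficients here are placeholders, not the paper's fitted values."""
    e = 0.5 * (emis4 + emis5)
    de = emis4 - emis5
    mean, half_diff = 0.5 * (t4 + t5), 0.5 * (t4 - t5)
    return (b[0]
            + (b[1] + b[2] * (1.0 - e) / e + b[3] * de / e ** 2) * mean
            + (b[4] + b[5] * (1.0 - e) / e + b[6] * de / e ** 2) * half_diff)

# Brightness temperatures (kelvin) for an illustrative moderately moist scene
print(split_window_lst(t4=295.2, t5=293.4, emis4=0.97, emis5=0.98))
```

The sub-range strategy in the abstract amounts to selecting a different coefficient tuple `b` depending on the scene's VZA, water vapor content, and approximate LST.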