WorldWideScience

Sample records for valid estimates based

  1. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  2. Development and validation of satellite based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2015-10-01

    A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5% for classifying Clear (V ≥ 30 km), Moderate (10 km ≤ V < 30 km), and Low (V < 10 km) conditions. The retrievals complement surface measurements from the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network, and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.
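The regression step described above can be sketched as follows. This is a toy illustration, not the operational GOES-R retrieval: the predictor names, coefficients, and data are all invented.

```python
import numpy as np

# Toy predictors standing in for the retrieval inputs described above:
# aerosol optical depth (AOD), low-cloud probability, and one meteorological
# variable from an NWP forecast. All values are synthetic.
rng = np.random.default_rng(0)
n = 200
aod = rng.uniform(0.0, 1.0, n)
cloud_prob = rng.uniform(0.0, 1.0, n)
humidity = rng.uniform(30.0, 100.0, n)

# Synthetic "ASOS visibility" (km) generated from a known linear rule plus
# noise, so the regression should recover sensible coefficients.
visibility = (40.0 - 25.0 * aod - 10.0 * cloud_prob - 0.05 * humidity
              + rng.normal(0.0, 1.0, n))

# Multiple linear regression via ordinary least squares.
X = np.column_stack([np.ones(n), aod, cloud_prob, humidity])
coef, *_ = np.linalg.lstsq(X, visibility, rcond=None)
predicted = X @ coef

# Classify into the categories used in the abstract (Clear: V >= 30 km).
def classify(v):
    return "Clear" if v >= 30.0 else ("Moderate" if v >= 10.0 else "Low")

success = np.mean([classify(p) == classify(t)
                   for p, t in zip(predicted, visibility)])
print(f"intercept ~ {coef[0]:.1f}, category success rate = {success:.2f}")
```

The published 64.5% success rate reflects real retrieval noise; the synthetic setup above is far cleaner, so its success rate is higher.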

  3. An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking

    Directory of Open Access Journals (Sweden)

    Sujay Saha

    2016-01-01

    Most gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods that impute missing data values accurately. A number of imputation algorithms exist to estimate those missing values. This work starts with a microarray dataset containing multiple missing values. We first apply a modified version of the fuzzy-theory-based method LRFDVImpute to impute multiple missing values in time series gene expression data, and then validate the result of imputation by a genetic algorithm (GA) based gene ranking methodology along with standard statistical validation techniques, such as RMSE. Gene ranking, to the best of our knowledge, has not previously been used to validate the result of missing value estimation. First, the proposed method was tested on the popular Spellman dataset, and the results show that error margins are drastically reduced compared to some previous works, which indirectly validates the statistical significance of the proposed method. It was then applied to four other 2-class benchmark datasets, namely the Colorectal Cancer tumours dataset (GDS4382), the Breast Cancer dataset (GSE349-350), the Prostate Cancer dataset, and DLBCL-FL (Leukaemia), for both missing value estimation and gene ranking, and the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates the biological significance of the proposed method.
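The impute-then-validate loop can be illustrated with a much simpler imputer than LRFDVImpute: here each missing entry is filled with its gene's (row's) mean and scored by RMSE against known ground truth. All data are synthetic.

```python
import numpy as np

# This is NOT LRFDVImpute; it only illustrates the mask -> impute -> RMSE
# validation loop on a genes-by-conditions expression matrix.
rng = np.random.default_rng(1)
truth = rng.normal(0.0, 1.0, size=(50, 20))   # 50 genes x 20 conditions

data = truth.copy()
mask = rng.random(truth.shape) < 0.1          # knock out ~10% of entries
data[mask] = np.nan

# Row-mean imputation: fill each missing value with its gene's mean.
row_means = np.nanmean(data, axis=1, keepdims=True)
imputed = np.where(np.isnan(data), row_means, data)

# RMSE over the masked entries only, against the known ground truth.
rmse = np.sqrt(np.mean((imputed[mask] - truth[mask]) ** 2))
print(f"RMSE of row-mean imputation: {rmse:.3f}")
```

A serious method such as LRFDVImpute would replace the row-mean step; the masking and RMSE scaffolding stays the same.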

  4. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen

    Purpose: Design and validation of an MR-based method that allows the calculation of the water content in cartilage tissue. Methods and Materials: Cartilage tissue T1 map based water content MR sequences were used on a system kept stable at 37 °C. The T1 map intensity signal was analyzed on 6 … cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis, a T1 intensity signal map software analyzer was used. Finally, the method was validated after measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained … map based water content sequences can provide information that, after being analyzed using T1-map analysis software, can be interpreted as the water contained inside a cartilage tissue. The amount of water estimated using this method was similar to the one obtained with the freeze-drying procedure …

  5. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    International Nuclear Information System (INIS)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K.

    2016-01-01

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)
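The development/testing split and the error metrics reported above can be sketched as follows. The fibular lengths, the length-age relationship, and the noise level are all invented for illustration; this is not the published model.

```python
import numpy as np

# Hypothetical illustration: fit ordinary linear regression of bone age on
# fibular shaft length on a "development" set, then report RMSE and mean
# absolute error (MAE) on a held-out "testing" set, mirroring the study's
# n = 123 / n = 124 split. All coefficients and data are made up.
rng = np.random.default_rng(2)

def simulate(n):
    length_mm = rng.uniform(55.0, 110.0, n)     # fibular shaft length (mm)
    age_days = 6.0 * (length_mm - 50.0) + rng.normal(0.0, 30.0, n)
    return length_mm, age_days

x_dev, y_dev = simulate(123)                    # model development set
x_test, y_test = simulate(124)                  # model testing set

slope, intercept = np.polyfit(x_dev, y_dev, 1)  # OLS fit on development set
pred = slope * x_test + intercept               # evaluate on testing set

rmse = np.sqrt(np.mean((pred - y_test) ** 2))
mae = np.mean(np.abs(pred - y_test))
print(f"RMSE = {rmse:.0f} days, MAE = {mae:.0f} days")
```

As in the paper, RMSE exceeds MAE because squaring weights large errors more heavily.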

  6. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K. [Boston Children's Hospital, Harvard Medical School, Department of Radiology, Boston, MA (United States)

    2016-03-15

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)

  7. A stepwise validation of a wearable system for estimating energy expenditure in field-based research

    International Nuclear Information System (INIS)

    Rumo, Martin; Mäder, Urs; Amft, Oliver; Tröster, Gerhard

    2011-01-01

    Regular physical activity (PA) is an important contributor to a healthy lifestyle. Currently, standard sensor-based methods to assess PA in field-based research rely on a single accelerometer mounted near the body's center of mass. This paper introduces a wearable system that estimates energy expenditure (EE) based on seven recognized activity types. The system was developed with data from 32 healthy subjects and consists of a chest-mounted heart rate belt and two accelerometers attached to a thigh and the dominant upper arm. The system was validated with 12 other subjects under restricted lab conditions and simulated free-living conditions against indirect calorimetry, as well as in subjects' habitual environments for 2 weeks against the doubly labeled water method. Our stepwise validation methodology gradually trades reference information from the lab against realistic data from the field. The average accuracy for EE estimation was 88% for restricted lab conditions, 55% for simulated free-living conditions, and 87% and 91% for the estimation of average daily EE over periods of 1 and 2 weeks, respectively.
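The activity-specific estimation idea can be sketched as follows. The activity set, the per-activity linear models, and every coefficient below are hypothetical, not those of the validated system.

```python
# Once an activity type is recognized from the accelerometers, EE is
# predicted from heart rate with a per-activity linear model. The mapping
# and coefficients here are invented for illustration only.
ACTIVITY_MODELS = {   # activity -> (slope in kcal/min per bpm, intercept)
    "sitting": (0.010, 0.5),
    "walking": (0.030, 0.8),
    "running": (0.060, 1.0),
}

def estimate_ee(activity, heart_rate_bpm, minutes):
    """Energy expenditure (kcal) for one bout of a recognized activity."""
    slope, intercept = ACTIVITY_MODELS[activity]
    return (slope * heart_rate_bpm + intercept) * minutes

# A hypothetical day: (activity, mean heart rate, duration in minutes).
day = [("sitting", 70, 600), ("walking", 100, 90), ("running", 150, 30)]
total = sum(estimate_ee(a, hr, mins) for a, hr, mins in day)
print(f"estimated daily EE: {total:.0f} kcal")
```

Daily EE is then the sum over recognized bouts, which is what the doubly-labeled-water comparison in the study evaluates over 1- and 2-week windows.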

  8. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    Science.gov (United States)

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  9. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: Test and validation

    International Nuclear Information System (INIS)

    Verloock, L.; Joseph, W.; Gati, A.; Varsier, N.; Flach, B.; Wiart, J.; Martens, L.

    2013-01-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. (authors)

  10. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Background: To estimate a classifier's error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings: For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier's generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions: We recommend k-fold CV over the new BCV method for estimating a classifier's generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
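A minimal k-fold cross-validation loop for misclassification error looks like the sketch below, using a nearest-centroid classifier on synthetic two-class data. Sampling is without replacement (each observation lands in exactly one fold), in contrast to the bootstrap resampling discussed above; the classifier and data are invented for illustration.

```python
import numpy as np

# Synthetic two-class data: 60 points per class in 5 dimensions,
# class means separated by 1.5 in every dimension.
rng = np.random.default_rng(3)
n_per_class = 60
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, 5)),
               rng.normal(1.5, 1.0, (n_per_class, 5))])
y = np.repeat([0, 1], n_per_class)

def kfold_error(X, y, k=5):
    """Average misclassification rate of a nearest-centroid classifier
    over k disjoint held-out folds (sampling without replacement)."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errors = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)          # everything not in the fold
        centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                              for c in (0, 1)])
        # Distance from each held-out point to both class centroids.
        d = np.linalg.norm(X[fold][:, None, :] - centroids[None], axis=2)
        pred = d.argmin(axis=1)
        errors.append(np.mean(pred != y[fold]))
    return float(np.mean(errors))

err = kfold_error(X, y, k=5)
print(f"5-fold CV misclassification error: {err:.3f}")
```

Bootstrap CV would instead resample the training set with replacement, which is the design choice the abstract argues introduces a substantial negative bias.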

  11. Validity and practicability of smartphone-based photographic food records for estimating energy and nutrient intake.

    Science.gov (United States)

    Kong, Kaimeng; Zhang, Lulu; Huang, Lisu; Tao, Yexuan

    2017-05-01

    Image-assisted dietary assessment methods are frequently used to record individual eating habits. This study tested the validity of a smartphone-based photographic food recording approach by comparing the results obtained with those of a weighed food record. We also assessed the practicality of the method by using it to measure the energy and nutrient intake of college students. The experiment was implemented in two phases, each lasting 2 weeks. In the first phase, a labelled menu and a photograph database were constructed. The energy and nutrient content of 31 randomly selected dishes in three different portion sizes were then estimated by the photograph-based method and compared with a weighed food record. In the second phase, we combined the smartphone-based photographic method with the WeChat smartphone application and applied this to 120 randomly selected participants to record their energy and nutrient intake. The Pearson correlation coefficients for energy, protein, fat, and carbohydrate content between the weighed and the photographic food record were 0.997, 0.936, 0.996, and 0.999, respectively. Bland-Altman plots showed good agreement between the two methods. The estimated protein, fat, and carbohydrate intake by participants was in accordance with values in the Chinese Residents' Nutrition and Chronic Disease report (2015). Participants expressed satisfaction with the new approach and the compliance rate was 97.5%. The smartphone-based photographic dietary assessment method combined with the WeChat instant messaging application was effective and practical for use by young people.
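The two agreement checks used above, Pearson correlation and Bland-Altman bias with 95% limits of agreement, can be sketched on synthetic dish-level energy data (all numbers below are invented, not the study's measurements):

```python
import numpy as np

# Synthetic paired measurements: a weighed food record (reference) and a
# photographic estimate with modest random error, for 31 dishes as in the
# abstract's first phase. Values are illustrative only.
rng = np.random.default_rng(4)
weighed = rng.uniform(100.0, 900.0, 31)        # energy, kcal per dish
photo = weighed + rng.normal(0.0, 20.0, 31)    # photographic estimate

# Pearson correlation between the two methods.
r = np.corrcoef(weighed, photo)[0, 1]

# Bland-Altman analysis: mean difference (bias) and 95% limits of agreement.
diff = photo - weighed
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"r = {r:.3f}, bias = {bias:.1f} kcal, LoA = ±{loa:.1f} kcal")
```

High correlation alone does not establish agreement; the Bland-Altman limits show how far an individual photographic estimate may plausibly stray from the weighed record, which is why the study reports both.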

  12. Validation of a spectrophotometer-based method for estimating daily sperm production and deferent duct transit.

    Science.gov (United States)

    Froman, D P; Rhoads, D D

    2012-10-01

    The objectives of the present work were 3-fold. First, a new method for estimating daily sperm production was validated. This method, in turn, was used to evaluate testis output as well as deferent duct throughput. Next, this analytical approach was evaluated in 2 experiments. The first experiment compared left and right reproductive tracts within roosters. The second experiment compared reproductive tract throughput in roosters from low and high sperm mobility lines. Standard curves were constructed from which unknown concentrations of sperm cells and sperm nuclei could be predicted from observed absorbance. In each case, the independent variable was based upon hemacytometer counts, and absorbance was a linear function of concentration. Reproductive tracts were excised, semen recovered from each duct, and the extragonadal sperm reserve determined by multiplying volume by sperm cell concentration. Testicular sperm nuclei were procured by homogenization of a whole testis, overlaying a 20-mL volume of homogenate upon 15% (wt/vol) Accudenz (Accurate Chemical and Scientific Corporation, Westbury, NY), and then washing nuclei by centrifugation through the Accudenz layer. Daily sperm production was determined by dividing the predicted number of sperm nuclei within the homogenate by 4.5 d (i.e., the time sperm with elongated nuclei spend within the testis). Sperm transit through the deferent duct was estimated by dividing the extragonadal reserve by daily sperm production. Neither the efficiency of sperm production (sperm per gram of testicular parenchyma per day) nor deferent duct transit differed between left and right reproductive tracts (P > 0.05). Whereas efficiency of sperm production did not differ (P > 0.05) between low and high sperm mobility lines, deferent duct transit differed between lines (P < 0.001). On average, this process required 2.2 and 1.0 d for low and high lines, respectively. In summary, we developed and then tested a method for quantifying male
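The two quotients at the core of the method are simple divisions. The 4.5-day testicular transit time comes from the abstract; the counts below are hypothetical.

```python
# Worked example of the calculations described above, with made-up counts.
sperm_nuclei_in_testis = 9.0e9   # predicted from the absorbance standard curve
testis_transit_days = 4.5        # days elongated nuclei spend within the testis
extragonadal_reserve = 4.0e9     # semen volume x sperm cell concentration

# Daily sperm production: testicular nuclei divided by testicular transit time.
daily_sperm_production = sperm_nuclei_in_testis / testis_transit_days

# Deferent duct transit: extragonadal reserve divided by daily production.
deferent_duct_transit = extragonadal_reserve / daily_sperm_production

print(f"daily sperm production: {daily_sperm_production:.2e} per day")
print(f"deferent duct transit: {deferent_duct_transit:.1f} days")
```

With these illustrative numbers the duct transit comes out at 2.0 days, in the same range as the 2.2-day figure reported for the low-mobility line.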

  13. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    Science.gov (United States)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the

  14. Estimating and validating ground-based timber harvesting production through computer simulation

    Science.gov (United States)

    Jingxin Wang; Chris B. LeDoux

    2003-01-01

    Estimating ground-based timber harvesting systems production with an object oriented methodology was investigated. The estimation model developed generates stands of trees, simulates chain saw, drive-to-tree feller-buncher, swing-to-tree single-grip harvester felling, and grapple skidder and forwarder extraction activities, and analyzes costs and productivity. It also...

  15. The validity and reproducibility of food-frequency questionnaire–based total antioxidant capacity estimates in Swedish women

    Science.gov (United States)

    Total antioxidant capacity (TAC) provides an assessment of antioxidant activity and synergistic interactions of redox molecules in foods and plasma. We investigated the validity and reproducibility of food frequency questionnaire (FFQ)-based TAC estimates assessed by oxygen radical absorbance capaci...

  16. Model-based PSF and MTF estimation and validation from skeletal clinical CT images.

    Science.gov (United States)

    Pakdel, Amirreza; Mainprize, James G; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M

    2014-01-01

    A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge about the scanner-specific parameters.
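For a Gaussian PSF, the FWHM/sigma relation and the resulting Gaussian MTF are standard closed forms, sketched below with an invented PSF width (not a value measured in the study):

```python
import math

# For a Gaussian PSF: FWHM = 2*sqrt(2*ln 2) * sigma, and its MTF is itself
# Gaussian: MTF(f) = exp(-2 * pi**2 * sigma**2 * f**2). The 0.8 mm width
# below is hypothetical, chosen only to illustrate the conversion.
def sigma_from_fwhm(fwhm):
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def gaussian_mtf(f, sigma):
    return math.exp(-2.0 * (math.pi * sigma * f) ** 2)

fwhm_mm = 0.8                       # hypothetical in-plane PSF width (mm)
sigma = sigma_from_fwhm(fwhm_mm)

# Spatial frequency (cycles/mm) at which the MTF drops to 50%, one of the
# levels (75%, 50%, 25%, 10%, 5%) compared in the study.
f50 = math.sqrt(math.log(2.0) / 2.0) / (math.pi * sigma)
print(f"sigma = {sigma:.3f} mm, MTF at f50 = {gaussian_mtf(f50, sigma):.3f}")
```

Comparing such model-predicted frequencies at several MTF levels against slanted-slit measurements is exactly the kind of validation the abstract reports in cycles/mm.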

  17. Model-based PSF and MTF estimation and validation from skeletal clinical CT images

    International Nuclear Information System (INIS)

    Pakdel, Amirreza; Mainprize, James G.; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M.

    2014-01-01

    Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge about the

  18. Model-based PSF and MTF estimation and validation from skeletal clinical CT images

    Energy Technology Data Exchange (ETDEWEB)

    Pakdel, Amirreza [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Mainprize, James G.; Robert, Normand [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada); Fialkov, Jeffery [Division of Plastic Surgery, Sunnybrook Health Sciences Center, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Whyne, Cari M., E-mail: cari.whyne@sunnybrook.ca [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada)

    2014-01-15

    Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge

  19. Type-specific human papillomavirus biological features: validated model-based estimates.

    Directory of Open Access Journals (Sweden)

    Iacopo Baussano

    Infection with high-risk (hr) human papillomavirus (HPV) is considered the necessary cause of cervical cancer. Vaccination against the HPV16 and 18 types, which are responsible for about 75% of cervical cancers worldwide, is expected to have a major global impact on cervical cancer occurrence. Valid estimates of the parameters that regulate the natural history of hrHPV infections are crucial to draw reliable projections of the impact of vaccination. We devised a mathematical model to estimate the probability of infection transmission, the rate of clearance, and the patterns of immune response following the clearance of infection for 13 hrHPV types. To test the validity of our estimates, we fitted the same transmission model to two large independent datasets from Italy and Sweden and assessed the consistency of the findings. The two populations, both unvaccinated, differed substantially in sexual behaviour, age distribution, and study setting (screening for cervical cancer or Chlamydia trachomatis infection). Estimated transmission probabilities of hrHPV types (80% for HPV16, 73%-82% for HPV18, and above 50% for most other types), clearance rates decreasing as a function of time since infection, and partial protection against re-infection with the same hrHPV type (approximately 20% for HPV16 and 50% for the other types) were similar in the two countries. The model could accurately predict the HPV16 prevalence observed in Italy among women who were not infected three years before. In conclusion, our models inform on biological parameters that cannot at present be measured directly from empirical data but are essential to forecast the impact of HPV vaccination programmes.

  20. MR-based Water Content Estimation in Cartilage: Design and Validation of a Method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sofie; Ringgaard, Steffen

    2012-01-01

    Objective: Design and validation of an MR-based method that allows the calculation of the water content in cartilage tissue. Material and Methods: We modified and adapted to cartilage tissue the T1 map based water content MR sequences commonly used in the neurology field. Using a 37 Celsius degree stable … was customized and programmed. Finally, we validated the method after measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained data were analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique removes all the water that a tissue … contains) and we measured the water they contained. Results: We could reproduce the 37 Celsius degree system twice and could perform the measurements in a similar way. We found that the MR T1 map based water content sequences can provide information that, after being analyzed with special software, can …

  1. Assessing the external validity of model-based estimates of the incidence of heart attack in England: a modelling study

    Directory of Open Access Journals (Sweden)

    Peter Scarborough

    2016-11-01

    Full Text Available Abstract Background The DisMod II model is designed to estimate epidemiological parameters on diseases where measured data are incomplete and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, and total mortality rates and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95 % confidence intervals were derived around estimates from the external dataset and DisMod II estimates based on sampling variance and reported uncertainty in prevalence estimates, respectively. Results Estimates of the incidence rate for the whole population were higher in the DisMod II results than in the external dataset (+54 % for men and +26 % for women). Age-specific results showed that the DisMod II results over-estimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion By comparison with AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence that are of similar magnitude to measured estimates. The model should be used with caution when estimating age-specific incidence rates.

  2. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    Science.gov (United States)

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    The objective was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  3. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations.

    Science.gov (United States)

    Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris

    2013-01-23

    Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of layout. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current application. However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs and multi-level buildings. Based on these results, we discuss possible ways forward for the method's development.
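
    The core arithmetic of the method (structure counts multiplied by mean occupancy, with the spread of counts and occupancy reports giving a crude range) is simple enough to sketch. All site figures below are invented for illustration; they are not data from the study:

```python
def estimate_population(structure_count, mean_occupancy):
    # Point estimate: counted residential structures times mean people per structure.
    return structure_count * mean_occupancy

def estimate_range(structure_counts, occupancy_reports):
    # Conservative range combining the spread of independent analyst counts
    # with the spread of published occupancy figures.
    low = min(structure_counts) * min(occupancy_reports)
    high = max(structure_counts) * max(occupancy_reports)
    return low, high

# Hypothetical site: three analysts counted 410, 425 and 432 structures;
# two public reports give 4.8 and 5.6 people per structure.
counts = [410, 425, 432]
occupancies = [4.8, 5.6]
point = estimate_population(sum(counts) / len(counts), sum(occupancies) / len(occupancies))
low, high = estimate_range(counts, occupancies)
```

    A point estimate of roughly 2,200 people with a range of about 1,970-2,420 falls inside the population sizes reported for the study sites; the range here only reflects analyst and occupancy spread, not structure-classification error.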

  4. Regional GRACE-based estimates of water mass variations over Australia: validation and interpretation

    Science.gov (United States)

    Seoane, L.; Ramillien, G.; Frappart, F.; Leblanc, M.

    2013-04-01

    Time series of regional 2°-by-2° GRACE solutions have been computed from 2003 to 2011 with a 10-day resolution using an energy integral method over Australia [112° E-156° E; 44° S-10° S]. This approach uses dynamical orbit analysis of GRACE Level 1 measurements, and especially accurate along-track K-Band Range Rate (KBRR) residuals (1 μm s-1 level of error), to estimate the total water mass over continental regions. The advantages of regional solutions are a significant reduction of GRACE aliasing errors (i.e. north-south stripes), providing a more accurate estimation of the water mass balance for hydrological applications. In this paper, the validation of these regional solutions over Australia is presented, as well as their ability to describe water mass change in response to climate forcings such as El Niño. Principal component analysis of GRACE-derived total water storage maps shows spatial and temporal patterns that are consistent with independent datasets (e.g. rainfall, climate indices and in situ observations). Regional TWS solutions show higher spatial correlations with in situ water table measurements over the Murray-Darling drainage basin (80-90%), and they offer better localization of hydrological structures than classical GRACE global solutions (i.e. Level 2 GRGS products and 400 km ICA solutions obtained as a linear combination of GFZ, CSR and JPL GRACE solutions).

  5. An intercomparison and validation of satellite-based surface radiative energy flux estimates over the Arctic

    Science.gov (United States)

    Riihelä, Aku; Key, Jeffrey R.; Meirink, Jan Fokke; Kuipers Munneke, Peter; Palo, Timo; Karlsson, Karl-Göran

    2017-05-01

    Accurate determination of radiative energy fluxes over the Arctic is of crucial importance for understanding atmosphere-surface interactions, melt and refreezing cycles of the snow and ice cover, and the role of the Arctic in the global energy budget. Satellite-based estimates can provide comprehensive spatiotemporal coverage, but the accuracy and comparability of the existing data sets must be ascertained to facilitate their use. Here we compare radiative flux estimates from Clouds and the Earth's Radiant Energy System (CERES) Synoptic 1-degree (SYN1deg)/Energy Balanced and Filled, Global Energy and Water Cycle Experiment (GEWEX) surface energy budget, and our own experimental FluxNet / Satellite Application Facility on Climate Monitoring cLoud, Albedo and RAdiation (CLARA) data against in situ observations over Arctic sea ice and the Greenland Ice Sheet during summer of 2007. In general, CERES SYN1deg flux estimates agree best with in situ measurements, although with two particular limitations: (1) over sea ice the upwelling shortwave flux in CERES SYN1deg appears to be underestimated because of an underestimated surface albedo and (2) the CERES SYN1deg upwelling longwave flux over sea ice saturates during midsummer. The Advanced Very High Resolution Radiometer-based GEWEX and FluxNet-CLARA flux estimates generally show a larger range in retrieval errors relative to CERES, with contrasting tendencies relative to each other. The largest source of retrieval error in the FluxNet-CLARA downwelling shortwave flux is shown to be an overestimated cloud optical thickness. The results illustrate that satellite-based flux estimates over the Arctic are not yet homogeneous and that further efforts are necessary to investigate the differences in the surface and cloud properties which lead to disagreements in flux retrievals.

  6. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Full Text Available Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about instruments. Content validity refers to the degree to which the instrument covers the content it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits, and the set of items that represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of 13 documented content validity, doing so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item for relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of the 38 items, those with a CVI over 0.75 were retained and the rest discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
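
    The item-level CVI computation and the 0.75 retention cut-off described above can be sketched directly; the item names and expert ratings below are invented for illustration:

```python
def item_cvi(ratings):
    # Item-level CVI: proportion of experts rating the item 3 or 4
    # on the 4-point relevance scale.
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def retain_items(item_ratings, threshold=0.75):
    # Keep only items whose CVI exceeds the threshold (0.75 in the study above).
    return {item: item_cvi(r) for item, r in item_ratings.items() if item_cvi(r) > threshold}

# Hypothetical ratings from three experts on two candidate items.
scale = retain_items({"uses_email": [4, 4, 3], "owns_modem": [2, 3, 2]})
```

    Here "uses_email" (CVI = 1.0) survives while "owns_modem" (CVI = 1/3) is discarded, mirroring how the 38-item pool was reduced to 25 items.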

  7. Towards valid 'serious non-fatal injury' indicators for international comparisons based on probability of admission estimates

    DEFF Research Database (Denmark)

    Cryer, Colin; Miller, Ted R; Lyons, Ronan A

    2017-01-01

    in regions of Canada, Denmark, Greece, Spain and the USA. International Classification of Diseases (ICD)-9 or ICD-10 4-digit/character injury diagnosis-specific ED attendance and inpatient admission counts were provided, based on a common protocol. Diagnosis-specific and region-specific PrAs with 95% CIs...... diagnoses with high estimated PrAs. These diagnoses can be used as the basis for more valid international comparisons of life-threatening injury, based on hospital discharge data, for countries with well-developed healthcare and data collection systems....

  8. Development and validation of satellite-based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2016-02-01

    A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5 % for classifying clear (V ≥ 30 km), moderate (10 km ≤ V United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.

  9. Development and validation of a Kalman filter-based model for vehicle slip angle estimation

    Science.gov (United States)

    Gadola, M.; Chindamo, D.; Romano, M.; Padula, F.

    2014-01-01

    It is well known that vehicle slip angle is one of the most difficult parameters to measure on a vehicle during testing or racing activities. Moreover, the appropriate sensor is very expensive and it is often difficult to fit to a car, especially on race cars. We propose here a strategy to eliminate the need for this sensor by using a mathematical tool which gives a good estimation of the vehicle slip angle. A single-track car model, coupled with an extended Kalman filter, was used in order to achieve the result. Moreover, a tuning procedure is proposed that takes into consideration both nonlinear and saturation characteristics typical of vehicle lateral dynamics. The effectiveness of the proposed algorithm has been proven by both simulation results and real-world data.
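
    The abstract gives no filter equations; as a purely illustrative sketch, the predict/update cycle that a (possibly extended) Kalman filter iterates can be written in scalar form. All model constants below are hypothetical and stand in for the linearized single-track dynamics, not the authors' actual model:

```python
def kalman_step(x, P, u, z, a, b, q, h, r):
    # One predict/update cycle of a scalar Kalman filter.
    # x, P: current state estimate and its variance; u: control input;
    # z: measurement; process model x' = a*x + b*u with noise variance q;
    # measurement model z = h*x with noise variance r.
    x_pred = a * x + b * u            # predict state
    P_pred = a * P * a + q            # predict variance
    K = P_pred * h / (h * P_pred * h + r)   # Kalman gain
    x_new = x_pred + K * (z - h * x_pred)   # correct with measurement residual
    P_new = (1 - K * h) * P_pred
    return x_new, P_new

# Toy run: estimate a constant signal of 1.0 from repeated noiseless measurements,
# starting from a vague prior (x = 0, P = 1000).
x, P = 0.0, 1000.0
for _ in range(20):
    x, P = kalman_step(x, P, u=0.0, z=1.0, a=1.0, b=0.0, q=0.0, h=1.0, r=1.0)
```

    In the slip-angle application the state and measurement models are nonlinear, so the extended filter re-linearizes them at each step; the predict/correct structure is unchanged.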

  10. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10¹³-10¹⁴ neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  11. Development and validation of a CFD based methodology to estimate the pressure loss of flow through perforated plates

    International Nuclear Information System (INIS)

    Barros Filho, Jose A.; Navarro, Moyses A.; Santos, Andre A.C. dos; Jordao, E.

    2011-01-01

    In spite of the recent great development of Computational Fluid Dynamics (CFD), there are still open questions about how to assess its accuracy. This work presents the validation of a CFD methodology devised to estimate the pressure drop of water flow through perforated plates similar to those used in some reactor core components. This was accomplished by comparing the results of CFD simulations against experimental data for 5 perforated plates with different geometric characteristics. The proposed methodology correlates the experimental data within a range of ±7.5%. The validation procedure recommended by the ASME Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer (V&V 20) is also evaluated. The conclusion is that it is not adequate for this specific use. (author)
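
    The ±7.5% agreement band can be expressed as a simple check. The loss-coefficient form of the plate pressure drop, Δp = K·ρ·v²/2, is the standard engineering expression for such flows; the loss coefficient and flow values below are invented, not taken from the study:

```python
def pressure_drop(K, rho, v):
    # Irreversible pressure loss across a plate: dp = K * rho * v^2 / 2 (Pa,
    # for K dimensionless, rho in kg/m^3, v in m/s).
    return K * rho * v ** 2 / 2.0

def within_band(predicted, measured, band=0.075):
    # True if the CFD prediction falls inside the +/-7.5% correlation band.
    return abs(predicted - measured) / abs(measured) <= band

# Hypothetical plate: K = 2.0, water at 998 kg/m^3, approach velocity 3 m/s.
dp = pressure_drop(2.0, 998.0, 3.0)
```
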

  12. A mathematical method for verifying the validity of measured information about the flows of energy resources based on the state estimation theory

    Science.gov (United States)

    Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.

    2015-11-01

    Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex must begin with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements to reduce payment for energy resources on the consumer's side, which leads to commercial losses of energy resources. The article presents a universal mathematical method for verifying the validity of measurement information in networks for transporting energy resources, such as electricity and heat, petroleum, gas, etc., based on state estimation theory. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of energy resources for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy all state equations describing the energy resource transportation network. The state equations written in terms of the calculated estimates are free from residuals. The difference between a measurement and its calculated analog (estimate) is called an estimation remainder in estimation theory. Large values of the estimation remainders indicate high errors in particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the transportation network observability, to eliminate
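
    The idea of estimates that satisfy the balance equations exactly, with the measurement-minus-estimate remainders flagging bad meters, can be sketched for a single node. The weighted least-squares form below is a minimal illustration under one balance equation, not the authors' full network algorithm; the meter readings are invented:

```python
def reconcile(measurements, coeffs, variances):
    # Weighted least-squares adjustment of measured flows z_i so that the single
    # balance equation sum(c_i * x_i) = 0 holds exactly.  Minimizing
    # sum((x_i - z_i)^2 / v_i) under that constraint gives
    # x_i = z_i - v_i * c_i * r / s with r the raw imbalance.
    r = sum(c * z for c, z in zip(coeffs, measurements))   # raw imbalance
    s = sum(c * c * v for c, v in zip(coeffs, variances))  # normalization
    return [z - v * c * r / s for z, c, v in zip(measurements, coeffs, variances)]

# Hypothetical node: one metered inflow (100) and two metered outflows (60, 45),
# all meters equally trusted; the 5-unit imbalance is spread across all three.
estimates = reconcile([100.0, 60.0, 45.0], [1.0, -1.0, -1.0], [1.0, 1.0, 1.0])
remainders = [x - z for x, z in zip(estimates, [100.0, 60.0, 45.0])]
```

    A meter with a remainder far larger than its expected noise level is the suspect; in a real network there is one balance equation per node and the same formula generalizes to a matrix solve.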

  13. Validating automated kidney stone volumetry in computed tomography and mathematical correlation with estimated stone volume based on diameter.

    Science.gov (United States)

    Wilhelm, Konrad; Miernik, Arkadiusz; Hein, Simon; Schlager, Daniel; Adams, Fabian; Benndorf, Matthias; Fritz, Benjamin; Langer, Mathias; Hesse, Albrecht; Schoenthaler, Martin; Neubauer, Jakob

    2018-06-02

    To validate the AutoMated UroLithiasis Evaluation Tool (AMULET) software for kidney stone volumetry and compare its performance to standard clinical practice. The maximum diameter and volume of 96 urinary stones were measured as the reference standard by three independent urologists. The same stones were positioned in an anthropomorphic phantom and CT scans were acquired at standard settings. Three independent radiologists blinded to the reference values took manual measurements of the maximum diameter and automatic measurements of maximum diameter and volume. An "expected volume" was calculated from the manual diameter measurements using the formula V = 4/3 πr³. 96 stones were analyzed in the study; we had initially aimed to assess 100, but nine were replaced during data acquisition because of crumbling and 4 had to be excluded because the automated measurement did not work. The mean reference maximum diameter was 13.3 mm (5.2-32.1 mm). Correlation coefficients among all measured outcomes were compared. The correlation of the manual and automatic diameter measurements with the reference was 0.98 and 0.91, respectively. Automated volumetry is possible and significantly more accurate than diameter-based volumetric calculations. To avoid bias in clinical trials, size should be measured as volume. However, automated diameter measurements are not as accurate as manual measurements.
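
    The "expected volume" the study compares against is just the sphere formula applied to the measured diameter; a one-function sketch makes the spherical assumption explicit:

```python
import math

def sphere_volume_from_diameter(d_mm):
    # Expected stone volume in mm^3 under the spherical assumption
    # V = 4/3 * pi * r^3, with r = d/2.
    r = d_mm / 2.0
    return 4.0 / 3.0 * math.pi * r ** 3

# The mean reference diameter above (13.3 mm) treated as a sphere:
v = sphere_volume_from_diameter(13.3)
```

    For elongated or irregular stones this systematically overestimates the true volume, which is one reason diameter-derived volumes correlate worse with the reference than direct volumetry.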

  14. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    International Nuclear Information System (INIS)

    Pontailler, J.-Y.; Hymus, G.J.; Drake, B.G.

    2003-01-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m² plots in February 2000 and two 4 m² plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)
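
    The two quantities at the heart of this comparison, NDVI from band reflectances and the Beer's law inversion of PAR transmission, are each one line of arithmetic. A minimal sketch (reflectance and PAR values invented for illustration):

```python
import math

def ndvi(nir, red):
    # Normalized difference vegetation index from red and near-infrared reflectance.
    return (nir - red) / (nir + red)

def lai_from_par(par_below, par_above, k):
    # Beer's law: transmitted PAR decays as I = I0 * exp(-k * LAI),
    # so LAI = -ln(I / I0) / k.
    return -math.log(par_below / par_above) / k
```

    The study's finding that k varies seasonally means the Beer's law estimate inherits that variation directly, whereas the NDVI-to-LAI calibration only degrades once NDVI saturates in dense canopies.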

  15. Estimation of leaf area index using ground-based remote sensed NDVI measurements: validation and comparison with two indirect techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pontailler, J.-Y. [Univ. Paris-Sud XI, Dept. d'Ecophysiologie Vegetale, Orsay Cedex (France); Hymus, G.J.; Drake, B.G. [Smithsonian Environmental Research Center, Kennedy Space Center, Florida (United States)

    2003-06-01

    This study took place in an evergreen scrub oak ecosystem in Florida. Vegetation reflectance was measured in situ with a laboratory-made sensor in the red (640-665 nm) and near-infrared (750-950 nm) bands to calculate the normalized difference vegetation index (NDVI) and derive the leaf area index (LAI). LAI estimates from this technique were compared with two other nondestructive techniques, intercepted photosynthetically active radiation (PAR) and hemispherical photographs, in four contrasting 4 m² plots in February 2000 and two 4 m² plots in June 2000. We used Beer's law to derive LAI from PAR interception and gap fraction distribution to derive LAI from photographs. The plots were harvested manually after the measurements to determine a 'true' LAI value and to calculate a light extinction coefficient (k). The technique based on Beer's law was affected by a large variation of the extinction coefficient, owing to the larger impact of branches in winter when LAI was low. Hemispherical photographs provided satisfactory estimates, slightly overestimated in winter because of the impact of branches or underestimated in summer because of foliage clumping. NDVI provided the best fit, showing only saturation in the densest plot (LAI = 3.5). We conclude that in situ measurement of NDVI is an accurate and simple technique to nondestructively assess LAI in experimental plots or in crops if saturation remains acceptable. (author)

  16. Validity of food frequency questionnaire-based estimates of long-term long-chain n-3 polyunsaturated fatty acid intake.

    Science.gov (United States)

    Wallin, Alice; Di Giuseppe, Daniela; Burgaz, Ann; Håkansson, Niclas; Cederholm, Tommy; Michaëlsson, Karl; Wolk, Alicja

    2014-01-01

    To evaluate how long-term dietary intake of long-chain n-3 polyunsaturated fatty acids (LCn-3 PUFAs), estimated by repeated food frequency questionnaires (FFQs) over 15 years, is correlated with LCn-3 PUFAs in adipose tissue (AT). Subcutaneous adipose tissue was obtained in 2003-2004 (AT-03) from 239 randomly selected women, aged 55-75 years, after completion of a 96-item FFQ (FFQ-03). All participants had previously returned an identical FFQ in 1997 (FFQ-97) and a 67-item version in 1987-1990 (FFQ-87). Pearson product-moment correlations were used to evaluate associations between intake of total and individual LCn-3 PUFAs as estimated by the three FFQ assessments and AT-03 content (% of total fatty acids). FFQ-estimated mean relative intake of LCn-3 PUFAs (% of total fat intake) increased between all three assessments (FFQ-87, 0.55 ± 0.34; FFQ-97, 0.74 ± 0.64; FFQ-03, 0.88 ± 0.56). Validity, in terms of Pearson correlations between FFQ-03 estimates and AT-03 content, was 0.41 (95% CI 0.30-0.51) for total LCn-3 PUFA and ranged from 0.29 to 0.48 for individual fatty acids; lower correlation was observed among participants with higher percentage body fat. With regard to long-term intake estimates, past dietary intake was also correlated with AT-03 content, with correlation coefficients in the range of 0.21-0.33 and 0.21-0.34 for FFQ-97 and FFQ-87, respectively. The correlations were improved by using average estimates from two or more FFQ assessments. Exclusion of fish oil supplement users (14%) did not alter the correlations. These data indicate reasonable validity of FFQ-based estimates of long-term (up to 15 years) LCn-3 PUFA intake, justifying their use in studies of diet-disease associations.
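
    The validity coefficients above are Pearson product-moment correlations between FFQ estimates and the adipose tissue biomarker, and the improvement from averaging repeated FFQs follows from averaging damping occasion-specific error. A self-contained sketch (the data arrays are placeholders, not study values):

```python
def pearson(x, y):
    # Pearson product-moment correlation between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def average_estimates(*ffq_rounds):
    # Per-subject average across repeated FFQ assessments; averaging damps
    # occasion-specific reporting noise, improving correlation with the biomarker.
    return [sum(vals) / len(vals) for vals in zip(*ffq_rounds)]
```
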

  17. Validity of eyeball estimation for range of motion during the cervical flexion rotation test compared to an ultrasound-based movement analysis system.

    Science.gov (United States)

    Schäfer, Axel; Lüdtke, Kerstin; Breuel, Franziska; Gerloff, Nikolas; Knust, Maren; Kollitsch, Christian; Laukart, Alex; Matej, Laura; Müller, Antje; Schöttker-Königer, Thomas; Hall, Toby

    2018-08-01

    Headache is a common and costly health problem. Although the pathogenesis of headache is heterogeneous, one reported contributing factor is dysfunction of the upper cervical spine. The flexion rotation test (FRT) is a commonly used diagnostic test to detect upper cervical movement impairment. The aim of this cross-sectional study was to investigate the concurrent validity of detecting high cervical ROM impairment during the FRT by comparing measurements established by an ultrasound-based system (gold standard) with eyeball estimation. A secondary aim was to investigate the intra-rater reliability of FRT ROM eyeball estimation. The examiner (6 years' experience) was blinded to the data from the ultrasound-based device and to the symptoms of the patients. The FRT test result (positive or negative) was based on visual estimation of a range of rotation of less than 34° to either side. Concurrently, the range of rotation was evaluated using the ultrasound-based device. A total of 43 subjects with headache (79% female), mean age 35.05 years (SD 13.26), were included. According to the International Headache Society classification, 23 subjects had migraine, 4 tension-type headache, and 16 multiple headache forms. Sensitivity and specificity were 0.96 and 0.89 for combined rotation, indicating good concurrent validity. The area under the ROC curve was 0.95 (95% CI 0.91-0.98) for rotation to both sides. Intra-rater reliability for eyeball estimation was excellent, with Fleiss kappa 0.79 for right rotation and left rotation. The results of this study indicate that the FRT is a valid and reliable test to detect impairment of upper cervical ROM in patients with headache.
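
    The sensitivity and specificity figures reported above come from the usual confusion-matrix definitions; a two-function sketch with hypothetical counts (the study does not report its raw cell counts):

```python
def sensitivity(true_pos, false_neg):
    # Proportion of truly impaired subjects the eyeball FRT flags positive.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Proportion of unimpaired subjects the eyeball FRT correctly clears.
    return true_neg / (true_neg + false_pos)
```
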

  18. Validating estimates of problematic drug use in England

    Directory of Open Access Journals (Sweden)

    Heatlie Heath

    2007-10-01

    Full Text Available Abstract Background UK Government expenditure on combatting drug abuse is based on estimates of illicit drug users, yet the validity of these estimates is unknown. This study aims to assess the face validity of problematic drug use (PDU) and injecting drug use (IDU) estimates for all English Drug Action Teams (DATs) in 2001. The estimates were derived from a statistical model using the Multiple Indicator Method (MIM). Methods Questionnaire study, in which the 149 English Drug Action Teams were asked to evaluate the MIM estimates for their DAT. Results The response rate was 60% and there were no indications of selection bias. Of the responding DATs, 64% thought the PDU estimates were about right or did not dispute them, while 27% considered them too low and 9% too high. The figures for the IDU estimates were 52% (about right), 44% (too low) and 3% (too high). Conclusion This is the first UK study to assess the validity of estimates of problematic and injecting drug misuse. The results highlight the need to consider criterion and face validity when evaluating estimates of the number of drug users.

  19. Temperature based validation of the analytical model for the estimation of the amount of heat generated during friction stir welding

    Directory of Open Access Journals (Sweden)

    Milčić Dragan S.

    2012-01-01

    Full Text Available Friction stir welding is a solid-state welding technique that utilizes the thermomechanical influence of a rotating welding tool on the parent material, resulting in a monolithic joint (weld). At the contact between the welding tool and the parent material, significant stirring and deformation of the parent material occurs, and during this process mechanical energy is partially transformed into heat. The generated heat affects the temperature of the welding tool and parent material, so the proposed analytical model for the estimation of the amount of generated heat can be verified through temperature: analytically determined heat is used for numerical estimation of the temperature of the parent material, and this temperature is compared to the experimentally determined temperature. The numerical solution is obtained using the finite difference method (an explicit scheme with adaptive grid), considering the influence of temperature on the material's conductivity, contact conditions between the welding tool and parent material, material flow around the welding tool, etc. The analytical model shows that 60-100% of the mechanical power delivered to the welding tool is transformed into heat, while the comparison of results shows a maximal relative difference between the analytical and experimental temperatures of about 10%.
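
    The explicit finite-difference scheme mentioned above, stripped of the paper's adaptive grid and temperature-dependent conductivity, reduces in one dimension to the standard FTCS update. A bare constant-coefficient sketch with fixed-temperature ends (all grid values hypothetical):

```python
def step_temperature(T, alpha, dx, dt):
    # One explicit (FTCS) finite-difference step of the 1D heat equation
    # dT/dt = alpha * d2T/dx2, with fixed-temperature ends.
    # The scheme is stable only if r = alpha*dt/dx^2 <= 0.5.
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this grid"
    inner = [T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1]) for i in range(1, len(T) - 1)]
    return [T[0]] + inner + [T[-1]]
```

    The adaptive grid in the paper concentrates nodes near the tool shoulder where gradients are steepest; the update rule at each node has this same stencil form.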

  20. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were typically validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
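
    The combination described here (a k-means background estimate feeding a threshold rule for MTV) can be sketched as follows. The 40%-above-background threshold rule and the two-cluster setup are assumptions for illustration, not the paper's calibrated parameters:

```python
# Sketch of threshold-based MTV segmentation with a k-means background
# estimate. The 0.4 threshold fraction, k = 2 and the voxel volume are
# hypothetical; the real method is calibrated on the NEMA IQ phantom.
import random

def kmeans_1d(values, k=2, iters=50, seed=0):
    random.seed(seed)
    centers = random.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[j].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

def segment_mtv(voxels, voxel_volume_ml=0.1):
    background = kmeans_1d(voxels)[0]                 # lowest cluster mean
    peak = max(voxels)
    threshold = background + 0.4 * (peak - background)  # assumed rule
    n_in = sum(v > threshold for v in voxels)
    return n_in * voxel_volume_ml, threshold

voxels = [1.0] * 90 + [8.0] * 10   # synthetic uptake values (arbitrary units)
mtv, thr = segment_mtv(voxels)
```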

  1. Liver stiffness value-based risk estimation of late recurrence after curative resection of hepatocellular carcinoma: development and validation of a predictive model.

    Directory of Open Access Journals (Sweden)

    Kyu Sik Jung

    Full Text Available Preoperative liver stiffness (LS) measurement using transient elastography (TE) is useful for predicting late recurrence after curative resection of hepatocellular carcinoma (HCC). We developed and validated a novel LS value-based predictive model for late recurrence of HCC. Patients who were due to undergo curative resection of HCC between August 2006 and January 2010 were prospectively enrolled and TE was performed prior to operations by study protocol. The predictive model of late recurrence was constructed based on a multiple logistic regression model. Discrimination and calibration were used to validate the model. Among a total of 139 patients who were finally analyzed, late recurrence occurred in 44 patients, with a median follow-up of 24.5 months (range, 12.4-68.1). We developed a predictive model for late recurrence of HCC using LS value, activity grade II-III, presence of multiple tumors, and indocyanine green retention rate at 15 min (ICG R15), which showed fairly good discrimination capability with an area under the receiver operating characteristic curve (AUROC) of 0.724 (95% confidence intervals [CIs], 0.632-0.816). In the validation, using a bootstrap method to assess discrimination, the AUROC remained largely unchanged between iterations, with an average AUROC of 0.722 (95% CIs, 0.718-0.724). When we plotted a calibration chart for predicted and observed risk of late recurrence, the predicted risk of late recurrence correlated well with observed risk, with a correlation coefficient of 0.873 (P<0.001). A simple LS value-based predictive model could estimate the risk of late recurrence in patients who underwent curative resection of HCC.
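
    The bootstrap check of discrimination used above can be sketched with a rank-based AUROC and a resampling loop. The scores and outcomes below are synthetic stand-ins, not the study's patient data:

```python
# AUROC (probability a positive case outranks a negative one, ties = 0.5)
# plus a nonparametric bootstrap of the statistic.
import random

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_aurocs(scores, labels, n_boot=200, seed=1):
    random.seed(seed)
    n = len(scores)
    stats = []
    while len(stats) < n_boot:
        idx = [random.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:               # need both classes in the resample
            stats.append(auroc([scores[i] for i in idx], ys))
    return sorted(stats)

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]   # synthetic risk scores
labels = [1, 1, 1, 0, 1, 0, 0, 0]                   # synthetic recurrence flags
boot = bootstrap_aurocs(scores, labels)
```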

  2. Development, standardization and validation of nuclear based technologies for estimating microbial protein supply in ruminant livestock for improving productivity

    International Nuclear Information System (INIS)

    Makkar, H.P.S.

    2004-01-01

    The primary constraint to livestock production in developing countries is the scarcity and fluctuating quantity and quality of the year-round feed supply. These countries experience serious shortages of animal feeds and fodders of the conventional type. Natural forages are very variable both in quality and quantity, conventional agro-industrial by-products are scarce and vary seasonally, and grains are required almost exclusively for human consumption. Small farmers in developing countries have limited resources available to them for feeding their ruminant livestock. Poor nutrition results in low rates of reproduction and production as well as increased susceptibility to disease and mortality. Providing adequate good-quality feed to livestock to raise and maintain their productivity is a major challenge to agricultural scientists and policy makers all over the world. Recent advances in ration balancing include manipulation of feed to increase the quantity and quality of protein and energy delivered to the small intestine. Selection of feeds based on high efficiency of microbial protein synthesis in the rumen along with high dry matter digestibility, and development of feeding strategies based on both high efficiency and high microbial protein synthesis in the rumen, will lead to a higher supply of protein post-ruminally. The strategy for improving production has therefore been to maximize the efficiency of utilization of available feed resources in the rumen by providing optimum conditions for microbial growth, and thereby supplementing dietary nutrients to complement and balance the products of rumen digestion to the animal's requirements.

  3. Remote Estimation of Chlorophyll-a in Inland Waters by a NIR-Red-Based Algorithm: Validation in Asian Lakes

    Directory of Open Access Journals (Sweden)

    Gongliang Yu

    2014-04-01

    Full Text Available Satellite remote sensing is a highly useful tool for monitoring chlorophyll-a concentration (Chl-a) in water bodies. Remote sensing algorithms based on near-infrared-red (NIR-red) wavelengths have demonstrated great potential for retrieving Chl-a in inland waters. This study tested the performance of a recently developed NIR-red based algorithm, SAMO-LUT (Semi-Analytical Model Optimizing and Look-Up Tables), using an extensive dataset collected from five Asian lakes. Results demonstrated that Chl-a retrieved by the SAMO-LUT algorithm was strongly correlated with measured Chl-a (R2 = 0.94), with a root-mean-square error (RMSE) and normalized root-mean-square error (NRMS) of 8.9 mg∙m−3 and 72.6%, respectively. However, the SAMO-LUT algorithm yielded large errors for sites where Chl-a was less than 10 mg∙m−3 (RMSE = 1.8 mg∙m−3 and NRMS = 217.9%). This was because differences in water-leaving radiances at the NIR-red wavelengths used in the SAMO-LUT (i.e., 665 nm, 705 nm and 754 nm) were too small due to low concentrations of water constituents. Using a blue-green algorithm (OC4E) instead of the SAMO-LUT for the waters with low constituent concentrations would have reduced the RMSE and NRMS to 1.0 mg∙m−3 and 16.0%, respectively. This indicates that (1) the NIR-red algorithm does not work well when water constituent concentrations are relatively low; (2) different algorithms should be used in light of water constituent concentration; and thus (3) it is necessary to develop a classification method for selecting the appropriate algorithm.
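
    The three wavelengths cited in this abstract (665, 705 and 754 nm) are the ones used by the widely applied three-band NIR-red index, which serves as a minimal illustration of how such algorithms respond to Chl-a absorption. This is not the SAMO-LUT procedure itself, and the linear calibration gain/offset below are hypothetical placeholders:

```python
# Generic three-band NIR-red index built from the bands the abstract cites.
# NOT the SAMO-LUT algorithm; gain/offset are illustrative placeholders that
# in practice come from regional calibration against measured Chl-a.
def three_band_index(r665, r705, r754):
    """(1/R665 - 1/R705) * R754 -- grows with Chl-a absorption at 665 nm."""
    return (1.0 / r665 - 1.0 / r705) * r754

def chla_estimate(r665, r705, r754, gain=23.1, offset=14.0):
    # hypothetical linear calibration of the index to mg/m^3
    return gain * three_band_index(r665, r705, r754) + offset
```

    When constituent concentrations are low, R665 and R705 converge, the index difference term shrinks toward zero, and the retrieval becomes noise-dominated, which is exactly the failure mode the abstract reports.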

  4. Validating the InterVA model to estimate the burden of mortality from verbal autopsy data: a population-based cross-sectional study.

    Directory of Open Access Journals (Sweden)

    Sebsibe Tadesse

    Full Text Available BACKGROUND: In countries with incomplete or no vital registration systems, verbal autopsy data are often reviewed by physicians in order to assign the probable cause of death. But in addition to being time- and energy-consuming, the method is liable to produce inconsistent results. The aim of this study is to validate the InterVA model for estimating the burden of mortality from verbal autopsy data by using physician review as a reference standard. METHODS AND FINDINGS: A population-based cross-sectional study was conducted from March to April, 2012. All adults aged ≥14 years who died between 01 January, 2010 and 15 February, 2012 were included in the study. The verbal autopsy interviews were reviewed by the InterVA model and physicians to estimate cause-specific mortality fractions. Cohen's kappa statistic, sensitivity, specificity, positive predictive value, and negative predictive value were applied to compare the agreement between the InterVA model and the physician review. A total of 408 adult deaths were studied. There was general similarity and only slight differences between the InterVA model and the physicians in assigning cause-specific mortality. The two approaches showed an overall agreement in 298 (73%) cases [kappa = 0.49, 95% CI: 0.37-0.60]. The observed sensitivities and specificities across causes of death categories varied from 13.3% to 81.9% and 77.7% to 99.5%, respectively. CONCLUSIONS: In understanding the burden of disease and setting health intervention priorities in areas that lack reliable vital registration systems, an accurate analysis of verbal autopsies is essential. Therefore, users should be aware of the suboptimal performance of the InterVA model. Similar validation studies need to be undertaken considering the limitations of physician review as a gold standard, since physicians may misinterpret some of the verbal autopsy data and reach a wrong conclusion about the cause of death.
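
    The agreement statistic used above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch for two raters (e.g., InterVA vs. physician cause-of-death assignments, with made-up categories):

```python
# Cohen's kappa for two raters over the same cases:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # chance agreement from each rater's marginal category frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)
```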

  5. Cell type specific DNA methylation in cord blood: A 450K-reference data set and cell count-based validation of estimated cell type composition.

    Science.gov (United States)

    Gervin, Kristina; Page, Christian Magnus; Aass, Hans Christian D; Jansen, Michelle A; Fjeldstad, Heidi Elisabeth; Andreassen, Bettina Kulle; Duijts, Liesbeth; van Meurs, Joyce B; van Zelm, Menno C; Jaddoe, Vincent W; Nordeng, Hedvig; Knudsen, Gunn Peggy; Magnus, Per; Nystad, Wenche; Staff, Anne Cathrine; Felix, Janine F; Lyle, Robert

    2016-09-01

    Epigenome-wide association studies of prenatal exposure to different environmental factors are becoming increasingly common. These studies are usually performed in umbilical cord blood. Since blood comprises multiple cell types with specific DNA methylation patterns, confounding caused by cellular heterogeneity is a major concern. This can be adjusted for using reference data consisting of DNA methylation signatures in cell types isolated from blood. However, the most commonly used reference data set is based on blood samples from adult males and is not representative of the cell type composition in neonatal cord blood. The aim of this study was to generate a reference data set from cord blood to enable correct adjustment of the cell type composition in samples collected at birth. The purity of the isolated cell types was very high for all samples (>97.1%), and clustering analyses showed distinct grouping of the cell types according to hematopoietic lineage. We explored whether this cord blood and the adult peripheral blood reference data sets impact the estimation of cell type composition in cord blood samples from an independent birth cohort (MoBa, n = 1092). This revealed significant differences for all cell types. Importantly, comparison of the cell type estimates against matched cell counts both in the cord blood reference samples (n = 11) and in another independent birth cohort (Generation R, n = 195), demonstrated moderate to high correlation of the data. This is the first cord blood reference data set with a comprehensive examination of the downstream application of the data through validation of estimated cell types against matched cell counts.
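
    Reference-based adjustment of the kind described above models a mixed-tissue methylation profile as a weighted combination of cell-type-specific reference profiles and solves for the weights. Real methods handle many cell types with constrained regression; the two-cell-type case has a closed form, sketched here with synthetic beta values (all numbers hypothetical):

```python
# Toy two-cell-type deconvolution: mixed = w*ref_a + (1-w)*ref_b, solved for
# w by least squares and clipped to a valid proportion. Illustrative only;
# actual reference-based deconvolution spans many cell types.
def estimate_fraction(mixed, ref_a, ref_b):
    num = sum((m - b) * (a - b) for m, a, b in zip(mixed, ref_a, ref_b))
    den = sum((a - b) ** 2 for a, b in zip(ref_a, ref_b))
    w = num / den
    return min(1.0, max(0.0, w))  # clip to [0, 1]

ref_a = [0.9, 0.1, 0.8, 0.2]        # synthetic reference profile, cell type A
ref_b = [0.1, 0.9, 0.2, 0.8]        # synthetic reference profile, cell type B
mixed = [0.7, 0.3, 0.65, 0.35]      # a 75% / 25% mixture of the two
w = estimate_fraction(mixed, ref_a, ref_b)
```

    The abstract's point is that the weights recovered this way are only as good as the reference profiles: adult-blood references misestimate cord-blood composition, which is why a dedicated cord blood reference set was needed.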

  6. Validation of Core Temperature Estimation Algorithm

    Science.gov (United States)

    2016-01-20

    based on an extended Kalman filter, which was developed using field data from 17 young male U.S. Army soldiers with core temperatures ranging from [truncated]. The record also contains fragments of the accompanying MATLAB implementation (a KFMODEL function that estimates core temperature from heart rate with a Kalman filter, supporting batch mode on an entire heart-rate time series), including a default starting core temperature CTstart = 37.1 degrees Celsius and extended Kalman filter parameters a = 1, gamma = 0.022^2 and b_0 = -7887.1 (the remaining coefficients are truncated).
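
    The fragment suggests a scalar extended Kalman filter whose state is core temperature and whose observation is heart rate through a polynomial model. A hedged Python sketch of that structure follows; only a = 1, gamma = 0.022^2, c0 = -7887.1 and the 37.1 C start come from the fragment, while c1, c2 and the observation variance are illustrative placeholders (the record truncates the real b_1, b_2 values):

```python
# Scalar EKF sketch: state = core temperature (CT), observation = heart rate
# via a quadratic model h(CT) = c2*CT**2 + c1*CT + c0. c1, c2 and sigma2 are
# ILLUSTRATIVE placeholders chosen so predicted HR values are plausible.
def ekf_core_temp(hr_series, ct0=37.1, v0=0.0, gamma=0.022**2,
                  sigma2=18.0**2, c2=-4.6, c1=385.0, c0=-7887.1):
    ct, v = ct0, v0
    for hr in hr_series:
        v = v + gamma                      # time update (a = 1: CT persists)
        h = c2 * ct**2 + c1 * ct + c0      # predicted heart rate
        H = 2 * c2 * ct + c1               # Jacobian dh/dCT
        K = v * H / (H**2 * v + sigma2)    # Kalman gain
        ct = ct + K * (hr - h)             # measurement update
        v = (1 - K * H) * v
    return ct

# A sustained heart rate of 100 bpm should pull the estimate above 37.1 C
ct_est = ekf_core_temp([100.0] * 500)
```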

  7. QbD-Based Development and Validation of a Stability-Indicating HPLC Method for Estimating Ketoprofen in Bulk Drug and Proniosomal Vesicular System.

    Science.gov (United States)

    Yadav, Nand K; Raghuvanshi, Ashish; Sharma, Gajanand; Beg, Sarwar; Katare, Om P; Nanda, Sanju

    2016-03-01

    The current studies entail systematic quality by design (QbD)-based development of a simple, precise, cost-effective and stability-indicating high-performance liquid chromatography method for the estimation of ketoprofen. An analytical target profile was defined and critical analytical attributes (CAAs) were selected. Chromatographic separation was accomplished with isocratic, reversed-phase chromatography using a C-18 column, a pH 6.8 phosphate buffer-methanol (50:50 v/v) mobile phase at a flow rate of 1.0 mL/min, and UV detection at 258 nm. Systematic optimization of the chromatographic method was performed using a central composite design, evaluating theoretical plates and peak tailing as the CAAs. The method was validated as per International Conference on Harmonization guidelines, showing high sensitivity and specificity, linearity ranging between 0.05 and 250 µg/mL, a detection limit of 0.025 µg/mL and a quantification limit of 0.05 µg/mL. Precision was demonstrated with a relative standard deviation of 1.21%. Stress degradation studies performed using acid, base, peroxide, thermal and photolytic methods helped in identifying the degradation products in the proniosome delivery systems. The results successfully demonstrated the utility of QbD for optimizing the chromatographic conditions for developing a highly sensitive liquid chromatographic method for ketoprofen.

  8. Validity of Edgeworth expansions for realized volatility estimators

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Veliyev, Bezirgen

    (2009). Second, we show that the validity of the Edgeworth expansions for realized volatility may not cover the optimal two-point distribution wild bootstrap proposed by Gonçalves and Meddahi (2009). Then, we propose a new optimal nonlattice distribution which ensures the second-order correctness...... of the bootstrap. Third, in the presence of microstructure noise, based on our Edgeworth expansions, we show that the new optimal choice proposed in the absence of noise is still valid in noisy data for the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). Finally, we show how...

  9. How Valid are Estimates of Occupational Illness?

    Science.gov (United States)

    Hilaski, Harvey J.; Wang, Chao Ling

    1982-01-01

    Examines some of the methods of estimating occupational diseases and suggests that a consensus on the adequacy and reliability of estimates by the Bureau of Labor Statistics and others is not likely. (SK)

  10. Validating a mass balance accounting approach to using 7Be measurements to estimate event-based erosion rates over an extended period at the catchment scale

    Science.gov (United States)

    Porto, Paolo; Walling, Des E.; Cogliandro, Vanessa; Callegari, Giovanni

    2016-07-01

    Use of the fallout radionuclides cesium-137 and excess lead-210 offers important advantages over traditional methods of quantifying erosion and soil redistribution rates. However, both radionuclides provide information on longer-term (i.e., 50-100 years) average rates of soil redistribution. Beryllium-7, with its half-life of 53 days, can provide a basis for documenting short-term soil redistribution and it has been successfully employed in several studies. However, the approach commonly used introduces several important constraints related to the timing and duration of the study period. A new approach proposed by the authors that overcomes these constraints has been successfully validated using an erosion plot experiment undertaken in southern Italy. Here, a further validation exercise undertaken in a small (1.38 ha) catchment is reported. The catchment was instrumented to measure event sediment yields and beryllium-7 measurements were employed to document the net soil loss for a series of 13 events that occurred between November 2013 and June 2015. In the absence of significant sediment storage within the catchment's ephemeral channel system and of a significant contribution from channel erosion to the measured sediment yield, the estimates of net soil loss for the individual events could be directly compared with the measured sediment yields to validate the former. The close agreement of the two sets of values is seen as successfully validating the use of beryllium-7 measurements and the new approach to obtain estimates of net soil loss for a sequence of individual events occurring over an extended period at the scale of a small catchment.

  11. Online cross-validation-based ensemble learning.

    Science.gov (United States)

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example in which we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
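
    The selection mechanism described above can be sketched in a few lines: each incoming observation first scores every candidate (prequential loss, i.e. predict-then-update), then updates it, and the candidate with the lowest cumulative loss is selected. The two candidate forecasters here are illustrative stand-ins, not the paper's library:

```python
# Online cross-validation over a tiny library of candidate online learners.
# Both candidates start from a 0.0 prediction before seeing any data.
class RunningMean:
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict(self):
        return self.mean
    def update(self, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

class EWMA:
    def __init__(self, alpha=0.3):
        self.alpha, self.mean = alpha, 0.0
    def predict(self):
        return self.mean
    def update(self, y):
        self.mean += self.alpha * (y - self.mean)

def online_cv_select(stream, learners):
    losses = [0.0] * len(learners)
    for y in stream:
        for i, m in enumerate(learners):
            losses[i] += (m.predict() - y) ** 2   # score BEFORE updating
            m.update(y)
    best = min(range(len(learners)), key=losses.__getitem__)
    return best, losses

# On a trending stream the adaptive EWMA should beat the running mean
best, losses = online_cv_select([float(t) for t in range(50)],
                                [RunningMean(), EWMA(0.3)])
```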

  12. Estimation and Validation of RapidEye-Based Time-Series of Leaf Area Index for Winter Wheat in the Rur Catchment (Germany

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2015-03-01

    Full Text Available Leaf Area Index (LAI) is an important variable for numerous processes in various disciplines of bio- and geosciences. In situ measurements are the most accurate source of LAI among the LAI measuring methods, but in situ measurements have the limitation of being labor intensive and site specific. For spatially explicit applications (from regional to continental scales), satellite remote sensing is a promising source for obtaining LAI with different spatial resolutions. However, satellite-derived LAI measurements using empirical models require calibration and validation with in situ measurements. In this study, we attempted to validate a direct LAI retrieval method from remotely sensed images (RapidEye) with in situ LAI (LAIdestr). Remote sensing LAI (LAIrapideye) were derived using different vegetation indices, namely SAVI (Soil Adjusted Vegetation Index) and NDVI (Normalized Difference Vegetation Index). Additionally, the applicability of the newly available red-edge band (RE) was also analyzed through the Normalized Difference Red-Edge index (NDRE) and Soil Adjusted Red-Edge index (SARE). The LAIrapideye obtained from vegetation indices with the red-edge band showed better correlation with LAIdestr (r = 0.88; Root Mean Square Deviation, RMSD = 1.01 and 0.92). This study also investigated the need to apply radiometric/atmospheric correction methods to the time-series of RapidEye Level 3A data prior to LAI estimation. Analysis of the RapidEye Level 3A data set showed that application of the radiometric/atmospheric correction did not improve the correlation of the estimated LAI with in situ LAI.
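
    The indices named above, written out explicitly. The NDVI and SAVI formulas are standard; the red-edge variants follow the same pattern with the red band replaced by RapidEye's red-edge band, and L = 0.5 is the customary SAVI soil-adjustment factor (an assumption here, since the abstract does not state it):

```python
# Vegetation indices from band reflectances (all inputs are reflectances).
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    # L = 0.5 is the customary soil-adjustment factor (assumed)
    return (nir - red) * (1 + L) / (nir + red + L)

def ndre(nir, red_edge):
    # same form as NDVI with the red-edge band substituted
    return (nir - red_edge) / (nir + red_edge)

def sare(nir, red_edge, L=0.5):
    # same form as SAVI with the red-edge band substituted
    return (nir - red_edge) * (1 + L) / (nir + red_edge + L)
```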

  13. Experimental validation of pulsed column inventory estimators

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.; Eiben, K.; Dander, T.; Hakkila, E.A.

    1991-01-01

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs

  14. Validation of equations for pleural effusion volume estimation by ultrasonography.

    Science.gov (United States)

    Hassan, Maged; Rizk, Rana; Essam, Hatem; Abouelnour, Ahmed

    2017-12-01

    To validate the accuracy of previously published equations that estimate pleural effusion volume using ultrasonography. Only equations using simple measurements were tested. Three measurements were taken at the posterior axillary line for each case with effusion: lateral height of effusion (H), distance between collapsed lung and chest wall (C) and distance between lung and diaphragm (D). Cases whose effusion was aspirated to dryness were included and the drained volume was recorded. The intra-class correlation coefficient (ICC) was used to determine the predictive accuracy of five equations against the actual volume of aspirated effusion. 46 cases with effusion were included. The most accurate equation in predicting effusion volume was (H + D) × 70 (ICC 0.83). The simplest and yet accurate equation was H × 100 (ICC 0.79). Pleural effusion height measured by ultrasonography gives a reasonable estimate of effusion volume. Incorporating the distance between lung base and diaphragm into the estimation improves accuracy from 79% with the first method to 83% with the latter.
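
    The two best-performing equations reported above are simple enough to state directly in code, assuming the ultrasound distances are in centimetres and the volume comes out in millilitres (the abstract does not restate the units, so that convention is an assumption here):

```python
# The two bedside equations from the abstract (distances assumed in cm,
# volume in mL).
def effusion_volume_hd(height_cm, lung_diaphragm_cm):
    """(H + D) x 70 -- the most accurate equation reported (ICC 0.83)."""
    return (height_cm + lung_diaphragm_cm) * 70

def effusion_volume_h(height_cm):
    """H x 100 -- the simplest reasonably accurate equation (ICC 0.79)."""
    return height_cm * 100
```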

  15. Solar radiation estimation based on the insolation

    International Nuclear Information System (INIS)

    Assis, F.N. de; Steinmetz, S.; Martins, S.R.; Mendez, M.E.G.

    1998-01-01

    A series of daily global solar radiation data measured by an Eppley pyranometer was used to test PEREIRA and VILLA NOVA's (1997) model to estimate the potential of radiation based on the instantaneous values measured at solar noon. The model also allows estimation of the parameters of PRESCOTT's equation (1940), assuming a = 0.29 cosφ. The results demonstrated the model's validity for the studied conditions. Simultaneously, the hypothesis of generalizing the use of insolation-based radiation estimation formulas, using K = Ko(0.29 cosφ + 0.50 n/N), was analysed and confirmed.
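
    The formula quoted in the abstract, K = Ko(0.29 cosφ + 0.50 n/N), estimates global radiation K from extraterrestrial radiation Ko, latitude φ, and relative insolation n/N (observed over maximum possible sunshine hours). Written out directly:

```python
# Prescott-type estimate with the coefficients quoted in the abstract:
# K = Ko * (0.29 * cos(phi) + 0.50 * n/N)
import math

def prescott_radiation(Ko, latitude_deg, n, N):
    phi = math.radians(latitude_deg)
    return Ko * (0.29 * math.cos(phi) + 0.50 * n / N)
```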

  16. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    Science.gov (United States)

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides

  17. Well-founded cost estimation validated by experience

    International Nuclear Information System (INIS)

    LaGuardia, T.S.

    2005-01-01

    Full text: Reliable cost estimating is one of the most important elements of decommissioning planning. Alternative technologies may be evaluated and compared based on their efficiency and effectiveness, and measured against a baseline cost as to the feasibility and benefits derived from the technology. When the plan is complete, those cost considerations ensure that it is economically sound and practical for funding. Estimates of decommissioning costs have been performed and published by many organizations for many different applications. The results often vary because of differences in the work scope: labor force cost, monetary considerations, oversight costs, the specific contaminated materials involved, the waste stream and peripheral costs associated with that type of waste, or applicable environmental compliance requirements. Many of these differences are unavoidable, since a reasonable degree of reliability and accuracy can only be achieved by developing decommissioning cost estimates on a case-by-case, site-specific basis. This paper describes the estimating methodology and process applied to develop decommissioning cost estimates. A major effort has been made to standardize these methodologies and to understand the assumptions and bases that drive the costs. However, estimates are only as accurate as the information available from which to derive the costs. This information includes the assumptions of scope of the work, labour cost inputs, inflationary effects, and financial analyses that project these costs to the year of expenditure. Attempts at comparison of estimates for two facilities of similar design and size must clearly identify the assumptions used in developing the estimate, and comparison of actual costs versus estimated costs must reflect these same assumptions. For the nuclear industry to grow, decommissioning estimating tools must improve to keep pace with changing technology, regulations and stakeholder issues.
The decommissioning industry needs

  18. Estimating activity energy expenditure: how valid are physical activity questionnaires?

    Science.gov (United States)

    Neilson, Heather K; Robson, Paula J; Friedenreich, Christine M; Csizmadi, Ilona

    2008-02-01

    Activity energy expenditure (AEE) is the modifiable component of total energy expenditure (TEE) derived from all activities, both volitional and nonvolitional. Because AEE may affect health, there is interest in its estimation in free-living people. Physical activity questionnaires (PAQs) could be a feasible approach to AEE estimation in large populations, but it is unclear whether or not any PAQ is valid for this purpose. Our aim was to explore the validity of existing PAQs for estimating usual AEE in adults, using doubly labeled water (DLW) as a criterion measure. We reviewed 20 publications that described PAQ-to-DLW comparisons, summarized study design factors, and appraised criterion validity using mean differences (AEE(PAQ) - AEE(DLW), or TEE(PAQ) - TEE(DLW)), 95% limits of agreement, and correlation coefficients (AEE(PAQ) versus AEE(DLW) or TEE(PAQ) versus TEE(DLW)). Only 2 of 23 PAQs assessed most types of activity over the past year and indicated acceptable criterion validity, with mean differences (TEE(PAQ) - TEE(DLW)) of 10% and 2% and correlation coefficients of 0.62 and 0.63, respectively. At the group level, neither overreporting nor underreporting was more prevalent across studies. We speculate that, aside from reporting error, discrepancies between PAQ and DLW estimates may be partly attributable to 1) PAQs not including key activities related to AEE, 2) PAQs and DLW ascertaining different time periods, or 3) inaccurate assignment of metabolic equivalents to self-reported activities. Small sample sizes, use of correlation coefficients, and limited information on individual validity were problematic. Future research should address these issues to clarify the true validity of PAQs for estimating AEE.

  19. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).

  20. Observer-Based Human Knee Stiffness Estimation.

    Science.gov (United States)

    Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen

    2017-05-01

    We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and the torques of the muscle-tendon complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of dominating model nonlinearities and nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and in a test bench, specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation even in the case of knee stiffening due to antagonistic coactivation. We have shown the principle function of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.
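    The observer above updates its feedback gain with an unscented Kalman filter, whose core step is the unscented transform: propagate deterministically chosen sigma points through the nonlinearity and recombine them. A minimal scalar sketch of that transform, for illustration only (the simplified symmetric weights and the alpha/kappa defaults are my own choices, not taken from the paper):

    ```python
    import math

    def unscented_transform(mean, var, f, alpha=1.0, kappa=2.0):
        """Propagate a scalar Gaussian (mean, var) through a nonlinear
        function f using a simplified unscented transform (n = 1 state
        dimension, identical mean/covariance weights)."""
        n = 1
        lam = alpha ** 2 * (n + kappa) - n
        spread = math.sqrt((n + lam) * var)
        # Sigma points: the mean plus symmetric deviations.
        sigma = [mean, mean + spread, mean - spread]
        w0 = lam / (n + lam)
        wi = 1.0 / (2.0 * (n + lam))
        weights = [w0, wi, wi]
        # Propagate sigma points through the nonlinearity and recombine.
        y = [f(s) for s in sigma]
        y_mean = sum(w * v for w, v in zip(weights, y))
        y_var = sum(w * (v - y_mean) ** 2 for w, v in zip(weights, y))
        return y_mean, y_var
    ```

    For an affine function the transform is exact, which is a quick sanity check on any implementation.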

  1. The relative validity of a retrospective estimate of food consumption based on a current dietary history and a food frequency list.

    Science.gov (United States)

    Bakkum, A; Bloemberg, B; van Staveren, W A; Verschuren, M; West, C E

    1988-01-01

    The relative validity of information on food consumption in the distant past was assessed by combining a dietary history (referring to the recent past) with a food frequency list (monitoring major changes over the past 12-14 years). This approach was evaluated in a study of two groups of apparently healthy elderly people (mean age 80 years) who had participated in a food consumption study 12-14 years before the start of the present study. One group consisted of 18 harbor employees who retired subsequent to the initial assessment of food intake. On average, each member of this group had reduced his food consumption by about 1,000 kcal. The other group consisted of 46 elderly men and women who had retired before their food consumption was measured initially. This group had not markedly changed their food intake. The results showed that both groups overestimated changes in their food intake and that the systematic overestimation and random error were similar for both groups. If the men in both groups were combined to form one group, a valid ranking of subjects into small and large consumers of energy and most of the selected nutrients was possible. However, current food intake influenced the accuracy of the measurement of past food intake.

  2. View Estimation Based on Value System

    Science.gov (United States)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

    Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: he/she updates a model of his/her own estimated view while imitating the behavior observed from the caregiver, based on minimizing the estimation error of the reward during the behavior. From this viewpoint, this paper presents a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view during imitation of the caregiver's observed behavior, is discussed.
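    The update rule described above is the standard temporal-difference idea: adjust an estimate by a fraction of the TD error. A minimal TD(0) value-update sketch (generic reinforcement-learning code, not the authors' view-estimation implementation):

    ```python
    def td0_update(values, state, next_state, reward, alpha=0.1, gamma=0.95):
        """One TD(0) step: move V(s) toward the bootstrapped target
        r + gamma * V(s'), by a step of size alpha times the TD error."""
        td_error = reward + gamma * values[next_state] - values[state]
        values[state] += alpha * td_error
        return td_error
    ```

    In the paper's scheme, the same TD error that drives this value update also drives the update of the view-estimation parameters.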

  3. Validation of estimated glomerular filtration rate equations for Japanese children.

    Science.gov (United States)

    Gotoh, Yoshimitsu; Uemura, Osamu; Ishikura, Kenji; Sakai, Tomoyuki; Hamasaki, Yuko; Araki, Yoshinori; Hamda, Riku; Honda, Masataka

    2018-01-25

    The gold standard for evaluation of kidney function is renal inulin clearance (Cin). However, the methodology for Cin is complicated and difficult, especially for younger children and/or patients with bladder dysfunction. Therefore, we developed a simpler method for obtaining the estimated glomerular filtration rate (eGFR) using equations based on the values of several biomarkers, i.e., serum creatinine (Cr), serum cystatin C (cystC), serum beta-2 microglobulin (β2-MG), and creatinine clearance (Ccr). The purpose of the present study was to validate these equations with a new data set. To validate each equation, we used data from 140 patients with CKD and a clinical need for Cin, using the measured GFR (mGFR). We compared the results of each eGFR equation with the mGFR using the mean error (ME), root mean square error (RMSE), P30, and Bland-Altman analysis. The ME of the Cr-, cystC-, β2-MG-, and Ccr-based eGFR was 15.8 ± 13.0, 17.2 ± 16.5, 15.4 ± 14.3, and 10.6 ± 13.0 ml/min/1.73 m², respectively. The RMSE was 29.5, 23.8, 20.9, and 16.7, respectively. The P30 was 79.4, 71.1, 69.5, and 92.9%, respectively. The Bland-Altman bias analysis showed values of 4.0 ± 18.6, 5.3 ± 16.8, 12.7 ± 17.0, and 2.5 ± 17.2 ml/min/1.73 m², respectively, for these parameters. The bias of each eGFR equation was not large. Therefore, each eGFR equation could be used.
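    The agreement statistics used in this validation can be computed directly from paired estimates and measurements. A sketch under common definitions (note: the record does not state whether its ME is signed or absolute; this sketch uses the absolute form, with the signed mean reported separately as the Bland-Altman bias):

    ```python
    import math

    def agreement_stats(estimated, measured, p30_tol=0.30):
        """Agreement statistics for validating eGFR equations against
        measured GFR: mean absolute error, RMSE, P30 (percentage of
        estimates within +/-30% of the measurement), and the
        Bland-Altman mean difference (bias)."""
        diffs = [e - m for e, m in zip(estimated, measured)]
        n = len(diffs)
        mae = sum(abs(d) for d in diffs) / n
        rmse = math.sqrt(sum(d * d for d in diffs) / n)
        p30 = 100.0 * sum(abs(d) <= p30_tol * m
                          for d, m in zip(diffs, measured)) / n
        bias = sum(diffs) / n  # Bland-Altman mean difference
        return mae, rmse, p30, bias
    ```

    A full Bland-Altman analysis would also report limits of agreement (bias ± 1.96 SD of the differences), omitted here for brevity.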

  4. Summary of the co-ordinated research project on development, standardization and validation of nuclear based technologies for estimating microbial protein supply in ruminant livestock for improving productivity

    International Nuclear Information System (INIS)

    Jayasuriya, M.C.N.

    1999-01-01

    A major constraint to animal production in developing countries is poor nutrition due to an inadequate or fluctuating nutrient supply. This results in low rates of reproduction and production as well as increased susceptibility to disease and mortality. Microbial cells formed as a result of rumen degradation of carbohydrates under anaerobic conditions are a major source of protein for ruminants. They provide the majority of the amino acids that the host animal requires for tissue maintenance, growth and production. In roughage-fed ruminants, micro-organisms are virtually the only source of protein. Therefore, knowledge of the microbial contribution to the nutrition of the host animal is essential to developing feed supplementation strategies for improving ruminant production. While this factor has been recognized for many years, it has been extremely difficult to determine the microbial protein contribution to ruminant nutrition. The methods generally used for determining microbial protein production depend on the use of natural microbial markers such as RNA (ribonucleic acid) and DAPA (diaminopimelic acid) or of the isotopes ³⁵S, ¹⁵N or ³²P. However, these methods involve surgical intervention such as post-rumen cannulation and complex procedures that require accurate and quantitative information on both digesta and microbial marker flow. A colorimetric technique using enzymatic procedures was developed for measuring purine derivatives (PD) in urine under a Technical Contract. With knowledge of the amount of PD excreted in the urine, the microbial protein supply to the host animal can be estimated. The principle of the method is that nucleic acids leaving the rumen are essentially of microbial origin. The nucleic acids are extensively digested in the small intestine and the resulting purines are absorbed

  5. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    Science.gov (United States)

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. 
As a particular special
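    The estimand discussed above, the treatment-specific mean controlling for baseline covariates, is often written first as an augmented inverse-probability-weighted (AIPW) estimator before any targeting step. A plain AIPW sketch (not the authors' TMLE/C-TMLE machinery; `g_hat` and `q_hat` are placeholders for already-fitted nuisance estimators):

    ```python
    def aipw_treated_mean(W, A, Y, g_hat, q_hat):
        """Doubly robust (AIPW) estimate of the treatment-specific mean
        E[Y(1)], given covariates W, binary treatment A, outcome Y, a
        fitted propensity score g_hat(w) ~ P(A=1 | W=w), and a fitted
        outcome regression q_hat(w) ~ E[Y | A=1, W=w]."""
        n = len(Y)
        total = 0.0
        for w, a, y in zip(W, A, Y):
            g = g_hat(w)
            q = q_hat(w)
            total += q + (a / g) * (y - q)  # augmented IPW term
        return total / n
    ```

    The estimator is consistent if either nuisance fit is correct; the paper's contribution concerns how to fit those nuisances so that valid inference survives data-adaptive estimation.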

  6. Observers for vehicle tyre/road forces estimation: experimental validation

    Science.gov (United States)

    Doumiati, M.; Victorino, A.; Lechner, D.; Baffet, G.; Charara, A.

    2010-11-01

    The motion of a vehicle is governed by the forces generated between the tyres and the road. Knowledge of these vehicle dynamic variables is important for vehicle control systems that aim to enhance vehicle stability and passenger safety. This study introduces a new estimation process for tyre/road forces. It presents many benefits over existing state-of-the-art works within the dynamic estimation framework. One of its major contributions is a detailed discussion of the vertical and lateral tyre forces at each tyre. The proposed method is based on the dynamic response of a vehicle instrumented with potentially integrated sensors. The estimation process is separated into two principal blocks. The role of the first block is to estimate vertical tyre forces, whereas in the second block two observers are proposed and compared for the estimation of lateral tyre/road forces. The different observers are based on a prediction/estimation Kalman filter. The performance of this concept is tested and compared with real experimental data using a laboratory car. Experimental results show that the proposed approach is a promising technique to provide accurate estimation. Thus, it can be considered a practical low-cost solution for calculating vertical and lateral tyre/road forces.
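    Both observers in this record are built around the Kalman prediction/estimation cycle. A minimal scalar predict/update sketch for orientation (illustrative only; the paper's observers are multivariate and model-specific, and the f/h/q/r defaults here are arbitrary):

    ```python
    def kalman_step(x, p, z, f=1.0, h=1.0, q=0.01, r=0.1):
        """One predict/update cycle of a scalar Kalman filter with state
        transition x' = f*x (process noise variance q) and measurement
        z = h*x (measurement noise variance r)."""
        # Predict.
        x_pred = f * x
        p_pred = f * p * f + q
        # Update with measurement z.
        k = p_pred * h / (h * p_pred * h + r)  # Kalman gain
        x_new = x_pred + k * (z - h * x_pred)
        p_new = (1.0 - k * h) * p_pred
        return x_new, p_new
    ```

    Each measurement pulls the estimate toward the observation while shrinking the estimation variance, which is the mechanism the two lateral-force observers share.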

  7. Validation of limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring - is a separate validation group required?

    NARCIS (Netherlands)

    Proost, J. H.

    Objective: Limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring are usually validated in a separate group of patients, according to published guidelines. The aim of this study is to evaluate the validation of LSM by comparing independent validation with cross-validation

  9. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    1997-01-01

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...
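    The controlled averaging of triggered time segments that defines the Random Decrement technique can be sketched as follows (an illustrative level-crossing variant; the trigger condition and segment length are my own choices, not details from the record):

    ```python
    def random_decrement(signal, trigger, seg_len):
        """Random Decrement signature: average all length-seg_len
        segments of a response signal that start where the signal
        up-crosses `trigger`. The averaging cancels the random part,
        leaving a free-decay-like function whose Fourier transform can
        be used in FRF estimation."""
        segments = []
        for i in range(1, len(signal) - seg_len):
            if signal[i - 1] < trigger <= signal[i]:  # up-crossing trigger
                segments.append(signal[i:i + seg_len])
        if not segments:
            raise ValueError("no trigger points found")
        n = len(segments)
        return [sum(seg[k] for seg in segments) / n for k in range(seg_len)]
    ```

    Because the averaging happens in the time domain before any Fourier transformation, leakage is reduced relative to transforming the raw records directly, which is the point the abstract makes.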

  10. On the validity of time-dependent AUC estimators.

    Science.gov (United States)

    Schmid, Matthias; Kestler, Hans A; Potapov, Sergej

    2015-01-01

    Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied.
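    Discrimination measures for survival models are typically concordance-type statistics. As background, here is a sketch of Harrell's C-index, the simplest such estimator (shown for illustration; it is not one of the time-dependent AUC estimators studied in the paper, and it handles censoring only through the usable-pair rule):

    ```python
    def concordance_index(times, events, risks):
        """Harrell's C: among usable pairs (the earlier time is an
        observed event), count pairs where the shorter survival time
        carries the higher predicted risk; ties in risk count one half."""
        concordant, usable = 0.0, 0
        n = len(times)
        for i in range(n):
            for j in range(n):
                if times[i] < times[j] and events[i]:  # i fails first, observed
                    usable += 1
                    if risks[i] > risks[j]:
                        concordant += 1.0
                    elif risks[i] == risks[j]:
                        concordant += 0.5
        return concordant / usable
    ```

    A value of 0.5 indicates no discrimination and 1.0 perfect discrimination; the paper's point is that such estimates can be over-optimistic when regularity assumptions fail.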

  11. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from......In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared...... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  12. Validity evidence based on test content.

    Science.gov (United States)

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  13. Validation of differential gene expression algorithms: Application comparing fold-change estimation to hypothesis testing

    Directory of Open Access Journals (Sweden)

    Bickel David R

    2010-01-01

    Full Text Available Abstract Background Sustained research on the problem of determining which genes are differentially expressed on the basis of microarray data has yielded a plethora of statistical algorithms, each justified by theory, simulation, or ad hoc validation and yet differing in practical results from equally justified algorithms. Recently, a concordance method that measures agreement among gene lists has been introduced to assess various aspects of differential gene expression detection. This method has the advantage of basing its assessment solely on the results of real data analyses, but as it requires examining gene lists of given sizes, it may be unstable. Results Two methodologies for assessing predictive error are described: a cross-validation method and a posterior predictive method. As a nonparametric method of estimating prediction error from observed expression levels, cross-validation provides an empirical approach to assessing algorithms for detecting differential gene expression that is fully justified for large numbers of biological replicates. Because it leverages the knowledge that only a small portion of genes are differentially expressed, the posterior predictive method is expected to provide more reliable estimates of algorithm performance, allaying concerns about limited biological replication. In practice, the posterior predictive method can assess when its approximations are valid and when they are inaccurate. Under conditions in which its approximations are valid, it corroborates the results of cross-validation. Both comparison methodologies are applicable to both single-channel and dual-channel microarrays. For the data sets considered, estimating prediction error by cross-validation demonstrates that empirical Bayes methods based on hierarchical models tend to outperform algorithms based on selecting genes by their fold changes or by non-hierarchical model-selection criteria. (The latter two approaches have comparable
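    The cross-validation methodology above rests on the usual fold-out estimate of prediction error. A generic k-fold sketch (illustrative; `fit` and `predict` are placeholders for any expression-modeling procedure, and the interleaved fold assignment is a simplification):

    ```python
    def cross_val_error(xs, ys, fit, predict, k=5):
        """k-fold cross-validation estimate of squared prediction error:
        hold out each fold, fit on the rest, average held-out errors."""
        n = len(xs)
        total, count = 0.0, 0
        for fold in range(k):
            test_idx = set(range(fold, n, k))  # simple interleaved folds
            train_x = [x for i, x in enumerate(xs) if i not in test_idx]
            train_y = [y for i, y in enumerate(ys) if i not in test_idx]
            model = fit(train_x, train_y)
            for i in test_idx:
                total += (predict(model, xs[i]) - ys[i]) ** 2
                count += 1
        return total / count
    ```

    The posterior predictive alternative replaces the held-out empirical error with an error averaged over a fitted posterior, trading nonparametric validity for stability under limited replication.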

  14. Validation of Persian rapid estimate of adult literacy in dentistry.

    Science.gov (United States)

    Pakpour, Amir H; Lawson, Douglas M; Tadakamadla, Santosh K; Fridlund, Bengt

    2016-05-01

    The aim of the present study was to establish the psychometric properties of the Rapid Estimate of Adult Literacy in Dentistry-99 (REALD-99) in the Persian language for use in an Iranian population (IREALD-99). A total of 421 participants with a mean age of 28 years (59% male) were included in the study. Participants included those who were 18 years or older and those residing in Quazvin (a city close to Tehran), Iran. A forward-backward translation process was used for the IREALD-99. The Test of Functional Health Literacy in Dentistry (TOFHLiD) was also administered. The validity of the IREALD-99 was investigated by comparing the IREALD-99 across categories of education and income levels. To investigate further, the correlation of the IREALD-99 with the TOFHLiD was computed. A principal component analysis (PCA) was performed on the data to assess unidimensionality and a strong first factor. The Rasch mathematical model was used to evaluate the contribution of each item to the overall measure, and whether the data were invariant to differences in sex. Reliability was estimated with Cronbach's α and test-retest correlation. Cronbach's alpha for the IREALD-99 was 0.98, indicating strong internal consistency. The test-retest correlation was 0.97. IREALD-99 scores differed across education levels. IREALD-99 scores were positively related to TOFHLiD scores (ρ = 0.72, P < 0.01). In addition, IREALD-99 showed positive correlation with self-rated oral health status (ρ = 0.31, P < 0.01) as evidence of convergent validity. The PCA indicated a strong first component, five times the strength of the second component and nine times the third. The empirical data were a close fit with the Rasch mathematical model. There was not a significant difference in scores with respect to income level (P = 0.09), and only the very lowest income level was significantly different (P < 0.01). The IREALD-99 exhibited excellent reliability on repeated administrations, as well as internal
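    Internal consistency above is summarized with Cronbach's alpha, which compares summed item variances to the variance of the total score. A small sketch of the standard formula (generic psychometrics code, not tied to the IREALD-99 data):

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha for internal consistency. `items` is a list
        of item score lists, one inner list per item, all over the same
        respondents: alpha = k/(k-1) * (1 - sum(item vars)/var(totals))."""
        k = len(items)
        n = len(items[0])
        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
        item_vars = sum(var(it) for it in items)
        totals = [sum(it[j] for it in items) for j in range(n)]
        return (k / (k - 1)) * (1 - item_vars / var(totals))
    ```

    Alpha approaches 1 as items covary strongly, which is why the reported 0.98 is read as strong internal consistency.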

  15. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.
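    Premise (3) above, checking structural properties of a knowledge base with graph algorithms, can be illustrated with a simple acyclicity check on a rule dependency graph (a hypothetical example; the toolset described in the record is not available to reproduce):

    ```python
    def has_cycle(rules):
        """Check one structural property of a knowledge base: whether the
        rule dependency graph is acyclic. `rules` maps a fact to the list
        of facts it depends on; a cycle means circular inference."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {node: WHITE for node in rules}
        def visit(node):
            color[node] = GRAY
            for dep in rules.get(node, []):
                if color.get(dep, WHITE) == GRAY:
                    return True  # back edge: cycle found
                if color.get(dep, WHITE) == WHITE and dep in rules and visit(dep):
                    return True
            color[node] = BLACK
            return False
        return any(color[n] == WHITE and visit(n) for n in rules)
    ```

    Consistency and integrity conditions can be checked in the same spirit, by translating each condition into a reachability or connectivity property of the graph.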

  16. Parameter extraction and estimation based on the PV panel outdoor ...

    African Journals Online (AJOL)

    The experimental data obtained are validated and compared with the estimated results obtained through simulation based on the manufacture's data sheet. The simulation is based on the Newton-Raphson iterative method in MATLAB environment. This approach aids the computation of the PV module's parameters at any ...

  17. Validation of generic cost estimates for construction-related activities at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Simion, G.; Sciacca, F.; Claiborne, E.; Watlington, B.; Riordan, B.; McLaughlin, M.

    1988-05-01

    This report represents a validation study of the cost methodologies and quantitative factors derived in Labor Productivity Adjustment Factors and Generic Methodology for Estimating the Labor Cost Associated with the Removal of Hardware, Materials, and Structures From Nuclear Power Plants. This cost methodology was developed to support NRC analysts in determining generic estimates of removal, installation, and total labor costs for construction-related activities at nuclear generating stations. In addition to the validation discussion, this report reviews the generic cost analysis methodology employed. It also discusses each of the individual cost factors used in estimating the costs of physical modifications at nuclear power plants. The generic estimating approach presented uses the "greenfield" or new-plant construction installation costs compiled in the Energy Economic Data Base (EEDB) as a baseline. These baseline costs are then adjusted to account for labor productivity, radiation fields, learning curve effects, and impacts on ancillary systems or components. For comparisons of estimated vs. actual labor costs, approximately four dozen actual cost data points (as reported by 14 nuclear utilities) were obtained. Detailed background information was collected on each individual data point to give the best understanding possible so that the labor productivity factors, removal factors, etc., could judiciously be chosen. This study concludes that cost estimates that are typically within 40% of the actual values can be generated by prudently using the methodologies and cost factors investigated herein.

  18. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
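    A common way to guarantee a physically valid reconstruction is to project onto the set of density matrices: clip negative eigenvalues and renormalize the trace. The sketch below is that simple projection, offered as a surrogate illustration; the record's protocol is full maximum-likelihood estimation, which is more involved:

    ```python
    import numpy as np

    def nearest_density_matrix(rho):
        """Map a Hermitian matrix (e.g., a naive tomographic
        reconstruction) to a valid density matrix by clipping negative
        eigenvalues to zero and renormalizing the trace to one."""
        vals, vecs = np.linalg.eigh(rho)
        vals = np.clip(vals, 0.0, None)   # enforce positivity
        vals = vals / vals.sum()          # enforce unit trace
        return (vecs * vals) @ vecs.conj().T  # V diag(vals) V†
    ```

    The output is positive semidefinite with unit trace by construction, which is the physical-validity requirement the naive reconstruction can violate.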

  19. Importance of Statistical Evidence in Estimating Valid DEA Scores.

    Science.gov (United States)

    Barnum, Darold T; Johnson, Matthew; Gleason, John M

    2016-03-01

    Data Envelopment Analysis (DEA) allows healthcare scholars to measure productivity in a holistic manner. It combines a production unit's multiple outputs and multiple inputs into a single measure of its overall performance relative to other units in the sample being analyzed. It accomplishes this task by aggregating a unit's weighted outputs and dividing the output sum by the unit's aggregated weighted inputs, choosing output and input weights that maximize its output/input ratio when the same weights are applied to other units in the sample. Conventional DEA assumes that inputs and outputs are used in different proportions by the units in the sample. So, for the sample as a whole, inputs have been substituted for each other and outputs have been transformed into each other. Variables are assigned different weights based on their marginal rates of substitution and marginal rates of transformation. If in truth inputs have not been substituted nor outputs transformed, then there will be no marginal rates and therefore no valid basis for differential weights. This paper explains how to statistically test for the presence of substitutions among inputs and transformations among outputs. Then, it applies these tests to the input and output data from three healthcare DEA articles, in order to identify the effects on DEA scores when input substitutions and output transformations are absent in the sample data. It finds that DEA scores are badly biased when substitution and transformation are absent and conventional DEA models are used.

  20. Validation Of Critical Knowledge-Based Systems

    Science.gov (United States)

    Duke, Eugene L.

    1992-01-01

    Report discusses approach to verification and validation of knowledge-based systems. Also known as "expert systems". Concerned mainly with development of methodologies for verification of knowledge-based systems critical to flight-research systems; e.g., fault-tolerant control systems for advanced aircraft. Subject matter also has relevance to knowledge-based systems controlling medical life-support equipment or commuter railroad systems.

  1. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
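    The residual-variance idea in this record can be illustrated with the classical first-order difference estimator (Rice's estimator), which the paper generalizes; this sketch is the textbook version, not the authors' optimal-sequence method:

    ```python
    def diff_variance(y):
        """Difference-based estimate of the residual variance in a
        nonparametric or partially linear regression (Rice's first-order
        estimator): sigma^2 ~ sum (y[i+1]-y[i])^2 / (2(n-1)). Adjacent
        differencing removes the smooth trend without fitting it."""
        n = len(y)
        return sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1)) / (2.0 * (n - 1))
    ```

    Because the smooth component barely changes between neighboring design points, the differences are dominated by noise, which is what makes the estimator work without estimating the regression function itself.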

  3. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
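    Analogy-based estimation of the kind evaluated here reduces, in its simplest form, to k-nearest-neighbor prediction over historical project features. A minimal sketch (illustrative; the NASA model adds clustering and feature handling not shown here):

    ```python
    import math

    def knn_effort_estimate(projects, target, k=3):
        """Analogy-based (k-nearest-neighbor) effort estimation: find
        the k historical projects whose feature vectors are closest to
        the target in Euclidean distance and average their recorded
        efforts. `projects` is a list of (features, effort) pairs."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        nearest = sorted(projects, key=lambda p: dist(p[0], target))[:k]
        return sum(effort for _, effort in nearest) / k
    ```

    In practice the features (size, complexity, heritage, and so on) are normalized before computing distances so that no single attribute dominates the analogy.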

  4. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, Cetin; Williams, Brian; McClure, Patrick; Nelson, Ralph A.

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments served as the primary tool for design and for understanding nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate models leading to predictive simulations. The cost-saving goals of these programs will require minimizing the number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities into the validation of predictive tools, and traditional methodologies will have to be modified to address these emerging issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behavior in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow results to be extended to areas of the validation domain not directly tested with experiments, which might include extending the modeling and simulation (M and S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost.

  5. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory; Williams, Brian [Los Alamos National Laboratory; Mc Clure, Patrick [Los Alamos National Laboratory; Nelson, Ralph A [IDAHO NATIONAL LAB

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments served as the primary tool for design and for understanding nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate models leading to predictive simulations. The cost-saving goals of these programs will require minimizing the number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities into the validation of predictive tools, and traditional methodologies will have to be modified to address these emerging issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behavior in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow results to be extended to areas of the validation domain not directly tested with experiments, which might include extending the modeling and simulation (M&S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost.

  6. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  7. Entropy Evaluation Based on Value Validity

    Directory of Open Access Journals (Sweden)

    Tarald O. Kvålseth

    2014-09-01

    Full Text Available Besides its importance in statistical physics and information theory, the Boltzmann-Shannon entropy S has become one of the most widely used and misused summary measures of various attributes (characteristics) in diverse fields of study. It has also been the subject of extensive and perhaps excessive generalizations. This paper introduces the concept of and criteria for value validity as a means of determining whether an entropy takes on values that reasonably reflect the attribute being measured and that permit different types of comparisons to be made for different probability distributions. While neither S nor its relative entropy equivalent S* meets the value-validity conditions, certain power functions of S and S* do to a considerable extent. No parametric generalization offers any advantage over S in this regard. A measure based on Euclidean distances between probability distributions is introduced as a potential entropy that complies fully with the value-validity requirements, and its statistical inference procedure is discussed.
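As a rough illustration of the contrast drawn above, the sketch below computes Shannon entropy alongside a generic Euclidean-distance measure of a distribution's departure from certainty; the distance formula is an illustration of the idea, not the paper's exact proposal:

```python
import math

def shannon(p):
    """Boltzmann-Shannon entropy S of a probability vector (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def euclid_from_certainty(p):
    """Euclidean distance from p to the nearest degenerate distribution.

    Illustrative distance-based alternative: zero for a certain outcome,
    maximal for the uniform distribution.
    """
    k = max(range(len(p)), key=lambda i: p[i])
    return math.sqrt(sum((pi - (1.0 if i == k else 0.0)) ** 2
                         for i, pi in enumerate(p)))

uniform = [0.25] * 4      # maximal uncertainty: S = ln 4
certain = [1.0, 0.0, 0.0, 0.0]   # no uncertainty: S = 0
```

Both measures agree on the extremes; the value-validity question concerns whether intermediate values support meaningful comparisons.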

  8. Validation of statistical models for estimating hospitalization associated with influenza and other respiratory viruses.

    Directory of Open Access Journals (Sweden)

    Lin Yang

    Full Text Available BACKGROUND: Reliable estimates of the disease burden associated with respiratory viruses are key to deploying preventive strategies such as vaccination and to allocating resources. Such estimates are particularly needed in tropical and subtropical regions, where some methods commonly used in temperate regions are not applicable. While a number of alternative approaches for assessing influenza-associated disease burden have recently been reported, none of these models has been validated with virologically confirmed data. Even fewer methods have been developed for other common respiratory viruses such as respiratory syncytial virus (RSV), parainfluenza, and adenovirus. METHODS AND FINDINGS: We recently conducted a prospective population-based study of virologically confirmed hospitalization for acute respiratory illnesses in persons <18 years residing on Hong Kong Island. Here we used this dataset to validate two commonly used models for estimating influenza disease burden, namely the rate-difference model and the Poisson regression model, and also explored the applicability of these models for estimating the disease burden of other respiratory viruses. The Poisson regression models with different link functions all yielded estimates well correlated with virologically confirmed influenza-associated hospitalization, especially in children older than two years. The disease burden estimates for RSV, parainfluenza, and adenovirus were less reliable, with wide confidence intervals. The rate-difference model was not applicable to RSV, parainfluenza, and adenovirus and grossly underestimated the true burden of influenza-associated hospitalization. CONCLUSION: The Poisson regression model generally produced satisfactory estimates of the disease burden of respiratory viruses in a subtropical region such as Hong Kong.
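The rate-difference model mentioned above attributes to influenza the excess of hospitalizations in virus-active weeks over a baseline. A toy sketch with invented weekly counts:

```python
import numpy as np

# Invented weekly hospital admissions and an indicator of weeks in which
# influenza circulation was detected (virologically active weeks).
weekly_admissions = np.array([40, 42, 41, 70, 85, 78, 43, 39])
flu_active = np.array([0, 0, 0, 1, 1, 1, 0, 0], bool)

baseline_rate = weekly_admissions[~flu_active].mean()   # admissions/week
active_rate = weekly_admissions[flu_active].mean()

# Excess admissions attributed to influenza over the active period.
excess = (active_rate - baseline_rate) * flu_active.sum()
```

The Poisson regression alternative instead models weekly counts as a function of viral activity and seasonal covariates, which is why it generalizes better to co-circulating viruses.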

  9. Estimating North Dakota's Economic Base

    OpenAIRE

    Coon, Randal C.; Leistritz, F. Larry

    2009-01-01

    North Dakota’s economic base is comprised of those activities producing a product paid for by nonresidents, or products exported from the state. North Dakota’s economic base activities include agriculture, mining, manufacturing, tourism, and federal government payments for construction and to individuals. Development of the North Dakota economic base data is important because it provides the information to quantify the state’s economic growth, and it creates the final demand sectors for the N...

  10. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach to software engineering. MDE emphasizes the construction of models from which the implementation is derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models; however, semantic requirements also have to be imposed on them. A given transformation is sound when source and target models fulfill both the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how models can be transformed and validated. The properties to be validated range from structural and semantic requirements on models (pre- and post-conditions) to properties of the transformation itself (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  11. Validity of Submaximal Cycle Ergometry for Estimating Aerobic Capacity

    National Research Council Canada - National Science Library

    Myhre, Loren

    1998-01-01

    ... that allows early selection of the most appropriate test work load. A computerized version makes it possible for non-trained personnel to safely administer this test for estimating aerobic capacity...

  12. Validation of Transverse Oscillation Vector Velocity Estimation In-Vivo

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten

    2007-01-01

    Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound (US) beam direction. This implies that a Doppler angle under examination close to 90° results in unreliable information about the true blood direction and blood velocity. The novel...... the presented angle-independent 2-D vector velocity method. The results give reason to believe that the TO method can be a useful alternative to conventional Doppler systems, bringing forth new information to the US examination of blood flow.

  13. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    International Nuclear Information System (INIS)

    Kirchner, G.; Peterson, R.

    1996-11-01

    variation between the best estimate predictions of the group. The assumptions of the users result in more uncertainty in the predictions (taking into account the 95% confidence intervals) than is shown by the confidence interval on the predictions of one user. Mistakes, being examples of incorrect user assumptions, cannot be ignored and must be accepted as contributing to the variability seen in the spread of predictions. The user's confidence in his/her understanding of a scenario description and/or confidence in working with a code does not necessarily mean that the predictions will be more accurate. Choice of parameter values contributed most to user-induced uncertainty followed by scenario interpretation. The contribution due to code implementation was low, but may have been limited due to the decision of the majority of the group not to submit predictions using the most complex of the three codes. Most modelers had difficulty adapting the models for certain expected output. Parameter values for wet and dry deposition, transfer from forage to milk and concentration ratios were mostly taken from the extensive database of Chernobyl fallout radionuclides, no matter what the scenario. Examples provided in the code manuals may influence code users considerably when preparing their own input files. A major problem concerns pasture concentrations given in fresh or dry weight: parameter values in codes have to be based on one or the other and the request for predictions in the scenario description may or may not be the same unit. This is a surprisingly common source of error. Most of the predictions showed order of magnitude discrepancies when best estimates are compared with the observations, although the participants had a highly professional background in radioecology and a good understanding of the importance of the processes modelled. When uncertainties are considered, however, mostly there was overlap between predictions and observations. 
A failure to reproduce the time

  14. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Kirchner, G. [Univ. of Bremen (Germany); Peterson, R. [AECL, Chalk River, ON (Canada)] [and others

    1996-11-01

    variation between the best estimate predictions of the group. The assumptions of the users result in more uncertainty in the predictions (taking into account the 95% confidence intervals) than is shown by the confidence interval on the predictions of one user. Mistakes, being examples of incorrect user assumptions, cannot be ignored and must be accepted as contributing to the variability seen in the spread of predictions. The user's confidence in his/her understanding of a scenario description and/or confidence in working with a code does not necessarily mean that the predictions will be more accurate. Choice of parameter values contributed most to user-induced uncertainty followed by scenario interpretation. The contribution due to code implementation was low, but may have been limited due to the decision of the majority of the group not to submit predictions using the most complex of the three codes. Most modelers had difficulty adapting the models for certain expected output. Parameter values for wet and dry deposition, transfer from forage to milk and concentration ratios were mostly taken from the extensive database of Chernobyl fallout radionuclides, no matter what the scenario. Examples provided in the code manuals may influence code users considerably when preparing their own input files. A major problem concerns pasture concentrations given in fresh or dry weight: parameter values in codes have to be based on one or the other and the request for predictions in the scenario description may or may not be the same unit. This is a surprisingly common source of error. Most of the predictions showed order of magnitude discrepancies when best estimates are compared with the observations, although the participants had a highly professional background in radioecology and a good understanding of the importance of the processes modelled. When uncertainties are considered, however, mostly there was overlap between predictions and observations. 
A failure to reproduce the

  15. Estimate of body composition by Hume's equation: validation with DXA.

    Science.gov (United States)

    Carnevale, Vincenzo; Piscitelli, Pamela Angela; Minonne, Rita; Castriotta, Valeria; Cipriani, Cristiana; Guglielmi, Giuseppe; Scillitani, Alfredo; Romagnoli, Elisabetta

    2015-05-01

    We investigated how Hume's equation, using the antipyrine space, performs in estimating fat mass (FM) and lean body mass (LBM). In 100 (40 male and 60 female) subjects, we estimated FM and LBM by the equation and compared these values with those measured by a last-generation DXA device. The correlation coefficients between measured and estimated values were r = 0.940 for FM and r = 0.913 for LBM (both statistically significant). Estimated and measured values agreed well, though the equation underestimated FM and overestimated LBM with respect to DXA. The mean difference for FM was 1.40 kg (limits of agreement -6.54 and 8.37 kg). For LBM, the mean difference with respect to DXA was 1.36 kg (limits of agreement -8.26 and 6.52 kg). The root mean square error was 3.61 kg for FM and 3.56 kg for LBM. Our results show that in clinically stable subjects Hume's equation can reliably assess body composition, with estimated FM and LBM approaching the values measured by a modern DXA device.
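The agreement statistics quoted above (mean difference, limits of agreement, root mean square error) follow the usual Bland-Altman recipe, which can be sketched as follows; the simulated data are illustrative, not the study's:

```python
import numpy as np

# Simulated paired measurements: a reference method (DXA) and an
# equation-based estimate with a small systematic bias.
rng = np.random.default_rng(1)
dxa = rng.normal(25, 8, 100)               # reference fat mass (kg)
eq = dxa + rng.normal(1.4, 3.5, 100)       # equation estimate (biased)

diff = eq - dxa
bias = diff.mean()                          # systematic over/underestimation
sd = diff.std(ddof=1)
loa_low = bias - 1.96 * sd                  # 95% limits of agreement
loa_high = bias + 1.96 * sd
rmse = np.sqrt(np.mean(diff ** 2))          # overall estimation error
```

The limits of agreement bound the typical individual-level discrepancy, which can be clinically relevant even when the mean bias is small.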

  16. Improved best estimate plus uncertainty methodology, including advanced validation concepts, to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, C.; Williams, B.; Hemez, F.; Atamturktur, S.H.; McClure, P.

    2011-01-01

    Research highlights: → The best estimate plus uncertainty methodology (BEPU) is one option in the licensing of nuclear reactors. → The challenges for extending the BEPU method for fuel qualification for an advanced reactor fuel are primarily driven by schedule, the need for data, and the sufficiency of the data. → In this paper we develop an extended BEPU methodology that can potentially be used to address these new challenges in the design and licensing of advanced nuclear reactors. → The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. → The methodology includes a formalism to quantify an adequate level of validation (predictive maturity) with respect to existing data, so that required new testing can be minimized, saving cost by demonstrating that further testing will not enhance the quality of the predictive tools. - Abstract: Many evolving nuclear energy technologies use advanced predictive multiscale, multiphysics modeling and simulation (M and S) capabilities to reduce the cost and schedule of design and licensing. Historically, the role of experiments has been as a primary tool for the design and understanding of nuclear system behavior, while M and S played the subordinate role of supporting experiments. In the new era of multiscale, multiphysics computational-based technology development, this role has been reversed. The experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations for design and licensing. Minimizing the required number of validation experiments produces cost and time savings. The use of multiscale, multiphysics models introduces challenges in validating these predictive tools - traditional methodologies will have to be modified to address these challenges. This paper gives the basic aspects of a methodology that can potentially be used to address these new challenges in

  17. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

    In this paper we propose a new approach to estimating the tail exponent in financial stock markets. We begin the study with the finite-sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Building on these results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well even on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on international stock market indices, estimating the tail exponent over two separate periods, 2002-2005 and 2006-2009.
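The plain Hill estimator that the Monte Carlo method above builds on can be sketched in a few lines; the Pareto sample and the choice of k below are illustrative:

```python
import numpy as np

# Simulate a heavy-tailed sample: classical Pareto with tail exponent 1.5.
rng = np.random.default_rng(2)
alpha_true = 1.5
x = rng.pareto(alpha_true, 100_000) + 1.0

def hill(x, k):
    """Hill estimator of the tail exponent from the k largest order statistics."""
    xs = np.sort(x)[::-1]
    logs = np.log(xs[:k]) - np.log(xs[k])   # log-spacings above the threshold
    return 1.0 / logs.mean()

alpha_hat = hill(x, k=2000)
```

The estimator's sensitivity to k (the tail size) on short, non-Pareto samples is exactly the weakness the paper's Monte Carlo correction addresses.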

  18. Channel Estimation in DCT-Based OFDM

    Science.gov (United States)

    Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing

    2014-01-01

    This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been shown to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparsity of the wireless channel into account. Simulation results show that the CS-based channel estimation is expected to perform better than LS, while MMSE can achieve optimal performance thanks to prior knowledge of the channel statistics. PMID:24757439
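Pilot-aided LS estimation reduces, per pilot position, to dividing the received symbol by the known transmitted pilot. A minimal sketch, omitting the DCT-OFDM structure and the MMSE and CS variants:

```python
import numpy as np

# Simulated pilot transmission: known pilots, a random complex channel
# gain per pilot position, and additive complex noise.
rng = np.random.default_rng(3)
n_pilots = 64
pilots = np.ones(n_pilots, complex)                 # known training symbols
h_true = rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)
noise = 0.05 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
y = h_true * pilots + noise                         # received pilot symbols

h_ls = y / pilots                                   # LS estimate per pilot
mse = np.mean(np.abs(h_ls - h_true) ** 2)           # estimation error
```

MMSE improves on this by filtering the LS estimates with the channel's correlation statistics, which is why it needs prior knowledge that LS does not.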

  19. Statistical inference based on latent ability estimates

    NARCIS (Netherlands)

    Hoijtink, H.J.A.; Boomsma, A.

    The quality of approximations to first- and second-order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use

  20. Catalytic hydrolysis of ammonia borane: Intrinsic parameter estimation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Basu, S.; Gore, J.P. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Zheng, Y. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Varma, A.; Delgass, W.N. [School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States)

    2010-04-02

    Ammonia borane (AB) hydrolysis is a potential process for on-board hydrogen generation. This paper presents isothermal hydrogen release rate measurements of dilute AB (1 wt%) hydrolysis in the presence of carbon-supported ruthenium catalyst (Ru/C). The ranges of investigated catalyst particle size and temperature were 20-181 μm and 26-56 °C, respectively. The obtained rate data included both kinetic and diffusion-controlled regimes, where the latter was evaluated using the catalyst effectiveness approach. A Langmuir-Hinshelwood kinetic model was adopted to interpret the data, with intrinsic kinetic and diffusion parameters determined by a nonlinear fitting algorithm. The AB hydrolysis was found to have an activation energy of 60.4 kJ mol⁻¹, a pre-exponential factor of 1.36 × 10¹⁰ mol (kg-cat)⁻¹ s⁻¹, an adsorption energy of −32.5 kJ mol⁻¹, and an effective mass diffusion coefficient of 2 × 10⁻¹⁰ m² s⁻¹. These parameters, obtained under dilute AB conditions, were validated by comparing measurements with simulations of AB consumption rates during the hydrolysis of concentrated AB solutions (5-20 wt%), and also with the axial temperature distribution in a 0.5 kW continuous-flow packed-bed reactor. (author)
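The reported Arrhenius parameters translate directly into an intrinsic rate constant via k = A·exp(−Ea/RT). A quick check over the studied temperature range, using the values from the abstract:

```python
import math

A = 1.36e10        # pre-exponential factor, mol (kg-cat)^-1 s^-1
Ea = 60.4e3        # activation energy, J mol^-1
R = 8.314          # gas constant, J mol^-1 K^-1

def rate_constant(T_celsius):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    T = T_celsius + 273.15
    return A * math.exp(-Ea / (R * T))

k_26 = rate_constant(26.0)   # low end of the studied range
k_56 = rate_constant(56.0)   # high end of the studied range
```

A 30 °C increase raises the intrinsic rate by roughly an order of magnitude, consistent with the fairly high activation energy reported.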

  1. Validation of walk score for estimating neighborhood walkability: an analysis of four US metropolitan areas.

    Science.gov (United States)

    Duncan, Dustin T; Aldstadt, Jared; Whalen, John; Melly, Steven J; Gortmaker, Steven L

    2011-11-01

    Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability against GIS-based (objective) indicators of neighborhood walkability, using addresses from four US metropolitan areas with several street network buffer distances (i.e., 400-, 800-, and 1,600-meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5-11 years and their families participating in YMCA-administered after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participants' residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated, as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators, such as density of retail destinations and intersection density. Correlations generally became stronger at larger spatial scales, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure for estimating certain aspects of neighborhood walkability, particularly at the 1,600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure for estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales.
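Spearman correlation, the statistic used in the study above, can be computed by rank-transforming both variables and taking their Pearson correlation. A hand-rolled sketch with invented data (no tie handling):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for tie-free data."""
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)   # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    # Pearson correlation of the ranks.
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Invented example: Walk Scores vs a GIS indicator (intersections per km^2).
walk_score = np.array([35, 52, 60, 71, 88, 93])
intersection_density = np.array([20, 45, 40, 75, 90, 110])
rho = spearman(walk_score, intersection_density)
```

Rank-based correlation is a natural choice here because Walk Score and the GIS indicators are on different, nonlinear scales.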

  2. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    International Nuclear Information System (INIS)

    Elert, M.

    1996-09-01

    deterministic case, and the uncertainty bands did not always overlap. This suggests that considerable model uncertainties are present which were not considered in this study. Concerning possible constraints on the application domain of different models, the results of this exercise suggest that if only the evolution of the root zone concentration is to be predicted, all of the studied models give comparable results. However, if the flux to the groundwater is also to be predicted, then considerably more detail is needed in the model and its parameterization. This applies to the hydrological as well as the transport modelling. The difference in model predictions and the magnitude of uncertainty was quite small for some of the end-points predicted, while for others it could span many orders of magnitude. Of special importance were end-points where delay in the soil was involved, e.g. release to the groundwater; in such cases the influence of radioactive decay gave rise to strongly non-linear effects. The work in the subgroup has provided many valuable insights into the effects of model simplifications, e.g. discretization in the model, averaging of the time-varying input parameters, and the assignment of uncertainties to parameters. The conclusions drawn are primarily valid for the studied scenario; however, we believe that they are to a large extent also generally applicable. The subgroup has had many opportunities to study the pitfalls involved in model comparison. The intention was to provide a well-defined scenario for the subgroup, but despite several iterations misunderstandings and ambiguities remained. The participants were forced to scrutinize their models to try to explain differences in the predictions, and most, if not all, of the participants have improved their models as a result of this

  3. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace-based blind sparse channel estimation method using ℓ1-ℓ2 optimization, replacing the ℓ2-norm minimization in the conventional subspace-based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve......
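A generic way to see the ℓ2-to-ℓ1 substitution is iterative soft-thresholding (ISTA) on a sparse linear model; the sketch below shows this generic mechanism, not the paper's subspace formulation:

```python
import numpy as np

# Sparse recovery toy problem: y = A h + noise, with a 3-sparse "channel" h.
rng = np.random.default_rng(4)
n, m = 60, 100
A = rng.normal(size=(n, m)) / np.sqrt(n)
h = np.zeros(m)
h[[5, 37, 80]] = [1.0, -0.8, 0.5]
y = A @ h + 0.01 * rng.normal(size=n)

# ISTA: gradient step on the l2 data fit, then soft-threshold (l1 penalty).
lam = 0.02
step = 0.9 / np.linalg.norm(A, 2) ** 2
x = np.zeros(m)
for _ in range(500):
    g = x - step * A.T @ (A @ x - y)                        # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # soft-threshold

support = np.flatnonzero(np.abs(x) > 0.1)   # recovered tap positions
```

An ℓ2-only fit would spread energy over all m coefficients; the ℓ1 penalty concentrates it on the few true channel taps, which is the sparsity advantage the paper exploits.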

  4. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how far cities are the sources of their own air pollution is not only theoretical; it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach for estimating the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5 these two assumptions are far from fulfilled for many large and medium-sized cities, for which urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution, which poses problems of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve confidence in the model results.
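The decomposition described above can be made concrete with a few invented concentrations: the true city impact equals the urban increment plus a spillover term (city influence at the rural site) and a background-difference term:

```python
# Invented PM2.5 concentrations (ug/m3); "nocity" = model run with the
# city's emissions switched off.
c_urban_all = 18.0      # urban background, all sources
c_rural_all = 12.0      # rural background, all sources
c_urban_nocity = 10.0   # urban site without city emissions
c_rural_nocity = 11.0   # rural site without city emissions

city_impact = c_urban_all - c_urban_nocity       # what we want to know
urban_increment = c_urban_all - c_rural_all      # what is usually measured

# The two correction terms from the text:
spillover = c_rural_all - c_rural_nocity         # city influences rural site
background_diff = c_rural_nocity - c_urban_nocity  # unequal backgrounds
```

Here the increment (6.0) understates the impact (8.0) by exactly the two correction terms, illustrating why the two assumptions must hold for the increment to be a valid proxy.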

  5. Estimating incidence from prevalence in generalised HIV epidemics: methods and validation.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2008-04-01

    Full Text Available HIV surveillance of generalised epidemics in Africa primarily relies on prevalence at antenatal clinics, but estimates of incidence in the general population would be more useful. Repeated cross-sectional measures of HIV prevalence are now becoming available for general populations in many countries, and we aim to develop and validate methods that use these data to estimate HIV incidence. Two methods were developed that decompose observed changes in prevalence between two serosurveys into the contributions of new infections and mortality. Method 1 uses cohort mortality rates, and method 2 uses information on survival after infection. The performance of these two methods was assessed using simulated data from a mathematical model and actual data from three community-based cohort studies in Africa. Comparison with simulated data indicated that these methods can accurately estimate incidence rates and changes in incidence in a variety of epidemic conditions. Method 1 is simple to implement but relies on locally appropriate mortality data, whilst method 2 can make use of the same survival distribution in a wide range of scenarios. The estimates from both methods are within the 95% confidence intervals of almost all actual measurements of HIV incidence in adults and young people, and the patterns of incidence over age are correctly captured. It is possible to estimate incidence from cross-sectional prevalence data with sufficient accuracy to monitor the HIV epidemic. Although these methods will theoretically work in any context, we have been able to test them only in southern and eastern Africa, where HIV epidemics are mature and generalised. The choice of method will depend on the local availability of HIV mortality data.
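The accounting behind method 1 can be sketched numerically: the change in the number infected between surveys, plus deaths among the infected, gives the new infections, which divided by person-years at risk gives the incidence rate. All numbers below are invented:

```python
# Two serosurveys of the same population, dt years apart.
n1, p1 = 10_000, 0.10      # population size and HIV prevalence at survey 1
n2, p2 = 10_000, 0.11      # at survey 2
dt = 2.0                   # years between surveys
mu_pos = 0.05              # annual mortality rate among the infected (cohort data)

infected_1 = n1 * p1
infected_2 = n2 * p2
deaths_pos = infected_1 * mu_pos * dt       # deaths among infected over dt

# Prevalence rose AND infected people died, so new infections must cover both.
new_infections = infected_2 - infected_1 + deaths_pos

# Crude person-years at risk among the uninfected.
py_at_risk = n1 * (1 - p1) * dt
incidence = new_infections / py_at_risk     # per person-year
```

This is only the bookkeeping skeleton; the actual methods also handle migration, age structure, and survival distributions (method 2).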

  6. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    Science.gov (United States)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but designed so that MLPQNA can easily be replaced with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, showing also the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template-fitting method (Le Phare).
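The core METAPHOR idea, producing a photo-z PDF by perturbing the photometry within its errors and re-running the empirical estimator, can be sketched with a toy KNN engine standing in for MLPQNA. The single magnitude-like feature, the linear magnitude-redshift relation and all parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: one magnitude-like feature linearly related to redshift.
mag_train = rng.uniform(18.0, 24.0, 2000)
z_train = 0.1 * (mag_train - 18.0) + rng.normal(0.0, 0.02, mag_train.size)

def knn_photoz(m, k=15):
    """Minimal KNN regressor standing in for MLPQNA (the workflow is
    engine-agnostic, as the abstract stresses)."""
    nearest = np.argsort(np.abs(mag_train - m))[:k]
    return z_train[nearest].mean()

def photoz_pdf(m, sigma_m=0.1, n_perturb=300):
    """Perturb the photometry within its assumed error, re-estimate the
    photo-z each time, and histogram the estimates into a PDF."""
    z_samples = np.array([knn_photoz(m + rng.normal(0.0, sigma_m))
                          for _ in range(n_perturb)])
    pdf, edges = np.histogram(z_samples, bins=np.arange(0.0, 0.7, 0.02),
                              density=True)
    return z_samples.mean(), pdf, edges

z_hat, pdf, edges = photoz_pdf(21.0)   # the toy relation gives z = 0.30 here
```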

  7. Development of robust flexible OLED encapsulations using simulated estimations and experimental validations

    International Nuclear Information System (INIS)

    Lee, Chang-Chun; Shih, Yan-Shin; Wu, Chih-Sheng; Tsai, Chia-Hao; Yeh, Shu-Tang; Peng, Yi-Hao; Chen, Kuang-Jung

    2012-01-01

    This work analyses the overall stress/strain characteristics of flexible encapsulations with organic light-emitting diode (OLED) devices. A robust methodology composed of a mechanical model of multiple thin films under bending loads and related stress simulations based on nonlinear finite element analysis (FEA) is proposed and validated against related experimental data. With various geometrical combinations of cover plate, stacked thin films and plastic substrate, the position of the neutral axis (NA) plane, which is regarded as a key design parameter to minimize the stress impact on the concerned OLED devices, is acquired using the present methodology. The results show that both the thickness and the mechanical properties of the cover plate help determine the NA location. In addition, several concave and convex bending radii are applied to examine the reliable mechanical tolerance and to provide insight into the estimated reliability of foldable OLED encapsulations. (paper)
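The neutral-axis location treated as the key design parameter can be estimated for a layered stack with the classical transformed-section formula, z_NA = Σ E_i t_i z̄_i / Σ E_i t_i. This is a textbook sketch, not the paper's FEA model, and the layer values below are invented:

```python
def neutral_axis_height(layers):
    """layers: (elastic_modulus, thickness) tuples ordered bottom to top.
    Unit width, perfect bonding and linear elasticity assumed.
    Returns the neutral-axis height measured from the bottom surface."""
    z_bottom = 0.0
    weighted_moment = 0.0
    axial_stiffness = 0.0
    for modulus, thickness in layers:
        z_mid = z_bottom + thickness / 2.0       # layer midplane
        weighted_moment += modulus * thickness * z_mid
        axial_stiffness += modulus * thickness
        z_bottom += thickness
    return weighted_moment / axial_stiffness

# Hypothetical substrate / OLED stack / cover plate (moduli in GPa,
# thicknesses in um) -- not values from the paper.
stack = [(5.0, 100.0), (80.0, 1.0), (70.0, 50.0)]
z_na = neutral_axis_height(stack)
```

A stiffer or thicker cover plate pulls the neutral axis toward itself; placing the fragile OLED layers at the neutral axis to minimize bending strain is exactly the design lever the abstract describes.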

  8. A comparative study and validation of state estimation algorithms for Li-ion batteries in battery management systems

    International Nuclear Information System (INIS)

    Klee Barillas, Joaquín; Li, Jiahao; Günther, Clemens; Danzer, Michael A.

    2015-01-01

    Highlights: • Description of state observers for estimating the battery's SOC. • Implementation of four estimation algorithms in a BMS. • Reliability and performance study of the BMS regarding the estimation algorithms. • Analysis of the robustness and code properties of the estimation approaches. • Guide to evaluate estimation algorithms to improve BMS performance. - Abstract: To increase lifetime, safety, and energy usage, battery management systems (BMS) for Li-ion batteries have to be capable of estimating the state of charge (SOC) of the battery cells with a very low estimation error. Accurate SOC estimation and real-time reliability are critical issues for a BMS. In general, increasing the complexity of the estimation methods leads to higher accuracy. On the other hand, it also leads to a higher computational load and may exceed the BMS limitations or increase its costs. An approach to evaluate and verify estimation algorithms is presented as a prerequisite prior to the release of the battery system. The approach consists of an analysis concerning the SOC estimation accuracy, the code properties, complexity, the computation time, and the memory usage. Furthermore, a study for estimation methods is proposed for their evaluation and validation with respect to convergence behavior, parameter sensitivity, initialization error, and performance. In this work, the introduced analysis is demonstrated with four of the most published model-based estimation algorithms: the Luenberger observer, the sliding-mode observer, the extended Kalman filter and the sigma-point Kalman filter. Experiments under dynamic current conditions are used to verify the real-time functionality of the BMS. The results show that a simple estimation method like the sliding-mode observer can compete with the Kalman-based methods while requiring less computational time and memory. Depending on the battery system's application the estimation algorithm has to be selected to fulfill the
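As a flavour of the simpler end of the algorithms compared above, here is a minimal Luenberger-style SOC observer: Coulomb counting corrected by the voltage innovation. The cell model (linear OCV curve, ohmic resistance) and all parameters are invented for illustration and are far cruder than the models in the paper:

```python
# Hypothetical cell: v = OCV0 + K_OCV * soc - R * i  (linear OCV curve).
Q_AS = 2.5 * 3600.0          # capacity in ampere-seconds (2.5 Ah)
OCV0, K_OCV, R = 3.0, 1.0, 0.05

def run_observer(soc_true0, soc_est0, i_load=2.5, dt=1.0, steps=2000, gain=0.5):
    """Propagate the true SOC and a Luenberger-style estimate that starts
    from a wrong initial value; the voltage feedback pulls it back."""
    soc_true, soc_est = soc_true0, soc_est0
    for _ in range(steps):
        v_meas = OCV0 + K_OCV * soc_true - R * i_load   # simulated sensor
        v_pred = OCV0 + K_OCV * soc_est - R * i_load    # observer's model
        soc_est += -i_load * dt / Q_AS + gain * (v_meas - v_pred)
        soc_true += -i_load * dt / Q_AS                 # ideal Coulomb counting
    return soc_true, soc_est

soc_true, soc_est = run_observer(0.8, 0.5)   # 30% initialization error
```

An EKF adds covariance bookkeeping on top of this same predict/correct structure, and a sliding-mode observer replaces the proportional correction with a switching term; the paper's finding is that the cheaper variants can compete with the Kalman family.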

  9. Risk Probability Estimating Based on Clustering

    DEFF Research Database (Denmark)

    Chen, Yong; Jensen, Christian D.; Gray, Elizabeth

    2003-01-01

    of prior experiences, recommendations from a trusted entity or the reputation of the other entity. In this paper we propose a dynamic mechanism for estimating the risk probability of a certain interaction in a given environment using hybrid neural networks. We argue that traditional risk assessment models...... from the insurance industry do not directly apply to ubiquitous computing environments. Instead, we propose a dynamic mechanism for risk assessment, which is based on pattern matching, classification and prediction procedures. This mechanism uses an estimator of risk probability, which is based...

  10. Valid and efficient manual estimates of intracranial volume from magnetic resonance images

    International Nuclear Information System (INIS)

    Klasson, Niklas; Olsson, Erik; Rudemo, Mats; Eckerström, Carl; Malmgren, Helge; Wallin, Anders

    2015-01-01

    Manual segmentation of the whole intracranial vault in high-resolution magnetic resonance images is often regarded as very time-consuming. It is therefore common to segment only a few linearly spaced intracranial areas and estimate the whole volume from them. The purpose of the present study was to evaluate how the validity of intracranial volume estimates is affected by the chosen interpolation method, the orientation of the intracranial areas and the linear spacing between them. Intracranial volumes were manually segmented for 62 participants from the Gothenburg MCI study using 1.5 T, T1-weighted magnetic resonance images. Estimates of the intracranial volumes were then derived using subsamples of linearly spaced coronal, sagittal or transversal intracranial areas from the same volumes. The subsamples of intracranial areas were interpolated into volume estimates by three different interpolation methods. The linear spacing between the intracranial areas ranged from 2 to 50 mm, and the validity of the estimates was determined by comparison with the entire intracranial volumes. A progressive decrease in intra-class correlation and an increase in percentage error could be seen with increased linear spacing between intracranial areas. With small linear spacing (≤15 mm), the orientation of the intracranial areas and the interpolation method had negligible effects on validity. With larger linear spacing, the best validity was achieved using cubic spline interpolation with either coronal or sagittal intracranial areas. Even at a linear spacing of 50 mm, cubic spline interpolation on either coronal or sagittal intracranial areas had a mean absolute agreement intra-class correlation with the entire intracranial volumes above 0.97. Cubic spline interpolation in combination with linearly spaced sagittal or coronal intracranial areas overall resulted in the most valid and robust estimates of intracranial volume. Using this method, valid ICV estimates could be obtained in less than five
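The best-performing estimator above (cubic spline interpolation over linearly spaced areas) amounts to fitting a spline to the sampled cross-sectional areas and integrating it. A sketch tested on a sphere, where the true volume is known (scipy assumed available; the 5 mm spacing is within the range the study examined):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def volume_from_areas(positions_mm, areas_mm2):
    """Interpolate linearly spaced cross-sectional areas with a cubic
    spline and integrate it over the sampled extent (result in mm^3)."""
    spline = CubicSpline(positions_mm, areas_mm2)
    return spline.integrate(positions_mm[0], positions_mm[-1])

# Synthetic 'intracranial vault': a sphere of radius 80 mm sampled every 5 mm.
r = 80.0
z = np.arange(-r, r + 1e-9, 5.0)
areas = np.pi * (r**2 - z**2)          # exact circular cross-sections
v_est = volume_from_areas(z, areas)
v_true = 4.0 / 3.0 * np.pi * r**3
```

Because the cross-section profile of a sphere is quadratic in z, the cubic spline reproduces it essentially exactly; real head shapes are less smooth, which is why validity degrades as spacing grows.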

  11. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists.
This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior

  12. Teletactile System Based on Mechanical Properties Estimation

    Directory of Open Access Journals (Sweden)

    Mauro M. Sette

    2011-01-01

    Full Text Available Tactile feedback is a major missing feature in minimally invasive procedures; it is an essential means of diagnosis and orientation during surgical procedures. Previous works have presented a remote palpation feedback system based on the coupling between a pressure sensor and a general haptic interface. Here a new approach is presented based on the direct estimation of the tissue mechanical properties and finally their presentation to the operator by means of a haptic interface. The approach presents different technical difficulties and some solutions are proposed: the implementation of a fast Young’s modulus estimation algorithm, the implementation of a real time finite element model, and finally the implementation of a stiffness estimation approach in order to guarantee the system’s stability. The work is concluded with an experimental evaluation of the whole system.
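A fast Young's modulus estimate of the kind the abstract describes can be sketched as a least-squares fit of a force-displacement record. The geometry and data below are hypothetical, and real tissue palpation would need a proper contact model rather than this uniaxial linear-elastic idealization:

```python
import numpy as np

def youngs_modulus(displacement_m, force_N, length0_m, area_m2):
    """Zero-intercept least-squares stiffness k from force vs. displacement,
    converted to a modulus via E = k * L0 / A (uniaxial, linear elastic)."""
    d = np.asarray(displacement_m, dtype=float)
    f = np.asarray(force_N, dtype=float)
    k = np.sum(d * f) / np.sum(d * d)     # best-fit slope through the origin
    return k * length0_m / area_m2

# Synthetic record generated from a 10 kPa sample: L0 = 1 cm, A = 1 cm^2.
L0, A, E_true = 0.01, 1e-4, 10e3
d = np.linspace(1e-4, 1e-3, 20)
f = (E_true * A / L0) * d                 # F = (E * A / L0) * d
E_hat = youngs_modulus(d, f, L0, A)
```

The estimated modulus (or the stiffness k directly, as in the paper's stability-oriented stiffness estimation) is what would be rendered to the operator through the haptic interface.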

  13. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

    Full Text Available Independent component analysis combined with various strategies for cross-validation, uncertainty estimation by jack-knifing and estimation of critical Hotelling's T² limits, as proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combined approach used in this paper has not previously been applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near-infrared hyperspectral images. The results showed that the approach can be successfully used for unsupervised classification. A comparison of validation approaches, especially leave-one-out cross-validation and region-of-interest scheme validation, is also presented.
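Of the ingredients combined above, the jack-knife uncertainty estimate is the easiest to make concrete. A generic leave-one-out sketch (not the paper's exact segmented scheme, where the statistic would be the ICA model refit on the reduced data):

```python
import numpy as np

def jackknife_se(x, stat=np.mean):
    """Leave-one-out jack-knife standard error of a statistic."""
    x = np.asarray(x, dtype=float)
    n = x.size
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])  # n leave-one-out stats
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
se = jackknife_se(x)
```

For the sample mean, the jack-knife standard error coincides exactly with the usual s/√n, which makes a convenient correctness check.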

  14. Validation of Temperature Histories for Structural Steel Welds Using Estimated Heat-Affected-Zone Edges

    Science.gov (United States)

    2016-10-12

    Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/6394--16-9690: Validation of Temperature Histories for Structural Steel Welds Using Estimated Heat-Affected-Zone Edges, S.G. Lambrakos. Cited references include: Welding Metallurgy, 2nd Ed., John Wiley & Sons, Inc., 2003, DOI: 10.1002/0471434027; and O. Grong, Metallurgical Modelling of Welding, 2nd Ed., Materials Modelling Series.

  15. Validity and reliability of Nike + Fuelband for estimating physical activity energy expenditure.

    Science.gov (United States)

    Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A

    2015-01-01

    The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants completed two 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13%; SWA = 18 ± 18%). Test-retest reliability for PAEE indicated good stability for the Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike + Fuelband provided valid and reliable estimates of PAEE that were similar to those of the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate- and vigorous-intensity physical activity.
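The headline validity statistic above, mean absolute percent error against the criterion measure, is straightforward to compute. A minimal sketch with made-up kcal values, not data from the study:

```python
import numpy as np

def mape_percent(estimates_kcal, criterion_kcal):
    """Mean absolute percent error of device PAEE estimates against the
    criterion (indirect calorimetry) measure."""
    e = np.asarray(estimates_kcal, dtype=float)
    c = np.asarray(criterion_kcal, dtype=float)
    return float(np.mean(np.abs(e - c) / c) * 100.0)

# Hypothetical four-subject example.
device = [260.0, 225.0, 240.0, 275.0]
oxycon = [250.0, 250.0, 240.0, 250.0]
error_pct = mape_percent(device, oxycon)
```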

  16. Validation of equations and proposed reference values to estimate fat mass in Chilean university students.

    Science.gov (United States)

    Gómez Campos, Rossana; Pacheco Carrillo, Jaime; Almonacid Fierro, Alejandro; Urra Albornoz, Camilo; Cossío-Bolaños, Marco

    2018-03-01

    (i) To propose regression equations based on anthropometric measures to estimate fat mass (FM) using dual-energy X-ray absorptiometry (DXA) as the reference method, and (ii) to establish population reference standards for equation-derived FM. A cross-sectional study on 6,713 university students (3,354 males and 3,359 females) from Chile aged 17.0 to 27.0 years. Anthropometric measures (weight, height, waist circumference) were taken in all participants. Whole-body DXA was performed in 683 subjects. A total of 478 subjects were selected to develop the regression equations, and 205 for their cross-validation. Data from 6,030 participants were used to develop reference standards for FM. Equations were generated using stepwise multiple regression analysis. Percentiles were developed using the LMS method. Equations for men were: (i) FM = -35,997.486 + 232.285*Weight + 432.216*CC (R² = 0.73, SEE = 4.1); (ii) FM = -37,671.303 + 309.539*Weight + 66,028.109*ICE (R² = 0.76, SEE = 3.8), while equations for women were: (iii) FM = -13,216.917 + 461.302*Weight + 91.898*CC (R² = 0.70, SEE = 4.6), and (iv) FM = -14,144.220 + 464.061*Weight + 16,189.297*ICE (R² = 0.70, SEE = 4.6). Percentiles proposed included p10, p50, p85, and p95. The developed equations provide valid and accurate estimation of FM in both sexes. The values obtained using the equations may be analyzed from percentiles that allow for categorizing body fat levels by age and sex. Copyright © 2017 SEEN y SED. Published by Elsevier España, S.L.U. All rights reserved.
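For illustration, equation (i) for men can be applied directly. The output units are presumably grams given the intercept's magnitude, though the abstract does not state them, and the example person below is invented:

```python
def fat_mass_men_eq1(weight_kg, waist_cm):
    """Equation (i) for men: FM = -35,997.486 + 232.285*Weight + 432.216*CC,
    where CC is the waist-circumference term (R^2 = 0.73, SEE = 4.1).
    Output units assumed to be grams; not stated in the abstract."""
    return -35997.486 + 232.285 * weight_kg + 432.216 * waist_cm

fm = fat_mass_men_eq1(70.0, 85.0)   # hypothetical 70 kg man, 85 cm waist
```

The resulting value (about 17 kg of fat mass for this example) would then be placed on the age- and sex-specific percentile curves (p10/p50/p85/p95) to categorize body fat level.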

  17. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad-data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using jDE, a new self-adaptive differential evolution with a prediction-based population re-initialisation technique, at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of a transmission line, on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation. - Highlights: • To estimate the states of the power system under a dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
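The prediction step borrowed here, Brown's double exponential smoothing, is compact enough to show in full. A generic one-dimensional sketch (the actual estimator applies this per state variable with more careful initialization):

```python
def brown_des_forecast(series, alpha=0.3):
    """One-step-ahead forecast by Brown's double exponential smoothing:
    two cascaded exponential smoothers yield level and trend estimates."""
    s1 = s2 = float(series[0])                # simple common initialization
    for x in series:
        s1 = alpha * x + (1.0 - alpha) * s1   # first smoothing
        s2 = alpha * s1 + (1.0 - alpha) * s2  # second smoothing
    level = 2.0 * s1 - s2
    trend = alpha / (1.0 - alpha) * (s1 - s2)
    return level + trend

ramp = [2.0 * k + 1.0 for k in range(100)]    # linear 'state trajectory'
prediction = brown_des_forecast(ramp)         # next value should be 201
```

On a linear trend the steady-state forecast is unbiased, which is why the method tracks slowly varying power-system states well; the forecast then seeds the differential-evolution population at the filtering step.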

  18. Estimation of dynamic rotor loads for the rotor systems research aircraft: Methodology development and validation

    Science.gov (United States)

    Duval, R. W.; Bahrami, M.

    1985-01-01

    The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify them directly from the flight data. A maximum likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.
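The identification step can be illustrated through its Gaussian-noise special case: maximum likelihood estimation of a linear load-cell model reduces to least squares. The model matrix, gains and noise level below are synthetic, not RSRA data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear load-cell model: responses y = H @ theta + noise,
# where the columns of H hold applied-load time histories and theta the
# unknown model parameters to be identified.
theta_true = np.array([1.2, -0.4, 0.8])
H = rng.normal(size=(200, 3))
y = H @ theta_true + rng.normal(scale=0.01, size=200)

# With Gaussian measurement noise the ML estimate is the least-squares fit.
theta_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
```

A recursive form of this fit, updated sample by sample, is what permits the in-flight tracking of time-varying parameters mentioned in the abstract.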

  19. Secretin-stimulated ultrasound estimation of pancreatic secretion in cystic fibrosis validated by magnetic resonance imaging

    International Nuclear Information System (INIS)

    Engjom, Trond; Dimcevski, Georg; Tjora, Erling; Wathle, Gaute; Erchinger, Friedemann; Laerum, Birger N.; Gilja, Odd H.; Haldorsen, Ingfrid Salvesen

    2018-01-01

    Secretin-stimulated magnetic resonance imaging (s-MRI) is the best validated radiological modality assessing pancreatic secretion. The purpose of this study was to compare volume output measures from secretin-stimulated transabdominal ultrasonography (s-US) to s-MRI for the diagnosis of exocrine pancreatic failure in cystic fibrosis (CF). We performed transabdominal ultrasonography and MRI before and at timed intervals during 15 minutes after secretin stimulation in 21 CF patients and 13 healthy controls. To clearly identify the subjects with reduced exocrine pancreatic function, we classified CF patients as pancreas-sufficient or -insufficient by secretin-stimulated endoscopic short test and faecal elastase. Pancreas-insufficient CF patients had reduced pancreatic secretions compared to pancreas-sufficient subjects based on both imaging modalities (p < 0.001). Volume output estimates assessed by s-US correlated to that of s-MRI (r = 0.56-0.62; p < 0.001). Both s-US (AUC: 0.88) and s-MRI (AUC: 0.99) demonstrated good diagnostic accuracy for exocrine pancreatic failure. Pancreatic volume-output estimated by s-US corresponds well to exocrine pancreatic function in CF patients and yields comparable results to that of s-MRI. s-US provides a simple and feasible tool in the assessment of pancreatic secretion. (orig.)

  20. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    Science.gov (United States)

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. 
In addition, to assess the accuracy of each method in estimating
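The patient-attenuation input that both estimation methods rely on is, per AAPM Report 220, commonly summarized by the water-equivalent diameter of each slice. A sketch of that computation with a synthetic ROI (this is the general Report 220 formula, not the paper's specific pipeline):

```python
import numpy as np

def water_equivalent_diameter(hu_values, pixel_area_mm2):
    """AAPM Report 220 water-equivalent diameter of a CT slice ROI:
    Aw = sum(HU/1000 + 1) * pixel_area,  Dw = 2 * sqrt(Aw / pi)."""
    hu = np.asarray(hu_values, dtype=float)
    area_water = np.sum(hu / 1000.0 + 1.0) * pixel_area_mm2
    return 2.0 * np.sqrt(area_water / np.pi)

# Sanity check: a uniform water disc (HU = 0) returns its own diameter.
n_pixels = int(round(np.pi * 100.0**2))   # ~200 mm diameter disc, 1 mm^2 pixels
dw = water_equivalent_diameter(np.zeros(n_pixels), 1.0)
```

The same quantity can be derived either from the CT localizer radiograph or from reconstructed image data, which is exactly the actual-topo vs. sim-topo distinction the abstract draws.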

  1. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The process of obtaining the marginal spectrum in the HHT method is given and the linear property of the marginal spectrum is demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum are further analyzed. The Hilbert spectrum estimation algorithm is then discussed in detail, and simulation results are given at last. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
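Given the intrinsic mode functions (IMFs) produced by empirical mode decomposition, the marginal spectrum is the Hilbert spectrum summed over time. The sketch below skips the EMD step and hands a pure 5 Hz tone to the Hilbert stage (scipy assumed available; bin choices are arbitrary):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
imf = np.sin(2.0 * np.pi * 5.0 * t)     # stand-in for an IMF produced by EMD

analytic = hilbert(imf)                  # analytic signal via Hilbert transform
amplitude = np.abs(analytic)             # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency, Hz

# Marginal spectrum: accumulate squared amplitude over instantaneous frequency.
bins = np.arange(0.0, 20.0, 0.5)
marginal, edges = np.histogram(inst_freq, bins=bins, weights=amplitude[:-1] ** 2)
```

Because frequency is defined instantaneously rather than over the whole record, this spectrum remains meaningful for short, non-stationary signals, which is the abstract's central claim against the FFT periodogram.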

  2. Base Flow Model Validation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  3. Robust Backlash Estimation for Industrial Drive-Train Systems—Theory and Validation

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2018-01-01

    Backlash compensation is used in modern machine-tool controls to ensure high-accuracy positioning. When wear of a machine causes the deadzone width to increase, high-accuracy control may be maintained if the deadzone is accurately estimated. Deadzone estimation is also an important parameter to indica......-of-the-art Siemens equipment. The experiments validate the theory and show that expected performance and robustness to parameter uncertainties are both achieved....

  4. Validation of radiation dose estimations in VRdose: comparing estimated radiation doses with observed radiation doses

    International Nuclear Information System (INIS)

    Nystad, Espen; Sebok, Angelia; Meyer, Geir

    2004-04-01

    The Halden Virtual Reality Centre has developed work-planning software that predicts the radiation exposure of workers in contaminated areas. To validate the accuracy of the predicted radiation dosages, it is necessary to compare predicted doses to actual dosages. During an experimental study conducted at the Halden Boiling Water Reactor (HBWR) hall, the radiation exposure was measured for all participants throughout the test session, ref. HWR-681 [3]. Data from this experimental study have also been used to model tasks in the work-planning software and gather data for predicted radiation exposure. Two different methods were used to predict radiation dosages; one method used all radiation data from all the floor levels in the HBWR (all-data method). The other used only data from the floor level where the task was conducted (isolated data method). The study showed that the all-data method gave predictions that were on average 2.3 times higher than the actual radiation dosages. The isolated-data method gave predictions on average 0.9 times the actual dosages. (Author)

  5. Estimation of skull table thickness with clinical CT and validation with microCT.

    Science.gov (United States)

    Lillie, Elizabeth M; Urban, Jillian E; Weaver, Ashley A; Powers, Alexander K; Stitzel, Joel D

    2015-01-01

    Brain injuries resulting from motor vehicle crashes (MVC) are extremely common, yet the details of the mechanism of injury remain to be well characterized. Skull deformation is believed to be a contributing factor to some types of traumatic brain injury (TBI). Understanding biomechanical contributors to skull deformation would provide further insight into the mechanism of head injury resulting from blunt trauma. In particular, skull thickness is thought to be a very important factor governing deformation of the skull and its propensity for fracture. Current computed tomography (CT) technology is limited in its ability to accurately measure cortical thickness using standard techniques. A method to evaluate cortical thickness using cortical density measured from CT data has been developed previously. This effort validates this technique for measurement of skull table thickness in clinical head CT scans using two postmortem human specimens. Bone samples were harvested from the skulls of two cadavers and scanned with microCT to evaluate the accuracy of the estimated cortical thickness measured from clinical CT. Clinical scans were collected at 0.488 and 0.625 mm in-plane resolution with 0.625 mm slice thickness. The overall cortical thickness error was determined to be 0.078 ± 0.58 mm for cortical samples thinner than 4 mm. It was determined that 91.3% of these differences fell within the scanner resolution. Color maps of clinical CT thickness estimations are comparable to color maps of microCT thickness measurements, indicating good quantitative agreement. These data confirm that the cortical density algorithm successfully estimates skull table thickness from clinical CT scans. The application of this technique to clinical CT scans enables evaluation of cortical thickness in population-based studies. © 2014 Anatomical Society.

  6. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    Science.gov (United States)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility in use at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  7. Validity of 20-metre multi stage shuttle run test for estimation of ...

    African Journals Online (AJOL)

    Validity of the 20-metre multi-stage shuttle run test for estimation of maximum oxygen uptake in Indian male university students. P Chatterjee, AK Banerjee, P Debnath, P Bas, B Chatterjee. Abstract. No Abstract. South African Journal for Physical, Health Education, Recreation and Dance, Vol. 12(4) 2006: pp. 461-467. Full Text:.

  8. Validity of Two New Brief Instruments to Estimate Vegetable Intake in Adults

    Directory of Open Access Journals (Sweden)

    Janine Wright

    2015-08-01

    Full Text Available Cost-effective population-based monitoring tools are needed for nutritional surveillance and interventions. The aim was to evaluate the relative validity of two new brief instruments (three-item: VEG3 and five-item: VEG5) for estimating usual total vegetable intake in comparison to a 7-day dietary record (7DDR). Sixty-four Australian adult volunteers aged 30 to 69 years participated (30 males, mean age ± SD 56.3 ± 9.2 years, and 34 females, mean age ± SD 55.3 ± 10.0 years). Pearson correlations between the 7DDR and VEG3 and VEG5 were modest, at 0.50 and 0.56, respectively. VEG3 significantly (p < 0.001) underestimated mean vegetable intake compared to 7DDR measures (2.9 ± 1.3 vs. 3.6 ± 1.6 serves/day, respectively), whereas mean vegetable intake assessed by VEG5 did not differ from 7DDR measures (3.3 ± 1.5 vs. 3.6 ± 1.6 serves/day). VEG5 was also able to correctly identify 95%, 88% and 75% of those subjects not consuming five, four and three serves/day of vegetables, respectively, according to their 7DDR classification. VEG5, but not VEG3, can estimate the usual total vegetable intake of population groups and had superior performance to VEG3 in identifying those not meeting different levels of vegetable intake. VEG5, a brief instrument, shows measurement characteristics useful for population-based monitoring and intervention targeting.

  9. Postprocessing MPEG based on estimated quantization parameters

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2009-01-01

    the case where the coded stream is not accessible, or where it is from an architectural point of view not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used...... in the postprocessing. We focus on deringing and present a scheme which aims at suppressing ringing artifacts while maintaining the sharpness of the texture. The goal is to improve the visual quality, so perceptual blur and ringing metrics are used in addition to PSNR evaluation. The performance of the new `pure......' postprocessing compares favorably to a reference postprocessing filter which has access to the quantization parameters not only for I-frames but also for P- and B-frames....

  10. Validation of computer code TRAFIC used for estimation of charcoal heatup in containment ventilation systems

    International Nuclear Information System (INIS)

    Yadav, D.H.; Datta, D.; Malhotra, P.K.; Ghadge, S.G.; Bajaj, S.S.

    2005-01-01

    Full text of publication follows: Standard Indian PHWRs are provided with a Primary Containment Filtration and Pump-Back System (PCFPB) incorporating charcoal filters in the ventilation circuit to remove radioactive iodine that may be released from the reactor core into the containment during LOCA + ECCS failure, which is a Design Basis Accident for containment of radioactive release. This system is provided with two identical air circulation loops, each having 2 full-capacity fans (1 operating and 1 standby) for a bank of four combined charcoal and High Efficiency Particulate Air (HEPA) filters, in addition to other filters. While the filtration circuit is designed to operate under forced flow conditions, it is of interest to understand the performance of the charcoal filters in the event of failure of the fans after operating for some time, i.e., when the radio-iodine inventory is at its peak value. It is of interest to check whether the buoyancy-driven natural circulation occurring in the filtration circuit is sufficient to keep the temperature in the charcoal under safe limits. A computer code TRAFIC (Transient Analysis of Filters in Containment) was developed using a conservative one-dimensional model to analyze the system. Suitable parametric studies were carried out to understand the problem and to assess the safety of the existing system. The TRAFIC code has two important components: the first estimates the heat generation in the charcoal filter based on the 'Source Term', while the other performs thermal-hydraulic computations. In an attempt to validate the code, experimental studies have been carried out. For this purpose, an experimental setup comprising a scaled-down model of the filtration circuit, with heating coils embedded in the charcoal to simulate the heating effect due to radio-iodine, has been constructed. The present work of validation consists of utilizing the results obtained from experiments conducted for different heat loads, elevations and adsorbent

  11. Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.

    Science.gov (United States)

    Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M

    2016-03-11

    Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate predictive equations of BIA for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets, aged 17-24 years, were included. The study used eight published predictive BIA equations, a specific equation in FFM estimation, and dual-energy X-ray absorptiometry (DXA) as a reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. Predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman. Predictive BIA equations explained 68% to 88% of FFM variance. Specific BIA equations showed no significant differences in FFM compared to DXA values. Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equations, developed in this study, demonstrated validity for this sample, although they should be used with caution in samples with a large range of FFM.

  12. Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets

    Directory of Open Access Journals (Sweden)

    Raquel D. Langer

    2016-03-01

    Full Text Available Background: Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate predictive equations of BIA for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. Methods: A total of 396 male Brazilian Army cadets, aged 17–24 years, were included. The study used eight published predictive BIA equations, a specific equation in FFM estimation, and dual-energy X-ray absorptiometry (DXA) as a reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland–Altman method were used to test the validity of the BIA equations. Results: Predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland–Altman. Predictive BIA equations explained 68% to 88% of FFM variance. Specific BIA equations showed no significant differences in FFM compared to DXA values. Conclusion: Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equations, developed in this study, demonstrated validity for this sample, although they should be used with caution in samples with a large range of FFM.
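
    The Bland–Altman analysis used in this study summarizes agreement between two methods as the mean of the paired differences (bias) plus 95% limits of agreement, bias ± 1.96 SD. A generic sketch of that computation, not tied to the study's data:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)   # systematic difference between the methods
    sd = stdev(diffs)    # spread of the individual differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    Wide limits of agreement, as reported for the published BIA equations here, mean individual predictions can stray far from the reference even when the mean bias looks small.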

  13. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods have weak completeness: they are carried out at a single scale and depend on human experience. An SDG (Signed Directed Graph) and qualitative trend based multiple-scale validation method is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.

  14. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time.

    Science.gov (United States)

    Martin, Corby K; Correa, John B; Han, Hongmei; Allen, H Raymond; Rood, Jennifer C; Champagne, Catherine M; Gunturk, Bahadir K; Bray, George A

    2012-04-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1's objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake (EI) with the Remote Food Photography Method (RFPM) over 6 days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, EI estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n = 24) or Customized Prompts (n = 16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating EI when Standard (mean ± s.d. = -895 ± 770 kcal/day, P < 0.0001), but not Customized Prompts (-270 ± 748 kcal/day, P = 0.22) were used. Error (EI from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM's ability to accurately estimate EI in free-living adults (N = 50) over 6 days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living EI (-152 ± 694 kcal/day, P = 0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake.
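
    The validity metric in both studies is the per-participant error, EI from the RFPM minus EI from DLW, summarized as mean ± SD. A minimal sketch of that summary with a one-sample t statistic against zero bias (the kcal values in the test are hypothetical):

```python
import math
from statistics import mean, stdev

def intake_error_summary(rfpm_kcal, dlw_kcal):
    """Mean +/- SD of per-participant error (RFPM minus DLW) and a t statistic vs zero."""
    errors = [r - d for r, d in zip(rfpm_kcal, dlw_kcal)]
    m, s, n = mean(errors), stdev(errors), len(errors)
    t = m / (s / math.sqrt(n))  # one-sample t statistic against zero bias
    return m, s, t
```

    A large negative mean error with a significant t statistic corresponds to the underreporting seen with Standard Prompts; a small, non-significant error corresponds to the Customized-Prompt and Study 2 results.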

  15. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time

    Science.gov (United States)

    Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.

    2014-01-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, energy intake estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n=24) or Customized Prompts (n=16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating energy intake when Standard (mean±SD = −895±770 kcal/day, p<.0001), but not Customized Prompts (−270±748 kcal/day, p=.22) were used. Error (energy intake from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM’s ability to accurately estimate energy intake in free-living adults (N=50) over six days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living energy intake (−152±694 kcal/day, p=0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake. PMID:22134199

  16. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Science.gov (United States)

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
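
    Both estimators compared in this study have simple closed forms: the Lincoln-Petersen mark-recapture estimator (shown below in Chapman's bias-corrected form) and the two-pass removal estimator of Seber and Le Cren, which assumes constant capture probability and declining catches between passes. A sketch:

```python
def lincoln_petersen(marked, caught, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

def two_pass_removal(c1, c2):
    """Seber-Le Cren two-pass removal estimate; assumes equal effort and c1 > c2."""
    if c1 <= c2:
        raise ValueError("removal model assumes declining catches (c1 > c2)")
    n_hat = c1 ** 2 / (c1 - c2)   # estimated abundance
    p_hat = (c1 - c2) / c1        # estimated per-pass capture probability
    return n_hat, p_hat
```

    If capture efficiency declines between passes, as observed in this study, the drop from c1 to c2 is exaggerated relative to the constant-probability assumption, so n_hat underestimates true abundance while p_hat is inflated, matching the biases reported above.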

  17. ESTIMATION OF STATURE BASED ON FOOT LENGTH

    Directory of Open Access Journals (Sweden)

    Vidyullatha Shetty

    2015-01-01

    Full Text Available BACKGROUND: Stature is the height of the person in the upright posture. It is an important measure of physical identity. Estimation of body height from its segments or dismembered parts has important considerations for the identification of a living or dead human body, or of remains recovered from disasters or other similar conditions. OBJECTIVE: Stature is an important indicator for identification. There are numerous means to establish stature, and their significance lies in the simplicity of measurement, applicability and accuracy in prediction. The aim of the study was to review the relationship between foot length and body height. METHODS: The present study reviews various prospective studies which were done to estimate stature. All the measurements were taken using standard measuring devices and standard anthropometric techniques. RESULTS: This review shows that there is a correlation between stature and foot dimensions; it is found to be positive and statistically highly significant. Prediction of stature was found to be most accurate by multiple regression analysis. CONCLUSIONS: Stature and gender estimation can be done by using foot measurements, and the study will help in medico-legal cases in establishing the identity of an individual; this would be useful for Anatomists and Anthropologists to calculate stature based on foot length.
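
    The regression approach the review describes amounts to fitting stature = a + b × foot length by ordinary least squares and predicting from the fitted line. A minimal sketch; the foot-length/stature pairs below are hypothetical and placed exactly on a line for clarity:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical foot-length (cm) vs stature (cm) pairs, illustration only
foot = [24.0, 25.0, 26.0, 27.0]
stature = [160.0, 165.0, 170.0, 175.0]
a, b = fit_line(foot, stature)
predicted_stature = a + b * 25.5  # estimate for an unseen foot length
```

    Multiple regression, which the review found most accurate, simply extends this to several predictors (e.g. foot length plus sex).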

  18. Validity of parent-reported weight and height of preschool children measured at home or estimated without home measurement: a validation study

    Directory of Open Access Journals (Sweden)

    Cox Bianca

    2011-07-01

    Full Text Available Abstract Background Parental reports are often used in large-scale surveys to assess children's body mass index (BMI). Therefore, it is important to know to what extent these parental reports are valid and whether it makes a difference if the parents measured their children's weight and height at home or whether they simply estimated these values. The aim of this study is to compare the validity of parent-reported height, weight and BMI values of preschool children (3-7 y old), when measured at home or estimated by parents without actual measurement. Methods The subjects were 297 Belgian preschool children (52.9% male); the participation rate was 73%. A questionnaire including questions about the height and weight of the children was completed by the parents. Nurses measured height and weight following standardised procedures. International age- and sex-specific BMI cut-off values were employed to determine categories of weight status and obesity. Results On the group level, no important differences in accuracy of reported height, weight and BMI were identified between parent-measured or estimated values. However, for all 3 parameters, the correlations between parental reports and nurse measurements were higher in the group of children whose body dimensions were measured by the parents. Sensitivity for underweight and overweight/obesity was 73% and 47%, respectively, when parents measured their child's height and weight, and 55% and 47% when parents estimated values without measurement. Specificity for underweight and overweight/obesity was 82% and 97%, respectively, when parents measured the children, and 75% and 93% with parent estimations. Conclusions Diagnostic measures were more accurate when parents measured their child's weight and height at home than when those dimensions were based on parental judgements.
    When parent-reported data on an individual level is used, the accuracy could be improved by encouraging the parents to measure weight and height
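
    The sensitivity and specificity figures above come from cross-classifying the parent-derived weight category against the nurse-measured category. A generic sketch of that computation over (parent, nurse) classification pairs:

```python
def sensitivity_specificity(pairs):
    """pairs of booleans: (parent_classified_positive, nurse_classified_positive)."""
    tp = sum(1 for p, t in pairs if p and t)          # both flag the condition
    fn = sum(1 for p, t in pairs if not p and t)      # nurse flags it, parent misses
    tn = sum(1 for p, t in pairs if not p and not t)  # both say normal
    fp = sum(1 for p, t in pairs if p and not t)      # parent flags it, nurse does not
    return tp / (tp + fn), tn / (tn + fp)
```

    The study's pattern of high specificity but modest sensitivity means parents rarely mislabel a normal-weight child, but often miss true overweight/obesity.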

  19. Policy and Validity Prospects for Performance-Based Assessment.

    Science.gov (United States)

    Baker, Eva L.; And Others

    1994-01-01

    This article describes performance-based assessment as expounded by its proponents, comments on these conceptions, reviews evidence regarding the technical quality of performance-based assessment, and considers its validity under various policy options. (JDD)

  20. Validation and selection of ODE based systems biology models: how to arrive at more reliable decisions.

    Science.gov (United States)

    Hasdemir, Dicle; Hoefsloot, Huub C J; Smilde, Age K

    2015-07-08

    Most ordinary differential equation (ODE) based modeling studies in systems biology involve a hold-out validation step for model validation. In this framework, a pre-determined part of the data is used as validation data and therefore is not used for estimating the parameters of the model. The model is assumed to be validated if the model predictions on the validation dataset show good agreement with the data. Model selection between alternative model structures can also be performed in the same setting, based on the predictive power of the model structures on the validation dataset. However, the drawbacks associated with this approach are usually underestimated. We have carried out simulations using a recently published High Osmolarity Glycerol (HOG) pathway model from S. cerevisiae to demonstrate these drawbacks. We have shown that it is very important how the data is partitioned and which part of the data is used for validation purposes. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Furthermore, finding sensible partitioning schemes that would lead to reliable decisions is heavily dependent on the biology and unknown model parameters, which turns the problem into a paradox. This brings the need for alternative validation approaches that offer flexible partitioning of the data. For this purpose, we have introduced a stratified random cross-validation (SRCV) approach that successfully overcomes these limitations. SRCV leads to more stable decisions for both validation and selection which are not biased by underlying biological phenomena. Furthermore, it is less dependent on the specific noise realization in the data. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.
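
    The paper's exact SRCV procedure is not specified in this abstract; the sketch below shows only the generic idea of stratified fold assignment, spreading each stratum's members across k folds before cross-validation (all names and the round-robin scheme are illustrative assumptions):

```python
import random

def stratified_folds(sample_ids, strata, k, seed=0):
    """Assign samples to k folds so each stratum is spread evenly across folds."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    by_stratum = {}
    for sid, s in zip(sample_ids, strata):
        by_stratum.setdefault(s, []).append(sid)
    for members in by_stratum.values():
        rng.shuffle(members)              # random within-stratum order
        for j, sid in enumerate(members):
            folds[j % k].append(sid)      # deal members round-robin across folds
    return folds
```

    Averaging validation error over all k folds removes the dependence on any single arbitrary partition, which is the failure mode of hold-out validation that the study demonstrates.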

  1. Validity of a self-administered food frequency questionnaire (FFQ and its generalizability to the estimation of dietary folate intake in Japan

    Directory of Open Access Journals (Sweden)

    Iso Hiroyasu

    2005-10-01

    Full Text Available Abstract Background In an epidemiological study, it is essential to test the validity of the food frequency questionnaire (FFQ) for its ability to estimate dietary intake. The objectives of our study were to validate a FFQ for estimating folate intake and to identify the foods that contribute to inter-individual variation of folate intake in the Japanese population. Methods Validity of the FFQ was evaluated using 28-day weighed dietary records (DRs) as the gold standard in two groups independently. In the group for which the FFQ was developed, validity was evaluated by Spearman's correlation coefficients (CCs), and linear regression analysis was used to identify foods with large inter-individual variation. The cumulative mean intake of these foods was compared with total intake estimated by the DR. The external validity of the FFQ and intake from foods on the same list were evaluated in the other group to verify generalizability. Subjects were a subsample from the Japan Public Health Center-based Prospective Study who volunteered to participate in the FFQ validation study. Results CCs for the internal validity of the FFQ were 0.49 for men and 0.29 for women, while CCs for external validity were 0.33 for men and 0.42 for women. CCs for cumulative folate intake from 33 foods selected by regression analysis were also applicable to an external population. Conclusion Our FFQ was valid for and generalizable to the estimation of folate intake. Foods identified as predictors of inter-individual variation in folate intake were also generalizable in Japanese populations. The FFQ with 138 foods was valid for the estimation of folate intake, while that with 33 foods might be useful for estimating inter-individual variation and ranking of individual folate intake.
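
    Spearman's correlation coefficient, the validity measure used here, is the Pearson correlation of ranks; with no tied values it reduces to the familiar 1 − 6Σd²/(n(n² − 1)) formula. A minimal sketch (tie handling omitted for brevity):

```python
def spearman(x, y):
    """Spearman rank correlation; assumes no tied values for brevity."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

    Rank-based correlation is the natural choice for FFQ validation because the goal is correct ranking of individuals by intake, not exact agreement in absolute amounts.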

  2. Validity of a self-administered food frequency questionnaire (FFQ) and its generalizability to the estimation of dietary folate intake in Japan

    Science.gov (United States)

    Ishihara, Junko; Yamamoto, Seiichiro; Iso, Hiroyasu; Inoue, Manami; Tsugane, Shoichiro

    2005-01-01

    Background In an epidemiological study, it is essential to test the validity of the food frequency questionnaire (FFQ) for its ability to estimate dietary intake. The objectives of our study were to validate a FFQ for estimating folate intake and to identify the foods that contribute to inter-individual variation of folate intake in the Japanese population. Methods Validity of the FFQ was evaluated using 28-day weighed dietary records (DRs) as gold standard in the two groups independently. In the group for which the FFQ was developed, validity was evaluated by Spearman's correlation coefficients (CCs), and linear regression analysis was used to identify foods with large inter-individual variation. The cumulative mean intake of these foods was compared with total intake estimated by the DR. The external validity of the FFQ and intake from foods on the same list were evaluated in the other group to verify generalizability. Subjects were a subsample from the Japan Public Health Center-based prospective Study who volunteered to participate in the FFQ validation study. Results CCs for the internal validity of the FFQ were 0.49 for men and 0.29 for women, while CCs for external validity were 0.33 for men and 0.42 for women. CCs for cumulative folate intake from 33 foods selected by regression analysis were also applicable to an external population. Conclusion Our FFQ was valid for and generalizable to the estimation of folate intake. Foods identified as predictors of inter-individual variation in folate intake were also generalizable in Japanese populations. The FFQ with 138 foods was valid for the estimation of folate intake, while that with 33 foods might be useful for estimating inter-individual variation and ranking of individual folate intake. PMID:16202175

  3. A validated HPTLC method for estimation of moxifloxacin hydrochloride in tablets.

    Science.gov (United States)

    Dhillon, Vandana; Chaudhary, Alok Kumar

    2010-10-01

    A simple HPTLC method with high accuracy, precision and reproducibility was developed for the routine estimation of moxifloxacin hydrochloride in tablets available on the market, and was validated for various parameters according to ICH guidelines. Moxifloxacin hydrochloride was estimated at 292 nm by densitometry using Silica gel 60 F254 as the stationary phase and a premix of methylene chloride:methanol:strong ammonia solution:acetonitrile (10:10:5:10) as the mobile phase. The method was linear over a range of 9-54 nanograms with a correlation coefficient >0.99. The regression equation was: AUC = 65.57 × (Amount in nanograms) + 163 (r² = 0.9908).
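
    In routine use, a calibration line like the one reported (AUC = 65.57 × ng + 163) is inverted to estimate the amount applied from a measured peak area. A minimal sketch using the reported coefficients:

```python
# Reported calibration: AUC = 65.57 * (amount in ng) + 163
SLOPE, INTERCEPT = 65.57, 163.0

def amount_ng_from_auc(auc):
    """Invert the calibration line to recover the applied amount in nanograms."""
    return (auc - INTERCEPT) / SLOPE
```

    The inversion is valid only inside the calibrated 9-54 ng range; extrapolating beyond the demonstrated linearity range is not supported by the validation.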

  4. A Novel Rules Based Approach for Estimating Software Birthmark

    Science.gov (United States)

    Binti Alias, Norma; Anwar, Sajid

    2015-01-01

    A software birthmark is a unique quality of software that can be used to detect software theft. Comparing the birthmarks of two programs can tell us whether one is a copy of the other. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing software without proper permission, as specified in the desired license agreement. The estimation of a birthmark can play a key role in understanding its effectiveness. In this paper, a new technique is presented to evaluate and estimate a software birthmark based on the two most sought-after properties of birthmarks, namely credibility and resilience. For this purpose, concepts from soft computing such as probabilistic and fuzzy computing have been taken into account, and fuzzy logic is used to estimate the properties of the birthmark. The proposed fuzzy rule based technique is validated through a case study, and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363

  5. Development and validation of GFR-estimating equations using diabetes, transplant and weight

    DEFF Research Database (Denmark)

    Stevens, L.A.; Schmid, C.H.; Zhang, Y.L.

    2009-01-01

    BACKGROUND: We have reported a new equation (CKD-EPI equation) that reduces bias and improves accuracy for GFR estimation compared to the MDRD study equation while using the same four basic predictor variables: creatinine, age, sex and race. Here, we describe the development and validation...... of this equation as well as other equations that incorporate diabetes, transplant and weight as additional predictor variables. METHODS: Linear regression was used to relate log-measured GFR (mGFR) to sex, race, diabetes, transplant, weight, various transformations of creatinine and age with and without...... interactions. Equations were developed in a pooled database of 10 studies [2/3 (N = 5504) for development and 1/3 (N = 2750) for internal validation], and final model selection occurred in 16 additional studies [external validation (N = 3896)]. RESULTS: The mean mGFR was 68, 67 and 68 ml/min/1.73 m(2)......
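
    The model form described in the Methods, a linear regression of log-measured GFR on transformed creatinine, age, sex and other predictors, is evaluated as GFR = exp(b0 + b1·ln(Scr) + ...). The sketch below uses made-up coefficients purely to show the functional form; it is NOT the published CKD-EPI equation:

```python
import math

# Made-up illustrative coefficients -- NOT the published CKD-EPI values
COEF = {"intercept": 4.5, "log_scr": -1.0, "age": -0.003, "female": -0.02}

def egfr(scr_mg_dl, age_years, female):
    """Evaluate a log-linear GFR model: GFR = exp(b0 + b1*ln(Scr) + b2*age + b3*female)."""
    z = (COEF["intercept"]
         + COEF["log_scr"] * math.log(scr_mg_dl)
         + COEF["age"] * age_years
         + COEF["female"] * (1.0 if female else 0.0))
    return math.exp(z)
```

    Fitting on the log scale keeps predicted GFR positive and makes the creatinine effect multiplicative, which is why the study regresses log(mGFR) rather than mGFR itself.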

  6. Development, Validation, and Verification of a Self-Assessment Tool to Estimate Agnibala (Digestive Strength).

    Science.gov (United States)

    Singh, Aparna; Singh, Girish; Patwardhan, Kishor; Gehlot, Sangeeta

    2017-01-01

    According to Ayurveda, the traditional system of healthcare of Indian origin, Agni is the factor responsible for digestion and metabolism. Four functional states (Agnibala) of Agni have been recognized: regular, irregular, intense, and weak. The objective of the present study was to develop and validate a self-assessment tool to estimate Agnibala. The developed tool was evaluated for its reliability and validity by administering it to 300 healthy volunteers of either gender belonging to the 18 to 40-year age group. Besides confirming the statistical validity and reliability, the practical utility of the newly developed tool was also evaluated by recording the serum lipid parameters of all the volunteers. The results show that the lipid parameters vary significantly according to the status of Agni. The tool, therefore, may be used to screen the normal population to look for possible susceptibility to certain health conditions. © The Author(s) 2016.

  7. Competency-Based Training and Simulation: Making a "Valid" Argument.

    Science.gov (United States)

    Noureldin, Yasser A; Lee, Jason Y; McDougall, Elspeth M; Sweet, Robert M

    2018-02-01

    The use of simulation as an assessment tool is much more controversial than is its utility as an educational tool. However, without valid simulation-based assessment tools, the ability to objectively assess technical skill competencies in a competency-based medical education framework will remain challenging. The current literature in urologic simulation-based training and assessment uses a definition and framework of validity that is now outdated. This is probably due to the absence of awareness rather than an absence of comprehension. The following review article provides the urologic community an updated taxonomy on validity theory as it relates to simulation-based training and assessments and translates our simulation literature to date into this framework. While the old taxonomy considered validity as distinct subcategories and focused on the simulator itself, the modern taxonomy, for which we translate the literature evidence, considers validity as a unitary construct with a focus on interpretation of simulator data/scores.

  8. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research
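
    The MMDM adjustment mentioned above converts a grid abundance estimate N̂ into a density by inflating the trapping-grid area with a boundary strip, commonly taken as half the mean maximum distance moved. A sketch of that effective-area calculation (a rectangular grid and the half-MMDM strip width are illustrative assumptions):

```python
def density_per_hectare(n_hat, grid_w_m, grid_h_m, mmdm_m):
    """Density from a grid abundance estimate using a boundary strip of MMDM/2."""
    strip = mmdm_m / 2.0                                # boundary-strip width (m)
    area_m2 = (grid_w_m + 2 * strip) * (grid_h_m + 2 * strip)
    return n_hat / (area_m2 / 10_000.0)                 # animals per hectare
```

    The choice of strip width drives the density estimate directly, which is why the study treats the MMDM-based area adjustments as resting on assumptions of uncertain validity.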

  9. Validation and Intercomparison of Ocean Color Algorithms for Estimating Particulate Organic Carbon in the Oceans

    Directory of Open Access Journals (Sweden)

    Hayley Evers-King

    2017-08-01

    Full Text Available Particulate Organic Carbon (POC) plays a vital role in the ocean carbon cycle. Though relatively small compared with other carbon pools, the POC pool is responsible for large fluxes and is linked to many important ocean biogeochemical processes. The satellite ocean-color signal is influenced by particle composition, size, and concentration and provides a way to observe variability in the POC pool at a range of temporal and spatial scales. To provide accurate estimates of POC concentration from satellite ocean color data requires algorithms that are well validated, with uncertainties characterized. Here, a number of algorithms to derive POC using different optical variables are applied to merged satellite ocean color data provided by the Ocean Color Climate Change Initiative (OC-CCI) and validated against the largest database of in situ POC measurements currently available. The results of this validation exercise indicate satisfactory levels of performance from several algorithms (highest performance was observed from the algorithms of Loisel et al., 2002 and Stramski et al., 2008) and uncertainties that are within the requirements of the user community. Estimates of the standing stock of POC can be made by applying these algorithms, yielding an estimated mixed-layer integrated global stock of between 0.77 and 1.3 Pg C. Performance of the algorithms varies regionally, suggesting that blending of region-specific algorithms may provide the best way forward for generating global POC products.
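
    Band-ratio POC algorithms of the kind evaluated here are power laws in the blue-to-green remote-sensing reflectance ratio, POC = a·(Rrs(blue)/Rrs(green))^b. The default coefficients below are those commonly cited for the Stramski et al. (2008) algorithm; treat them as an assumption and verify against the original paper before use:

```python
def poc_band_ratio(rrs_blue, rrs_green, a=203.2, b=-1.034):
    """Power-law band-ratio POC estimate in mg m^-3 (coefficients commonly cited
    for Stramski et al., 2008; verify against the original source)."""
    return a * (rrs_blue / rrs_green) ** b
```

    The negative exponent means bluer water (a higher blue-to-green ratio) yields lower POC, consistent with clear oligotrophic gyres holding the smallest standing stocks.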

  10. A comparative study of soft sensor design for lipid estimation of microalgal photobioreactor system with experimental validation.

    Science.gov (United States)

    Yoo, Sung Jin; Jung, Dong Hwi; Kim, Jung Hun; Lee, Jong Min

    2015-03-01

This study examines the applicability of various nonlinear estimators for online estimation of the lipid concentration in a microalgae cultivation system. Lipid is a useful bio-product with many applications, including biofuels and bioactives. However, improving lipid productivity through real-time monitoring and control with experimental validation is limited because measuring lipid in microalgae is a difficult and time-consuming task. In this study, estimation of lipid concentration from other measurable sources, such as biomass or glucose sensors, was studied. The extended Kalman filter (EKF), unscented Kalman filter (UKF), and particle filter (PF) were compared across various cases for their applicability to photobioreactor systems. Furthermore, simulation studies to identify appropriate types of sensors for estimating lipid were also performed. Based on the case studies, the most effective configuration was validated with experimental data, and the UKF and PF with time-varying system noise covariance were found to be effective for the microalgal photobioreactor system.
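A particle filter of the kind compared in this study can be sketched in a few lines. The sketch below is a generic bootstrap PF tracking one hidden state from a related noisy measurement; the linear observation model, noise levels, and synthetic data are all hypothetical stand-ins, not the study's photobioreactor model.

```python
import math
import random

# Minimal bootstrap particle filter: propagate, weight by measurement
# likelihood, estimate as the weighted mean, then resample.
# All models and noise levels here are hypothetical.

def particle_filter(ys, n_particles=500, q=0.05, r=0.2, seed=1):
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in ys:
        # Propagate particles with process noise q.
        particles = [p + rng.gauss(0.0, q) for p in particles]
        # Weight by a Gaussian measurement likelihood with std r.
        weights = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]
        total = sum(weights) or 1e-300
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

if __name__ == "__main__":
    rng = random.Random(0)
    truth = [0.5 + 0.01 * t for t in range(50)]    # slowly rising hidden state
    ys = [x + rng.gauss(0.0, 0.2) for x in truth]  # noisy sensor readings
    est = particle_filter(ys)
    rmse = math.sqrt(sum((e - x) ** 2 for e, x in zip(est, truth)) / len(truth))
    print(f"RMSE = {rmse:.3f}")
```

In the study's setting, the hidden state would be lipid concentration and the measurement a biomass- or glucose-sensor signal; a time-varying process noise q would replace the constant used here.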

  11. Development and test validation of a computational scheme for high-fidelity fluence estimations of the Swiss BWRs

    International Nuclear Information System (INIS)

    Vasiliev, A.; Wieselquist, W.; Ferroukhi, H.; Canepa, S.; Heldt, J.; Ledergerber, G.

    2011-01-01

One of the current objectives within reactor analysis related projects at the Paul Scherrer Institut is the establishment of a comprehensive computational methodology for fast neutron fluence (FNF) estimations of reactor pressure vessels (RPV) and internals for both PWRs and BWRs. In the recent past, such an integral calculational methodology based on the CASMO-4/SIMULATE-3/MCNPX system of codes was developed for PWRs and validated against RPV scraping tests. Based on the very satisfactory validation results, the methodology was recently applied for predictive FNF evaluations of a Swiss PWR to support the national nuclear safety inspectorate in the framework of life-time estimations. Today, focus at PSI is on developing a corresponding advanced methodology for high-fidelity FNF estimations of BWR reactors. In this paper, the preliminary steps undertaken in that direction are presented. To start, the concepts of the PWR computational scheme and its transfer/adaptation to BWRs are outlined. Then, the modelling of a Swiss BWR characterized by very heterogeneous core designs is presented, along with preliminary sensitivity studies carried out to assess the level of detail required for the complex core region. Finally, a first validation test case is presented on the basis of two dosimeter monitors irradiated during two recent cycles of the given BWR reactor. The achieved computational results show a satisfactory agreement with measured dosimeter data and thereby illustrate the feasibility of applying the PSI FNF computational scheme to BWRs as well. Further sensitivity/optimization studies are nevertheless necessary in order to consolidate the scheme and to continuously increase the fidelity and reliability of the BWR FNF estimations. (author)

  12. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
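The two tests being contrasted can be sketched side by side. The binomial tail below is the standard closed form; the permutation p-value is the empirical rank of the observed accuracy in a null distribution. In practice that null comes from re-running the full cross-validation on permuted labels (which is exactly what the binomial form fails to capture); the simulated null here is a hypothetical stand-in.

```python
import math
import random

# Binomial test vs. permutation test for a classification accuracy.
# The example counts (12/16 correct) are hypothetical.

def binomial_p_value(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def permutation_p_value(observed_acc, null_accuracies):
    """Empirical p-value with the usual +1 correction."""
    hits = sum(1 for a in null_accuracies if a >= observed_acc)
    return (hits + 1) / (len(null_accuracies) + 1)

if __name__ == "__main__":
    n, k = 16, 12
    print(f"binomial p = {binomial_p_value(k, n):.4f}")  # binomial p = 0.0384
    # A real null distribution would come from cross-validating with
    # permuted labels; here it is merely simulated.
    rng = random.Random(0)
    null = [sum(rng.random() < 0.5 for _ in range(n)) / n for _ in range(999)]
    print(f"permutation p = {permutation_p_value(k / n, null):.4f}")
```

The paper's point is that cross-validation induces dependence between trials, so the true null is wider than Binomial(n, 0.5) and the binomial p-value is anti-conservative.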

  13. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.

  14. Order Tracking Based on Robust Peak Search Instantaneous Frequency Estimation

    International Nuclear Information System (INIS)

    Gao, Y; Guo, Y; Chi, Y L; Qin, S R

    2006-01-01

Order tracking plays an important role in non-stationary vibration analysis of rotating machinery, especially during run-up or coast-down. An instantaneous frequency estimation (IFE) based order-tracking method for rotating machinery is introduced, in which a peak-search algorithm applied to the time-frequency spectrogram is employed to obtain the IFE of vibrations. An improvement to the peak search is proposed that prevents strong non-order components or noise from disturbing the search. Compared with traditional order-tracking methods, IFE-based order tracking is simpler to apply and depends only on software. Tests verify the validity of the method. The method is an effective supplement to traditional methods, and its application in condition monitoring and diagnosis of rotating machinery is foreseeable.
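The robust peak search can be sketched as follows: instead of taking the global maximum of each spectrogram frame (which a strong non-order component can capture), the search is confined to a window around the peak found in the previous frame. The toy spectrogram, window size, and interference bin below are invented for illustration, not taken from the paper.

```python
# Confined peak search across spectrogram frames: track the order
# component by restricting each frame's argmax to a window around the
# previous frame's peak. The data below are a synthetic toy example.

def track_peak(spectrogram, start_bin, half_window=2):
    """Return the peak-bin trajectory across frames."""
    path = []
    current = start_bin
    for frame in spectrogram:
        lo = max(0, current - half_window)
        hi = min(len(frame), current + half_window + 1)
        current = max(range(lo, hi), key=lambda b: frame[b])
        path.append(current)
    return path

if __name__ == "__main__":
    # Toy spectrogram: a ridge drifting upward in frequency, plus a
    # strong stationary interference at bin 9 that a naive global
    # argmax would lock onto.
    frames = []
    for t in range(6):
        frame = [0.0] * 12
        frame[2 + t // 2] = 1.0   # the order component of interest
        frame[9] = 5.0            # strong non-order interference
        frames.append(frame)
    print(track_peak(frames, start_bin=2))  # [2, 2, 3, 3, 4, 4]
```

A global argmax would return bin 9 for every frame; the windowed search follows the drifting order ridge instead, which is the essence of the robustness improvement described above.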

  15. Development and prospective validation of a model estimating risk of readmission in cancer patients.

    Science.gov (United States)

    Schmidt, Carl R; Hefner, Jennifer; McAlearney, Ann S; Graham, Lisa; Johnson, Kristen; Moffatt-Bruce, Susan; Huerta, Timothy; Pawlik, Timothy M; White, Susan

    2018-02-26

Hospital readmissions among cancer patients are common. While several models estimating readmission risk exist, models specific to cancer patients are lacking. A logistic regression model estimating the risk of unplanned 30-day readmission was developed using inpatient admission data from a 2-year period (n = 18 782) at a tertiary cancer hospital. Readmission risk estimates derived from the model were then calculated prospectively over a 10-month period (n = 8616 admissions) and compared with the actual incidence of readmission. There were 2478 (13.2%) unplanned readmissions. Model factors associated with readmission included: emergency department visit within 30 days, >1 admission within 60 days, non-surgical admission, solid malignancy, gastrointestinal cancer, emergency admission, length of stay >5 days, and abnormal sodium, hemoglobin, or white blood cell count. The c-statistic for the model was 0.70. During the 10-month prospective evaluation, risk estimates from the model tracked actual readmission incidence, which ranged from 20.7% in the highest risk category to 9.6% in the lowest. An unplanned readmission risk model developed specifically for cancer patients performs well when validated prospectively. The specificity of the model for cancer patients, EMR incorporation, and prospective validation justify use of the model in future studies designed to reduce and prevent readmissions.
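The c-statistic reported for the model is the probability that a randomly chosen readmitted patient receives a higher risk score than a randomly chosen non-readmitted one, and it can be computed directly as a pairwise concordance. The scores and outcomes below are hypothetical, not the study's data.

```python
# Pairwise c-statistic (area under the ROC curve) for a risk model.
# Ties in score count as half a concordant pair.

def c_statistic(scores, outcomes):
    """Concordance between risk scores and binary outcomes."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    if not pos or not neg:
        raise ValueError("need both outcome classes")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

if __name__ == "__main__":
    scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]   # hypothetical risks
    outcomes = [1, 1, 0, 1, 0, 0, 1, 0]                  # 1 = readmitted
    print(c_statistic(scores, outcomes))  # 0.75
```

A c-statistic of 0.70, as reported, means the model ranks a readmitted patient above a non-readmitted one 70% of the time, which is a moderate but clinically usable level of discrimination.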

  16. Validation of voxel-based morphometry (VBM) based on MRI

    Science.gov (United States)

    Yang, Xueyu; Chen, Kewei; Guo, Xiaojuan; Yao, Li

    2007-03-01

Voxel-based morphometry (VBM) is an automated and objective image analysis technique for detecting differences in the regional concentration or volume of brain tissue based on structural magnetic resonance (MR) images. VBM has been used widely to evaluate brain morphometric differences between populations, but until now there has been no evaluation system for its validation. In this study, a quantitative and objective evaluation system was established in order to assess VBM performance. We recruited twenty normal volunteers (10 males and 10 females, age range 20-26 years, mean age 22.6 years). Firstly, several focal lesions (hippocampus, frontal lobe, anterior cingulate, posterior hippocampus, posterior anterior cingulate) were simulated in selected brain regions using real MRI data. Secondly, optimized VBM was performed to detect structural differences between groups. Thirdly, one-way ANOVA and post-hoc tests were used to assess the accuracy and sensitivity of the VBM analysis. The results revealed that VBM was a good detection tool in the majority of brain regions, even in controversial regions such as the hippocampus. Generally speaking, the more severe the focal lesion, the better the VBM performance. However, the size of the focal lesion had little effect on VBM analysis.

  17. Validity of selected cardiovascular field-based test among Malaysian ...

    African Journals Online (AJOL)

In view of the emerging obesity problem among Malaysians, this research was formulated to validate published tests among healthy female adults. Selected tests, namely the 20-metre multi-stage shuttle run, the 2.4 km run test, the 1-mile walk test and the Harvard step test, were correlated with a laboratory test (Bruce protocol) to find the criterion validity ...

  18. Population-based absolute risk estimation with survey data

    Science.gov (United States)

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
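With the piecewise exponential baseline described above, the absolute risk of a cause k by time t is the integral of the cause-specific hazard times overall survival from all causes. The sketch below evaluates that integral exactly on piecewise-constant hazards; the hazard values and interval widths are hypothetical, and the survey-weighting and influence-function variance machinery of the paper is not reproduced.

```python
import math

# Absolute risk under competing risks with piecewise-constant
# cause-specific hazards: F_k(t) = integral of h_k(u) * S(u) du,
# where S is overall survival from all causes.

def absolute_risk(hazards_by_cause, cause, intervals):
    """hazards_by_cause: {cause: [hazard per interval]}; intervals: widths."""
    risk, cum_hazard = 0.0, 0.0
    for j, width in enumerate(intervals):
        total_h = sum(h[j] for h in hazards_by_cause.values())
        s_start = math.exp(-cum_hazard)
        s_end = math.exp(-(cum_hazard + total_h * width))
        # Exact integral of h_k * S over an interval where S decays
        # exponentially at rate total_h.
        if total_h > 0:
            risk += hazards_by_cause[cause][j] / total_h * (s_start - s_end)
        cum_hazard += total_h * width
    return risk

if __name__ == "__main__":
    # Hypothetical annual hazards over two 5-year intervals.
    hazards = {"cvd": [0.01, 0.02], "cancer": [0.015, 0.015]}
    print(round(absolute_risk(hazards, "cvd", intervals=[5.0, 5.0]), 4))  # 0.128
```

A useful sanity check is that the absolute risks over all causes plus the overall survival probability at the end of follow-up sum to one, which the competing-risks decomposition guarantees.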

  19. Estimating Evapotranspiration Using an Observation Based Terrestrial Water Budget

    Science.gov (United States)

    Rodell, Matthew; McWilliams, Eric B.; Famiglietti, James S.; Beaudoing, Hiroko K.; Nigro, Joseph

    2011-01-01

    Evapotranspiration (ET) is difficult to measure at the scales of climate models and climate variability. While satellite retrieval algorithms do exist, their accuracy is limited by the sparseness of in situ observations available for calibration and validation, which themselves may be unrepresentative of 500m and larger scale satellite footprints and grid pixels. Here, we use a combination of satellite and ground-based observations to close the water budgets of seven continental scale river basins (Mackenzie, Fraser, Nelson, Mississippi, Tocantins, Danube, and Ubangi), estimating mean ET as a residual. For any river basin, ET must equal total precipitation minus net runoff minus the change in total terrestrial water storage (TWS), in order for mass to be conserved. We make use of precipitation from two global observation-based products, archived runoff data, and TWS changes from the Gravity Recovery and Climate Experiment satellite mission. We demonstrate that while uncertainty in the water budget-based estimates of monthly ET is often too large for those estimates to be useful, the uncertainty in the mean annual cycle is small enough that it is practical for evaluating other ET products. Here, we evaluate five land surface model simulations, two operational atmospheric analyses, and a recent global reanalysis product based on our results. An important outcome is that the water budget-based ET time series in two tropical river basins, one in Brazil and the other in central Africa, exhibit a weak annual cycle, which may help to resolve debate about the strength of the annual cycle of ET in such regions and how ET is constrained throughout the year. The methods described will be useful for water and energy budget studies, weather and climate model assessments, and satellite-based ET retrieval optimization.
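The budget-closure identity at the heart of the method is simple enough to state in code: basin ET is precipitation minus net runoff minus the change in terrestrial water storage. The numbers below are hypothetical, and the quadrature combination of term uncertainties is a standard assumption (independent errors), not a detail taken from the paper.

```python
# Terrestrial water-budget residual: ET = P - Q - dTWS, all terms in
# consistent basin-mean units (e.g., mm/month). Values are hypothetical.

def et_residual(precip_mm, runoff_mm, delta_tws_mm):
    """Basin-mean evapotranspiration as the water-budget residual."""
    return precip_mm - runoff_mm - delta_tws_mm

def et_uncertainty(sigma_p, sigma_q, sigma_tws):
    """Residual uncertainty assuming independent errors (quadrature sum)."""
    return (sigma_p**2 + sigma_q**2 + sigma_tws**2) ** 0.5

if __name__ == "__main__":
    # Hypothetical month: 80 mm precipitation, 25 mm net runoff, and a
    # 10 mm storage drop (GRACE-style TWS change).
    print(et_residual(80.0, 25.0, -10.0))  # 65.0 (mm)
    print(et_uncertainty(3.0, 4.0, 12.0))  # 13.0 (mm)
```

The uncertainty line illustrates the paper's caveat: because the residual inherits the errors of all three inputs, monthly ET estimates can carry uncertainties comparable to the signal, while averaging over a mean annual cycle shrinks them to a useful level.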

  20. Long-term monitoring of endangered Laysan ducks: Index validation and population estimates 1998–2012

    Science.gov (United States)

    Reynolds, Michelle H.; Courtot, Karen; Brinck, Kevin W.; Rehkemper, Cynthia; Hatfield, Jeffrey

    2015-01-01

Monitoring endangered wildlife is essential to assessing management or recovery objectives and learning about population status. We tested assumptions of a population index for endangered Laysan duck (or teal; Anas laysanensis) monitored using mark–resight methods on Laysan Island, Hawai’i. We marked 723 Laysan ducks between 1998 and 2009 and identified seasonal surveys through 2012 that met accuracy and precision criteria for estimating population abundance. Our results provide a 15-y time series of seasonal population estimates at Laysan Island. We found differences in detection among seasons and how observed counts related to population estimates. The highest counts and the strongest relationship between count and population estimates occurred in autumn (September–November). The best autumn surveys yielded population abundance estimates that ranged from 674 (95% CI = 619–730) in 2003 to 339 (95% CI = 265–413) in 2012. A population decline of 42% was observed between 2010 and 2012 after consecutive storms and Japan’s Tōhoku earthquake-generated tsunami in 2011. Our results show positive correlations between the seasonal maximum counts and population estimates from the same date, and support the use of standardized bimonthly counts of unmarked birds as a valid index to monitor trends among years within a season at Laysan Island.
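The study's mark–resight models are considerably more elaborate than this, but the basic logic of turning a resight survey of marked and unmarked birds into an abundance estimate can be sketched with the classic bias-corrected Lincoln–Petersen (Chapman) estimator. All counts below are hypothetical.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator:
# N-hat = (M+1)(C+1)/(R+1) - 1, where M birds are marked, C are
# counted on a survey, and R of those counted are marked.
# Counts are hypothetical illustrations.

def chapman_estimate(marked, counted, recaptured):
    """Bias-corrected two-sample abundance estimate."""
    return (marked + 1) * (counted + 1) / (recaptured + 1) - 1

if __name__ == "__main__":
    # 120 marked birds in the population, 200 birds seen on a survey,
    # 60 of which carried marks.
    print(round(chapman_estimate(120, 200, 60)))  # 398
```

The estimator assumes closure and equal sightability of marked and unmarked birds; the seasonal detection differences reported above are exactly the kind of violation that motivates the more sophisticated mark–resight models the authors used.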

  1. Access Based Cost Estimation for Beddown Analysis

    National Research Council Canada - National Science Library

    Pennington, Jasper E

    2006-01-01

    The purpose of this research is to develop an automated web-enabled beddown estimation application for Air Mobility Command in order to increase the effectiveness and enhance the robustness of beddown estimates...

  2. Validation of Agent Based Distillation Movement Algorithms

    National Research Council Canada - National Science Library

    Gill, Andrew

    2003-01-01

Agent based distillations (ABD) are low-resolution abstract models, which can be used to explore questions associated with land combat operations in a short period of time. Movement of agents within the EINSTein and MANA ABDs ...

  3. Base Flow Model Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  4. Development and validation of a two-dimensional fast-response flood estimation model

    Energy Technology Data Exchange (ETDEWEB)

Judi, David R [Los Alamos National Laboratory]; McPherson, Timothy N [Los Alamos National Laboratory]; Burian, Steven J [UNIV OF UTAH]

    2009-01-01

A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
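The upwind differencing idea named above can be illustrated on a drastically reduced problem. The sketch below applies first-order upwinding to 1D linear advection of a step-like "flood front"; it is a stand-in with invented grid settings, not the study's 2D shallow-water solver.

```python
# First-order upwind differencing for 1D linear advection
# u_t + a*u_x = 0 with a > 0: each cell is updated from its upstream
# neighbor. Grid spacing, time step, and initial data are hypothetical.

def upwind_step(u, a, dt, dx):
    """One explicit upwind update; assumes a > 0 (flow to the right)."""
    c = a * dt / dx  # Courant number; must be <= 1 for stability
    return [u[0]] + [u[i] - c * (u[i] - u[i - 1]) for i in range(1, len(u))]

if __name__ == "__main__":
    dx, dt, a = 1.0, 0.5, 1.0
    u = [1.0] * 10 + [0.0] * 30          # step profile: a "flood front"
    for _ in range(20):                  # advance to t = 10
        u = upwind_step(u, a, dt, dx)
    # The front has advected about a*t = 10 cells, with the smearing
    # characteristic of first-order upwind schemes.
    front = next(i for i, v in enumerate(u) if v < 0.5)
    print(f"front near cell {front}")
```

Upwinding buys robustness (the scheme is monotone, so no spurious oscillations at the front) at the cost of numerical diffusion, which is one reason such schemes are attractive for fast-response inundation estimates where stability matters more than sharp fronts.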

  5. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    Science.gov (United States)

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning and total ACT score and deductive reasoning were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher quality medical professionals.

  6. Global temperature estimates in the troposphere and stratosphere: a validation study of COSMIC/FORMOSAT-3 measurements

    Directory of Open Access Journals (Sweden)

    P. Kishore

    2009-02-01

Full Text Available This paper focuses on the validation of temperature estimates derived with the newly launched Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC)/Formosa Satellite 3 (FORMOSAT-3) system. The analysis is based on the radio occultation (RO) data samples collected during the first year of observation, from April 2006 to April 2007. For the validation, we have used operational stratospheric analyses including the National Centers for Environmental Prediction Reanalysis (NCEP), the Japanese 25-year Reanalysis (JRA-25), and the United Kingdom Met Office (MetO) data sets. Comparisons done in different formats reveal good agreement between the COSMIC and reanalysis outputs. Spatially, the largest deviations are noted in the polar latitudes, and height-wise, the largest differences (2–4 K) occur in the tropical tropopause region. We found that among the three reanalysis data sets, the NCEP data sets bear the best resemblance to the COSMIC measurements.

  7. Estimation of in-vivo neurotransmitter release by brain microdialysis: the issue of validity.

    Science.gov (United States)

    Di Chiara, G.; Tanda, G.; Carboni, E.

    1996-11-01

Although microdialysis is commonly understood as a method of sampling low molecular weight compounds in the extracellular compartment of tissues, this definition appears insufficient to specifically describe brain microdialysis of neurotransmitters. In fact, transmitter overflow from the brain into dialysates is critically dependent upon the composition of the perfusing Ringer. Therefore, the dialysing Ringer not only recovers the transmitter from the extracellular brain fluid but is a main determinant of its in-vivo release. Two types of brain microdialysis are distinguished: quantitative microdialysis and conventional microdialysis. Quantitative microdialysis provides an estimate of neurotransmitter concentrations in the extracellular fluid in contact with the probe. However, this information might poorly reflect the kinetics of neurotransmitter release in vivo. Conventional microdialysis involves perfusion at a constant rate with a transmitter-free Ringer, resulting in the formation of a steep neurotransmitter concentration gradient extending from the Ringer into the extracellular fluid. This artificial gradient might be critical for the ability of conventional microdialysis to detect and resolve phasic changes in neurotransmitter release taking place in the implanted area. On the basis of these characteristics, conventional microdialysis of neurotransmitters can be conceptualized as a model of the in-vivo release of neurotransmitters in the brain. As such, the criteria of face validity, construct validity and predictive validity should be applied to select the most appropriate experimental conditions for estimating neurotransmitter release in specific brain areas in relation to behaviour.

  8. Validating alternative methodologies to estimate the hydrological regime of temporary streams when flow data are unavailable

    Science.gov (United States)

    Llorens, Pilar; Gallart, Francesc; Latron, Jérôme; Cid, Núria; Rieradevall, Maria; Prat, Narcís

    2016-04-01

Aquatic life in temporary streams is strongly conditioned by the temporal variability of the hydrological conditions that control the occurrence and connectivity of diverse mesohabitats. In this context, the software TREHS (Temporary Rivers' Ecological and Hydrological Status) has been developed, in the framework of the LIFE Trivers project, to help managers adequately implement the Water Framework Directive in this type of water body. TREHS, using the methodology described in Gallart et al. (2012), defines six temporal 'aquatic states', based on the hydrological conditions representing different mesohabitats, for a given reach at a particular moment. Nevertheless, hydrological data for assessing the regime of temporary streams are often non-existent or scarce. The scarcity of flow data frequently makes the characterization of temporary streams' hydrological regimes impossible and, as a consequence, prevents the selection of the correct periods and methods to determine their ecological status. Because of its qualitative nature, the TREHS approach allows the use of alternative methodologies to assess the regime of temporary streams in the absence of observed flow data. However, to adapt TREHS to such qualitative data, both the temporal scheme (from monthly to seasonal) and the number of aquatic states (from 6 to 3) have been modified. Two alternative, complementary methodologies were tested within the TREHS framework to assess the regime of temporary streams: interviews and aerial photographs. All the gauging stations (13) belonging to the Catalan Internal Catchments (NE Spain) with recurrent zero-flow periods were selected to validate both methodologies. On one hand, non-structured interviews were carried out with inhabitants of villages and small towns near the gauging stations. Flow permanence metrics for input into TREHS were drawn from the notes taken during the interviews. 
On the other hand, the historical series of available aerial photographs (typically 10

  9. Relative validity of fruit and vegetable intake estimated by the food frequency questionnaire used in the Danish National Birth Cohort

    DEFF Research Database (Denmark)

    Mikkelsen, Tina B.; Olsen, Sjurdur F.; Rasmussen, Salka E.

    2007-01-01

Objective: To validate the fruit and vegetable intake estimated from the Food Frequency Questionnaire (FFQ) used in the Danish National Birth Cohort (DNBC). Subjects and setting: The DNBC is a cohort of 101,042 pregnant women in Denmark, who received a FFQ by mail in gestation week 25. A validation study with 88 participants was made. A seven-day weighed food diary (FD) and three different biomarkers were employed as comparison methods. Results: Significant correlations between FFQ and FD-based estimates were found for fruit (r=0.66); vegetables (r=0.32); juice (r=0.52); fruit and vegetables (F&V) (r=0.57); and fruit, vegetables, and juice (F&V&J) (r=0.62). Sensitivities of correct classification by FFQ into the two lowest and the two highest quintiles of F&V&J intake were 58-67% and 50-74%, respectively, and specificities were 71-79% and 65-83%, respectively. F&V&J intake estimated from ...

  10. Are traditional body fat equations and anthropometry valid to estimate body fat in children and adolescents living with HIV?

    Science.gov (United States)

    Lima, Luiz Rodrigo Augustemak de; Martins, Priscila Custódio; Junior, Carlos Alencar Souza Alves; Castro, João Antônio Chula de; Silva, Diego Augusto Santos; Petroski, Edio Luiz

The aim of this study was to assess the validity of traditional anthropometric equations and to develop predictive equations of total body and trunk fat for children and adolescents living with HIV based on anthropometric measurements. Forty-eight children and adolescents of both sexes (24 boys) aged 7-17 years, living in Santa Catarina, Brazil, participated in the study. Dual-energy X-ray absorptiometry was used as the reference method to evaluate total body and trunk fat. Height, body weight, circumferences and triceps, subscapular, abdominal and calf skinfolds were measured. The traditional equations of Lohman and Slaughter were used to estimate body fat. Multiple regression models were fitted to predict total body fat (Model 1) and trunk fat (Model 2) using a backward selection procedure. Model 1 had an R²=0.85 and a standard error of the estimate of 1.43. Model 2 had an R²=0.80 and a standard error of the estimate of 0.49. The traditional equations of Lohman and Slaughter showed poor performance in estimating body fat in children and adolescents living with HIV. The prediction models using anthropometry provided reliable estimates and can be used by clinicians and healthcare professionals to monitor total body and trunk fat in children and adolescents living with HIV.

  11. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

Several methods exist to estimate E and T. The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation-transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS are validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit schemes is -0.03 mm/day. The paired t-test of the mean difference (p-value = 0.042, t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and the micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R² value of 0.82.

  12. Validation of an elastic registration technique to estimate anatomical lung modification in Non-Small-Cell Lung Cancer Tomotherapy

    International Nuclear Information System (INIS)

    Faggiano, Elena; Cattaneo, Giovanni M; Ciavarro, Cristina; Dell'Oca, Italo; Persano, Diego; Calandrino, Riccardo; Rizzo, Giovanna

    2011-01-01

    The study of lung parenchyma anatomical modification is useful to estimate dose discrepancies during the radiation treatment of Non-Small-Cell Lung Cancer (NSCLC) patients. We propose and validate a method, based on free-form deformation and mutual information, to elastically register planning kVCT with daily MVCT images, to estimate lung parenchyma modification during Tomotherapy. We analyzed 15 registrations between the planning kVCT and 3 MVCT images for each of the 5 NSCLC patients. Image registration accuracy was evaluated by visual inspection and, quantitatively, by Correlation Coefficients (CC) and Target Registration Errors (TRE). Finally, a lung volume correspondence analysis was performed to specifically evaluate registration accuracy in lungs. Results showed that elastic registration was always satisfactory, both qualitatively and quantitatively: TRE after elastic registration (average value of 3.6 mm) remained comparable and often smaller than voxel resolution. Lung volume variations were well estimated by elastic registration (average volume and centroid errors of 1.78% and 0.87 mm, respectively). Our results demonstrate that this method is able to estimate lung deformations in thorax MVCT, with an accuracy within 3.6 mm comparable or smaller than the voxel dimension of the kVCT and MVCT images. It could be used to estimate lung parenchyma dose variations in thoracic Tomotherapy
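The Target Registration Error used to quantify accuracy above is simply the residual distance between corresponding anatomical landmarks after registration, averaged over landmarks. The landmark coordinates below (in mm) are hypothetical illustrations, not the study's data.

```python
import math

# Mean Target Registration Error (TRE): average Euclidean distance
# between corresponding landmark pairs after registration.

def tre(landmarks_reference, landmarks_registered):
    """Mean distance (same units as the coordinates) over landmark pairs."""
    pairs = list(zip(landmarks_reference, landmarks_registered))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

if __name__ == "__main__":
    ref = [(10.0, 20.0, 30.0), (15.0, 25.0, 35.0)]   # kVCT landmarks (mm)
    reg = [(10.5, 20.0, 30.0), (15.0, 25.5, 35.0)]   # warped MVCT landmarks
    print(f"TRE = {tre(ref, reg):.2f} mm")
```

A mean TRE at or below the voxel size, as reported above (3.6 mm average), is the usual criterion for judging an elastic registration accurate enough for dose-accumulation studies.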

  13. FPGA-Based Embedded Motion Estimation Sensor

    Directory of Open Access Journals (Sweden)

    Zhaoyi Wei

    2008-01-01

    Full Text Available Accurate real-time motion estimation is very critical to many computer vision tasks. However, because of its computational power and processing speed requirements, it is rarely used for real-time applications, especially for micro unmanned vehicles. In our previous work, an FPGA system was built to process optical flow vectors of 64 frames of 640×480 images per second. Compared to software-based algorithms, this system achieved a much higher frame rate but marginal accuracy. In this paper, a more accurate optical flow algorithm is proposed. Temporal smoothing is incorporated in the hardware structure, which significantly improves the algorithm accuracy. To accommodate temporal smoothing, the hardware structure is composed of two parts: the derivative (DER) module produces intermediate results and the optical flow computation (OFC) module calculates the final optical flow vectors. Software running on a built-in processor on the FPGA chip is used in the design to direct the data flow and manage hardware components. This new design has been implemented on a compact, low power, high performance hardware platform for micro UV applications. It is able to process 15 frames of 640×480 images per second with much improved accuracy. Higher frame rates can be achieved with further optimization and additional memory space.
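
    The gradient-based optical flow formulation underlying such hardware pipelines can be sketched in software. Below is a minimal, global Lucas-Kanade-style least-squares solve of the brightness-constancy constraint on a synthetic image pair; it illustrates the general technique only, not the paper's hardware algorithm (which adds temporal smoothing and per-pixel estimation):

```python
import math

def lucas_kanade_global(I1, I2):
    """Estimate one global flow vector (u, v) by least squares over the
    brightness-constancy constraints Ix*u + Iy*v + It = 0, using
    central-difference spatial gradients and a two-frame temporal difference."""
    H, W = len(I1), len(I1[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            Ix = (I1[y][x + 1] - I1[y][x - 1]) / 2.0
            Iy = (I1[y + 1][x] - I1[y - 1][x]) / 2.0
            It = I2[y][x] - I1[y][x]
            sxx += Ix * Ix; sxy += Ix * Iy; syy += Iy * Iy
            sxt += Ix * It; syt += Iy * It
    det = sxx * syy - sxy * sxy
    u = (-sxt * syy + sxy * syt) / det   # Cramer's rule on the 2x2 system
    v = (-sxx * syt + sxy * sxt) / det
    return u, v

# Synthetic pair: the pattern in I2 is I1 shifted by (0.4, 0.2) pixels
W = H = 24
I1 = [[math.sin(0.3 * x) + math.cos(0.2 * y) for x in range(W)] for y in range(H)]
I2 = [[math.sin(0.3 * (x - 0.4)) + math.cos(0.2 * (y - 0.2)) for x in range(W)]
      for y in range(H)]
u, v = lucas_kanade_global(I1, I2)   # recovers a value close to (0.4, 0.2)
```

    Hardware implementations compute the same gradient sums, but in fixed point, per pixel, and pipelined across frames.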

  14. SEE rate estimation based on diffusion approximation of charge collection

    Science.gov (United States)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.

  15. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Eric; Gonder, Jeff; Jehlik, Forrest

    2017-01-01

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle are used to refine and validate this modeling approach. Model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  16. A new validation technique for estimations of body segment inertia tensors: Principal axes of inertia do matter.

    Science.gov (United States)

    Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J

    2016-12-08

    The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of simulations to changes in magnitude of principal moments of inertia within ±10% and to changes in orientation of principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors up to 10% in magnitude of principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when lumped with errors of 10° in principal axes of inertia orientation. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. Principal axes of inertia orientation should not be neglected when modelling human/animal mechanics.
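
    The validation criterion reduces to comparing angular velocity vectors from experiment and simulation. A minimal sketch of the deviation-angle and root-mean-squared-deviation computations; the vector pairs below are hypothetical, not the study's data:

```python
import math

def deviation_angle(w_exp, w_sim):
    """Angle in degrees between experimental and simulated angular velocities."""
    dot = sum(a * b for a, b in zip(w_exp, w_sim))
    norm = math.sqrt(sum(a * a for a in w_exp)) * math.sqrt(sum(b * b for b in w_sim))
    # Clamp to [-1, 1] to guard against floating-point overshoot in acos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def rmsd(angles):
    """Root mean squared deviation angle over a simulation."""
    return math.sqrt(sum(a * a for a in angles) / len(angles))

# Hypothetical angular-velocity pairs (rad/s) sampled along the airborne phase
pairs = [((1.00, 0.20, 0.00), (0.95, 0.25, 0.02)),
         ((0.80, 0.10, 0.30), (0.78, 0.12, 0.33))]
rms_dev = rmsd([deviation_angle(a, b) for a, b in pairs])
```

    A Monte Carlo sensitivity analysis then repeats the simulation with perturbed inertia tensors and reports the distribution of these RMSD values.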

  17. Convergent validity of ActiGraph and Actical accelerometers for estimating physical activity in adults

    DEFF Research Database (Denmark)

    Duncan, Scott; Stewart, Tom; Bo Schneller, Mikkel

    2018-01-01

    PURPOSE: The aim of the present study was to examine the convergent validity of two commonly used accelerometers for estimating time spent in various physical activity intensities in adults. METHODS: The sample comprised 37 adults (26 males) with a mean (SD) age of 37.6 (12.2) years from San Diego..., USA. Participants wore ActiGraph GT3X+ and Actical accelerometers for three consecutive days. Percent agreement was used to compare time spent within four physical activity intensity categories under three counts per minute (CPM) threshold protocols: (1) using thresholds developed specifically... The ActiGraph and Actical accelerometers provide significantly different estimates of time spent in various physical activity intensities. Regression and threshold adjustment were able to reduce these differences, although some level of non-agreement persisted. Researchers should be aware of the inherent limitations...
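
    Percent agreement between two accelerometers can be computed by classifying each epoch's counts per minute into an intensity category and counting matching epochs. A minimal sketch; the cut-points and CPM values below are illustrative placeholders, not the study's calibrated thresholds:

```python
def classify(cpm, cuts=(100, 2020, 5999)):
    """Map counts-per-minute to an intensity category.
    The cut-points here are illustrative, not the study's thresholds."""
    sed, light, mod = cuts
    if cpm < sed:
        return "sedentary"
    if cpm < light:
        return "light"
    if cpm < mod:
        return "moderate"
    return "vigorous"

def percent_agreement(labels_a, labels_b):
    """Share of epochs placed in the same category by both devices (%)."""
    matches = sum(1 for a, b in zip(labels_a, labels_b) if a == b)
    return 100.0 * matches / len(labels_a)

# Hypothetical minute-epoch CPM series from the two devices
actigraph_cpm = [50, 300, 2500, 7000, 1200]
actical_cpm = [40, 350, 1800, 6500, 1500]
agree = percent_agreement([classify(c) for c in actigraph_cpm],
                          [classify(c) for c in actical_cpm])
```

    Device-specific thresholds, regression mapping, or threshold adjustment (as in the study) change only the `classify` step.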

  18. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    Science.gov (United States)

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglass frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of the iris recognition system decreases dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images, but the accuracy of the masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of the proposed method for iris occlusion estimation.

  19. Validation of the CHIRPS Satellite Rainfall Estimates over Eastern of Africa

    Science.gov (United States)

    Dinku, T.; Funk, C. C.; Tadesse, T.; Ceccato, P.

    2017-12-01

    Long and temporally consistent rainfall time series are essential in climate analyses and applications. Rainfall data from station observations are inadequate over many parts of the world due to sparse or non-existent observation networks, or limited reporting of gauge observations. As a result, satellite rainfall estimates have been used as an alternative or as a supplement to station observations. However, many satellite-based rainfall products with long time series suffer from coarse spatial and temporal resolutions and inhomogeneities caused by variations in satellite inputs. There are some satellite rainfall products with reasonably consistent time series, but they are often limited to specific geographic areas. The Climate Hazards Group Infrared Precipitation (CHIRP) and CHIRP combined with station observations (CHIRPS) are recently produced satellite-based rainfall products with relatively high spatial and temporal resolutions and quasi-global coverage. In this study, CHIRP and CHIRPS were evaluated over East Africa at daily, dekadal (10-day) and monthly time scales. The evaluation was done by comparing the satellite products with rain gauge data from about 1200 stations, an unprecedented number of validation stations for this region. The results provide a unique region-wide understanding of how the satellite products perform over different climatic/geographic regions (lowlands, mountainous regions, and coastal areas). The CHIRP and CHIRPS products were also compared with two similar satellite rainfall products: the African Rainfall Climatology version 2 (ARC2) and the latest release of the Tropical Applications of Meteorology using Satellite data (TAMSAT) product. The results show that both CHIRP and CHIRPS are significantly better than ARC2, with higher skill and low or no bias, and slightly better than the latest version of the TAMSAT product.
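
    Validation statistics such as the multiplicative bias and the Pearson correlation between satellite and gauge series are easy to state precisely. A minimal sketch; the rainfall series below are hypothetical, not the study's data:

```python
import math

def bias_and_correlation(sat, gauge):
    """Multiplicative bias (ratio of totals) and Pearson r between two series."""
    n = len(sat)
    bias = sum(sat) / sum(gauge)
    ms, mg = sum(sat) / n, sum(gauge) / n
    cov = sum((s - ms) * (g - mg) for s, g in zip(sat, gauge))
    var_s = sum((s - ms) ** 2 for s in sat)
    var_g = sum((g - mg) ** 2 for g in gauge)
    return bias, cov / math.sqrt(var_s * var_g)

# Hypothetical dekadal rainfall totals (mm): satellite product vs. gauges
satellite = [12.0, 0.0, 45.5, 30.2, 5.1]
gauges = [10.5, 0.3, 50.0, 28.0, 6.0]
bias, r = bias_and_correlation(satellite, gauges)
```

    A bias near 1 indicates little systematic over- or under-estimation; categorical skill scores (e.g. for rain/no-rain detection) are computed separately from a contingency table.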

  20. Validity and reliability of central blood pressure estimated by upper arm oscillometric cuff pressure.

    Science.gov (United States)

    Climie, Rachel E D; Schultz, Martin G; Nikolic, Sonja B; Ahuja, Kiran D K; Fell, James W; Sharman, James E

    2012-04-01

    Noninvasive central blood pressure (BP) independently predicts mortality, but current methods are operator-dependent, requiring skill to obtain quality recordings. The aims of this study were, first, to determine the validity of an automatic, upper arm oscillometric cuff method for estimating central BP (O(CBP)) by comparison with the noninvasive reference standard of radial tonometry (T(CBP)); and second, to determine the intratest and intertest reliability of O(CBP). To assess validity, central BP was estimated by O(CBP) (Pulsecor R6.5B monitor) and compared with T(CBP) (SphygmoCor) in 47 participants free from cardiovascular disease (aged 57 ± 9 years) in supine, seated, and standing positions. Brachial mean arterial pressure (MAP) and diastolic BP (DBP) from the O(CBP) device were used to calibrate both devices. Duplicate measures were recorded in each position on the same day to assess intratest reliability, and participants returned within 10 ± 7 days for repeat measurements to assess intertest reliability. There was a strong intraclass correlation (ICC = 0.987) and a small mean difference (1.2 ± 2.2 mm Hg) for central systolic BP (SBP) determined by O(CBP) compared with T(CBP). Ninety-six percent of all comparisons (n = 495 acceptable recordings) were within 5 mm Hg. With respect to reliability, there were strong correlations but wider limits of agreement for the intratest (ICC = 0.975, mean difference 0.6 ± 4.5 mm Hg) and intertest (ICC = 0.895, mean difference 4.3 ± 8.0 mm Hg) comparisons. Estimation of central SBP using cuff oscillometry is comparable to radial tonometry and has good reproducibility. As a noninvasive, relatively operator-independent method, O(CBP) may be as useful as T(CBP) for estimating central BP in clinical practice.
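
    Agreement between two devices, summarized by the mean difference, limits of agreement, and the share of readings within 5 mm Hg, can be sketched with a Bland-Altman-style computation. The paired readings below are hypothetical, not the study's data:

```python
import math

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement for paired readings."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

def pct_within(a, b, tol=5.0):
    """Share of paired readings agreeing within a tolerance (mm Hg)."""
    hits = sum(1 for x, y in zip(a, b) if abs(x - y) <= tol)
    return 100.0 * hits / len(a)

# Hypothetical central SBP readings (mm Hg): cuff device vs. radial tonometry
cuff = [121.0, 125.0, 130.0, 118.0, 127.0]
tono = [120.0, 123.0, 127.0, 119.0, 126.0]
mean_diff, (low, high) = bland_altman(cuff, tono)
agreement = pct_within(cuff, tono)
```

    The intraclass correlation reported in the abstract is a separate reliability statistic computed from a variance-components decomposition of repeated measurements.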

  1. Simulation Based Studies in Software Engineering: A Matter of Validity

    Directory of Open Access Journals (Sweden)

    Breno Bernard Nicolau de França

    2015-04-01

    Full Text Available Despite the possible lack of validity when compared with other science areas, Simulation-Based Studies (SBS) in Software Engineering (SE) have supported the achievement of some results in the field. However, as with any other sort of experimental study, it is important to identify and deal with threats to validity, aiming at increasing their strength and reinforcing confidence in the results. OBJECTIVE: To identify potential threats to SBS validity in SE and suggest ways to mitigate them. METHOD: To apply qualitative analysis to a dataset resulting from the aggregation of data from a quasi-systematic literature review combined with ad-hoc surveyed information regarding other science areas. RESULTS: The analysis of data extracted from 15 technical papers allowed the identification and classification of 28 different threats to validity concerning SBS in SE, according to Cook and Campbell's categories. Besides, 12 verification and validation procedures applicable to SBS were also analyzed and organized according to their ability to detect these threats to validity. These results were used to make available an improved set of guidelines regarding the planning and reporting of SBS in SE. CONCLUSIONS: Simulation-based studies add different threats to validity when compared with traditional studies. They are not well observed, and therefore it is not easy to identify and mitigate all of them without explicit guidance, such as that depicted in this paper.

  2. Validation of SMAP Root Zone Soil Moisture Estimates with Improved Cosmic-Ray Neutron Probe Observations

    Science.gov (United States)

    Babaeian, E.; Tuller, M.; Sadeghi, M.; Franz, T.; Jones, S. B.

    2017-12-01

    Soil Moisture Active Passive (SMAP) soil moisture products are commonly validated based on point-scale reference measurements, despite the exorbitant spatial scale disparity. The difference between the measurement depth of point-scale sensors and the penetration depth of SMAP further complicates evaluation efforts. Cosmic-ray neutron probes (CRNP) with an approximately 500-m radius footprint provide an appealing alternative for SMAP validation. This study is focused on the validation of SMAP level-4 root zone soil moisture products with 9-km spatial resolution based on CRNP observations at twenty U.S. reference sites with climatic conditions ranging from semiarid to humid. The CRNP measurements are often biased by additional hydrogen sources such as surface water, atmospheric vapor, or mineral lattice water, which sometimes yield unrealistic moisture values in excess of the soil water storage capacity. These effects were removed during CRNP data analysis. Comparison of SMAP data with corrected CRNP observations revealed a very high correlation for most of the investigated sites, which opens new avenues for validation of current and future satellite soil moisture products.
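
    A commonly used CRNP calibration function (the Desilets-style equation, with the generic literature coefficients) converts the normalized neutron count into gravimetric water content, from which additional hydrogen pools such as lattice water are subtracted. A sketch assuming those generic coefficients, not the study's site-specific calibration:

```python
def crnp_moisture(N, N0, lattice_water=0.0,
                  a0=0.0808, a1=0.372, a2=0.115):
    """Gravimetric water content (g/g) from the neutron count N normalized by
    the dry-conditions reference count N0. a0-a2 are the generic literature
    coefficients; lattice water is removed as a constant offset."""
    return a0 / (N / N0 - a1) - a2 - lattice_water

# Fewer counts mean more hydrogen, i.e. wetter soil
wet = crnp_moisture(700, 1000, lattice_water=0.02)
dry = crnp_moisture(900, 1000, lattice_water=0.02)
```

    In practice N is first corrected for atmospheric pressure, humidity, and incoming-neutron variations before this conversion, and surface-water contamination is screened out, as described in the abstract.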

  3. Estimating Computer-Based Training Development Times

    Science.gov (United States)

    1987-10-14

    beginners, must be sure they interpret terms correctly. As a result of this informal validation, the authors suggest refinements in the tool which... Productivity tools available: automated design tools, text processor interfaces, flowcharting software, software interfaces, multimedia interfaces

  4. Development and Statistical Validation of Spectrophotometric Methods for the Estimation of Nabumetone in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    A. R. Rote

    2010-01-01

    Full Text Available Three new simple, economic spectrophotometric methods were developed and validated for the estimation of nabumetone in bulk and tablet dosage form. The first method determines nabumetone at its absorption maximum of 330 nm, the second uses the area under the curve in the 326-334 nm wavelength range, and the third uses first-order derivative spectra with a scaling factor of 4. Beer's law was obeyed in the concentration range of 10-30 μg/mL for all three methods. The correlation coefficients were found to be 0.9997, 0.9998 and 0.9998 for the absorption maxima, area under curve and first-order derivative methods, respectively. Results of analysis were validated statistically and by performing recovery studies. The mean percent recoveries were found satisfactory for all three methods. The developed methods were also compared statistically using one-way ANOVA. The proposed methods have been successfully applied for the estimation of nabumetone in bulk and pharmaceutical tablet dosage form.
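
    The calibration step behind such methods is an ordinary least-squares fit of absorbance against concentration (Beer's law), after which unknown samples are read off the fitted line. A minimal sketch with hypothetical absorbance readings, not the paper's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope, intercept, and correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

# Hypothetical calibration: absorbance at 330 nm vs. concentration (ug/mL)
conc = [10.0, 15.0, 20.0, 25.0, 30.0]
absorb = [0.21, 0.31, 0.40, 0.52, 0.61]
slope, intercept, r = linear_fit(conc, absorb)

# Concentration of an unknown sample read off the calibration line
unknown = (0.45 - intercept) / slope
```

    A correlation coefficient very close to 1 over the working range, as reported above, indicates Beer's law linearity.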

  5. Reliability Estimation Based Upon Test Plan Results

    National Research Council Canada - National Science Library

    Read, Robert

    1997-01-01

    The report contains a brief summary of aspects of the Maximus reliability point and interval estimation technique as it has been applied to the reliability of a device whose surveillance tests contain...

  6. Monte Carlo-Based Tail Exponent Estimator

    Czech Academy of Sciences Publication Activity Database

    Baruník, Jozef; Vácha, Lukáš

    2010-01-01

    Roč. 2010, č. 6 (2010), s. 1-26 R&D Projects: GA ČR GA402/09/0965; GA ČR GD402/09/H045; GA ČR GP402/08/P207 Institutional research plan: CEZ:AV0Z10750506 Keywords : Hill estimator * α-stable distributions * tail exponent estimation Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/barunik-0342493.pdf

  7. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Science.gov (United States)

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion It was concluded that methods involving physical anthropology present a high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. 
(1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological

  8. Validation of the Maslach Burnout Inventory-Human Services Survey for Estimating Burnout in Dental Students.

    Science.gov (United States)

    Montiel-Company, José María; Subirats-Roig, Cristian; Flores-Martí, Pau; Bellot-Arcís, Carlos; Almerich-Silla, José Manuel

    2016-11-01

    The aim of this study was to examine the validity and reliability of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) as a tool for assessing the prevalence and level of burnout in dental students in Spanish universities. The survey was adapted from English to Spanish. A sample of 533 dental students from 15 Spanish universities and a control group of 188 medical students self-administered the survey online, using the Google Drive service. The test-retest reliability or reproducibility showed an intraclass correlation coefficient of 0.95. The internal consistency of the survey was 0.922. Testing the construct validity revealed two components with an eigenvalue greater than 1.5, which together explained 51.2% of the total variance. Factor I (36.6% of the variance) comprised the items that estimated emotional exhaustion and depersonalization. Factor II (14.6% of the variance) contained the items that estimated personal accomplishment. The cut-off point for the existence of burnout achieved a sensitivity of 92.2%, a specificity of 92.1%, and an area under the curve of 0.96. Comparison of the total dental student sample and the control group of medical students showed significantly higher burnout levels for the dental students (50.3% vs. 40.4%). In this study, the MBI-HSS was found to be viable, valid, and reliable for measuring burnout in dental students. Since the study also found that the dental students suffered from high levels of this syndrome, these results suggest the need for preventive burnout control programs.
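
    Sensitivity and specificity of a cut-off score are computed from the confusion counts against a reference classification. A minimal sketch with hypothetical scores and labels, not the study's data:

```python
def screen_performance(scores, positive, cutoff):
    """Sensitivity and specificity of 'score >= cutoff' vs. reference labels."""
    tp = sum(1 for s, y in zip(scores, positive) if s >= cutoff and y)
    fn = sum(1 for s, y in zip(scores, positive) if s < cutoff and y)
    tn = sum(1 for s, y in zip(scores, positive) if s < cutoff and not y)
    fp = sum(1 for s, y in zip(scores, positive) if s >= cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical questionnaire scores and reference burnout labels
scores = [10, 20, 30, 40, 50, 60]
burnout = [False, False, False, True, True, True]
sens, spec = screen_performance(scores, burnout, cutoff=40)
```

    Sweeping the cutoff over all observed scores and plotting sensitivity against (1 - specificity) yields the ROC curve whose area is reported in the abstract.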

  9. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Directory of Open Access Journals (Sweden)

    Suzana Papile Maciel Carvalho

    2013-07-01

    Full Text Available Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. OBJECTIVE: This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. MATERIAL AND METHODS: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. RESULTS: The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. CONCLUSION: It was concluded that methods involving physical anthropology present a high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South)

  10. The Air Force Mobile Forward Surgical Team (MFST): Using the Estimating Supplies Program to Validate Clinical Requirement

    National Research Council Canada - National Science Library

    Nix, Ralph E; Onofrio, Kathleen; Konoske, Paula J; Galarneau, Mike R; Hill, Martin

    2004-01-01

    .... The primary objective of the study was to provide the Air Force with the ability to validate clinical requirements of the MFST assemblage, with the goal of using NHRC's Estimating Supplies Program (ESP...

  11. Reproducibility and relative validity of a food frequency questionnaire to estimate intake of dietary phylloquinone and menaquinones.

    NARCIS (Netherlands)

    Zwakenberg, S R; Engelen, A I P; Dalmeijer, G W; Booth, S L; Vermeer, C; Drijvers, J J M M; Ocke, M C; Feskens, E J M; van der Schouw, Y T; Beulens, J W J

    2017-01-01

    This study aims to investigate the reproducibility and relative validity of the Dutch food frequency questionnaire (FFQ), to estimate intake of dietary phylloquinone and menaquinones compared with 24-h dietary recalls (24HDRs) and plasma markers of vitamin K status.

  12. Are cannabis prevalence estimates comparable across countries and regions? A cross-cultural validation using search engine query data.

    Science.gov (United States)

    Steppan, Martin; Kraus, Ludwig; Piontek, Daniela; Siciliano, Valeria

    2013-01-01

    Prevalence estimation of cannabis use is usually based on self-report data. Although there is evidence on the reliability of this data source, its cross-cultural validity is still a major concern. External objective criteria are needed for this purpose. In this study, cannabis-related search engine query data are used as an external criterion. Data on cannabis use were taken from the 2007 European School Survey Project on Alcohol and Other Drugs (ESPAD). Provincial data came from three Italian nation-wide studies using the same methodology (2006-2008; ESPAD-Italia). Information on cannabis-related search engine query data was based on Google search volume indices (GSI). (1) Reliability analysis was conducted for GSI. (2) Latent measurement models of "true" cannabis prevalence were tested using perceived availability, web-based cannabis searches and self-reported prevalence as indicators. (3) Structure models were set up to test the influences of response tendencies and geographical position (latitude, longitude). In order to test the stability of the models, analyses were conducted on country level (Europe, US) and on provincial level in Italy. Cannabis-related GSI were found to be highly reliable and constant over time. The overall measurement model was highly significant in both data sets. On country level, no significant effects of response bias indicators and geographical position on perceived availability, web-based cannabis searches and self-reported prevalence were found. On provincial level, latitude had a significant positive effect on availability indicating that perceived availability of cannabis in northern Italy was higher than expected from the other indicators. Although GSI showed weaker associations with cannabis use than perceived availability, the findings underline the external validity and usefulness of search engine query data as external criteria. The findings suggest an acceptable relative comparability of national (provincial) prevalence

  13. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test

    NARCIS (Netherlands)

    Stuiver, Martijn M.; Kampshoff, Caroline S.; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J. M.; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M.

    2017-01-01

    Objective: To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (Vo2(peak)) and peak power output (W-peak).

  14. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Directory of Open Access Journals (Sweden)

    Vesna Režić Dereani

    2010-09-01

    Full Text Available The aim of this research is to describe quality control procedures, procedures for validation and measurement uncertainty (MU) determination as important elements of quality assurance in the food microbiology laboratory, for both qualitative and quantitative types of analysis. Accreditation is conducted according to the standard ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests; such accreditation has recently been widely introduced in food microbiology laboratories in Croatia. Beyond the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are measurement uncertainty (MU) procedures and the design of validation experiments. These procedures are not yet standardized even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analyses are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions of both procedures are shown.
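
    For quantitative methods, one widely used recipe (e.g., the Nordtest-style approach based on proficiency testing and validation data) combines a within-laboratory reproducibility component with a bias component in quadrature, then applies a coverage factor. A sketch with hypothetical uncertainty components:

```python
import math

def combined_standard_uncertainty(u_rw, u_bias):
    """Within-lab reproducibility and bias components combined in quadrature."""
    return math.sqrt(u_rw ** 2 + u_bias ** 2)

def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty with coverage factor k (k = 2 for ~95% coverage)."""
    return k * u_c

# Hypothetical components for a log10 CFU/g plate-count result
u_c = combined_standard_uncertainty(0.15, 0.10)
U = expanded_uncertainty(u_c)
```

    Qualitative methods cannot use this route; their performance is instead characterized through rates such as sensitivity, specificity, and the limit of detection, which is why the two validation designs differ.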

  15. Online Internal Temperature Estimation for Lithium-Ion Batteries Based on Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jinlei Sun

    2015-05-01

    Full Text Available Estimating the internal temperature of a battery is important for thermal safety in applications, because the internal temperature is hard to measure directly. In this work, an online internal temperature estimation method based on a simplified thermal model using a Kalman filter is proposed. As an improvement, the influences of entropy change and overpotential on heat generation are analyzed quantitatively. The model parameters are identified through a current pulse test. Charge/discharge experiments under different current rates are carried out on the same battery to verify the estimation results. The internal and surface temperatures are measured with thermocouples for result validation and model construction. The accuracy of the estimated result is validated, with a maximum estimation error of around 1 K.
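
    The predict/update cycle of such a filter can be sketched in scalar form: a thermal model propagates the internal-temperature state, and each surface-derived measurement corrects it. The dynamics, noise values, and measurements below are illustrative placeholders, not the paper's identified model:

```python
def kalman_step(x, P, z, A=1.0, Q=1e-4, H=1.0, R=0.25):
    """One predict/update cycle of a scalar Kalman filter.
    x: internal-temperature state (K), P: state variance,
    z: measurement derived from the surface temperature (K)."""
    x_pred = A * x                          # predict through the (trivial) model
    P_pred = A * P * A + Q                  # propagate uncertainty
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)   # correct with the innovation
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Start from a wrong initial guess and feed repeated measurements near 305 K
x, P = 300.0, 1.0
for _ in range(50):
    x, P = kalman_step(x, P, z=305.0)
```

    A realistic battery model replaces the scalar dynamics with the identified lumped thermal model (including entropy-change and overpotential heat terms) and treats the heat generation as an input.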

  16. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Science.gov (United States)

    Junkes, Monica C; Fraiz, Fabian C; Sardenberg, Fernanda; Lee, Jessica Y; Paiva, Saul M; Ferreira, Fernanda M

    2015-01-01

    The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.

  17. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed in which a face shape statistical model is used and the pose parameters are represented by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses; shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, a mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using only a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  18. Web-based Interspecies Correlation Estimation

    Science.gov (United States)

    Web-ICE estimates acute toxicity (LC50/LD50) of a chemical to a species, genus, or family from the known toxicity of the chemical to a surrogate species. Web-ICE has modules to predict acute toxicity to aquatic (fish and invertebrates) and wildlife (birds and mammals) taxa for us...

  19. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

    Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 (137Cs) and excess lead-210 (210Pbex), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by 137Cs and 210Pbex measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by 137Cs and 210Pbex measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land could be undertaken.
The results demonstrate a close consistency between the measured rates of soil loss and

  20. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    Science.gov (United States)

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste, since construction waste is mainly generated by improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has not previously been quantified. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process, based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied, with 381 and 136 design errors detected, respectively, during BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of the construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Experimental study on the plant state estimation for the condition-based maintenance

    International Nuclear Information System (INIS)

    Harada, J. I.; Takahashi, M.; Kitamura, M.; Wakabayashi, T.

    2006-01-01

    A framework for a maintenance support system based on plant state estimation using diverse methods has been proposed, and the validity of the plant state estimation methods has been experimentally evaluated. One focus has been the construction of the BN for an objective system comparable in scale and complexity to real-world systems. Another focus has been the supporting functions of the maintenance support system, such as a signal processing tool and similarity matching. The validity of the proposed inference method has been confirmed through numerical experiments. (authors)

  2. Validation of Walk Score® for Estimating Neighborhood Walkability: An Analysis of Four US Metropolitan Areas

    Science.gov (United States)

    Duncan, Dustin T.; Aldstadt, Jared; Whalen, John; Melly, Steven J.; Gortmaker, Steven L.

    2011-01-01

    Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability based on GIS (objective) indicators of neighborhood walkability, using addresses from four US metropolitan areas and several street network buffer distances (i.e., 400, 800, and 1,600 m). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5–11 years and their families participating in YMCA-administered, after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participants' residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated, as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators, such as density of retail destinations and intersection density, supporting Walk Score® as an indicator of neighborhood walkability. Correlations generally became stronger with a larger spatial scale, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure of estimating certain aspects of neighborhood walkability, particularly at the 1,600 m buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure of estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales. PMID:22163200
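
    The correlations reported above are Spearman rank correlations: Pearson correlation applied to ranks, with tied values assigned their average rank. A dependency-free sketch:

    ```python
    def _ranks(xs):
        """1-based ranks with average ranks assigned to tied values."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        ranks = [0.0] * len(xs)
        i = 0
        while i < len(order):
            j = i
            # extend j over the group of tied values
            while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1      # average 1-based rank of the tie group
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman(x, y):
        """Spearman rank correlation of two equal-length sequences."""
        rx, ry = _ranks(x), _ranks(y)
        mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)
    ```

    Because only ranks matter, any monotone relationship (even a nonlinear one) yields a coefficient of 1.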

  3. Validation and scale dependencies of the triangle method for the evaporative fraction estimation over heterogeneous areas

    DEFF Research Database (Denmark)

    de Tomás, Alberto; Nieto, Héctor; Guzinski, Radoslaw

    2014-01-01

    Remote sensing has proved to be a consistent tool for monitoring water fluxes at regional scales. The triangle method, in particular, estimates the evaporative fraction (EF), defined as the ratio of latent heat flux (LE) to available energy, based on the relationship between satellite observations...... of land surface temperature and a vegetation index. Among other methodologies, this approach has been commonly used as an approximation to estimate LE, mainly over large semi-arid areas with uniform landscape features. In this study, an interpretation of the triangular space has been applied over...

  4. Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme

    DEFF Research Database (Denmark)

    Li, Changgang; Zhang, Yaping; Zhang, Hengxu

    2017-01-01

    Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter...

  5. An examination of healthy aging across a conceptual continuum: prevalence estimates, demographic patterns, and validity.

    Science.gov (United States)

    McLaughlin, Sara J; Jette, Alan M; Connell, Cathleen M

    2012-06-01

    Although the notion of healthy aging has gained wide acceptance in gerontology, measuring the phenomenon is challenging. Guided by a prominent conceptualization of healthy aging, we examined how shifting from a more to less stringent definition of healthy aging influences prevalence estimates, demographic patterns, and validity. Data are from adults aged 65 years and older who participated in the Health and Retirement Study. We examined four operational definitions of healthy aging. For each, we calculated prevalence estimates and examined the odds of healthy aging by age, education, gender, and race-ethnicity in 2006. We also examined the association between healthy aging and both self-rated health and death. Across definitions, the prevalence of healthy aging ranged from 3.3% to 35.5%. For all definitions, those classified as experiencing healthy aging had lower odds of fair or poor self-rated health and death over an 8-year period. The odds of being classified as "healthy" were lower among those of advanced age, those with less education, and women than for their corresponding counterparts across all definitions. Moving across the conceptual continuum--from a more to less rigid definition of healthy aging--markedly increases the measured prevalence of healthy aging. Importantly, results suggest that all examined definitions identified a subgroup of older adults who had substantially lower odds of reporting fair or poor health and dying over an 8-year period, providing evidence of the validity of our definitions. Conceptualizations that emphasize symptomatic disease and functional health may be particularly useful for public health purposes.

  6. Lactate minimum in a ramp protocol and its validity to estimate the maximal lactate steady state

    Directory of Open Access Journals (Sweden)

    Emerson Pardono

    2009-01-01

    Full Text Available http://dx.doi.org/10.5007/1980-0037.2009v11n2p174   The objectives of this study were to evaluate the validity of the lactate minimum (LM) using a ramp protocol for the determination of LM intensity (LMI), and to estimate the exercise intensity corresponding to the maximal blood lactate steady state (MLSS). In addition, the possibility of determining aerobic and anaerobic fitness was investigated. Fourteen male cyclists of regional level performed one LM protocol on a cycle ergometer (Excalibur–Lode) consisting of an incremental test at an initial workload of 75 Watts, with increments of 1 Watt every 6 seconds. Hyperlactatemia was induced by a 30-second Wingate anaerobic test (WAT) (Monark–834E) at a workload corresponding to 8.57% of the volunteer's body weight. Peak power (11.5±2 Watts/kg), mean power output (9.8±1.7 Watts/kg), fatigue index (33.7±2.3%) and lactate 7 min after the WAT (10.5±2.3 mmol/L) were determined. The incremental test identified the LMI (207.8±17.7 Watts) and its respective blood lactate concentration (2.9±0.7 mmol/L) and heart rate (153.6±10.6 bpm), as well as the maximal aerobic power (305.2±31.0 Watts). MLSS intensity was identified by 2 to 4 constant exercise tests (207.8±17.7 Watts), with no difference compared to the LMI and good agreement between the two parameters. The LM test using a ramp protocol seems to be a valid method for the identification of the LMI and estimation of MLSS intensity in regional cyclists. In addition, both anaerobic and aerobic fitness parameters were identified during a single session.
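
    A common way to extract the lactate minimum intensity from an incremental test (one frequent analysis choice, not necessarily this study's exact procedure) is to fit a second-order polynomial to blood lactate versus workload and take the vertex. A dependency-free least-squares sketch:

    ```python
    def lactate_minimum(workloads, lactates):
        """Fit lactate = a*W^2 + b*W + c by least squares and return the
        vertex -b/(2a), i.e. the workload at minimum lactate."""
        n = len(workloads)

        def S(p):
            return sum(w ** p for w in workloads)

        def Sy(p):
            return sum(l * w ** p for w, l in zip(workloads, lactates))

        # Augmented normal equations for the coefficients [a, b, c]
        A = [[S(4), S(3), S(2), Sy(2)],
             [S(3), S(2), S(1), Sy(1)],
             [S(2), S(1), n,    Sy(0)]]
        # Gauss-Jordan elimination
        for i in range(3):
            piv = A[i][i]
            A[i] = [v / piv for v in A[i]]
            for j in range(3):
                if j != i:
                    f = A[j][i]
                    A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
        a, b = A[0][3], A[1][3]
        return -b / (2 * a)
    ```

    For a U-shaped lactate curve the vertex lands at the workload where lactate is lowest, the LMI.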

  7. Model-based state estimator for an intelligent tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A. P.; Schmeitz, A. J.C.; Besselink, I.; Nijmeijer, H.

    2017-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  8. Model-based State Estimator for an Intelligent Tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A.P.; Schmeitz, A.J.C.; Besselink, I.J.M.; Nijmeijer, H.

    2016-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  9. Validity of anthropometric procedures to estimate body density and body fat percent in military men

    Directory of Open Access Journals (Sweden)

    Ciro Romélio Rodriguez-Añez

    1999-12-01

    Full Text Available The objective of this study was to verify the validity of the Katch and McArdle equation (1973), which uses the circumferences of the arm, forearm and abdomen to estimate body density, and the procedure of Cohen (1986), which uses the circumferences of the neck and abdomen to estimate body fat percent (%F), in military men. Data were collected from 50 military men, with a mean age of 20.26 ± 2.04 years, serving in Santa Maria, RS. The circumferences were measured according to the Katch and McArdle (1973) and Cohen (1986) procedures. The measured body density (Dm), obtained by underwater weighing, was used as the criterion; its mean value was 1.0706 ± 0.0100 g/ml. The residual lung volume was estimated using the Goldman and Becklake equation (1959). The %F was obtained with the Siri equation (1961); its mean value was 12.70 ± 4.71%. The validation criterion suggested by Lohman (1992) was followed. The analysis of the results indicated that the procedure developed by Cohen (1986) has concurrent validity for estimating %F in military men, or in other samples with similar characteristics, with a standard error of estimate of 3.45%.

  10. Validity in work-based assessment: expanding our horizons

    NARCIS (Netherlands)

    Govaerts, M.; Vleuten, C.P.M. van der

    2013-01-01

    CONTEXT: Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing

  11. Estimation of rumen microbial protein production from purine derivatives in urine. A laboratory manual for the FAO/IAEA co-ordinated research programme on development, standardization and validation of nuclear based technologies for measuring microbial protein supply in ruminant livestock for improving productivity

    International Nuclear Information System (INIS)

    1997-05-01

    This laboratory manual contains the methodologies used in the standardization and validation of the urine purine derivative technique for estimating microbial protein supply to the rumen. It includes descriptions of methods that involve both radioactive and stable isotopes, as well as non-isotopic techniques such as chemical assays, since it has been recognised that while isotopic tracer techniques provide a powerful tool for nutrition research, they cannot and should not be used in isolation. Refs, figs, tabs

  12. Estimating cardiovascular disease incidence from prevalence: a spreadsheet based model

    Directory of Open Access Journals (Sweden)

    Xue Feng Hu

    2017-01-01

    Full Text Available Abstract Background Disease incidence and prevalence are both core indicators of population health. Incidence is generally not as readily accessible as prevalence. Cohort studies and electronic health record systems are two major ways to estimate disease incidence. The former is time-consuming and expensive; the latter is not available in most developing countries. Alternatively, mathematical models can be used to estimate disease incidence from prevalence. Methods We proposed and validated a method to estimate the age-standardized incidence of cardiovascular disease (CVD), with prevalence data from successive surveys and mortality data from empirical studies. Hallett's method, designed for estimating HIV infections in Africa, was modified to estimate the incidence of myocardial infarction (MI) in the U.S. population and the incidence of heart disease in the Canadian population. Results Model-derived estimates were in close agreement with observed incidence from cohort studies and population surveillance systems. This method correctly captured the trend in incidence given sufficient waves of cross-sectional surveys. The estimated MI declining rate in the U.S. population was in accordance with the literature. This method was superior to a closed-cohort approach for estimating the trend of population cardiovascular disease incidence. Conclusion It is possible to estimate CVD incidence accurately at the population level from cross-sectional prevalence data. This method has the potential to be used for age- and sex-specific incidence estimates, or to be expanded to other chronic conditions.
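
    The core idea, backing incidence out of successive prevalence surveys plus survival data, can be illustrated with a toy one-step balance model (far simpler than the modified Hallett method used in the paper; all numbers are hypothetical). Given prevalence at two surveys and survival proportions for diseased and healthy individuals over the interval, solve for the incidence risk that reconciles the two surveys:

    ```python
    def incidence_from_prevalence(p1, p2, surv_diseased, surv_healthy):
        """Cumulative incidence (risk) among the disease-free that makes
        prevalence p1 at survey 1 consistent with prevalence p2 at
        survey 2, in a closed cohort with the given survival proportions.
        New cases are assumed to survive like prevalent cases.
        Solved by bisection on the one-step balance equation."""
        def predicted_p2(inc):
            cases = p1 * surv_diseased + (1 - p1) * inc * surv_diseased
            total = cases + (1 - p1) * (1 - inc) * surv_healthy
            return cases / total       # increasing in inc

        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if predicted_p2(mid) < p2:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    ```

    A round trip (simulate p2 from a known incidence, then recover it) checks the solver's consistency.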

  13. PEANO, a toolbox for real-time process signal validation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Fantoni, Paolo F.; Figedy, Stefan; Racz, Attila

    1998-02-01

    PEANO (Process Evaluation and Analysis by Neural Operators), a toolbox for real-time process signal validation and condition monitoring, has been developed. This system analyses signals which are, e.g., the readings of process monitoring sensors, computes their expected values, and raises an alert if the real values deviate from the expected ones by more than the limits allow. The reliability level of the current analysis is also produced. The system is based on neuro-fuzzy techniques: Artificial Neural Networks and Fuzzy Logic models can be combined to exploit the learning and generalisation capability of the first technique together with the approximate reasoning embedded in the second approach. Real-time process signal validation is an application field where the use of this technique can improve the diagnosis of faulty sensors and the identification of outliers in a robust and reliable way. This study implements a fuzzy and possibilistic clustering algorithm to classify the operating region where the validation process has to be performed. The possibilistic approach (rather than probabilistic) allows a "don't know" classification that results in fast detection of unforeseen plant conditions or outliers. Specialised Artificial Neural Networks are used for the validation process, one for each fuzzy cluster into which the operating map has been divided. There are two main advantages in using this technique: the accuracy and generalisation capability are increased compared to the case of a single network working over the entire operating region, and the ability to identify abnormal conditions, where the system is not capable of operating with satisfactory accuracy, is improved. This model has been tested in a simulated environment on a French PWR, to monitor safety-related reactor variables over the entire power-flow operating map. (author)

  14. PEANO, a toolbox for real-time process signal validation and estimation

    International Nuclear Information System (INIS)

    Fantoni, Paolo F.; Figedy, Stefan; Racz, Attila

    1998-02-01

    PEANO (Process Evaluation and Analysis by Neural Operators), a toolbox for real-time process signal validation and condition monitoring, has been developed. This system analyses signals which are, e.g., the readings of process monitoring sensors, computes their expected values, and raises an alert if the real values deviate from the expected ones by more than the limits allow. The reliability level of the current analysis is also produced. The system is based on neuro-fuzzy techniques: Artificial Neural Networks and Fuzzy Logic models can be combined to exploit the learning and generalisation capability of the first technique together with the approximate reasoning embedded in the second approach. Real-time process signal validation is an application field where the use of this technique can improve the diagnosis of faulty sensors and the identification of outliers in a robust and reliable way. This study implements a fuzzy and possibilistic clustering algorithm to classify the operating region where the validation process has to be performed. The possibilistic approach (rather than probabilistic) allows a "don't know" classification that results in fast detection of unforeseen plant conditions or outliers. Specialised Artificial Neural Networks are used for the validation process, one for each fuzzy cluster into which the operating map has been divided. There are two main advantages in using this technique: the accuracy and generalisation capability are increased compared to the case of a single network working over the entire operating region, and the ability to identify abnormal conditions, where the system is not capable of operating with satisfactory accuracy, is improved. This model has been tested in a simulated environment on a French PWR, to monitor safety-related reactor variables over the entire power-flow operating map. (author)

  15. Relative validity of an FFQ to estimate daily food and nutrient intakes for Chilean adults.

    Science.gov (United States)

    Dehghan, Mahshid; Martinez, Solange; Zhang, Xiaohe; Seron, Pamela; Lanas, Fernando; Islam, Shofiqul; Merchant, Anwar T

    2013-10-01

    FFQs are commonly used to rank individuals by their food and nutrient intakes in large epidemiological studies. The purpose of the present study was to develop and validate an FFQ to rank individuals participating in the ongoing Prospective Urban and Rural Epidemiological (PURE) study in Chile. An FFQ and four 24 h dietary recalls were completed over 1 year. Pearson correlation coefficients, energy-adjusted and de-attenuated correlations, and weighted kappa were computed between the dietary recalls and the FFQ. The level of agreement between the two dietary assessment methods was evaluated by Bland-Altman analysis. The setting was Temuco, Chile. Overall, 166 women and men were enrolled in the present study: one hundred participated in FFQ development and sixty-six in FFQ validation. The FFQ consisted of 109 food items. For nutrients, the crude correlation coefficients between the dietary recalls and the FFQ varied from 0.14 (protein) to 0.44 (fat). Energy adjustment and de-attenuation improved the correlation coefficients, and almost all exceeded 0.40. Similar correlation coefficients were observed for food groups; the highest de-attenuated energy-adjusted correlation coefficient was found for margarine and butter (0.75) and the lowest for potatoes (0.12). The FFQ showed moderate to high agreement for most nutrients and food groups, and can be used to rank individuals based on energy, nutrient and food intakes. The validation study was conducted in a unique setting and indicated that the tool is valid for use by adults in Chile.
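
    The "de-attenuated" correlations above correct the observed FFQ-recall correlation for random day-to-day (within-person) variation in a limited number of 24 h recalls. The standard correction (Willett's de-attenuation formula) can be sketched as follows; the variance inputs in the test are hypothetical:

    ```python
    def deattenuated_r(r_obs, within_var, between_var, n_recalls):
        """De-attenuate an observed FFQ-vs-recall correlation.

        r_true ≈ r_obs * sqrt(1 + lambda / n), where lambda is the
        within- to between-person variance ratio of the reference
        recalls and n is the number of recalls per person.
        """
        lam = within_var / between_var
        return r_obs * (1 + lam / n_recalls) ** 0.5
    ```

    The larger the day-to-day noise relative to true between-person differences, the more the observed correlation understates the true one.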

  16. Robust Covariance Estimators Based on Information Divergences and Riemannian Manifold

    Directory of Open Access Journals (Sweden)

    Xiaoqiang Hua

    2018-03-01

    Full Text Available This paper proposes a class of covariance estimators based on information divergences in heterogeneous environments. In particular, the problem of covariance estimation is reformulated on the Riemannian manifold of Hermitian positive-definite (HPD matrices. The means associated with information divergences are derived and used as the estimators. Without resorting to the complete knowledge of the probability distribution of the sample data, the geometry of the Riemannian manifold of HPD matrices is considered in mean estimators. Moreover, the robustness of mean estimators is analyzed using the influence function. Simulation results indicate the robustness and superiority of an adaptive normalized matched filter with our proposed estimators compared with the existing alternatives.

  17. Evidence-based research: understanding the best estimate

    Directory of Open Access Journals (Sweden)

    Bauer JG

    2016-09-01

    Full Text Available Janet G Bauer,1 Sue S Spackman,2 Robert Fritz,2 Amanjyot K Bains,3 Jeanette Jetton-Rangel3 1Advanced Education Services, 2Division of General Dentistry, 3Center of Dental Research, Loma Linda University School of Dentistry, Loma Linda, CA, USA Introduction: Best estimates of intervention outcomes are used when uncertainties in decision making are evidenced. Best estimates are often, out of necessity, drawn from a context of less-than-quality evidence or one needing more evidence to provide accuracy. Purpose: The purpose of this article is to understand best estimate behavior, so that clinicians and patients may have confidence in its quantification and validation. Methods: To discover best estimates and quantify uncertainty, critical appraisals of the literature, the gray literature and its resources, or both are accomplished. Best estimates of pairwise comparisons are calculated using meta-analytic methods; multiple comparisons use network meta-analysis. Manufacturers provide margins of performance of proprietary material(s). Lower margin performance thresholds or requirements (functional failure of materials) are determined by a distribution of tests to quantify performance or clinical competency. The same is done for the high margin performance thresholds (estimated true value of success) and clinician-derived critical values (material failure to function clinically). This quantification of margins and uncertainties assists clinicians in determining whether reported best estimates are progressing toward true value as new knowledge is reported. Analysis: The best estimate of outcomes focuses on evidence-centered care. In stochastic environments, we are not able to observe all events in all situations to know without uncertainty the best estimates of predictable outcomes. Point-in-time analyses of best estimates using quantification of margins and uncertainties do this.
Conclusion: While study design and methodology are variables known to validate the quality of

  18. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  19. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    Energy Technology Data Exchange (ETDEWEB)

    Hines, J. Wesley [Univ. of Tennessee, Knoxville, TN (United States); Upadhyaya, Belle [Univ. of Tennessee, Knoxville, TN (United States); Sharp, Michael [Univ. of Tennessee, Knoxville, TN (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jeffries, Brien [Univ. of Tennessee, Knoxville, TN (United States); Nam, Alan [Univ. of Tennessee, Knoxville, TN (United States); Strong, Eric [Univ. of Tennessee, Knoxville, TN (United States); Tong, Matthew [Univ. of Tennessee, Knoxville, TN (United States); Welz, Zachary [Univ. of Tennessee, Knoxville, TN (United States); Barbieri, Federico [Univ. of Tennessee, Knoxville, TN (United States); Langford, Seth [Univ. of Tennessee, Knoxville, TN (United States); Meinweiser, Gregory [Univ. of Tennessee, Knoxville, TN (United States); Weeks, Matthew [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-06

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. 
The ultimate goal of prognostics is to provide an accurate assessment for
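
Type I prognostics as described above starts from historical failure times alone. As a minimal sketch (not the project's actual algorithms), the following fits a two-parameter Weibull distribution to past failure times by maximum likelihood and returns the expected life, i.e. a beginning-of-life RUL estimate; the tolerances and data are illustrative.

```python
import math
import numpy as np

def weibull_mle(times, tol=1e-9):
    """Fit a two-parameter Weibull (shape k, scale lam) to failure times by MLE.

    The profile likelihood reduces to a 1-D root-finding problem in k:
        f(k) = sum(t^k * ln t)/sum(t^k) - 1/k - mean(ln t) = 0
    f is monotone increasing in k, so bisection is safe. Times are normalized
    by their maximum (f is scale-invariant) to avoid overflow in t**k.
    """
    t = np.asarray(times, dtype=float)
    scale = t.max()
    tn = t / scale
    log_t = np.log(tn)

    def f(k):
        tk = tn ** k
        return (tk * log_t).sum() / tk.sum() - 1.0 / k - log_t.mean()

    lo, hi = 1e-3, 1e3
    while hi - lo > tol * lo:
        mid = math.sqrt(lo * hi)   # geometric bisection over a wide range
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = scale * ((tn ** k).mean()) ** (1.0 / k)
    return k, lam

def mean_life(k, lam):
    """Expected life of a Weibull(k, lam) component: lam * Gamma(1 + 1/k)."""
    return lam * math.gamma(1.0 + 1.0 / k)
```

In the lifecycle framing, this failure distribution would then be updated with stress information (Type II) and degradation measurements (Type III) via Bayesian methods.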

  20. Validity of bioelectrical impedance analysis in estimation of fat-free mass in colorectal cancer patients.

    Science.gov (United States)

    Ræder, Hanna; Kværner, Ane Sørlie; Henriksen, Christine; Florholmen, Geir; Henriksen, Hege Berg; Bøhn, Siv Kjølsrud; Paur, Ingvild; Smeland, Sigbjørn; Blomhoff, Rune

    2018-02-01

Bioelectrical impedance analysis (BIA) is an accessible and cheap method to measure fat-free mass (FFM). However, BIA estimates are subject to uncertainty in patient populations with altered body composition and hydration. The aim of the current study was to validate a whole-body and a segmental BIA device against dual-energy X-ray absorptiometry (DXA) in colorectal cancer (CRC) patients, and to investigate the ability of different empiric equations for BIA to predict DXA FFM (FFMDXA). Forty-three non-metastatic CRC patients (aged 50-80 years) were enrolled in this study. Whole-body and segmental BIA FFM estimates (FFMwhole-bodyBIA, FFMsegmentalBIA) were calculated using 14 empiric equations, including the equations from the manufacturers, before comparison to FFMDXA estimates. Strong linear relationships were observed between FFMBIA and FFMDXA estimates for all equations (R² = 0.94-0.98 for both devices). However, there were large discrepancies in FFM estimates depending on the equations used, with mean differences in the ranges -6.5 to 6.8 kg and -11.0 to 3.4 kg for whole-body and segmental BIA, respectively. For whole-body BIA, 77% of BIA-derived FFM estimates were significantly different from FFMDXA, whereas for segmental BIA, 85% were significantly different. For whole-body BIA, the Schols* equation gave the highest agreement with FFMDXA, with mean difference ±SD of -0.16 ± 1.94 kg (p = 0.582). The manufacturer's equation gave a small overestimation of FFM with 1.46 ± 2.16 kg (p FFMDXA (0.17 ± 1.83 kg (p = 0.546)). Using the manufacturer's equation, no difference in FFM estimates was observed (-0.34 ± 2.06 kg (p = 0.292)); however, a clear proportional bias was detected (r = 0.69, p FFM compared to DXA using the optimal equation. In a population of non-metastatic CRC patients, mostly consisting of Caucasian adults and with a wide range of body composition measures, both the whole-body BIA and segmental BIA device

  1. Validation test case generation based on safety analysis ontology

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Wang, Wen-Shing

    2012-01-01

    Highlights: ► Current practice in validation test case generation for nuclear system is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.

  2. Estimation of nonpaternity in the Mexican population of Nuevo Leon: a validation study with blood group markers.

    Science.gov (United States)

    Cerda-Flores, R M; Barton, S A; Marty-Gonzalez, L F; Rivas, F; Chakraborty, R

    1999-07-01

A method for estimating the general rate of nonpaternity in a population was validated using phenotype data on seven blood groups (A1A2BO, MNSs, Rh, Duffy, Lutheran, Kidd, and P) on 396 mother, child, and legal father trios from Nuevo León, Mexico. In all, 32 legal fathers were excluded as the possible father based on genetic exclusions at one or more loci (combined average exclusion probability of 0.694 for specific mother-child phenotype pairs). The maximum likelihood estimate of the general nonpaternity rate in the population was 0.118 ± 0.020. The nonpaternity rates in Nuevo León were also seen to be inversely related with the socioeconomic status of the families, i.e., the highest in the low and the lowest in the high socioeconomic class. We further argue that with the moderately low (69.4%) power of exclusion for these seven blood group systems, the traditional critical values of paternity index (PI ≥ 19) were not good indicators of true paternity, since a considerable fraction (307/364) of nonexcluded legal fathers had a paternity index below 19 based on the seven markers. Implications of these results in the context of genetic-epidemiological studies as well as for detection of true fathers for child-support adjudications are discussed, implying the need to employ a battery of genetic markers (possibly DNA-based tests) that yield a higher power of exclusion. We conclude that even though DNA markers are more informative, the probabilistic approach developed here would still be needed to estimate the true rate of nonpaternity in a population or to evaluate the precision of detecting true fathers.
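
The relationship between observed exclusions and the underlying nonpaternity rate can be illustrated with a simplified method-of-moments estimator; the study itself used maximum likelihood with trio-specific exclusion probabilities, so the result differs slightly from the reported 0.118.

```python
def nonpaternity_rate(n_trios, n_excluded, mean_exclusion_prob):
    """Method-of-moments estimate of the population nonpaternity rate.

    Only a fraction `mean_exclusion_prob` of true nonfathers are detectable
    with the marker panel, so the raw exclusion rate underestimates
    nonpaternity:  E[n_excluded] = n_trios * rate * mean_exclusion_prob.
    """
    return n_excluded / (n_trios * mean_exclusion_prob)

# Figures from the study: 32 exclusions among 396 trios with an average
# exclusion power of 0.694 give roughly 0.116, near the reported MLE of 0.118.
rate = nonpaternity_rate(396, 32, 0.694)
```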

  3. Development and Cross-Validation of Equation for Estimating Percent Body Fat of Korean Adults According to Body Mass Index

    Directory of Open Access Journals (Sweden)

    Hoyong Sung

    2017-06-01

Full Text Available Background: Using BMI as an independent variable is the easiest way to estimate percent body fat. Thus far, few studies have investigated the development and cross-validation of an equation for estimating the percent body fat of Korean adults according to the BMI. The goals of this study were the development and cross-validation of an equation for estimating the percent fat of representative Korean adults using the BMI. Methods: Samples were obtained from the Korea National Health and Nutrition Examination Survey between 2008 and 2011. The samples from 2008-2009 and 2010-2011 were labeled as the validation group (n=10,624) and the cross-validation group (n=8,291), respectively. The percent fat was measured using dual-energy X-ray absorptiometry, and the body mass index, gender, and age were included as independent variables to estimate the measured percent fat. The coefficient of determination (R²), standard error of estimation (SEE), and total error (TE) were calculated to examine the accuracy of the developed equation. Results: The cross-validated R² was 0.731 for Model 1 and 0.735 for Model 2. The SEE was 3.978 for Model 1 and 3.951 for Model 2. The equations developed in this study are more accurate for estimating percent fat of the cross-validation group than those previously published by other researchers. Conclusion: The newly developed equations are comparatively accurate for the estimation of the percent fat of Korean adults.
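
The development/cross-validation workflow (fit on one survey cycle, then evaluate R², SEE, and TE on the other) can be sketched on synthetic data. The coefficients and noise level below are invented for illustration, not the published equations.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n):
    """Synthetic stand-in for survey data: BMI, sex, age -> percent fat."""
    bmi = rng.normal(24.0, 3.0, n)
    male = rng.integers(0, 2, n).astype(float)
    age = rng.uniform(20.0, 80.0, n)
    pfat = 1.3 * bmi - 10.0 * male + 0.10 * age + 2.0 + rng.normal(0.0, 4.0, n)
    X = np.column_stack([np.ones(n), bmi, male, age])
    return X, pfat

X_dev, y_dev = simulate(10624)   # development ("validation") group
X_cv, y_cv = simulate(8291)      # cross-validation group

# Fit on the development group, evaluate on the cross-validation group.
beta, *_ = np.linalg.lstsq(X_dev, y_dev, rcond=None)
pred = X_cv @ beta

# Accuracy statistics used in the study
ss_res = float(((y_cv - pred) ** 2).sum())
r2 = 1.0 - ss_res / float(((y_cv - y_cv.mean()) ** 2).sum())  # cross-validated R²
see = (ss_res / (len(y_cv) - X_cv.shape[1])) ** 0.5           # standard error of estimation
te = (ss_res / len(y_cv)) ** 0.5                              # total error
```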

  4. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    Science.gov (United States)

    Guo, T. H.; Musgrave, J.

    1992-11-01

In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed-loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network-based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an autoassociative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an autoassociative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using
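
The autoassociative idea can be illustrated with a linear stand-in: each channel is reconstructed from the remaining channels, and a large reconstruction residual flags an inconsistent sensor. The channel names, coefficients, and noise levels below are invented for illustration; the engine application used a neural network rather than least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: four redundant, correlated "sensor" channels
# driven by one underlying operating condition (all values illustrative).
cond = rng.uniform(0.65, 1.09, 500)              # power level, 65%-109%
train = np.column_stack([
    cond + rng.normal(0.0, 0.010, 500),          # chamber pressure proxy
    2.0 * cond + rng.normal(0.0, 0.020, 500),    # volumetric flow proxy
    0.5 * cond + rng.normal(0.0, 0.005, 500),    # pump exit temperature proxy
    1.5 * cond + rng.normal(0.0, 0.015, 500),    # pump exit pressure proxy
])

# Autoassociative map: reconstruct each channel from the other channels.
W = []
for i in range(train.shape[1]):
    others = np.delete(train, i, axis=1)
    A = np.column_stack([others, np.ones(len(train))])
    w, *_ = np.linalg.lstsq(A, train[:, i], rcond=None)
    W.append(w)

def residuals(x):
    """Reconstruction error per channel; a large value flags a bad sensor."""
    out = []
    for i, w in enumerate(W):
        a = np.append(np.delete(x, i), 1.0)
        out.append(abs(a @ w - x[i]))
    return np.array(out)

good = np.array([0.9, 1.8, 0.45, 1.35])   # mutually consistent readings
bad = good.copy()
bad[1] = 3.0                               # failed flow sensor
```

A mixture-ratio output would be synthesized the same way, as one more signal reconstructed from the validated channels.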

  5. Estimating mortality from external causes using data from retrospective surveys: A validation study in Niakhar (Senegal

    Directory of Open Access Journals (Sweden)

    Gilles Pison

    2018-03-01

Full Text Available Background: In low- and middle-income countries (LMICs), data on causes of death is often inaccurate or incomplete. In this paper, we test whether adding a few questions about injuries and accidents to mortality questionnaires used in representative household surveys would yield accurate estimates of the extent of mortality due to external causes (accidents, homicides, or suicides). Methods: We conduct a validation study in Niakhar (Senegal), during which we compare reported survey data to high-quality prospective records of deaths collected by a health and demographic surveillance system (HDSS). Results: Survey respondents more frequently list the deaths of their adult siblings who die of external causes than the deaths of those who die from other causes. The specificity of survey data is high, but sensitivity is low. Among reported deaths, less than 60% of the deaths classified as due to external causes by the HDSS are also classified as such by survey respondents. Survey respondents report deaths due to road-traffic accidents better than deaths from suicides and homicides. Conclusions: Asking questions about deaths resulting from injuries and accidents during surveys might help measure mortality from external causes in LMICs, but the resulting data display systematic bias in a rural population of Senegal. Future studies should (1) investigate whether similar biases also apply in other settings and (2) test new methods to further improve the accuracy of survey data on mortality from external causes. Contribution: This study helps strengthen the monitoring of sustainable development targets in LMICs by validating a simple approach for the measurement of mortality from external causes.

  6. Assessment of heat transfer correlations for supercritical water in the frame of best-estimate code validation

    International Nuclear Information System (INIS)

    Jaeger, Wadim; Espinoza, Victor H. Sanchez; Schneider, Niko; Hurtado, Antonio

    2009-01-01

Within the frame of the Generation IV international forum six innovative reactor concepts are the subject of comprehensive investigations. In some projects supercritical water will be considered as coolant, moderator (as for the High Performance Light Water Reactor) or secondary working fluid (one possible option for Liquid Metal-cooled Fast Reactors). Supercritical water is characterized by a pronounced change of the thermo-physical properties when crossing the pseudo-critical line, which goes hand in hand with a change in the heat transfer (HT) behavior. Hence, it is essential to estimate, in a proper way, the heat-transfer coefficient and subsequently the wall temperature. The scope of this paper is to present and discuss the activities at the Institute for Reactor Safety (IRS) related to the implementation of correlations for wall-to-fluid HT at supercritical conditions in Best-Estimate codes like TRACE as well as its validation. It is important to validate TRACE before applying it to safety analyses of HPLWR or of other reactor systems. In the past 3 decades various experiments have been performed all over the world to reveal the peculiarities of wall-to-fluid HT at supercritical conditions. Several different heat transfer phenomena such as HT enhancement (due to higher Prandtl numbers in the vicinity of the pseudo-critical point) or HT deterioration (due to strong property variations) were observed. Since TRACE is a component based system code with a finite volume method the resolution capabilities are limited and not all physical phenomena can be modeled properly. But Best-Estimate system codes are nowadays the preferred option for safety related investigations of full plants or other integral systems. Thus, the increase of the confidence in such codes is of high priority. In this paper, the post-test analysis of experiments with supercritical parameters will be presented. 
For that reason, various correlations for the HT, which consider the characteristics

  7. An Estimator for Attitude and Heading Reference Systems Based on Virtual Horizontal Reference

    DEFF Research Database (Denmark)

    Wang, Yunlong; Soltani, Mohsen; Hussain, Dil muhammed Akbar

    2016-01-01

The output of attitude determination systems suffers from large errors in case of accelerometer malfunctions. In this paper, an attitude estimator based on a Virtual Horizontal Reference (VHR) is designed for an Attitude and Heading Reference System (AHRS) to cope with this problem. The VHR makes it possible to correct the output of roll and pitch of the attitude estimator in situations without accelerometer measurements, which cannot be achieved by the conventional nonlinear attitude estimator. The performance of the VHR is tested in both simulation and hardware environments to validate its estimation performance. Moreover, the hardware test results are compared with those of a high-precision commercial AHRS to verify the estimation results. The implemented algorithm has shown high accuracy of attitude estimation that makes the system suitable for many applications.

  8. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    Science.gov (United States)

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, predictive values, positive and negative, were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group, with unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease who are typically used in validation studies may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.
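
The reported predictive values follow from Bayes' rule applied to the sensitivity, specificity, and assumed prevalence; the sketch below reproduces the study's positive predictive value of about 0.70 at 10% prevalence.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Convert test characteristics into post-test probabilities (Bayes' rule)."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Figures from the study: sensitivity 0.83, specificity 0.96, prevalence 10%.
ppv, npv = predictive_values(0.83, 0.96, 0.10)   # ppv is about 0.70
```

Because predictive values depend on prevalence, the same test screens very differently in high- and low-prevalence populations.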

  9. Accuracy and feasibility of estimated tumour volumetry in primary gastric gastrointestinal stromal tumours: validation using semiautomated technique in 127 patients.

    Science.gov (United States)

    Tirumani, Sree Harsha; Shinagare, Atul B; O'Neill, Ailbhe C; Nishino, Mizuki; Rosenthal, Michael H; Ramaiya, Nikhil H

    2016-01-01

To validate estimated tumour volumetry in primary gastric gastrointestinal stromal tumours (GISTs) using semiautomated volumetry. In this IRB-approved retrospective study, we measured the three longest diameters in the x, y, z axes on CTs of primary gastric GISTs in 127 consecutive patients (52 women, 75 men, mean age 61 years) at our institute between 2000 and 2013. Segmented volumes (Vsegmented) were obtained using commercial software by two radiologists. Estimated volumes (V1-V6) were obtained using formulae for spheres and ellipsoids. Intra- and interobserver agreement of Vsegmented and agreement of V1-V6 with Vsegmented were analysed with concordance correlation coefficients (CCC) and Bland-Altman plots. Median Vsegmented and V1-V6 were 75.9, 124.9, 111.6, 94.0, 94.4, 61.7 and 80.3 cm³, respectively. There was strong intra- and interobserver agreement for Vsegmented. Agreement with Vsegmented was highest for V6 (scalene ellipsoid, x ≠ y ≠ z), with CCC of 0.96 [95% CI 0.95-0.97]. Mean relative difference was smallest for V6 (0.6%), while it was -19.1% for V5, +14.5% for V4, +17.9% for V3, +32.6% for V2 and +47% for V1. Ellipsoidal approximations of volume using three measured axes may be used to closely estimate Vsegmented when semiautomated techniques are unavailable. Estimation of tumour volume in primary GIST using mathematical formulae is feasible. Gastric GISTs are rarely spherical. Segmented volumes are highly concordant with three-axis-based scalene ellipsoid volumes. Ellipsoid volume can be used as an alternative to automated tumour volumetry.
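
The scalene ellipsoid estimate (V6 in the study) is a direct formula on the three measured diameters; with equal diameters it reduces to the sphere formula. A sketch:

```python
import math

def ellipsoid_volume(x, y, z):
    """Scalene ellipsoid volume from three orthogonal diameters:
    V = (pi/6) * x * y * z."""
    return math.pi / 6.0 * x * y * z

def sphere_volume(d):
    """Sphere volume from a single diameter: (pi/6) * d^3."""
    return ellipsoid_volume(d, d, d)
```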

  10. Validation of a protocol for the estimation of three-dimensional body center of mass kinematics in sport.

    Science.gov (United States)

    Mapelli, Andrea; Zago, Matteo; Fusini, Laura; Galante, Domenico; Colombo, Andrea; Sforza, Chiarella

    2014-01-01

Since it is strictly related to balance and stability control, body center of mass (CoM) kinematics is a relevant quantity in sport surveys. Many methods have been proposed to estimate CoM displacement. Among them, the segmental method appears to be suitable to investigate CoM kinematics in sport: the human body is assumed to be a system of rigid bodies, hence the whole-body CoM is calculated as the weighted average of the CoM of each segment. The number of landmarks represents a crucial choice in the protocol design process: one has to find the proper compromise between accuracy and invasiveness. In this study, using a motion analysis system, a protocol based upon the segmental method is validated, adopting an anatomical model comprising 14 landmarks. Two sets of experiments were conducted. Firstly, our protocol was compared to the ground reaction force method (GRF), regarded as a standard in CoM estimation. In the second experiment, we investigated the aerial phase typical of many disciplines, comparing our protocol with: (1) an absolute reference, the parabolic regression of the vertical CoM trajectory during the time of flight; (2) two common approaches to estimate CoM kinematics in gait, known as the sacrum and reconstructed pelvis methods. Recognized accuracy indexes proved that the results obtained were comparable to the GRF; what is more, during the aerial phases our protocol proved to be significantly more accurate than the two other methods. The assessed protocol can therefore be adopted as a reliable tool for CoM kinematics estimation in further sports research. Copyright © 2013 Elsevier B.V. All rights reserved.
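
The segmental method described above reduces to a mass-weighted average of segment CoM positions. The sketch below uses invented mass fractions; real protocols take per-segment fractions from published anthropometric tables.

```python
import numpy as np

# Illustrative segment mass fractions (made up for this sketch; actual
# anthropometric tables define one fraction per body segment).
MASS_FRACTIONS = {
    "trunk": 0.43,
    "head": 0.07,
    "thighs": 0.28,
    "shanks_feet": 0.12,
    "arms": 0.10,
}

def whole_body_com(segment_coms, fractions=MASS_FRACTIONS):
    """Whole-body CoM as the mass-weighted average of 3-D segment CoMs."""
    w = np.array([fractions[s] for s in segment_coms])
    p = np.array([segment_coms[s] for s in segment_coms])  # shape (n_segments, 3)
    return (w[:, None] * p).sum(axis=0) / w.sum()
```

Applied frame by frame to motion-capture data, this yields the whole-body CoM trajectory whose vertical component should be parabolic during an aerial phase.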

  11. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    2000-01-01

The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision to similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCFs) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000)
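
A unit liter dose combines activity concentrations with dose conversion factors. The sketch below shows only the arithmetic; the nuclides, concentrations, and DCF values are entirely hypothetical, not values from the report.

```python
# Hypothetical activity concentrations for one waste phase (uCi/L).
concentrations = {"Cs-137": 120.0, "Sr-90": 45.0}

# Hypothetical dose conversion factors (Sv per uCi inhaled/ingested).
dcf = {"Cs-137": 1.0e-8, "Sr-90": 3.0e-8}

# Unit liter dose (Sv/L): sum over nuclides of concentration * DCF.
unit_liter_dose = sum(concentrations[n] * dcf[n] for n in concentrations)
```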

  12. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

Full Text Available One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and applied to a wider range of applications than a traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  13. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and applied to a wider range of applications than a traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  14. Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study

    OpenAIRE

    Hippisley-Cox, Julia; Coupland, Carol

    2017-01-01

Objective: To develop and externally validate risk prediction equations to estimate absolute and conditional survival in patients with colorectal cancer. Design: Cohort study. Setting: General practices in England providing data for the QResearch database linked to the national cancer registry. Participants: 44 145 patients aged 15-99 with colorectal cancer from 947 practices to derive the equations. The equations were validated in 15 214 patients with colorectal cancer ...

  15. Evaluation of Different Estimation Methods for Accuracy and Precision in Biological Assay Validation.

    Science.gov (United States)

    Yu, Binbing; Yang, Harry

    2017-01-01

Biological assays (bioassays) are procedures to estimate the potency of a substance by studying its effects on living organisms, tissues, and cells. Bioassays are essential tools for gaining insight into biologic systems and processes including, for example, the development of new drugs and monitoring environmental pollutants. Two of the most important parameters of bioassay performance are relative accuracy (bias) and precision. Although general strategies and formulas are provided in USP, a comprehensive understanding of the definitions of bias and precision remains elusive. Additionally, whether there is a beneficial use of data transformation in estimating intermediate precision remains unclear. Finally, there are various statistical estimation methods available that often pose a dilemma for the analyst who must choose the most appropriate method. To address these issues, we provide both a rigorous definition of bias and precision as well as three alternative methods for calculating relative standard deviation (RSD). All methods perform similarly when the RSD ≤10%. However, the USP estimates result in larger bias and root-mean-square error (RMSE) compared to the three proposed methods when the actual variation was large. Therefore, the USP method should not be used for routine analysis. For data with moderate skewness and deviation from normality, the estimates based on the original scale perform well. The original scale method is preferred, and the method based on log-transformation may be used for noticeably skewed data. LAY ABSTRACT: Biological assays, or bioassays, are essential in the development and manufacture of biopharmaceutical products for potency testing and quality monitoring. Two important parameters of assay performance are relative accuracy (bias) and precision. The definitions of bias and precision in USP 〈1033〉 are elusive and confusing. Another complicating issue is whether log-transformation should be used for calculating the
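
The two estimation scales discussed above can be sketched as follows. The log-scale version back-transforms the log standard deviation into a geometric RSD, a common choice for right-skewed data; the exact back-transformation intended by the paper is an assumption here. For data with small variation the two estimates nearly coincide.

```python
import math
import statistics

def rsd_original(values):
    """Relative standard deviation (%) computed on the original scale."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def rsd_from_log(values):
    """Geometric RSD (%) back-transformed from the natural-log scale:
    RSD = 100 * sqrt(exp(sd_log^2) - 1)."""
    s = statistics.stdev(math.log(v) for v in values)
    return 100.0 * math.sqrt(math.exp(s * s) - 1.0)
```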

  16. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.

  17. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry – BREALD-30

    Science.gov (United States)

    Junkes, Monica C.; Fraiz, Fabian C.; Sardenberg, Fernanda; Lee, Jessica Y.; Paiva, Saul M.; Ferreira, Fernanda M.

    2015-01-01

    Objective The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. Methods After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. Results The BREALD-30 demonstrated good internal reliability. Cronbach’s alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent’s perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent’s perception regarding his/her child's oral health remained significant in the multivariate analysis. Conclusion The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil. PMID:26158724

  18. Design and validation of new genotypic tools for easy and reliable estimation of HIV tropism before using CCR5 antagonists.

    Science.gov (United States)

    Poveda, Eva; Seclén, Eduardo; González, María del Mar; García, Federico; Chueca, Natalia; Aguilera, Antonio; Rodríguez, Jose Javier; González-Lahoz, Juan; Soriano, Vincent

    2009-05-01

    Genotypic tools may allow easier and less expensive estimation of HIV tropism before prescription of CCR5 antagonists compared with the Trofile assay (Monogram Biosciences, South San Francisco, CA, USA). Paired genotypic and Trofile results were compared in plasma samples derived from the maraviroc expanded access programme (EAP) in Europe. A new genotypic approach was built to improve the sensitivity to detect X4 variants based on an optimization of the webPSSM algorithm. Then, the new tool was validated in specimens from patients included in the ALLEGRO trial, a multicentre study conducted in Spain to assess the prevalence of R5 variants in treatment-experienced HIV patients. A total of 266 specimens from the maraviroc EAP were tested. Overall geno/pheno concordance was above 72%. A high specificity was generally seen for the detection of X4 variants using genotypic tools (ranging from 58% to 95%), while sensitivity was low (ranging from 31% to 76%). The PSSM score was then optimized to enhance the sensitivity to detect X4 variants changing the original threshold for R5 categorization. The new PSSM algorithms, PSSM(X4R5-8) and PSSM(SINSI-6.4), considered as X4 all V3 scoring values above -8 or -6.4, respectively, increasing the sensitivity to detect X4 variants up to 80%. The new algorithms were then validated in 148 specimens derived from patients included in the ALLEGRO trial. The sensitivity/specificity to detect X4 variants was 93%/69% for PSSM(X4R5-8) and 93%/70% for PSSM(SINSI-6.4). PSSM(X4R5-8) and PSSM(SINSI-6.4) may confidently assist therapeutic decisions for using CCR5 antagonists in HIV patients, providing an easier and rapid estimation of tropism in clinical samples.
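    The optimized PSSM rule amounts to a threshold classifier: a V3 sequence is called X4 whenever its score exceeds a cutoff such as -8, and sensitivity/specificity are then computed against the phenotypic (Trofile) labels. A sketch with invented scores and labels:

```python
def classify_x4(scores, threshold=-8.0):
    """Label a sequence X4 when its PSSM score exceeds the threshold
    (mirrors the PSSM(X4R5-8) rule; scores here are hypothetical)."""
    return [s > threshold for s in scores]

def sensitivity_specificity(predicted_x4, true_x4):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    tp = sum(p and t for p, t in zip(predicted_x4, true_x4))
    tn = sum((not p) and (not t) for p, t in zip(predicted_x4, true_x4))
    fp = sum(p and (not t) for p, t in zip(predicted_x4, true_x4))
    fn = sum((not p) and t for p, t in zip(predicted_x4, true_x4))
    return tp / (tp + fn), tn / (tn + fp)

scores = [-12.0, -7.5, -3.0, -9.1, -6.0, -15.2]   # hypothetical PSSM scores
is_x4  = [False, True, True, False, True, False]  # hypothetical Trofile labels
pred = classify_x4(scores)
sens, spec = sensitivity_specificity(pred, is_x4)
```

    Lowering the threshold (as the authors did from the original webPSSM cutoff) trades specificity for higher X4 sensitivity.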

  19. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    Science.gov (United States)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still carries many uncertainties, arising from the model mechanism, model inputs, parameterization schemes, and scaling issues in regional estimation. Obtaining remotely sensed evapotranspiration (RS_ET) estimates of known accuracy is necessary but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimates. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate for validating RS_ET at diverse resolutions and different time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as the water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with the multi-scale evapotranspiration measurements from the EC and LAS, respectively, with the footprint model over three typical landscapes.

  20. Primary Sclerosing Cholangitis Risk Estimate Tool (PREsTo) Predicts Outcomes in PSC: A Derivation & Validation Study Using Machine Learning.

    Science.gov (United States)

    Eaton, John E; Vesterhus, Mette; McCauley, Bryan M; Atkinson, Elizabeth J; Schlicht, Erik M; Juran, Brian D; Gossard, Andrea A; LaRusso, Nicholas F; Gores, Gregory J; Karlsen, Tom H; Lazaridis, Konstantinos N

    2018-05-09

    Improved methods are needed to risk stratify and predict outcomes in patients with primary sclerosing cholangitis (PSC). Therefore, we sought to derive and validate a new prediction model and compare its performance to existing surrogate markers. The model was derived using 509 subjects from a multicenter North American cohort and validated in an international multicenter cohort (n=278). Gradient boosting, a machine learning technique, was used to create the model. The endpoint was hepatic decompensation (ascites, variceal hemorrhage or encephalopathy). Subjects with advanced PSC or cholangiocarcinoma at baseline were excluded. The PSC risk estimate tool (PREsTo) consists of 9 variables: bilirubin, albumin, serum alkaline phosphatase (SAP) times the upper limit of normal (ULN), platelets, AST, hemoglobin, sodium, patient age and the number of years since PSC was diagnosed. Validation in an independent cohort confirms PREsTo accurately predicts decompensation (C statistic 0.90, 95% confidence interval (CI) 0.84-0.95) and performed well compared to the MELD score (C statistic 0.72, 95% CI 0.57-0.84), the Mayo PSC risk score (C statistic 0.85, 95% CI 0.77-0.92) and SAP (C statistic 0.65, 95% CI 0.55-0.73). PREsTo remained accurate among individuals with lower bilirubin levels (C statistic 0.90, 95% CI 0.82-0.96) and when the score was reapplied later in the disease course (C statistic 0.82, 95% CI 0.64-0.95). PREsTo accurately predicts hepatic decompensation in PSC and exceeds the performance of other widely available, noninvasive prognostic scoring systems. This article is protected by copyright. All rights reserved. © 2018 by the American Association for the Study of Liver Diseases.

  1. MODIS Observation of Aerosols over Southern Africa During SAFARI 2000: Data, Validation, and Estimation of Aerosol Radiative Forcing

    Science.gov (United States)

    Ichoku, Charles; Kaufman, Yoram; Remer, Lorraine; Chu, D. Allen; Mattoo, Shana; Tanre, Didier; Levy, Robert; Li, Rong-Rong; Kleidman, Richard; Lau, William K. M. (Technical Monitor)

    2001-01-01

    Aerosol properties, including optical thickness and size parameters, are retrieved operationally from the MODIS sensor onboard the Terra satellite launched on 18 December 1999. The predominant aerosol type over the Southern African region is smoke, which is generated from biomass burning on land and transported over the southern Atlantic Ocean. The SAFARI-2000 period experienced smoke aerosol emissions from the regular biomass burning activities as well as from the prescribed burns administered under the auspices of the experiment. The MODIS Aerosol Science Team (MAST) formulates and implements strategies for the retrieval of aerosol products from MODIS, as well as for validating and analyzing them in order to estimate aerosol effects in the radiative forcing of climate as accurately as possible. These activities are carried out not only from a global perspective, but also with a focus on specific regions identified as having interesting characteristics, such as the biomass burning phenomenon in southern Africa and the associated smoke aerosol, particulate, and trace gas emissions. Indeed, the SAFARI-2000 aerosol measurements from the ground and from aircraft, along with MODIS, provide excellent data sources for a more intensive validation and a closer study of the aerosol characteristics over Southern Africa. The SAFARI-2000 ground-based measurements of aerosol optical thickness (AOT) from both the automatic Aerosol Robotic Network (AERONET) and handheld Sun photometers have been used to validate MODIS retrievals, based on a sophisticated spatio-temporal technique. The average global monthly distribution of aerosol from MODIS has been combined with other data to calculate the southern African aerosol daily averaged (24 hr) radiative forcing over the ocean for September 2000.
    It is estimated that on average, for cloud-free conditions over an area of 9 million square km, this predominantly smoke aerosol exerts a forcing of -30 W/m² close to the surface.

  2. OWL-based reasoning methods for validating archetypes.

    Science.gov (United States)

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies had been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method. For this purpose, we implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  4. Validation of a Robust Neural Real-Time Voltage Estimator for Active Distribution Grids on Field Data

    DEFF Research Database (Denmark)

    Pertl, Michael; Douglass, Philip James; Heussen, Kai

    2018-01-01

    The installation of measurements in distribution grids enables the development of data-driven methods for the power system. However, these methods have to be validated in order to understand their limitations and capabilities. This paper presents a systematic validation of a neural network approach for voltage estimation in active distribution grids by means of measured data from two feeders of a real low-voltage distribution grid. The approach enables real-time voltage estimation at locations in the distribution grid where otherwise only non-real-time measurements are available.

  5. Validation of the iPhone app using the force platform to estimate vertical jump height.

    Science.gov (United States)

    Carlos-Vivas, Jorge; Martin-Martinez, Juan P; Hernandez-Mocholi, Miguel A; Perez-Gomez, Jorge

    2018-03-01

    Vertical jump performance has been evaluated with several devices: force platforms, contact mats, Vertec, accelerometers, infrared cameras and high-velocity cameras; however, the force platform is considered the gold standard for measuring vertical jump height. The purpose of this study was to validate an iPhone app called My Jump, which measures vertical jump height, by comparing it with two methods that use the force platform to estimate vertical jump height, namely, vertical velocity at take-off and time in the air. A total of 40 sport sciences students (age 21.4±1.9 years) completed five countermovement jumps (CMJs) over a force platform. Thus, 200 CMJ heights were evaluated from the vertical velocity at take-off and the time in the air using the force platform, and from the time in the air with the My Jump mobile application. The heights obtained were compared using the intraclass correlation coefficient (ICC). The correlation between the app and the force platform using the time in the air was perfect (ICC=1.000, P<0.001). The iPhone app, My Jump, is therefore an appropriate method to evaluate vertical jump performance; however, vertical jump height is slightly overestimated compared with that of the force platform.
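    Both time-in-air methods rest on the same projectile relations: height h = g·t²/8 from flight time, or h = v₀²/(2g) from take-off velocity. A minimal sketch:

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t_air: float) -> float:
    """Jump height (m) from flight time (s): h = g * t^2 / 8.
    Assumes identical take-off and landing posture, as both the app
    and the force-platform time-in-air method do."""
    return G * t_air ** 2 / 8.0

def jump_height_from_takeoff_velocity(v0: float) -> float:
    """Jump height (m) from vertical take-off velocity (m/s): h = v0^2 / (2g)."""
    return v0 ** 2 / (2.0 * G)

# A 0.5 s flight time corresponds to a take-off velocity of g*t/2,
# so the two estimates agree for an ideal jump
h_time = jump_height_from_flight_time(0.5)
h_vel = jump_height_from_takeoff_velocity(G * 0.5 / 2.0)
```

    In practice the two force-platform estimates differ slightly because landing posture changes the effective flight time, which is one source of the small overestimation reported above.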

  6. Development and validation of RP-HPLC method for estimation of eplerenone in spiked human plasma

    Directory of Open Access Journals (Sweden)

    Paraag Gide

    2012-10-01

    Full Text Available A rapid and simple high performance liquid chromatography (HPLC) method with UV detection (241 nm) was developed and validated for the estimation of eplerenone in spiked human plasma. The analyte and the internal standard (valdecoxib) were extracted with a mixture of dichloromethane and diethyl ether. The chromatographic separation was performed on a HiQSil C-18HS column (250 mm×4.6 mm, 5 μm) with a mobile phase consisting of acetonitrile:water (50:50, v/v) at a flow rate of 1 mL/min. The calibration curve was linear over the range 100–3200 ng/mL, and heteroscedasticity was minimized by using weighted least squares regression with a weighting factor of 1/X. Keywords: Eplerenone, Liquid–liquid extraction, Weighted regression, HPLC–UV
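    The weighted least-squares calibration with weighting factor 1/X can be sketched by solving the weighted normal equations directly; the calibration numbers below are synthetic, not data from the paper:

```python
import numpy as np

def weighted_linear_fit(x, y):
    """Weighted least-squares line y = a*x + b with weights w = 1/x,
    down-weighting the high-concentration points whose absolute
    variance is larger (an illustrative sketch of 1/X weighting)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / x
    W = np.diag(w)
    X = np.column_stack([x, np.ones_like(x)])
    # Solve the weighted normal equations (X^T W X) beta = X^T W y
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta  # (slope, intercept)

conc = np.array([100, 200, 400, 800, 1600, 3200])  # ng/mL calibration levels
resp = 0.002 * conc + 0.05                          # noise-free synthetic response
slope, intercept = weighted_linear_fit(conc, resp)
```

    With real, heteroscedastic data the 1/X weights keep the low-concentration residuals from being swamped by the large responses at the top of the range.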

  7. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
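    The core of such an algorithm is a bootstrap loop that resamples the validation set and recomputes performance measures such as the area under the ROC curve. A simplified sketch (the estimated calibration index used in the paper is omitted, and the scores are invented):

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random event outscores a random non-event."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc(scores, labels, n_boot=200, seed=1):
    """Bootstrap distribution of the AUC: resample subjects with
    replacement and recompute the statistic on each resample."""
    rng = random.Random(seed)
    n = len(scores)
    aucs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        lab = [labels[i] for i in idx]
        if 0 < sum(lab) < len(lab):  # need both events and non-events
            aucs.append(auc([scores[i] for i in idx], lab))
    return aucs

model_scores = [0.9, 0.8, 0.7, 0.2, 0.1, 0.3]  # hypothetical predicted risks
outcomes     = [1,   1,   1,   0,   0,   0]    # observed events
boot_aucs = bootstrap_auc(model_scores, outcomes)
```

    Repeating this at increasing candidate sample sizes, and stopping when the bootstrap distributions of discrimination and calibration stabilize, is the kind of search the described algorithm performs.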

  8. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, the ultrasound videos scanned by untrained persons often do not contain the information that a physician requires. Compared with standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To address this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content, guiding the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow sophisticated medical image processing to be performed on the phone itself. In this paper we developed an application (app) for a smartphone that automatically detects the valid frames (clear organ visibility) in an ultrasound video, ignores the invalid frames (no organ visibility), and produces a compressed video. This is done by extracting GIST features from the region of interest (ROI) of each frame and classifying the frame using an SVM classifier with a quadratic kernel. The developed application classified valid and invalid images with an accuracy of 94.93%.

  9. Validation of KENO-based criticality calculations at Rocky Flats

    International Nuclear Information System (INIS)

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P.

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG and G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum keff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation.
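    The second step described, estimating the code's bias from benchmark comparisons and folding it into an administrative limit, can be illustrated with a deliberately simplified sketch (real validations also account for benchmark uncertainties and statistical tolerance bands):

```python
def keff_bias(calculated, experimental):
    """Code bias = mean difference between calculated k_eff values and
    the accepted benchmark values (a simplified global-bias estimate)."""
    diffs = [c - e for c, e in zip(calculated, experimental)]
    return sum(diffs) / len(diffs)

def upper_safety_limit(base_limit=1.0, bias=0.0, margin=0.05):
    """Administrative k_eff limit: subtract an assumed safety margin and
    any non-conservative (positive) bias from the base limit. The 0.05
    margin here is illustrative, chosen to reproduce a 0.95 limit."""
    return base_limit - margin - max(bias, 0.0)

# Hypothetical benchmark set: calculated vs. experimental k_eff
bias = keff_bias([0.998, 1.002, 1.000], [1.0, 1.0, 1.0])
limit = upper_safety_limit(bias=bias)
```

    A negative bias (code under-predicts k_eff) is the conservative direction and is typically not credited, which is why only a positive bias reduces the limit in this sketch.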

  10. Evidence Based Validation of Indian Traditional Medicine – Way Forward

    Directory of Open Access Journals (Sweden)

    Pulok K Mukherjee

    2016-01-01

    Full Text Available Evidence-based validation of the ethno-pharmacological claims on traditional medicine (TM) is the need of the day for its globalization and reinforcement. By identifying biomarkers that are highly conserved across species, such validation can offer an innovative approach to biomarker-driven drug discovery and development. TMs are an integral component of alternative health care systems. India has a rich wealth of TMs and the potential to accept the challenge of meeting the global demand for them. Ayurveda, Yoga, Unani, Siddha and Homeopathy (AYUSH) medicine are the major healthcare systems in Indian Traditional Medicine. The plant species mentioned in the ancient texts of these systems may be explored with modern scientific approaches for better leads in healthcare. TMs are among the best sources of chemical diversity for finding new drugs and leads. Authentication and scientific validation of medicinal plants is a fundamental requirement of industry and other organizations dealing with herbal drugs. Quality control (QC) of botanicals, validated manufacturing processes, customer awareness and post-marketing surveillance are the key points that can ensure the quality, safety and efficacy of TM. For the globalization of TM, there is a need for harmonization with respect to its chemical and metabolite profiling, standardization, QC, scientific validation, documentation and regulatory aspects. Therefore, the utmost attention is necessary for the promotion and development of TM through global collaboration and coordination by national and international programmes.

  11. Development and Validation of Spectrophotometric Methods for Simultaneous Estimation of Valsartan and Hydrochlorothiazide in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    Monika L. Jadhav

    2014-01-01

    Full Text Available Two UV-spectrophotometric methods have been developed and validated for the simultaneous estimation of valsartan and hydrochlorothiazide in a tablet dosage form. The first method employed the solving of simultaneous equations based on the measurement of absorbance at two wavelengths, 249.4 nm and 272.6 nm, the λmax of valsartan and hydrochlorothiazide, respectively. The second method was the absorbance ratio method, which involves formation of a Q-absorbance equation at 258.4 nm (the isoabsorptive point) and at 272.6 nm (λmax of hydrochlorothiazide). The methods were found to be linear over the range of 5–30 µg/mL for valsartan and 4–24 μg/mL for hydrochlorothiazide using 0.1 N NaOH as solvent. The mean percentage recovery was found to be 100.20% and 100.19% for the simultaneous equation method and 98.56% and 97.96% for the absorbance ratio method, for valsartan and hydrochlorothiazide, respectively, at three different levels of standard additions. The precision (intraday, interday) of the methods was found to be within limits (RSD < 2%). It can be concluded from the results obtained in the present investigation that the two methods for simultaneous estimation of valsartan and hydrochlorothiazide in tablet dosage form are simple, rapid, accurate, precise and economical, and can be used successfully in the quality control of pharmaceutical formulations and other routine laboratory analysis.
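    The simultaneous-equation method amounts to inverting a 2x2 Beer-Lambert system: each measured absorbance is the sum of the two drugs' contributions at that wavelength. A sketch with hypothetical absorptivity coefficients (not values from the paper):

```python
import numpy as np

# Hypothetical absorptivity coefficients (AU·mL/µg) of valsartan (VAL) and
# hydrochlorothiazide (HCTZ) at the two analytical wavelengths --
# illustrative numbers only.
A_COEFF = np.array([
    [0.030, 0.012],   # 249.4 nm: [VAL, HCTZ]
    [0.008, 0.045],   # 272.6 nm: [VAL, HCTZ]
])

def simultaneous_equation_concentrations(abs_249, abs_272):
    """Solve the 2x2 Beer-Lambert system A = E @ C for the two
    concentrations, given the absorbances at the two wavelengths."""
    return np.linalg.solve(A_COEFF, np.array([abs_249, abs_272]))

# Forward-simulate a 20 µg/mL VAL + 16 µg/mL HCTZ mixture, then recover it
true_c = np.array([20.0, 16.0])
absorbances = A_COEFF @ true_c
c_val, c_hctz = simultaneous_equation_concentrations(*absorbances)
```

    The method is well conditioned only when the two absorptivity columns differ enough between the wavelengths, which is why λmax values of the two drugs are chosen as the measurement points.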

  12. Development and Validation of a Calculator for Estimating the Probability of Urinary Tract Infection in Young Febrile Children.

    Science.gov (United States)

    Shaikh, Nader; Hoberman, Alejandro; Hum, Stephanie W; Alberty, Anastasia; Muniz, Gysella; Kurs-Lasky, Marcia; Landsittel, Douglas; Shope, Timothy

    2018-06-01

    Accurately estimating the probability of urinary tract infection (UTI) in febrile preverbal children is necessary to appropriately target testing and treatment. To develop and test a calculator (UTICalc) that can first estimate the probability of UTI based on clinical variables and then update that probability based on laboratory results. Review of electronic medical records of febrile children aged 2 to 23 months who were brought to the emergency department of Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania. An independent training database comprising 1686 patients brought to the emergency department between January 1, 2007, and April 30, 2013, and a validation database of 384 patients were created. Five multivariable logistic regression models for predicting risk of UTI were trained and tested. The clinical model included only clinical variables; the remaining models incorporated laboratory results. Data analysis was performed between June 18, 2013, and January 12, 2018. Documented temperature of 38°C or higher in children aged 2 months to less than 2 years. With the use of culture-confirmed UTI as the main outcome, cutoffs for high and low UTI risk were identified for each model. The resultant models were incorporated into a calculation tool, UTICalc, which was used to evaluate medical records. A total of 2070 children were included in the study. The training database comprised 1686 children, of whom 1216 (72.1%) were female and 1167 (69.2%) white. The validation database comprised 384 children, of whom 291 (75.8%) were female and 200 (52.1%) white. Compared with the American Academy of Pediatrics algorithm, the clinical model in UTICalc reduced testing by 8.1% (95% CI, 4.2%-12.0%) and decreased the number of UTIs that were missed from 3 cases to none. 
    Compared with empirically treating all children with a leukocyte esterase test result of 1+ or higher, the dipstick model in UTICalc would have reduced the number of treatment delays by 10.6%.
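    A calculator like UTICalc evaluates a fitted binary logistic regression model: the linear predictor built from the clinical variables is passed through the logistic function to give a UTI probability. A sketch with hypothetical coefficients (not the published UTICalc weights):

```python
import math

def logistic_probability(intercept, coefs, features):
    """Probability from a binary logistic regression model:
    p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical clinical model: intercept plus 0/1 indicators such as
# age < 12 months, temperature >= 39 C, female sex, other fever source.
# All coefficient values below are invented for illustration.
p = logistic_probability(-3.0, [0.8, 0.6, 1.1, 0.9], [1, 1, 1, 0])
p_baseline = logistic_probability(-3.0, [0.8, 0.6, 1.1, 0.9], [0, 0, 0, 0])
```

    The two-stage design described above corresponds to evaluating first a clinical-variables model like this one, then a second model whose features also include the dipstick or laboratory results.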

  13. Cosmic Ray Neutron Sensing: Use, Calibration and Validation for Soil Moisture Estimation

    International Nuclear Information System (INIS)

    2017-03-01

    Nuclear and related techniques can help develop climate-smart agricultural practices by optimizing water use efficiency. The measurement of soil water content is essential to improve the use of this resource in agriculture. However, most sensors monitor small areas (less than 1m in radius), hence a large number of sensors are needed to obtain soil water content across a large area. This can be both costly and labour intensive and so larger scale measuring devices are needed as an alternative to traditional point-based soil moisture sensing techniques. The cosmic ray neutron sensor (CRNS) is such a device that monitors soil water content in a non-invasive and continuous way. This publication provides background information about this novel technique, and explains in detail the calibration and validation process.
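    Converting the neutron count rate into soil moisture is commonly done with a calibration function of the Desilets form, theta = a0/(N/N0 - a1) - a2, where N0 (the count rate over dry soil) is determined in the field calibration the publication describes. A sketch with commonly quoted coefficient values (treat the a-coefficients and the bulk density as assumptions, not values from this publication):

```python
def crns_soil_moisture(n_counts, n0, bulk_density=1.4):
    """Volumetric soil moisture (m^3/m^3) from a CRNS neutron count rate,
    using the Desilets-style shape theta_g = a0/(N/N0 - a1) - a2 and
    converting gravimetric to volumetric moisture with the bulk density.
    Coefficients are typical literature values, assumed here."""
    a0, a1, a2 = 0.0808, 0.372, 0.115
    gravimetric = a0 / (n_counts / n0 - a1) - a2   # g water per g soil
    return gravimetric * bulk_density

# Fewer neutrons reaching the sensor means more hydrogen, i.e. wetter soil
theta_wet = crns_soil_moisture(700, 1000)
theta_dry = crns_soil_moisture(900, 1000)
```

    The inverse relation between counts and moisture is the physical basis of the method: hydrogen in soil water moderates the cosmic-ray neutrons, lowering the count rate over wet soil.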

  14. Validation of the Visible Occlusal Plaque Index (VOPI) in estimating caries lesion activity

    DEFF Research Database (Denmark)

    Carvalho, J.C.; Mestrinho, H D; Oliveira, L S

    2017-01-01

    RESULTS: Construct validity was assumed based on qualitative assessment, as no plaque (score 0) and thin plaque (score 1) reflected the theoretical knowledge that regular disorganization of the dental biofilm either maintains the caries process at sub-clinical levels or inactivates it clinically. The validity of the VOPI was further evidenced with multivariable analysis (GEE) by its ability to discriminate between groups of adolescents with different oral hygiene status; a negative association between adolescents with thick and heavy plaque and those with sound occlusal surfaces was found (OR=0.3, p ...) ... of oral hygiene and caries lesion activity. The VOPI is recommended to standardize and categorize information on the occlusal biofilm, and is thus suitable for direct application in research and clinical settings.

  15. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    1999-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of quantity of sample results and the number of tanks characterized. More and better data is available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in developing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks

  16. Development and Validation of a Prediction Model to Estimate Individual Risk of Pancreatic Cancer.

    Science.gov (United States)

    Yu, Ami; Woo, Sang Myung; Joo, Jungnam; Yang, Hye-Ryung; Lee, Woo Jin; Park, Sang-Jae; Nam, Byung-Ho

    2016-01-01

    There is no reliable screening tool to identify people with high risk of developing pancreatic cancer even though pancreatic cancer represents the fifth-leading cause of cancer-related death in Korea. The goal of this study was to develop an individualized risk prediction model that can be used to screen for asymptomatic pancreatic cancer in Korean men and women. Gender-specific risk prediction models for pancreatic cancer were developed using the Cox proportional hazards model based on an 8-year follow-up of a cohort study of 1,289,933 men and 557,701 women in Korea who had biennial examinations in 1996-1997. The performance of the models was evaluated with respect to their discrimination and calibration ability based on the C-statistic and Hosmer-Lemeshow type χ2 statistic. A total of 1,634 (0.13%) men and 561 (0.10%) women were newly diagnosed with pancreatic cancer. Age, height, BMI, fasting glucose, urine glucose, smoking, and age at smoking initiation were included in the risk prediction model for men. Height, BMI, fasting glucose, urine glucose, smoking, and drinking habit were included in the risk prediction model for women. Smoking was the most significant risk factor for developing pancreatic cancer in both men and women. The risk prediction model exhibited good discrimination and calibration ability, and in external validation it had excellent prediction ability. Gender-specific risk prediction models for pancreatic cancer were developed and validated for the first time. The prediction models will be a useful tool for detecting high-risk individuals who may benefit from increased surveillance for pancreatic cancer.
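    The discrimination measure used here, the C-statistic for survival data (Harrell's concordance), can be computed from pairwise comparisons of predicted risk and observed outcome times. A minimal sketch with invented subjects:

```python
def harrell_c(risk_scores, times, events):
    """Harrell's concordance (C) statistic for survival predictions: among
    usable pairs, the fraction where the subject who fails earlier was
    assigned the higher risk score. A pair is usable only if the earlier
    time corresponds to an observed event (not censoring)."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so subject a has the earlier time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue  # tied times, or earlier subject censored
            usable += 1
            if risk_scores[a] > risk_scores[b]:
                concordant += 1.0
            elif risk_scores[a] == risk_scores[b]:
                concordant += 0.5
    return concordant / usable

# Hypothetical cohort: higher predicted risk, earlier pancreatic cancer
c = harrell_c(risk_scores=[2.0, 1.5, 0.7, 0.2],
              times=[1.0, 2.0, 3.0, 4.0],
              events=[1, 1, 0, 1])
```

    A value of 0.5 indicates no discrimination and 1.0 perfect discrimination; the study's reported C-statistics fall between these extremes.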

  17. Lightning stroke distance estimation from single station observation and validation with WWLLN data

    Directory of Open Access Journals (Sweden)

    V. Ramachandran

    2007-07-01

    Full Text Available A simple technique to estimate the distance d of lightning strikes with a single VLF electromagnetic wave receiver at a single station is described. The technique is based on the recording of oscillatory waveforms of the electric fields of sferics. Even though the process of estimating d from the waveform is a rather classical one, a novel and simple procedure for finding d is proposed in this paper. The procedure adopted provides two independent estimates of the distance of the stroke. The accuracy of measurements has been improved by employing high-speed (333 ns sampling rate) signal processing techniques. GPS time is used as the reference time, which enables the calculated distances of the lightning strikes, by both methods, to be compared with those calculated from data obtained by the World-Wide Lightning Location Network (WWLLN), which uses a multi-station technique. The estimated distances of the 77 lightning strikes whose times correlated ranged from ~3000 to 16 250 km. For the nearer strokes, the mean deviation of d compared with that calculated from the multi-station lightning location system was ~4.7%, while for all the strokes it was ~8.8%. One lightning stroke recorded by WWLLN, for which the field pattern and the spectrogram of the sferic were also recorded at the site, is analyzed in detail. The deviations in d calculated from the field pattern and from the arrival time of the sferic were 3.2% and 1.5%, respectively, compared to d calculated from the WWLLN location. FFT analysis of the waveform showed that only a narrow band of frequencies is received at the site, which is confirmed by the intensity of the corresponding sferic in the spectrogram.

  18. Performing Verification and Validation in Reuse-Based Software Engineering

    Science.gov (United States)

    Addy, Edward A.

    1999-01-01

    The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.

  19. Validation of a Dish-Based Semiquantitative Food Questionnaire in Rural Bangladesh

    Directory of Open Access Journals (Sweden)

    Pi-I. D. Lin

    2017-01-01

    Full Text Available A locally validated tool was needed to evaluate long-term dietary intake in rural Bangladesh. We assessed the validity of a 42-item dish-based semi-quantitative food frequency questionnaire (FFQ) using two 3-day food diaries (FDs). We selected a random subset of 47 families (190 participants) from a longitudinal arsenic biomonitoring study in Bangladesh to administer the FFQ. Two 3-day FDs were completed by the female head of the households and we used an adult male equivalent method to estimate the FD for the other participants. Food and nutrient intakes measured by FFQ and FD were compared using Pearson's and Spearman's correlation, paired t-test, percent difference, cross-classification, weighted Kappa, and Bland–Altman analysis. Results showed good validity for total energy intake (paired t-test, p < 0.05; percent difference < 10%), with no presence of proportional bias (Bland–Altman correlation, p > 0.05). After energy adjustment and de-attenuation for within-person variation, macronutrient intakes had excellent correlations ranging from 0.55 to 0.70. Validity for micronutrients was mixed. High intraclass correlation coefficients (ICCs) were found for most nutrients between the two seasons, except vitamin A. This dish-based FFQ provided adequate validity to assess and rank long-term dietary intake in rural Bangladesh for most food groups and nutrients, and should be useful for studying dietary-disease relationships.
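
    The Bland–Altman agreement check used above can be sketched as follows; the paired intake values are purely illustrative, not data from the study:

```python
# Minimal Bland-Altman sketch for method agreement: mean difference (bias)
# and 95% limits of agreement between two estimates of the same intake.
# The paired values below are hypothetical.
import statistics

def bland_altman(ffq, diary):
    diffs = [a - b for a, b in zip(ffq, diary)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

ffq   = [2100, 1850, 2400, 1990, 2250]  # kcal/day, illustrative only
diary = [2000, 1900, 2300, 2050, 2150]
bias, (lo, hi) = bland_altman(ffq, diary)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

    A bias near zero with narrow limits of agreement indicates that the two methods agree on average.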

  20. Validation of equations using anthropometric and bioelectrical impedance for estimating body composition of the elderly

    Directory of Open Access Journals (Sweden)

    Cassiano Ricardo Rech

    2006-08-01

    Full Text Available The growth of the elderly population has increased the need to study aging-related issues; in this context, the morphological alterations occurring with age have been discussed thoroughly. Evidence indicates that there is little information on valid methods for estimating the body composition of senior citizens in Brazil. Therefore, the objective of this study was to cross-validate equations using either anthropometric or bioelectrical impedance (BIA) data for the estimation of body fat (%BF) and fat-free mass (FFM) in a sample of older individuals from Florianópolis-SC, with dual-energy X-ray absorptiometry (DEXA) as the criterion measurement. The group was composed of 180 subjects (60 men and 120 women), aged 60 to 81 years, who participated in four community groups for the elderly and were selected by systematic random sampling through a telephone interview. Stature, body mass, body circumferences, skinfold thickness, reactance and resistance were measured in the morning at the Sports Center of the Federal University of Santa Catarina. The DEXA evaluation was performed in the afternoon at an image diagnosis center in Florianópolis-SC. Twenty anthropometric and 8 BIA equations were analyzed for cross-validation. For the equations that estimate body density, the equation of Siri (1961) and the equation adapted by Deurenberg et al. (1989) were used for conversion into %BF. The analyses were performed with the statistical package SPSS, version 11.5, with the level of significance set at 5%. The cross-validation criteria suggested by Lohman (1992) and the graphic dispersion analyses in relation to the mean, as proposed by Bland and Altman (1986), were used. The group presented body mass index (BMI) values between 18.4 kg.m-2 and 39.3 kg.m-2. The mean %BF was 23.1% (sd = 5.8) for men and 37.3% (sd = 6.9) for women, varying from 6% to 51.4%. There were no differences among the estimates of the equations

  1. Validation of an Innovative Satellite-Based UV Dosimeter

    Science.gov (United States)

    Morelli, Marco; Masini, Andrea; Simeone, Emilio; Khazova, Marina

    2016-08-01

    We present an innovative satellite-based UV (ultraviolet) radiation dosimeter with a mobile app interface that has been validated using both ground-based measurements and an in-vivo assessment of the erythemal effects on volunteers with controlled exposure to solar radiation. Both validations showed that the satellite-based UV dosimeter has the accuracy and reliability needed for health-related applications. The app built on this satellite-based UV dosimeter also includes related functionalities such as the provision of safe sun exposure time updated in real time and an end-of-exposure visual/sound alert. The app will be launched on the global market by siHealth Ltd in May 2016 under the name "HappySun", available for both Android and iOS devices (more info on http://www.happysun.co.uk). Extensive R&D activities are on-going to further improve the satellite-based UV dosimeter's accuracy.

  2. A short 18 items food frequency questionnaire biochemically validated to estimate zinc status in humans.

    Science.gov (United States)

    Trame, Sarah; Wessels, Inga; Haase, Hajo; Rink, Lothar

    2018-02-21

    Inadequate dietary zinc intake is widespread in the world's population. Despite the clinical significance of zinc deficiency, there is no established method or biomarker to reliably evaluate zinc status. The aim of our study was to develop a biochemically validated questionnaire as a clinically useful tool that can predict the risk of an individual being zinc deficient. Blood and urine samples were collected from 71 subjects aged 18-55 years. Zinc concentrations in serum and urine were determined by atomic absorption spectrometry. A food frequency questionnaire (FFQ) including 38 items, covering consumption during the last 6 months, was filled out to obtain nutrient diet scores. The latter were calculated by multiplying the frequency of consumption, the nutrient content of the respective portion size, and the extent of the consumed quantity. Results from the FFQ were compared with nutrient intake information gathered in 24-h dietary recalls. A hemogram was performed and cytokine concentrations were obtained using enzyme-linked immunosorbent assay (ELISA). Reducing the items of the primary FFQ from 38 to 18 did not result in a significant difference between the two calculated scores. Zinc diet scores showed highly significant correlations with serum zinc (r = 0.37; p < 0.01) and urine zinc concentrations (r = 0.34; p < 0.01). Serum zinc concentrations and zinc diet scores showed a significant positive correlation with animal protein intake (r = 0.37; p < 0.01/r = 0.54; p < 0.0001). Higher zinc diet scores were found in omnivores compared to vegetarians (213.5 vs. 111.9; p < 0.0001). The 18-item FFQ appears to be a sufficient tool to provide a good estimation of zinc status. Moreover, shortening the questionnaire to 18 items without a loss of predictive efficiency enables a facilitated and resource-saving routine use. A validation of the questionnaire in other cohorts could enable the progression towards clinical
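
    The nutrient diet score described above (frequency of consumption × nutrient content of the portion × consumed-quantity factor, summed over items) can be sketched as follows; the item values are illustrative, not the study's food list:

```python
# Sketch of the diet-score computation described in the abstract: for each
# FFQ item, multiply consumption frequency (times/day) by the zinc content
# of the standard portion (mg) and by the consumed-quantity factor, then
# sum over items. All numbers below are hypothetical.

def zinc_diet_score(items):
    return sum(freq * zinc_per_portion * quantity
               for freq, zinc_per_portion, quantity in items)

# (frequency per day, mg zinc per portion, quantity factor)
omnivore   = [(1.0, 4.5, 1.0), (2.0, 1.2, 1.5), (0.5, 6.0, 1.0)]
vegetarian = [(2.0, 1.2, 1.5), (1.0, 0.8, 1.0), (0.5, 1.5, 1.0)]
print(zinc_diet_score(omnivore), zinc_diet_score(vegetarian))
```

    Consistent with the abstract's finding, a diet rich in animal protein yields a higher score under such a scheme.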

  3. Validation of risk-based performance indicators: Safety system function trends

    International Nuclear Information System (INIS)

    Boccio, J.L.; Vesely, W.E.; Azarm, M.A.; Carbonaro, J.F.; Usher, J.L.; Oden, N.

    1989-10-01

    This report describes and applies a process for validating a model for a risk-based performance indicator. The purpose of the risk-based indicator evaluated, Safety System Function Trend (SSFT), is to monitor the unavailability of selected safety systems. Interim validation of this indicator is based on three aspects: a theoretical basis, an empirical basis relying on statistical correlations, and case studies employing 25 plant-years of historical data collected from five plants for a number of safety systems. Results using the SSFT model are encouraging. Application of the model through case studies dealing with the performance of important safety systems shows that statistically significant trends in, and levels of, system performance can be discerned, thereby providing leading indications of degrading and/or improving performance. Methods for developing system performance tolerance bounds are discussed and applied to aid in the interpretation of trends in this risk-based indicator. Some additional characteristics of the SSFT indicator, learned through the data-collection efforts and subsequent data analyses, are also discussed. The usefulness and practicality of other data sources for validation purposes are explored. The need for further validation of this indicator is noted, and additional research is underway to develop a more detailed estimator of system unavailability. 9 refs., 18 figs., 5 tabs

  4. Observation-based Quantitative Uncertainty Estimation for Realtime Tsunami Inundation Forecast using ABIC and Ensemble Simulation

    Science.gov (United States)

    Takagawa, T.

    2016-12-01

    An ensemble forecasting scheme for tsunami inundation is presented. The scheme consists of three elements. The first is a hierarchical Bayesian inversion using Akaike's Bayesian Information Criterion (ABIC). The second is Monte Carlo sampling from a probability density function of a multidimensional normal distribution. The third is ensemble analysis of tsunami inundation simulations with multiple tsunami sources. Simulation-based validation of the model was conducted. A tsunami scenario of an M9.1 Nankai earthquake was chosen as the validation target. Tsunami inundation around Nagoya Port was estimated using synthetic tsunami waveforms at offshore GPS buoys. The error in the estimated tsunami inundation area was about 10% even when only ten minutes of observation data were used. The estimation accuracy of waveforms on/off land and of the spatial distribution of maximum tsunami inundation depth is demonstrated.
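
    The second and third elements (Monte Carlo sampling from a multidimensional normal distribution, then ensemble analysis) can be sketched as below, with a made-up posterior and a stand-in linear forward model in place of the full inundation simulations:

```python
# Sketch of the ensemble step: draw source-parameter samples from a
# multidimensional normal posterior (hypothetical mean/covariance here)
# and summarize the spread of a stand-in forward model. The real scheme
# runs a full tsunami inundation simulation per sample.
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([2.0, 1.5])            # hypothetical slip parameters
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
samples = rng.multivariate_normal(mean, cov, size=1000)

def forward(p):                         # stand-in for the tsunami simulation
    return 0.8 * p[..., 0] + 0.5 * p[..., 1]

heights = forward(samples)              # ensemble of predicted wave heights
print(round(float(heights.mean()), 2), round(float(heights.std()), 2))
```

    The ensemble spread then serves as the quantitative uncertainty estimate attached to the forecast.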

  5. Variables influencing wearable sensor outcome estimates in individuals with stroke and incomplete spinal cord injury: a pilot investigation validating two research grade sensors.

    Science.gov (United States)

    Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun

    2018-03-13

    Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms, based on data from healthy individuals, to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairment such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research-grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold-standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent, and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics was also studied. 28 participants (healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from gold-standard measures. To verify validity, a series of Kruskal-Wallis ANOVA tests (Games-Howell multiple comparison for post-hoc analyses) was conducted to compare the mean rank and absolute agreement of outcome metrics estimated by each of the devices with the designated gold-standard measurements. The sensor type, sensor location
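
    The Kruskal-Wallis comparison against a gold-standard measure can be sketched as follows; the step counts are illustrative, not study data, and SciPy is assumed:

```python
# Sketch of the validity check: compare step-count estimates from two
# hypothetical sensor placements against a manual tally with a
# Kruskal-Wallis test. A significant result indicates at least one
# estimate differs from the gold standard. Illustrative numbers only.
from scipy.stats import kruskal

manual       = [101, 102, 98, 104, 99, 103]   # gold standard (manual tally)
sensor_hip   = [97, 95, 94, 96, 93, 100]
sensor_wrist = [85, 88, 82, 90, 84, 87]

stat, p = kruskal(manual, sensor_hip, sensor_wrist)
print(round(float(stat), 2), p < 0.05)
```

    In practice a post-hoc test (e.g. Games-Howell, as in the study) would then locate which sensor placement disagrees with the tally.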

  6. Fetal QRS detection and heart rate estimation: a wavelet-based approach

    International Nuclear Information System (INIS)

    Almeida, Rute; Rocha, Ana Paula; Gonçalves, Hernâni; Bernardes, João

    2014-01-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. (paper)
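
    The final step above, turning detected fetal QRS marks into an FHR estimate, reduces to FHR (bpm) = 60/RR with RR the interval between successive marks; a minimal sketch with hypothetical QRS times:

```python
# Sketch: convert fetal QRS detection times (seconds) into beat-to-beat
# fetal heart rate estimates in bpm. QRS marks below are hypothetical.

def fhr_bpm(qrs_times_s):
    rr = [b - a for a, b in zip(qrs_times_s, qrs_times_s[1:])]  # RR intervals, s
    return [60.0 / d for d in rr]

# QRS marks about 0.42 s apart, i.e. roughly 140-143 bpm
marks = [0.00, 0.42, 0.84, 1.27, 1.69]
rates = fhr_bpm(marks)
print([round(r, 1) for r in rates])
```

    Averaging such beat-to-beat values over a window gives the 1 min based FHR estimates evaluated in the abstract.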

  7. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    D V A N Ravi Kumar

    2017-07-04

    Jul 4, 2017 ... In this paper, two novel methods based on the Estimate Merge Technique ... mentioned advantages of the proposed novel methods is shown by carrying out Monte Carlo simulation in .... equations are converted to sequential equations to make ... estimation error and low convergence time) at feasibly high.

  8. Artificial Neural Network Based State Estimators Integrated into Kalmtool

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Ravn, Ole; Poulsen, Niels Kjølstad

    2012-01-01

    In this paper we present a toolbox enabling easy evaluation and comparison of different filtering algorithms. The toolbox is called Kalmtool and is a set of MATLAB tools for state estimation of nonlinear systems. The toolbox now contains functions for Artificial Neural Network Based State Estimation as...
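
    Kalmtool itself is MATLAB code; as a hedged illustration of the predict/update cycle that the state estimators in such toolboxes share, here is a minimal scalar Kalman filter (not Kalmtool code):

```python
# Minimal scalar Kalman filter sketch (predict/update) for a random-walk
# state observed through noisy measurements. Illustrative only.

def kalman_1d(zs, x0=0.0, p0=1.0, q=0.01, r=0.5):
    """Filter the measurement sequence zs; q, r are process/measurement noise."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]   # noisy readings of a state near 1.0
est = kalman_1d(noisy)
print(round(est[-1], 2))
```

    Nonlinear variants (EKF, UKF, particle filters) generalize exactly this cycle, which is what toolboxes like Kalmtool let one compare.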

  9. A Kalman-based Fundamental Frequency Estimation Algorithm

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    Fundamental frequency estimation is an important task in speech and audio analysis. Harmonic model-based methods typically have superior estimation accuracy. However, such methods usually assume that the fundamental frequency and amplitudes are stationary over a short time frame. In this pape...
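
    A minimal sketch of the harmonic-model family under the stationarity assumption mentioned above is harmonic summation: pick the candidate fundamental whose harmonics carry the most spectral energy. This illustrates the model class, not the paper's Kalman-based method:

```python
# Harmonic-summation F0 sketch on a synthetic stationary frame:
# score each candidate fundamental by summing spectral magnitude at its
# first few harmonics, and keep the best-scoring candidate.
import numpy as np

fs, n = 8000, 512
t = np.arange(n) / fs
# synthetic voiced frame: 200 Hz fundamental plus two harmonics
x = (np.sin(2 * np.pi * 200 * t)
     + 0.5 * np.sin(2 * np.pi * 400 * t)
     + 0.25 * np.sin(2 * np.pi * 600 * t))

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

def harmonic_sum_f0(candidates, n_harm=3):
    def score(f0):
        return sum(spec[np.argmin(np.abs(freqs - k * f0))]
                   for k in range(1, n_harm + 1))
    return max(candidates, key=score)

f0 = harmonic_sum_f0(np.arange(80.0, 400.0, 1.0))
print(float(f0))
```

    When the frame is not stationary, as the abstract notes, estimators of this kind degrade, which motivates tracking formulations such as the Kalman-based one proposed.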

  10. Estimating security betas using prior information based on firm fundamentals

    NARCIS (Netherlands)

    Cosemans, M.; Frehen, R.; Schotman, P.C.; Bauer, R.

    2010-01-01

    This paper proposes a novel approach for estimating time-varying betas of individual stocks that incorporates prior information based on fundamentals. We shrink the rolling window estimate of beta towards a firm-specific prior that is motivated by asset pricing theory. The prior captures structural
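
    The shrinkage idea can be sketched as a precision-weighted average of the noisy rolling-window beta and the firm-specific prior; the weighting rule below is a generic assumption for illustration, not the paper's exact estimator:

```python
# Sketch of shrinking a rolling-window beta toward a fundamentals-based
# prior, weighting each by its precision (inverse variance). The weighting
# rule and all numbers are illustrative assumptions.

def shrunk_beta(beta_rolling, var_rolling, beta_prior, var_prior):
    w = var_prior / (var_prior + var_rolling)   # weight on the rolling estimate
    return w * beta_rolling + (1 - w) * beta_prior

# noisy rolling estimate 1.6 (var 0.20) shrunk toward a prior of 1.0 (var 0.05)
b = shrunk_beta(1.6, 0.20, 1.0, 0.05)
print(round(b, 3))
```

    The noisier the rolling estimate relative to the prior, the further the combined beta moves toward the prior.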

  11. Particle filter based MAP state estimation: A comparison

    NARCIS (Netherlands)

    Saha, S.; Boers, Y.; Driessen, J.N.; Mandal, Pranab K.; Bagchi, Arunabha

    2009-01-01

    MAP estimation is a good alternative to MMSE for certain applications involving nonlinear non Gaussian systems. Recently a new particle filter based MAP estimator has been derived. This new method extracts the MAP directly from the output of a running particle filter. In the recent past, a Viterbi

  12. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy.

    Science.gov (United States)

    Porto, Paolo; Walling, Des E

    2012-04-01

    Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by (137)Cs and (210)Pb(ex) measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by (137)Cs and (210)Pb(ex) measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of (137)Cs and (210)Pb(ex) measurements to estimate rates of soil loss from cultivated land could be undertaken. 
The results demonstrate a close consistency between the measured rates of soil

  13. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pbex measurements

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

    Environmental radionuclides (137Cs and 210Pbex) provide a valuable means of estimating medium- and longer-term soil erosion rates. ► It is, however, important that the basic assumptions involved in the use of mass balance models to estimate soil erosion rates based on 137Cs and 210Pbex measurements should be validated. ► The data provided by a set of small erosion plots located in southern Italy are used to validate several of the assumptions associated with the use of mass balance models to estimate soil erosion rates from 137Cs and 210Pbex measurements.
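
    As a hedged illustration of how a radionuclide inventory deficit is converted into an erosion rate, the simple proportional model, a simpler relative of the mass balance models tested here, can be sketched; all parameter values below are illustrative assumptions:

```python
# Proportional-model sketch: convert the percentage reduction of the 137Cs
# inventory at a sampling point, relative to a local reference inventory,
# into a net soil-loss rate. Plough depth, bulk density and the number of
# years since the 1963 fallout peak are illustrative assumptions.

def proportional_model(a_point, a_ref, depth_m=0.25, bulk_kgm3=1300.0,
                       years_since_1963=45):
    """Soil loss in t ha^-1 yr^-1 from the % reduction in 137Cs inventory."""
    x = 100.0 * (a_ref - a_point) / a_ref         # % inventory reduction
    # 1 kg/m^2 of soil equals 10 t/ha, hence the factor of 10
    return 10.0 * depth_m * bulk_kgm3 * x / (100.0 * years_since_1963)

# point inventory 1800 Bq/m^2 against a reference of 2400 Bq/m^2
print(round(proportional_model(1800.0, 2400.0), 2))
```

    Mass balance models refine this by tracking fallout input, tillage mixing and removal year by year, which is why their assumptions merit the plot-based validation described above.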

  14. Prevalence Estimation and Validation of New Instruments in Psychiatric Research: An Application of Latent Class Analysis and Sensitivity Analysis

    Science.gov (United States)

    Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.

    2009-01-01

    Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…

  15. Validating fatty acid intake as estimated by an FFQ: how does the 24 h recall perform as reference method compared with the duplicate portion?

    Science.gov (United States)

    Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk

    2018-05-08

    To compare the performance of the commonly used 24 h recall (24hR) with the more distinct duplicate portion (DP) as reference method for validation of fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DP and by on average five 24hR and two FFQ. Plasma n-3 fatty acids and LA were used to objectively compare ranking of individuals based on DP and 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than those based on the 24hR as reference method. Furthermore, when using plasma fatty acids as reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0·33) and n-3:LA (0·34) than the 24hR (0·22 and 0·24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP; therefore, use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.

  16. Ontology-based validation and identification of regulatory phenotypes

    KAUST Repository

    Kulmanov, Maxat

    2018-01-31

    Motivation: Function annotations of gene products, and phenotype annotations of genotypes, provide valuable information about molecular mechanisms that can be utilized by computational methods to identify functional and phenotypic relatedness, improve our understanding of disease and pathobiology, and lead to discovery of drug targets. Identifying functions and phenotypes commonly requires experiments which are time-consuming and expensive to carry out; creating the annotations additionally requires a curator to make an assertion based on reported evidence. Support to validate the mutual consistency of functional and phenotype annotations, as well as a computational method to predict phenotypes from function annotations, would greatly improve the utility of function annotations. Results: We developed a novel ontology-based method to validate the mutual consistency of function and phenotype annotations. We apply our method to mouse and human annotations, and identify several inconsistencies that can be resolved to improve overall annotation quality. Our method can also be applied to the rule-based prediction of phenotypes from functions. We show that the predicted phenotypes can be utilized for identification of protein-protein interactions and gene-disease associations. Based on experimental functional annotations, we predict phenotypes for 1,986 genes in mouse and 7,301 genes in human for which no experimental phenotypes have yet been determined.


  18. Remaining useful life estimation based on discriminating shapelet extraction

    International Nuclear Information System (INIS)

    Malinowski, Simon; Chebel-Morello, Brigitte; Zerhouni, Noureddine

    2015-01-01

    In the Prognostics and Health Management domain, estimating the remaining useful life (RUL) of critical machinery is a challenging task. Various research topics including data acquisition, fusion, diagnostics and prognostics are involved in this domain. This paper presents an approach, based on shapelet extraction, to estimate the RUL of equipment. This approach extracts, in an offline step, discriminative rul-shapelets from a history of run-to-failure data. These rul-shapelets are patterns that are selected for their correlation with the remaining useful life of the equipment. In other words, every selected rul-shapelet conveys its own information about the RUL of the equipment. In an online step, these rul-shapelets are compared to testing units and the ones that match these units are used to estimate their RULs. Therefore, RUL estimation is based on patterns that have been selected for their high correlation with the RUL. This approach is different from classical similarity-based approaches that attempt to match complete testing units (or only late instants of testing units) with training ones to estimate the RUL. The performance of our approach is evaluated on a case study on the remaining useful life estimation of turbofan engines and performance is compared with other similarity-based approaches. - Highlights: • A data-driven RUL estimation technique based on pattern extraction is proposed. • Patterns are extracted for their correlation with the RUL. • The proposed method shows good performance compared to other techniques
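
    The online matching step can be sketched as follows: slide each rul-shapelet over a test series and average the RUL labels of the shapelets that match within a distance threshold. The shapelets, labels and threshold below are illustrative assumptions:

```python
# Sketch of shapelet matching for RUL estimation. Each rul-shapelet carries
# the RUL label it was extracted with; a test unit inherits the average
# label of the shapelets that match it closely enough. Illustrative values.

def min_dist(series, shapelet):
    """Smallest squared Euclidean distance of shapelet to any subsequence."""
    m = len(shapelet)
    return min(
        sum((series[i + j] - shapelet[j]) ** 2 for j in range(m))
        for i in range(len(series) - m + 1)
    )

def estimate_rul(series, rul_shapelets, threshold=0.5):
    matches = [rul for pattern, rul in rul_shapelets
               if min_dist(series, pattern) <= threshold]
    return sum(matches) / len(matches) if matches else None

# (pattern, RUL label in cycles) -- hypothetical degradation signatures
shapelets = [([0.9, 1.1, 1.4], 120.0), ([2.0, 2.5, 3.1], 40.0)]
test_unit = [0.2, 0.5, 0.9, 1.1, 1.4, 1.6]
print(estimate_rul(test_unit, shapelets))
```

    Unlike whole-trajectory similarity matching, only the locally matching patterns contribute to the estimate, which is the point of the shapelet approach.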

  19. Frequency Estimator Performance for a Software-Based Beacon Receiver

    Science.gov (United States)

    Zemba, Michael J.; Morse, Jacquelynne Rose; Nessel, James A.; Miranda, Felix

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a QV-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.
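
    One classical member of the estimator family such a study compares is quadratic interpolation of the log-magnitude FFT peak, which resolves the frequency well below the FFT bin width; a sketch (not necessarily one of the six estimators analyzed):

```python
# Sketch: refine a coarse FFT peak by fitting a parabola through the
# log-magnitudes of the three bins around it. The signal is synthetic.
import numpy as np

fs, n = 1000.0, 256
true_f = 123.4                      # deliberately off-bin (bin width ~3.9 Hz)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * true_f * t)

mag = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(mag))             # coarse peak bin
a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)   # fractional-bin offset of the vertex
f_est = (k + delta) * fs / n
print(round(float(f_est), 2))
```

    A plain peak search would be limited to fs/n resolution; the interpolated estimate lands much closer to the true frequency, which is the motivation given in the abstract.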

  20. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...
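
    The baseline quantity all of these estimators target can be sketched as the plain realized variance, the sum of squared high-frequency log returns, here without any noise correction (adding one is precisely what the estimators above compete on). Prices are illustrative:

```python
# Sketch of realized variance from high-frequency prices: sum of squared
# log returns over the intraday sample. No microstructure-noise correction.
import math

def realized_variance(prices):
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return sum(r * r for r in rets)

prices = [100.0, 100.2, 99.9, 100.4, 100.1, 100.3]  # illustrative ticks
rv = realized_variance(prices)
print(rv)
```

    At very high sampling frequencies this naive sum is biased upward by microstructure noise, which motivates kernel, two-scale and regression-based corrections.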

  1. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    The global stereo matching algorithms achieve high accuracy in disparity map estimation, but the optimization process remains time-consuming, especially for image pairs with high resolution and a large baseline setting. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed in this paper to estimate the disparity map of rectified stereo images. The projective geometry of a parallel binocular stereo vision system is investigated to reveal the relationship between the disparities at each pixel in rectified stereo images with different baselines, which can be used to quickly predict the disparity map under a long baseline setting from the map estimated under a short one. The drastically reduced disparity range at each pixel under the long baseline setting can then be determined from the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into graph cuts with expansion moves to estimate the precise disparity map, which greatly reduces the computational cost without loss of accuracy in stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.
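
    The geometric relationship exploited above follows from d = fB/Z in a parallel rig: disparity scales linearly with baseline, so a short-baseline disparity map predicts the long-baseline map and a narrow search range around it. A sketch with an assumed safety margin:

```python
# Sketch of baseline-scaled disparity prediction: in a parallel stereo rig
# disparity d = f*B/Z, so disparities at two baselines are related by the
# baseline ratio. The +/- margin below is an illustrative assumption.

def predicted_disparity(d_short, b_short, b_long):
    return d_short * (b_long / b_short)

def search_range(d_short, b_short, b_long, margin=2):
    """Per-pixel disparity search interval [lo, hi) under the long baseline."""
    d_pred = predicted_disparity(d_short, b_short, b_long)
    return (max(0, int(d_pred) - margin), int(d_pred) + margin + 1)

# short-baseline disparity 12 px, baselines 40 mm -> 120 mm (illustrative)
lo, hi = search_range(12, 40.0, 120.0)
print(lo, hi)   # a few candidate disparities around 36 instead of the full range
```

    Restricting graph-cut optimization to such per-pixel intervals is what yields the reported speed-up without sacrificing accuracy.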

  2. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.; Hussain, Syed Imtiaz; Ç elebi, Hasari Burak; Abdallah, Mohamed M.; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine

  3. Development of a Reference Data Set (RDS) for dental age estimation (DAE) and testing of this with a separate Validation Set (VS) in a southern Chinese population.

    Science.gov (United States)

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2016-10-01

    Many countries have recently experienced a rapid increase in the demand for forensic age estimates of unaccompanied minors. Hong Kong is a major tourist and business center that has seen an increase in the number of people intercepted with false travel documents. An accurate estimation of age is only possible when the reference dataset for age estimation has been derived from the corresponding ethnic population. Thus, the aim of this study was to develop and validate a Reference Data Set (RDS) for dental age estimation for southern Chinese. A total of 2306 subjects were selected from the patient archives of a large dental hospital and the chronological age of each subject was recorded. This age was assigned to each specific stage of dental development for each tooth to create the RDS. To validate this RDS, a further 484 subjects were randomly chosen from the patient archives and their dental age was assessed based on the scores from the RDS. Dental age was estimated using a meta-analysis command corresponding to a random-effects statistical model. Chronological age (CA) and dental age (DA) were compared using the paired t-test. The overall difference between chronological and dental age (CA-DA) was 0.05 years (2.6 weeks) for males and 0.03 years (1.6 weeks) for females. The paired t-test indicated that there was no statistically significant difference between chronological and dental age (p > 0.05). The validated southern Chinese reference dataset based on dental maturation accurately estimated chronological age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
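
    The CA-versus-DA comparison is a paired t-test on per-subject age pairs; a minimal sketch with illustrative ages (assumes SciPy), where p > 0.05 indicates no significant systematic difference:

```python
# Sketch of the validation step: paired t-test of chronological age (CA)
# against dental age (DA) estimated from a reference data set. The age
# pairs below are hypothetical, not study data.
from scipy.stats import ttest_rel

ca = [8.2, 9.5, 10.1, 11.4, 12.0, 13.3, 14.1, 15.0]  # chronological age, years
da = [8.3, 9.4, 10.3, 11.2, 12.1, 13.5, 14.0, 14.9]  # estimated dental age
t, p = ttest_rel(ca, da)
print(round(float(t), 3), p > 0.05)
```

    A non-significant result, as in the study, supports the claim that the RDS estimates chronological age without systematic bias.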

  4. Fuzzy logic based ELF magnetic field estimation in substations

    International Nuclear Information System (INIS)

    Kosalay, I.

    2008-01-01

    This paper examines estimation of the extremely low frequency (ELF) magnetic fields (MF) in power substations. First, the results of the previous relevant research studies and the MF measurements in a sample power substation are presented. Then, a fuzzy logic model based on geometric definitions for estimating the MF distribution is explained. Visual software with a three-dimensional screening unit, based on the fuzzy logic technique, has been developed. (authors)

  5. Videoconference-based mini mental state examination: a validation study.

    Science.gov (United States)

    Timpano, Francesca; Pirrotta, Fabio; Bonanno, Lilla; Marino, Silvia; Marra, Angela; Bramanti, Placido; Lanzafame, Pietro

    2013-12-01

    Neuropsychological testing is a prime criterion of good practice to document cognitive deficits in a rapidly aging population. Telecommunication technologies may overcome limitations related to test administration. To validate the Italian version of the 28-item videoconference-based Mini Mental State Examination (VMMSE), we compared its performance with that of the standard MMSE administered face-to-face (F2F). The sample (n=342) was administered three VMMSEs within 6 weeks after F2F testing. We identified the optimal cutoff through the receiver operating characteristic curve, as well as the VMMSE consistency through inter- and intrarater reliability (inter/RR and intra/RR) analysis. We found high levels of sensitivity and specificity for the optimal VMMSE cutoff and an accuracy of 0.96 (95% confidence interval, 0.94-0.98). Intra/RR and inter/RR were highly significant. This study demonstrates that the VMMSE is a valid instrument for clinical and research screening and monitoring of subjects affected by cognitive disorders. It also shows a significant correlation between the videoconference assessment and the F2F one, providing an important impetus to expand studies and knowledge about the usefulness of tele-assistance services. Our findings have important implications for both longitudinal assistance and clinical care of demented patients.

  6. Head pose estimation algorithm based on deep learning

    Science.gov (United States)

    Cao, Yuanming; Liu, Yijun

    2017-05-01

    Head pose estimation has been widely used in the fields of artificial intelligence, pattern recognition, and intelligent human-computer interaction. A good head pose estimation algorithm should deal robustly with light, noise, identity, occlusion, and other factors, but so far, improving the accuracy and robustness of pose estimation remains a major challenge in computer vision. A method based on deep learning for pose estimation is presented. Deep learning has a strong learning ability: it can extract high-level image features from the input image through a series of non-linear operations and then classify the input image using the extracted features. Such features differ greatly across poses while remaining robust to light, identity, occlusion, and other factors. The proposed head pose estimation method is evaluated on the CAS-PEAL data set. Experimental results show that this method effectively improves the accuracy of pose estimation.

  7. Fast LCMV-based Methods for Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll

    2013-01-01

    …peaks and require matrix inversions for each point in the search grid. In this paper, we therefore consider fast implementations of LCMV-based fundamental frequency estimators, exploiting the estimators' inherently low displacement rank of the used Toeplitz-like data covariance matrices, using as such either the classic time-domain averaging covariance matrix estimator or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity by several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can…

  8. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  9. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  10. Bootstrap-Based Inference for Cube Root Consistent Estimators

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi

    This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent. Our method restores consistency of the nonparametric bootstrap by altering the shape of the criterion function defining the estimator whose distribution we seek to approximate. This modification leads to a generic and easy-to-implement resampling method for inference that is conceptually distinct from other available distributional approximations based on some form of modified bootstrap. We offer simulation evidence showcasing the performance of our inference method in finite samples. An extension of our methodology to general M-estimation problems is also discussed.

  11. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the temperature at unallocated meteorological stations in Peninsular Malaysia, using data from 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighting (IDW) and Radial Basis Functions (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with a thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with a multiquadric model is suitable for estimating the temperature for the rest of the months.
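
    The IDW estimator referenced above is simple to state: each station's temperature is weighted by the inverse of its distance to the query point raised to a power. A minimal sketch with hypothetical station coordinates and temperatures (not the study's data):

```python
import numpy as np

def idw(stations, temps, query, power=2.0):
    """Inverse Distance Weighted estimate at `query` from station temperatures."""
    stations = np.asarray(stations, float)
    temps = np.asarray(temps, float)
    d = np.linalg.norm(stations - np.asarray(query, float), axis=1)
    if np.any(d == 0):                       # query coincides with a station
        return float(temps[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * temps) / np.sum(w))

# Hypothetical stations (x, y in km) and monthly mean temperatures (deg C)
pts = [(0, 0), (10, 0), (0, 10), (10, 10)]
obs = [27.0, 28.0, 26.5, 27.5]
t_est = idw(pts, obs, (5, 5))   # equidistant from all four stations
```

Because the query point is equidistant from all four stations, the weights are equal and the estimate reduces to the plain mean, 27.25 °C.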

  12. Validation of GPU based TomoTherapy dose calculation engine.

    Science.gov (United States)

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphic processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. For the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with absolute point dose measurements with an ion chamber and with film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantoms and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine: the majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster-based dose engine without degradation in dose accuracy.
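
    The gamma index used for the equivalency evaluation combines a dose-difference tolerance with a distance-to-agreement tolerance; a point passes when Γ ≤ 1. A simplified 1-D sketch of a global gamma computation on synthetic profiles (the clinical evaluation is 3-D and uses the engines' actual doses; everything here is illustrative):

```python
import numpy as np

def gamma_1d(x, ref, x_eval, eval_dose, dose_tol=0.01, dist_tol=1.0):
    """Simplified 1-D global gamma index (Low et al. formulation).

    dose_tol is a fraction of the reference maximum; dist_tol is in mm.
    Returns the gamma value for each evaluated point.
    """
    ref = np.asarray(ref, float)
    dmax = ref.max()
    gammas = []
    for xe, de in zip(x_eval, eval_dose):
        dd = (de - ref) / (dose_tol * dmax)          # dose-difference term
        dx = (np.asarray(x, float) - xe) / dist_tol  # distance term
        gammas.append(np.sqrt(dd**2 + dx**2).min())  # best match over the grid
    return np.array(gammas)

# Hypothetical profiles: evaluated dose shifted by 0.2 mm from the reference
x = np.linspace(0, 50, 501)                 # 0.1 mm grid, positions in mm
ref = np.exp(-((x - 25.0) ** 2) / 50.0)
ev = np.exp(-((x - 25.2) ** 2) / 50.0)
g = gamma_1d(x, ref, x, ev, dose_tol=0.01, dist_tol=1.0)
pass_rate = float(np.mean(g <= 1.0))
```

A 0.2 mm spatial shift is well inside the 1 mm distance tolerance, so every point passes here; a real Γ(1%, 1 mm) analysis reports exactly this kind of pass rate over all voxels.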

  13. A novel method for coil efficiency estimation: Validation with a 13C birdcage

    DEFF Research Database (Denmark)

    Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina

    2012-01-01

    Coil efficiency, defined as the B1 magnetic field induced at a given point per square root of supplied power P, is an important parameter that characterizes both the transmit and receive performance of the radiofrequency (RF) coil. Maximizing coil efficiency will also maximize the signal-to-noise ratio. In this work, we propose a novel method for RF coil efficiency estimation based on the use of a perturbing loop. The proposed method consists of loading the coil with a known resistor by inductive coupling and measuring the quality factor with and without the load. We tested the method by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench…

  14. A statistic to estimate the variance of the histogram-based mutual information estimator based on dependent pairs of observations

    NARCIS (Netherlands)

    Moddemeijer, R

    In the case of two signals with independent pairs of observations (x(n), y(n)), a statistic to estimate the variance of the histogram-based mutual information estimator was derived earlier. We present such a statistic for dependent pairs. To derive this statistic it is necessary to avail of a…
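
    For context, the histogram-based mutual information estimator itself is straightforward: bin the paired observations, normalize the counts into a joint probability table, and sum p·log(p/(px·py)) over the non-empty cells. A minimal sketch on synthetic data (the bin count and sample size are arbitrary choices here, not from the paper):

```python
import numpy as np

def mi_hist(x, y, bins=16):
    """Histogram-based mutual information estimate, in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                              # joint probability table
    px = pxy.sum(axis=1, keepdims=True)           # marginal of x
    py = pxy.sum(axis=0, keepdims=True)           # marginal of y
    nz = pxy > 0                                  # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
x_ind = rng.normal(size=5000)
y_ind = rng.normal(size=5000)                 # independent of x_ind
y_dep = x_ind + 0.3 * rng.normal(size=5000)   # strongly dependent on x_ind

mi_low = mi_hist(x_ind, y_ind)    # near zero, up to finite-sample bias
mi_high = mi_hist(x_ind, y_dep)   # clearly positive
```

The small positive value on independent data illustrates the finite-sample bias whose variance such statistics are designed to quantify.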

  15. Process-based Cost Estimation for Ramjet/Scramjet Engines

    Science.gov (United States)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology, and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a process-based cost estimation methodology bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Rather than depending on weight as an input, process-based estimation considers the materials, resources, and processes in establishing cost-risk, and actually estimates weight along with cost and schedule.

  16. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
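
    The Monte Carlo construction described above can be sketched compactly: sample systematic (shared) and random (independent) error components, estimate the covariance matrix of the comparison error, and check the 95% constant-probability region via the Mahalanobis distance. A hypothetical two-quantity example with invented error magnitudes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: two quantities of interest; comparison error E = data - simulation
n_mc = 20000
sys_err = rng.normal(0.0, 0.5, (n_mc, 1)) * np.ones((1, 2))  # shared systematic error
rnd_err = rng.normal(0.0, [0.3, 0.4], (n_mc, 2))             # independent random errors
E = sys_err + rnd_err

cov = np.cov(E, rowvar=False)   # Monte Carlo estimate of the error covariance

# 95% constant-probability contour of a bivariate normal: Mahalanobis^2 <= 5.991
r2 = -2.0 * np.log(0.05)
d2 = np.einsum('ij,jk,ik->i', E, np.linalg.inv(cov), E)
coverage = float(np.mean(d2 <= r2))
```

The shared systematic term induces the off-diagonal covariance; ignoring it (a univariate treatment) would misstate the contour, which is exactly the multivariate-vs-univariate contrast the paper examines.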

  17. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  18. HOTELLING'S T2 CONTROL CHARTS BASED ON ROBUST ESTIMATORS

    Directory of Open Access Journals (Sweden)

    SERGIO YÁÑEZ

    2010-01-01

    Full Text Available Under the presence of multivariate outliers in a Phase I analysis of a historical set of data, the T2 control chart based on the usual sample mean vector and sample variance-covariance matrix performs poorly. Several alternative estimators have been proposed. Among them, estimators based on the minimum volume ellipsoid (MVE) and the minimum covariance determinant (MCD) are powerful in detecting a reasonable number of outliers. In this paper we propose a T2 control chart using the biweight S estimators for the location and dispersion parameters when monitoring multivariate individual observations. Simulation studies show that this method outperforms the T2 control chart based on MVE estimators for a small number of observations.
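
    A Phase I T2 chart is easy to sketch with the classical estimators; the point of the paper is that a robust location/scatter pair (MVE, MCD, or biweight S) would be substituted for the sample mean and covariance below. An illustrative sketch on synthetic data with planted outliers (all data and thresholds here are hypothetical):

```python
import numpy as np

def t2_statistics(X, loc, scatter):
    """Hotelling T^2 statistic for each observation, given location and scatter."""
    diff = X - loc
    return np.einsum('ij,jk,ik->i', diff, np.linalg.inv(scatter), diff)

rng = np.random.default_rng(3)
p, n = 2, 100
X = rng.normal(size=(n, p))
X[:3] += 6.0                            # three planted multivariate outliers

# Classical estimators; a robust chart would swap in MVE/MCD/biweight-S here,
# which resists the masking that inflates the classical covariance.
t2 = t2_statistics(X, X.mean(axis=0), np.cov(X, rowvar=False))
flagged = np.argsort(t2)[-3:]           # indices of the most extreme points
```

Even with the classical estimators the planted outliers stand out here; with more or stronger contamination, masking sets in and the robust estimators become necessary.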

  19. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test.

    Science.gov (United States)

    Stuiver, Martijn M; Kampshoff, Caroline S; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J M; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M

    2017-11-01

    To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (VO2peak) and peak power output (Wpeak). Cross-sectional study. Multicenter. Cancer survivors (N=283) in 2 randomized controlled exercise trials. Not applicable. Prediction model accuracy was assessed by intraclass correlation coefficients (ICCs) and limits of agreement (LOA). Multiple linear regression was used for model extension. Clinical performance was judged by the percentage of accurate endurance exercise prescriptions. ICCs of SRT-predicted VO2peak and Wpeak with these values as obtained by the cardiopulmonary exercise test were .61 and .73, respectively, using the previously published prediction models. 95% LOA were ±705mL/min with a bias of 190mL/min for VO2peak and ±59W with a bias of 5W for Wpeak. Modest improvements were obtained by adding body weight and sex to the regression equation for the prediction of VO2peak (ICC, .73; 95% LOA, ±608mL/min) and by adding age, height, and sex for the prediction of Wpeak (ICC, .81; 95% LOA, ±48W). Accuracy of endurance exercise prescription improved from 57% accurate prescriptions to 68% with the new prediction model for Wpeak. Predictions of VO2peak and Wpeak based on the SRT are adequate at the group level, but insufficiently accurate in individual patients. The multivariable prediction model for Wpeak can be used cautiously (eg, supplemented with a Borg score) to aid endurance exercise prescription. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
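
    The bias and 95% limits of agreement reported above follow the standard Bland-Altman computation: the mean difference plus or minus 1.96 times the SD of the differences. A minimal sketch with synthetic CPET and SRT-predicted values (numbers chosen only to mimic the reported magnitudes, not the study data):

```python
import numpy as np

def limits_of_agreement(measured, predicted):
    """Bias and Bland-Altman 95% limits of agreement for predicted - measured."""
    d = np.asarray(predicted, float) - np.asarray(measured, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical CPET-measured vs. SRT-predicted peak power output (W)
rng = np.random.default_rng(4)
w_cpet = rng.normal(180, 40, 283)
w_srt = w_cpet + rng.normal(5, 24, 283)   # ~5 W bias, scatter giving LOA near +/-48 W
bias, (lo, hi) = limits_of_agreement(w_cpet, w_srt)
```

Narrow LOA around a small bias would justify individual-level use; wide LOA, as in the study, restrict the model to group-level conclusions.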

  20. Size-based estimation of the status of fish stocks: simulation analysis and comparison with age-based estimations

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders

    , were investigated, and our estimations were compared to the ICES advice. Only size-specific catch data were used, in order to emulate data-limited situations. The simulation analysis reveals that the status of the stock, i.e. F/Fmsy, is estimated more accurately than the fishing mortality F itself. Specific knowledge of the natural mortality improves the estimation more than having information about all other life history parameters. Our approach gives, at least qualitatively, an estimated stock status which is similar to the results of an age-based assessment. Since our approach only uses size…

  1. Web-based Food Behaviour Questionnaire: validation with grades six to eight students.

    Science.gov (United States)

    Hanning, Rhona M; Royall, Dawna; Toews, Jenn E; Blashill, Lindsay; Wegener, Jessica; Driezen, Pete

    2009-01-01

    The web-based Food Behaviour Questionnaire (FBQ) includes a 24-hour diet recall, a food frequency questionnaire, and questions addressing knowledge, attitudes, intentions, and food-related behaviours. The survey has been revised since it was developed and initially validated. The current study was designed to obtain qualitative feedback and to validate the FBQ diet recall. "Think aloud" techniques were used in cognitive interviews with dietitian experts (n=11) and grade six students (n=21). Multi-ethnic students (n=201) in grades six to eight at urban southern Ontario schools completed the FBQ and, subsequently, one-on-one diet recall interviews with trained dietitians. Food group and nutrient intakes were compared. Users provided positive feedback on the FBQ. Suggestions included adding more foods, more photos for portion estimation, and online student feedback. Energy and nutrient intakes were positively correlated between the FBQ and dietitian interviews, overall and by gender and grade (all p<0.001). Intraclass correlation coefficients were ≥0.5 for energy and macro-nutrients, although the web-based survey underestimated energy (-10.5%) and carbohydrate (-15.6%) intakes (p<0.05). Under-estimation of rice and pasta portions on the web accounted for 50% of this discrepancy. The FBQ is valid, relative to 24-hour recall interviews, for dietary assessment in diverse populations of Ontario children in grades six to eight.

  2. Assessing Error Correlations in Remote Sensing-Based Estimates of Forest Attributes for Improved Composite Estimation

    Directory of Open Access Journals (Sweden)

    Sarah Ehlers

    2018-04-01

    Full Text Available Today, inexpensive remote sensing (RS) data from different sensors and platforms can be obtained at short intervals and used for assessing several kinds of forest characteristics at the level of plots, stands, and landscapes. Methods such as composite estimation and data assimilation can be used for combining the different sources of information to obtain up-to-date and precise estimates of the characteristics of interest. In composite estimation a standard procedure is to assign weights to the different individual estimates inversely proportional to their variance. However, in case the estimates are correlated, the correlations must be considered in assigning weights; otherwise a composite estimator may be inefficient and its variance underestimated. In this study we assessed the correlation of plot-level estimates of forest characteristics from different RS datasets, between assessments using the same type of sensor as well as across different sensors. The RS data evaluated were SPOT-5 multispectral data, 3D airborne laser scanning data, and TanDEM-X interferometric radar data. Studies were made for plot-level mean diameter, mean height, and growing stock volume. All data were acquired from a test site dominated by coniferous forest in southern Sweden. We found that the correlations between plot-level estimates based on the same type of RS data were positive and strong, whereas the correlations between estimates using different sources of RS data were not as strong, and weaker for mean height than for mean diameter and volume. The implications of such correlations in composite estimation are demonstrated and it is discussed how correlations may affect results from data assimilation procedures.
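
    With correlated component estimators, the minimum-variance composite weights come from the full error covariance matrix rather than the variances alone: w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A sketch with hypothetical variances and correlations (invented for illustration, not the study's values), showing that ignoring positive correlations understates the composite variance:

```python
import numpy as np

def composite_weights(cov):
    """Minimum-variance composite (GLS) weights for combining correlated
    unbiased estimators; returns the weights and the composite variance."""
    cinv = np.linalg.inv(np.asarray(cov, float))
    ones = np.ones(cinv.shape[0])
    w = cinv @ ones / (ones @ cinv @ ones)
    return w, 1.0 / (ones @ cinv @ ones)

# Hypothetical plot-volume estimators, e.g. ALS, TanDEM-X, SPOT-5
sd = np.array([10.0, 15.0, 20.0])           # standard errors of the estimators
corr = np.array([[1.0, 0.6, 0.4],
                 [0.6, 1.0, 0.5],
                 [0.4, 0.5, 1.0]])          # positive cross-sensor correlations
cov = corr * np.outer(sd, sd)

w_corr, var_corr = composite_weights(cov)              # correlations accounted for
w_naive, var_naive = composite_weights(np.diag(sd**2))  # correlations ignored
```

Here the correlation-aware composite variance is well above the naive inverse-variance value, which is precisely the underestimation the abstract warns about.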

  3. Certification Testing as an Illustration of Argument-Based Validation

    Science.gov (United States)

    Kane, Michael

    2004-01-01

    The theories of validity developed over the past 60 years are quite sophisticated, but the methodology of validity is not generally very effective. The validity evidence for major testing programs is typically much weaker than the evidence for more technical characteristics such as reliability. In addition, most validation efforts have a strong…

  4. Sparse estimation of model-based diffuse thermal dust emission

    Science.gov (United States)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  5. Validation and uncertainty estimation of fast neutron activation analysis method for Cu, Fe, Al, Si elements in sediment samples

    International Nuclear Information System (INIS)

    Sunardi; Samin Prihatin

    2010-01-01

    Validation and uncertainty estimation of the Fast Neutron Activation Analysis (FNAA) method for the Cu, Fe, Al, and Si elements in sediment samples has been conducted. The aim of the research was to confirm whether the FNAA method still complies with the ISO/IEC 17025-2005 standard. The research covered verification, performance assessment, validation of FNAA, and uncertainty estimation. The SRM 8704 standard and the sediment samples were weighed to certain weights, irradiated with 14 MeV fast neutrons, and then counted using gamma spectrometry. The results of the method validation for the Cu, Fe, Al, and Si elements showed that the accuracy was in the range of 95.89-98.68 %, while the precision was in the range of 1.13-2.29 %. The results of the uncertainty estimation for Cu, Fe, Al, and Si were 2.67, 1.46, 1.71 and 1.20 %, respectively. From these data, it can be concluded that the FNAA method is still reliable and valid for analyzing element contents in samples, because the accuracy is above 95 % and the precision is under 5 %, while the uncertainties are relatively small and suitable for the 95 % level of confidence, where the maximum allowed uncertainty is 5 %. (author)

  6. Validation of Underwater Sensor Package Using Feature Based SLAM

    Directory of Open Access Journals (Sweden)

    Christopher Cain

    2016-03-01

    Full Text Available Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles preventing the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for usage on low-cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized particle filter based approach, to validate the sensor package.

  7. Validation of Underwater Sensor Package Using Feature Based SLAM

    Science.gov (United States)

    Cain, Christopher; Leonessa, Alexander

    2016-01-01

    Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles preventing the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for usage on low-cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized particle filter based approach, to validate the sensor package. PMID:26999142

  8. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time

    OpenAIRE

    Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.

    2011-01-01

    Two studies are reported: a pilot study to demonstrate feasibility, followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, smartphones are used to capture images of food selection and plate waste and to send the images to a server...

  9. Enhanced closed loop State of Charge estimator for lithium-ion batteries based on Extended Kalman Filter

    International Nuclear Information System (INIS)

    Pérez, Gustavo; Garmendia, Maitane; Reynaud, Jean François; Crego, Jon; Viscarret, Unai

    2015-01-01

    Highlights: • Based on a general model valid over the full SOC range, considering varied dynamics. • Integration of an accurate OCV model in the EKF, taking the hysteresis effect into account. • Experimental validation with different current profiles: pulses, EV and lift. • Validated with a specifically designed profile demanding accurate OCV modeling. - Abstract: Accurate State of Charge (SOC) estimation in a Li-ion battery requires a suitable model of the cell behavior. In this work an enhanced closed-loop estimator based on the Extended Kalman Filter (EKF) is proposed, considering a precise model of the cell dynamics valid for different current profiles and SOCs, and a complete model of the Open Circuit Voltage (OCV) which takes the hysteresis influence into account. The employed model and proposed estimator are validated with experimental results obtained from the response of a 40 Ah NMC Li-ion cell to several current profiles. These tests include current pulses, FUDS driving cycles, residential lift profiles, and specially designed profiles that demand accurate modeling of the transitions between OCV boundaries. In each case, it is demonstrated that the enhanced model can reduce the estimation error by nearly half compared to an estimator ignoring the hysteresis effect. Furthermore, the good performance of the cell dynamics model allows an accurate and stable estimation over different conditions.
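The closed-loop structure described above can be sketched with a one-state EKF: coulomb counting predicts the SOC, and the voltage innovation corrects it. The linear OCV curve, single ohmic resistance and absence of hysteresis below are deliberate simplifications for illustration, not the authors' cell model.

```python
# Minimal one-state EKF SOC estimator (illustrative model, not the paper's).

def ocv(soc):
    """Hypothetical open-circuit voltage curve: 3.0 V at 0% to 4.2 V at 100%."""
    return 3.0 + 1.2 * soc

def d_ocv(soc):
    """Derivative of the OCV curve w.r.t. SOC (constant for this linear OCV)."""
    return 1.2

def ekf_soc_step(soc_est, p, current, v_meas, dt, q_ah=40.0, r_ohm=0.002,
                 q_noise=1e-7, r_noise=1e-4):
    """One predict/update cycle; current > 0 means discharge."""
    # Predict: coulomb counting (the state transition is linear here)
    soc_pred = soc_est - current * dt / (q_ah * 3600.0)
    p_pred = p + q_noise
    # Update: linearize the measurement model around the predicted SOC
    h = d_ocv(soc_pred)                       # measurement Jacobian
    v_pred = ocv(soc_pred) - r_ohm * current  # predicted terminal voltage
    k = p_pred * h / (h * h * p_pred + r_noise)  # Kalman gain
    soc_new = soc_pred + k * (v_meas - v_pred)
    p_new = (1.0 - k * h) * p_pred
    return soc_new, p_new

# Usage: start from a wrong initial guess and let the voltage feedback pull
# the estimate toward the true SOC during a constant-current discharge.
true_soc, soc_est, p = 0.8, 0.5, 1.0
for _ in range(2000):
    i_amp, dt = 20.0, 1.0
    true_soc -= i_amp * dt / (40.0 * 3600.0)
    v_meas = ocv(true_soc) - 0.002 * i_amp
    soc_est, p = ekf_soc_step(soc_est, p, i_amp, v_meas, dt)
err = abs(soc_est - true_soc)
```

The closed loop is what distinguishes this from pure coulomb counting: an initial SOC error is removed by the voltage correction instead of persisting indefinitely.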

  10. State of charge estimation of lithium-ion batteries based on an improved parameter identification method

    International Nuclear Information System (INIS)

    Xia, Bizhong; Chen, Chaoren; Tian, Yong; Wang, Mingwang; Sun, Wei; Xu, Zhihui

    2015-01-01

    The SOC (state of charge) is the most important index of battery management systems. However, it cannot be measured directly with sensors and must be estimated with mathematical techniques. An accurate battery model is crucial to estimating the SOC exactly. In order to improve the model accuracy, this paper presents an improved parameter identification method. Firstly, the concept of polarization depth is proposed based on an analysis of the polarization characteristics of lithium-ion batteries. Then, the nonlinear least square technique is applied to determine the model parameters according to data collected from pulsed discharge experiments. The results show that the proposed method can reduce the model error compared with the conventional approach. Furthermore, a nonlinear observer presented in previous work is utilized to verify the validity of the proposed parameter identification method in SOC estimation. Finally, experiments with different levels of discharge current are carried out to investigate the influence of polarization depth on SOC estimation. Experimental results show that the proposed method can improve the SOC estimation accuracy compared with the conventional approach, especially under conditions of large discharge current. - Highlights: • The polarization characteristics of lithium-ion batteries are analyzed. • The concept of polarization depth is proposed to improve model accuracy. • A nonlinear least square technique is applied to determine the model parameters. • A nonlinear observer is used as the SOC estimation algorithm. • The validity of the proposed method is verified by experimental results.
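The identification idea above can be sketched as a nonlinear least-squares fit of a polarization voltage model a·(1 − exp(−t/τ)) to pulse-relaxation data. The sketch exploits the separable structure (grid search over the nonlinear time constant τ, closed-form solve for the linear gain a); the model form and data are illustrative assumptions, not the paper's cell data.

```python
import math

# Separable nonlinear least-squares fit of a*(1 - exp(-t/tau)) -- an
# illustrative stand-in for the paper's parameter identification step.

def basis(t, tau):
    return 1.0 - math.exp(-t / tau)

def fit_separable(ts, ys, taus):
    """For each candidate tau, solve the gain a in closed form; keep the
    (a, tau) pair with the smallest sum of squared errors."""
    best = None
    for tau in taus:
        f = [basis(t, tau) for t in ts]
        a = sum(fi * yi for fi, yi in zip(f, ys)) / sum(fi * fi for fi in f)
        sse = sum((yi - a * fi) ** 2 for fi, yi in zip(f, ys))
        if best is None or sse < best[0]:
            best = (sse, a, tau)
    return best[1], best[2]

# Synthetic pulse-relaxation data: 50 mV polarization, 30 s time constant
ts = [5.0 * i for i in range(1, 40)]
ys = [0.050 * basis(t, 30.0) for t in ts]
a_fit, tau_fit = fit_separable(ts, ys, taus=[0.5 * j for j in range(2, 121)])
```

A gradient-based solver (e.g. Gauss-Newton or Levenberg-Marquardt) would replace the grid in practice; the separable trick keeps the example deterministic and short.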

  11. Response-Based Estimation of Sea State Parameters

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerical generated time series, and the study shows that filtering has an influence...... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements it is seen that the wave estimations based on closedform expressions exhibit a reasonable energy content, but the distribution of energy...

  12. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and radiation-free operation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to the model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than those of the data-driven approaches. Position estimation mean squared errors were more than twice as large for the optimization-based approaches as for the data-driven ones, under both simulated and experimental conditions. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
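The optimization-based approach above can be illustrated in miniature: the anomaly position is the minimizer of a weighted squared error between "measured" and modeled electrode signals. The forward model g() below is a crude stand-in for a real EIT solver, and the coarse grid search stands in for a derivative-free optimizer; all values are illustrative.

```python
import math

# Toy position estimation by minimizing a weighted error cost over a grid.

electrodes = [(math.cos(2 * math.pi * k / 16), math.sin(2 * math.pi * k / 16))
              for k in range(16)]            # 16 electrodes on a unit circle

def g(pos):
    """Stand-in forward model: signal decays with distance to the anomaly."""
    return [1.0 / (0.1 + (ex - pos[0]) ** 2 + (ey - pos[1]) ** 2)
            for ex, ey in electrodes]

true_pos = (0.3, -0.2)
measured = g(true_pos)
weights = [1.0] * len(electrodes)            # uniform weighting for simplicity

def cost(pos):
    return sum(w * (m - s) ** 2
               for w, m, s in zip(weights, measured, g(pos)))

# Derivative-free search over a coarse grid inside the domain
grid = [i / 20.0 for i in range(-15, 16)]    # -0.75 .. 0.75 in steps of 0.05
best = min(((x, y) for x in grid for y in grid), key=cost)
```

The weights would normally down-weight noisy electrodes, and a proper derivative-free method (Nelder-Mead, pattern search) would refine the grid estimate.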

  13. A Gossip-based Churn Estimator for Large Dynamic Networks

    NARCIS (Netherlands)

    Giuffrida, C.; Ortolani, S.

    2010-01-01

    Gossip-based aggregation is an emerging paradigm to perform distributed computations and measurements in a large-scale setting. In this paper we explore the possibility of using gossip-based aggregation to estimate churn in arbitrarily large networks. To this end, we introduce a new model to compute
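The gossip-based aggregation primitive the paper builds on can be sketched as repeated pairwise averaging: every node's local value converges to the network-wide mean, from which nodes can derive global estimates such as network size or churn. The network size, values and seed below are illustrative.

```python
import random

# Gossip-style aggregation: pairwise averaging converges to the global mean.

random.seed(42)
n_nodes = 20
values = [float(i) for i in range(1, n_nodes + 1)]  # one local value per node
true_mean = sum(values) / n_nodes

for _ in range(2000):                    # asynchronous pairwise exchanges
    i, j = random.sample(range(n_nodes), 2)
    avg = (values[i] + values[j]) / 2.0  # both peers keep the average
    values[i] = values[j] = avg

max_dev = max(abs(v - true_mean) for v in values)
```

Averaging preserves the global sum, so the mean is invariant while the spread of local values decays exponentially with the number of exchanges.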

  14. Numerical experiment to estimate the validity of negative ion diagnostic using photo-detachment combined with Langmuir probing

    Energy Technology Data Exchange (ETDEWEB)

    Oudini, N. [Laboratoire des plasmas de décharges, Centre de Développement des Technologies Avancées, Cité du 20 Aout BP 17 Baba Hassen, 16081 Algiers (Algeria); Sirse, N.; Ellingboe, A. R. [Plasma Research Laboratory, School of Physical Sciences and NCPST, Dublin City University, Dublin 9 (Ireland); Benallal, R. [Unité de Recherche Matériaux et Energies Renouvelables, BP 119, Université Abou Bekr Belkaïd, Tlemcen 13000 (Algeria); Taccogna, F. [Istituto di Metodologie Inorganiche e di Plasmi, CNR, via Amendola 122/D, 70126 Bari (Italy); Aanesland, A. [Laboratoire de Physique des Plasmas, (CNRS, Ecole Polytechnique, Sorbonne Universités, UPMC Univ Paris 06, Univ Paris-Sud), École Polytechnique, 91128 Palaiseau Cedex (France); Bendib, A. [Laboratoire d' Electronique Quantique, Faculté de Physique, USTHB, El Alia BP 32, Bab Ezzouar, 16111 Algiers (Algeria)

    2015-07-15

    This paper presents a critical assessment of the theory of the photo-detachment diagnostic method used to probe the negative ion density and electronegativity α = n{sub -}/n{sub e}. In this method, a laser pulse is used to photo-detach all negative ions located within the electropositive channel (laser spot region). The negative ion density is estimated based on the assumption that the increase of the current collected by an electrostatic probe biased positively with respect to the plasma results only from the creation of photo-detached electrons. In parallel, the background electron density and temperature are considered constant during this diagnostic. However, the numerical experiments performed here show that the background electron density and temperature increase due to the formation of an electrostatic potential barrier around the electropositive channel. The time scale of the potential barrier rise is about 2 ns, which is comparable to the time required to completely photo-detach the negative ions in the electropositive channel (∼3 ns). We find that neglecting the effect of the potential barrier on the background plasma leads to an erroneous determination of the negative ion density. Moreover, the background electron velocity distribution function within the electropositive channel is not Maxwellian. This is due to the acceleration of these electrons through the electrostatic potential barrier. In this work, the validity of the photo-detachment diagnostic assumptions is questioned and our results illustrate the weakness of these assumptions.

  15. Estimating Driving Performance Based on EEG Spectrum Analysis

    Directory of Open Access Journals (Sweden)

    Jung Tzyy-Ping

    2005-01-01

    Full Text Available The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by the driver's drowsiness behind the steering wheel have a high fatality rate because of the marked decline in the driver's abilities of perception, recognition, and vehicle control while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log subband power spectra, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrate that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.
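The PCA-plus-regression core of the pipeline above can be sketched compactly: project log band-power features onto their leading principal components, then fit a linear regression from the component scores to driving performance (lane deviation). The synthetic data, dimensions and seed are assumptions standing in for real EEG recordings.

```python
import numpy as np

# PCA feature reduction followed by linear regression (illustrative data).

rng = np.random.default_rng(0)
n_epochs, n_features = 200, 30            # EEG epochs x (channels * bands)
latent = rng.normal(size=(n_epochs, 2))   # hidden "drowsiness" factors
mixing = rng.normal(size=(2, n_features))
X = latent @ mixing + 0.1 * rng.normal(size=(n_epochs, n_features))
y = 2.0 * latent[:, 0] - 1.0 * latent[:, 1]   # lane-deviation target

# PCA via eigendecomposition of the feature covariance matrix
Xc = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
components = eigvec[:, ::-1][:, :2]       # two leading components
scores = Xc @ components                  # projected features

# Linear regression from PCA scores to driving performance
A = np.column_stack([scores, np.ones(n_epochs)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r = np.corrcoef(y, A @ coef)[0, 1]        # estimate-vs-target correlation
```

Because the synthetic target is linear in the latent factors and the leading components span that latent subspace, the regression recovers the target almost exactly; real EEG would of course yield a weaker correlation.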

  16. Permeability Estimation of Rock Reservoir Based on PCA and Elman Neural Networks

    Science.gov (United States)

    Shi, Ying; Jian, Shaoyong

    2018-03-01

    An intelligent method based on fuzzy neural networks with a PCA algorithm is proposed to estimate the permeability of rock reservoirs. First, the dimensionality of the rock slice characteristic parameters is reduced by the principal component analysis method. Then, the mapping relationship between the rock slice characteristic parameters and permeability is found through fuzzy neural networks. The validity and reliability of the estimation method were tested with practical data from the Yan’an region in the Ordos Basin. The results showed that the average relative error of permeability estimation for this method is 6.25%, and that this method has better convergence speed and higher accuracy than others. Therefore, by using cheap rock-slice-related information, the permeability of a rock reservoir can be estimated efficiently and accurately, with high reliability, practicability and application prospects.

  17. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized, and 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)
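The leave-one-out protocol used above can be sketched as follows: each case is held out once, the classifier is trained on the remaining cases, and accuracy is tallied over all folds. The nearest-centroid rule and the synthetic two-feature cohort below are stand-ins for the paper's radiomics features and classifier.

```python
import random

# Leave-one-out cross-validation with a simple nearest-centroid classifier.

random.seed(3)
data = [(random.gauss(0, 1), random.gauss(0, 1), 0) for _ in range(40)] + \
       [(random.gauss(2, 1), random.gauss(2, 1), 1) for _ in range(40)]

def nearest_centroid_predict(train, x):
    """Assign x to the class whose training centroid is closest."""
    centroids = {}
    for label in (0, 1):
        pts = [(a, b) for a, b, c in train if c == label]
        centroids[label] = (sum(p[0] for p in pts) / len(pts),
                            sum(p[1] for p in pts) / len(pts))
    return min(centroids, key=lambda c: (x[0] - centroids[c][0]) ** 2 +
                                        (x[1] - centroids[c][1]) ** 2)

correct = 0
for i, (a, b, label) in enumerate(data):     # leave one case out per fold
    train = data[:i] + data[i + 1:]
    correct += nearest_centroid_predict(train, (a, b)) == label
accuracy = correct / len(data)
```

LOOCV reuses every case for both training and testing without leakage, which is why it suits small cohorts like the 110-patient study above.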

  18. Estimation of Compaction Parameters Based on Soil Classification

    Science.gov (United States)

    Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.

    2018-02-01

    Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance and availability of funds. Those problems raised the idea of estimating the density of the soil with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and optimum water content (wopt), based on soil classification. Each of 30 samples was tested for its index properties and compaction behavior. All data from the laboratory tests were used to estimate the compaction parameter values using linear regression and the Goswami model. From the results, the soil types were A-4, A-6, and A-7 according to AASHTO and SC, SC-SM, and CL based on USCS. By linear regression, the estimate of the maximum dry unit weight is (γdmax*) = 1.862 - 0.005*FINES - 0.003*LL and the estimate of the optimum water content is (wopt*) = -0.607 + 0.362*FINES + 0.161*LL. By the Goswami model (with equation Y = m*logG + k), for the maximum dry unit weight (γdmax*), m = -0.376 and k = 2.482; for the optimum water content (wopt*), m = 21.265 and k = -32.421. For both of these equations a 95% confidence interval was obtained.
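The regression equations above can be written directly as small functions. FINES is the fines content (%) and LL the liquid limit (%); the example inputs are illustrative, not values from the paper's 30 samples.

```python
import math

# The study's two linear regressions and the Goswami model form.

def gamma_dmax_linear(fines, ll):
    """Maximum dry unit weight from the linear regression."""
    return 1.862 - 0.005 * fines - 0.003 * ll

def w_opt_linear(fines, ll):
    """Optimum water content from the linear regression."""
    return -0.607 + 0.362 * fines + 0.161 * ll

def goswami(g_factor, m, k):
    """Goswami model Y = m*log10(G) + k."""
    return m * math.log10(g_factor) + k

# Example: a hypothetical soil with 40% fines and a liquid limit of 30%
gd = gamma_dmax_linear(40.0, 30.0)
w = w_opt_linear(40.0, 30.0)
```

With the Goswami form, the paper's fitted coefficients would be passed in, e.g. `goswami(g_factor, m=-0.376, k=2.482)` for the maximum dry unit weight.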

  19. Validation of internal dosimetry protocols based on stochastic method

    International Nuclear Information System (INIS)

    Mendes, Bruno M.; Fonseca, Telma C.F.; Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R.

    2015-01-01

    Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry fields. The NRI research group has been developing Internal Dosimetry Protocols (IDPs), addressing distinct methodologies, software and computational human simulators to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of the simulation results. Intercomparison of data from the literature with those produced by our IDPs is a suitable method for validation. The aim of this study was to validate the IDPs following such an intercomparison procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated based on the IDPs and compared with reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the SAF from the IDP simulations and the RV was 2.3%. The largest SAF differences were found in situations involving low-energy photons at 30 keV. The adrenals and thyroid, i.e. the lowest-mass organs, had the highest SAF discrepancies from the RV, at 7.2% and 3.8%, respectively. The differences between the SAF obtained with our IDPs and the reference values were considered acceptable at 30, 100 and 1000 keV. We believe that the main reason for the discrepancies in the IDP runs, found in lower-mass organs, was our source definition methodology. Improvements in the source spatial distribution within the voxels may provide outputs more consistent with the reference values for lower-mass organs. (author)

  20. Validation of internal dosimetry protocols based on stochastic method

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, Bruno M.; Fonseca, Telma C.F., E-mail: bmm@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R., E-mail: tprcampos@yahoo.com.br [Universidade Federal de Minas Gerais (DEN/UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2015-07-01

    Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry fields. The NRI research group has been developing Internal Dosimetry Protocols (IDPs), addressing distinct methodologies, software and computational human simulators to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of the simulation results. Intercomparison of data from the literature with those produced by our IDPs is a suitable method for validation. The aim of this study was to validate the IDPs following such an intercomparison procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated based on the IDPs and compared with reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the SAF from the IDP simulations and the RV was 2.3%. The largest SAF differences were found in situations involving low-energy photons at 30 keV. The adrenals and thyroid, i.e. the lowest-mass organs, had the highest SAF discrepancies from the RV, at 7.2% and 3.8%, respectively. The differences between the SAF obtained with our IDPs and the reference values were considered acceptable at 30, 100 and 1000 keV. We believe that the main reason for the discrepancies in the IDP runs, found in lower-mass organs, was our source definition methodology. Improvements in the source spatial distribution within the voxels may provide outputs more consistent with the reference values for lower-mass organs. (author)

  1. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...

  2. State Estimation-based Transmission line parameter identification

    Directory of Open Access Journals (Sweden)

    Fredy Andrés Olarte Dussán

    2010-01-01

    Full Text Available This article presents two state-estimation-based algorithms for identifying transmission line parameters. The identification technique used simultaneous state-parameter estimation on an artificial power system composed of several copies of the same transmission line, using measurements at different points in time. The first algorithm used active and reactive power measurements at both ends of the line. The second method used synchronised phasor voltage and current measurements at both ends. The algorithms were tested in simulated conditions on the 30-node IEEE test system. All line parameters for this system were estimated with errors below 1%.

  3. Time of arrival based location estimation for cooperative relay networks

    KAUST Repository

    Çelebi, Hasari Burak

    2010-09-01

    In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive Cramer-Rao lower bound (CRLB) for the location estimates using the relay network. The analysis is extended to obtain average CRLB considering the signal fluctuations in both relay and direct links. The effects of the channel fading of both relay and direct links and amplification factor and location of the relay node on average CRLB are investigated. Simulation results show that the channel fading of both relay and direct links and amplification factor and location of relay node affect the accuracy of TOA based location estimation. ©2010 IEEE.
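For context on the single-link baseline that the paper extends to relay links: the classic CRLB for TOA ranging bounds the range-error standard deviation by c / (2·√2·π·√SNR·β), where β is the effective signal bandwidth. The SNR and bandwidth values below are illustrative.

```python
import math

# Single-link CRLB for TOA-based ranging (standard textbook bound).

C = 299_792_458.0  # speed of light, m/s

def toa_range_crlb(snr_db, beta_hz):
    """Lower bound on the std of a TOA-based range estimate, in metres."""
    snr = 10.0 ** (snr_db / 10.0)
    return C / (2.0 * math.sqrt(2.0) * math.pi * math.sqrt(snr) * beta_hz)

# Example: 20 dB SNR with 1 MHz effective bandwidth
bound = toa_range_crlb(20.0, 1e6)
```

The bound shows the two levers the relay analysis inherits: accuracy improves with the square root of SNR and linearly with effective bandwidth, which is why link fading and relay amplification directly shape the average CRLB studied in the paper.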

  4. Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis

    Science.gov (United States)

    Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel

    2013-01-01

    This article proposes a virtual sensor for piecewise linear systems based on observability analysis, as a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. It also presents an active-mode detector for when the commutation sequences of each linear subsystem are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow their outputs to be estimated. A methodology for testing the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to show the obtained results. PMID:23447007

  5. Time of arrival based location estimation for cooperative relay networks

    KAUST Repository

    Çelebi, Hasari Burak; Abdallah, Mohamed M.; Hussain, Syed Imtiaz; Qaraqe, Khalid A.; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive Cramer-Rao lower bound (CRLB) for the location estimates using the relay network. The analysis is extended to obtain average CRLB considering the signal fluctuations in both relay and direct links. The effects of the channel fading of both relay and direct links and amplification factor and location of the relay node on average CRLB are investigated. Simulation results show that the channel fading of both relay and direct links and amplification factor and location of relay node affect the accuracy of TOA based location estimation. ©2010 IEEE.

  6. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
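The final step implied above, fitting a Weibull distribution to observed failure times, can be sketched with median-rank regression: a straight-line fit of ln(−ln(1 − F)) against ln(t), whose slope is the shape parameter. The failure times below are synthetic, not the paper's solder-joint data.

```python
import math
import random

# Weibull shape/scale estimation by median-rank regression on failure times.

random.seed(1)
k_true, lam_true = 2.0, 1000.0
# Inverse-CDF sampling of Weibull failure times: t = lam*(-ln(1-u))^(1/k)
times = sorted(lam_true * (-math.log(1.0 - random.random())) ** (1.0 / k_true)
               for _ in range(500))

n = len(times)
xs = [math.log(t) for t in times]
# Bernard's median-rank approximation of the empirical CDF
ys = [math.log(-math.log(1.0 - (i + 1 - 0.3) / (n + 0.4))) for i in range(n)]

# Least-squares line: slope is the shape k, intercept is -k*ln(lam)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
k_hat = slope
lam_hat = math.exp(mx - my / slope)
```

In the paper's setting the "failure times" come from the physics-of-failure model (accumulated damage reaching the threshold) rather than from sampling, but the Weibull fit proceeds the same way.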

  7. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models have gradually become indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.

  8. Estimating evaporative vapor generation from automobiles based on parking activities

    International Nuclear Information System (INIS)

    Dong, Xinyi; Tschantz, Michael; Fu, Joshua S.

    2015-01-01

    A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this new approach to reduce the uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated based on an estimated average parking duration for the whole fleet, while in this study the vapor generation rate is calculated based on the parking activity distribution. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive the hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then used with Wade–Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates can better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5–8% less than the calculation without considering parking activity. - Highlights: • We applied real parking distribution data to estimate evaporative vapor generation. • We applied real hourly temperature data to estimate the hourly incremental vapor generation rate. • Evaporative emission for Florence is estimated based on parking distribution and hourly rate. - A new approach is proposed to quantify the weighted evaporative vapor generation based on parking distribution with an hourly incremental vapor generation rate

  9. Development and validation of a method to estimate the potential wind erosion risk in Germany

    Science.gov (United States)

    Funk, Roger; Deumlich, Detlef; Völker, Lidia

    2017-04-01

    The introduction of the Cross Compliance (CC) regulations for soil protection resulted in the demand for a nationwide classification of the wind erosion risk on agricultural areas in Germany. A spatially highly resolved method was needed, based on uniform data sets and validation principles, which provides a fair and equivalent procedure for all affected farmers. A GIS procedure was developed which derives the site-specific wind erosion risk from the main influencing factors: soil texture, wind velocity, wind direction and landscape structure, following the German standard DIN 19706. The procedure enables different approaches in the Federal States and comparable classification results. Here, we present the approach of the Federal State of Brandenburg. In the first step a complete soil data map was composed at a grid size of 10 x 10 m. Data were taken from 1) the Soil Quality Appraisal (scale 1:10,000), 2) the Medium-scale Soil Mapping (MMK, 1:25,000), 3) extrapolation of the MMK, and 4) the new Soil Quality Appraisal (new areas after coal mining). Based on the texture and carbon content, the wind erosion susceptibility was divided into six classes. This map was combined with data on the annual average wind velocity, resulting in an increase of the risk classes for wind velocities > 5 m s-1 and a decrease for lower velocities. Landscape structure is regarded by allocating a height to each landscape element, corresponding to the described features in the digital "Biotope and Land Use Map". The "hill shade" procedure of ArcGIS was used to cast virtual shadows behind the landscape elements for eight directions. The relative frequency of wind from each direction was used as a weighting factor and multiplied with the numerical values of the shadowed cells. Depending on the distance to the landscape element, the shadowing effect was combined with the risk classes. The results show that the wind erosion risk is markedly reduced by integrating landscape structures into the risk assessment. After the renewed

  10. Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles

    Science.gov (United States)

    Cortés, Camilo; Unzueta, Luis; de los Reyes-Guzmán, Ana; Ruiz, Oscar E.; Flórez, Julián

    2016-01-01

    In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient's limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address these limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch between the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, the method's accuracy is adequate for RAR. PMID:27403044

  11. Validation by theoretical approach to the experimental estimation of efficiency for gamma spectrometry of gas in 100 ml standard flask

    International Nuclear Information System (INIS)

    Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.

    1996-01-01

    Estimation of gaseous activity forms an important component of air monitoring at Madras Atomic Power Station (MAPS). The gases of importance are argon-41, an air activation product, and the fission product noble gas xenon-133. For estimating the concentration, the experimental method is used, in which a grab sample is collected in a 100 ml volumetric standard flask. The activity of the gas is then computed by gamma spectrometry using a predetermined efficiency estimated experimentally. An attempt is made using a theoretical approach to validate the experimental method of efficiency estimation. Two analytical models, named the relative flux model and the absolute activity model, were developed independently of each other. Attention is focussed on the efficiencies for 41 Ar and 133 Xe. Results show that the present method of sampling and analysis using a 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs

  12. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma.

    Science.gov (United States)

    Yu, Jinhua; Shi, Zhifeng; Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan; Chen, Liang; Mao, Ying

    2017-08-01

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 status from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized, and 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. The area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated to IDH1 status. • The area under the ROC curve of the proposed estimation method reached 0.86.
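The leave-one-out protocol used above can be sketched with a deliberately simple stand-in classifier (nearest centroid, not the paper's improved-genetic-algorithm pipeline); the point is the LOOCV loop, in which each sample is held out in turn and the model is refit on the rest:

```python
import numpy as np

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation with a nearest-centroid classifier.
    X: (n_samples, n_features) feature matrix; y: (n_samples,) class labels."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i                  # hold out sample i
        Xtr, ytr = X[mask], y[mask]
        # class centroids computed only from the training fold
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += (pred == y[i])
    return correct / n
```

Accuracy, sensitivity and specificity as reported in the abstract are then computed from the held-out predictions.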

  13. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    Directory of Open Access Journals (Sweden)

    Sekhar S Chandra

    2004-01-01

    Full Text Available We address the problem of estimating the instantaneous frequency (IF) of a real-valued constant-amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum-MSE IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratios (SNRs).
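The core idea, counting zero crossings over a window to obtain a local frequency estimate, can be illustrated for a constant-frequency sinusoid (the adaptive window-length selection itself is omitted here):

```python
import numpy as np

def zc_frequency(x, fs):
    """Estimate the frequency of a real sinusoid from its zero-crossing rate.
    Each period contains two zero crossings, so f ~ crossings / (2 * duration)."""
    signs = np.sign(x)
    crossings = np.sum(signs[:-1] * signs[1:] < 0)  # sign changes between samples
    duration = len(x) / fs
    return crossings / (2.0 * duration)
```

For a time-varying IF, the same count is applied over a short sliding window, and the adaptive scheme of the paper then tunes that window length per segment.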

  14. A novel SURE-based criterion for parametric PSF estimation.

    Science.gov (United States)

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSF, involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  15. Estimation of Thermal Sensation Based on Wrist Skin Temperatures

    Science.gov (United States)

    Sim, Soo Young; Koh, Myung Jun; Joo, Kwang Min; Noh, Seungwoo; Park, Sangyun; Kim, Youn Ho; Park, Kwang Suk

    2016-01-01

    Thermal comfort is an essential environmental factor related to quality of life and work effectiveness. We assessed the feasibility of wrist skin temperature monitoring for estimating subjective thermal sensation. We invented a wrist band that simultaneously monitors skin temperatures from the wrist (i.e., the radial artery and ulnar artery regions, and upper wrist) and the fingertip. Skin temperatures from eight healthy subjects were acquired while thermal sensation varied. To develop a thermal sensation estimation model, the mean skin temperature, temperature gradient, time differential of the temperatures, and average power of frequency band were calculated. A thermal sensation estimation model using temperatures of the fingertip and wrist showed the highest accuracy (mean root mean square error [RMSE]: 1.26 ± 0.31). An estimation model based on the three wrist skin temperatures showed a slightly better result than the model that used a single fingertip skin temperature (mean RMSE: 1.39 ± 0.18). When a personalized thermal sensation estimation model based on three wrist skin temperatures was used, the mean RMSE was 1.06 ± 0.29, and the correlation coefficient was 0.89. Thermal sensation estimation technology based on wrist skin temperatures, combined with wearable devices, may facilitate intelligent control of one's thermal environment. PMID:27023538
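A least-squares model mapping several skin-temperature channels to a thermal-sensation vote, with the RMSE metric used in the study, might look like the sketch below (the features here are plain temperatures; the paper additionally uses gradients, time differentials and frequency-band powers):

```python
import numpy as np

def fit_thermal_model(temps, sensation):
    """Least-squares linear model: thermal sensation vote from skin temperatures.
    temps: (n_samples, n_sites) temperatures; sensation: (n_samples,) votes."""
    A = np.column_stack([temps, np.ones(len(temps))])  # append intercept column
    coef, *_ = np.linalg.lstsq(A, sensation, rcond=None)
    return coef

def predict(coef, temps):
    A = np.column_stack([temps, np.ones(len(temps))])
    return A @ coef

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

A personalized model, as in the abstract, would simply refit the coefficients on one subject's data.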

  16. Perceptually Valid Facial Expressions for Character-Based Applications

    Directory of Open Access Journals (Sweden)

    Ali Arya

    2009-01-01

    Full Text Available This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”

  17. Parametric validations of analytical lifetime estimates for radiation belt electron diffusion by whistler waves

    Directory of Open Access Journals (Sweden)

    A. V. Artemyev

    2013-04-01

    Full Text Available The lifetimes of electrons trapped in Earth's radiation belts can be calculated from quasi-linear pitch-angle diffusion by whistler-mode waves, provided that their frequency spectrum is broad enough and/or their average amplitude is not too large. Extensive comparisons between improved analytical lifetime estimates and full numerical calculations have been performed in a broad parameter range representative of a large part of the magnetosphere from L ~ 2 to 6. The effects of observed very oblique whistler waves are taken into account in both numerical and analytical calculations. Analytical lifetimes (and pitch-angle diffusion coefficients) are found to be in good agreement with full numerical calculations based on CRRES and Cluster hiss and lightning-generated wave measurements inside the plasmasphere and Cluster lower-band chorus wave measurements in the outer belt for electron energies ranging from 100 keV to 5 MeV. Comparisons with lifetimes recently obtained from electron flux measurements on SAMPEX, SCATHA, SAC-C and DEMETER also show reasonable agreement.

  18. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage: unlike operational reliability, there may be initial failures, which are normally neglected in models of storage reliability. In this paper, a new integrated technique is proposed to estimate and predict the storage reliability of products with possible initial failures: a non-parametric measure based on E-Bayesian estimates of the current failure probabilities is combined with a parametric measure based on the exponential reliability function. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that reliability test data of storage products, including items unexamined before and during the storage process, are available, providing more accurate estimates of both the initial failure probability and the storage failure probability. When storage reliability prediction, the main concern in this field, is to be made, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method, and a detailed comparison between the proposed and the traditional method is investigated to examine the rationality of the assessment and prediction of storage reliability. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying production quality.
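The two-stage idea can be sketched as follows, assuming a Beta(1, b) prior with the hyperparameter b uniform on [1, c] (one common choice in the E-Bayesian literature; the paper's exact prior and data are not reproduced): first an E-Bayesian failure probability per testing time, then a least-squares fit of the exponential reliability function R(t) = R0·exp(-λt), whose R0 < 1 captures the initial failures.

```python
import numpy as np

def e_bayes_failure_prob(r, n, c=5.0):
    """E-Bayesian estimate of a failure probability: posterior mean of p under a
    Beta(1, b) prior, averaged over b ~ Uniform(1, c).
    Closed form: (r + 1)/(c - 1) * ln((n + 1 + c)/(n + 2))."""
    return (r + 1.0) / (c - 1.0) * np.log((n + 1.0 + c) / (n + 2.0))

def fit_exponential_reliability(times, p_hat):
    """Fit R(t) = R0 * exp(-lam * t) to reliability estimates 1 - p_hat by
    least squares on log R(t) = log R0 - lam * t."""
    R = 1.0 - np.asarray(p_hat)
    A = np.column_stack([np.ones_like(np.asarray(times)), -np.asarray(times)])
    coef, *_ = np.linalg.lstsq(A, np.log(R), rcond=None)
    log_R0, lam = coef
    return np.exp(log_R0), lam
```

An estimated R0 below 1 then quantifies the initial failure probability, and λ the storage failure rate.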

  19. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    Full Text Available In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate the DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Moreover, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.
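The incoherent sub-channel idea can be sketched for a toy two-sensor array: FFT bins act as the sub-channels, each energetic bin yields a narrowband phase-difference DOA, and the per-channel estimates are averaged. This assumes far-field plane waves and an acoustic propagation speed for the demonstration; the paper's array geometry, channelization receiver and ISM/TOPS machinery are not reproduced:

```python
import numpy as np

C = 343.0  # assumed propagation speed (m/s); acoustic value for the toy example

def wideband_doa(x1, x2, fs, d, fmin, fmax):
    """Incoherent sub-channel DOA for a two-sensor array: each FFT bin is a
    narrowband sub-channel, per-bin phase differences give per-bin angles,
    and the angles are averaged. d: sensor spacing (m)."""
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    freqs = np.fft.rfftfreq(len(x1), 1.0 / fs)
    power = np.abs(X1) * np.abs(X2)
    angles = []
    for k, f in enumerate(freqs):
        if not (fmin <= f <= fmax) or power[k] < 0.01 * power.max():
            continue  # skip out-of-band and empty sub-channels
        phase = np.angle(X1[k] * np.conj(X2[k]))  # = 2*pi*f*tau for delay tau at sensor 2
        s = phase * C / (2 * np.pi * f * d)       # sin(theta)
        if abs(s) <= 1.0:
            angles.append(np.degrees(np.arcsin(s)))
    return float(np.mean(angles))
```

The energy gate on `power` plays the role of selecting sub-channels that actually contain signal before combining them.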

  20. A gradient-based method for segmenting FDG-PET images: methodology and validation

    International Nuclear Information System (INIS)

    Geets, Xavier; Lee, John A.; Gregoire, Vincent; Bol, Anne; Lonneux, Max

    2007-01-01

    A new gradient-based method for segmenting FDG-PET images is described and validated. The proposed method relies on the watershed transform and hierarchical cluster analysis. To allow a better estimation of the gradient intensity, iteratively reconstructed images were first denoised and deblurred with an edge-preserving filter and a constrained iterative deconvolution algorithm. Validation was first performed on computer-generated 3D phantoms containing spheres, then on a real cylindrical Lucite phantom containing spheres of different volumes ranging from 2.1 to 92.9 ml. Moreover, laryngeal tumours from seven patients were segmented on PET images acquired before laryngectomy by the gradient-based method and by the thresholding method based on the source-to-background ratio developed by Daisne (Radiother Oncol 2003;69:247-50). For the spheres, the calculated volumes and radii were compared with the known values; for laryngeal tumours, the volumes were compared with the macroscopic specimens. Volume mismatches were also analysed. On computer-generated phantoms, the deconvolution algorithm decreased the misestimation of volumes and radii. For the Lucite phantom, the gradient-based method led to a slight underestimation of sphere volumes (by 10-20%), corresponding to negligible radius differences (0.5-1.1 mm); for laryngeal tumours, the volumes segmented by the gradient-based method agreed with those delineated on the macroscopic specimens, whereas the threshold-based method overestimated the true volume by 68% (p = 0.014). Lastly, macroscopic laryngeal specimens were totally encompassed by neither the threshold-based nor the gradient-based volumes. The gradient-based segmentation method applied to denoised and deblurred images proved to be more accurate than the source-to-background ratio method. (orig.)

  1. Validation of hospital register-based diagnosis of Parkinson's disease

    DEFF Research Database (Denmark)

    Wermuth, Lene; Lassen, Christina Funch; Himmerslev, Liselotte

    2012-01-01

    Denmark has a long-standing tradition of maintaining some of the world's largest specialized health science register databases, such as the National Hospital Register (NHR). To estimate the prevalence and incidence of diseases, the correctness of the recorded diagnoses is critical. Parkinson's disease...... (PD) is a neurodegenerative disorder, and only 75-80% of patients with parkinsonism will have idiopathic PD (iPD). It is necessary to follow patients in order to determine whether some of them will develop other neurodegenerative diseases, and a one-time-only diagnostic code for iPD reported in the register

  2. Relative validation of a food frequency questionnaire to estimate food intake in an adult population.

    Science.gov (United States)

    Steinemann, Nina; Grize, Leticia; Ziesemer, Katrin; Kauf, Peter; Probst-Hensch, Nicole; Brombach, Christine

    2017-01-01

    Background: Scientifically valid descriptions of dietary intake at population level are crucial for investigating diet effects on health and disease. Food frequency questionnaires (FFQs) are the most common dietary tools used in large epidemiological studies. Objective: To examine the relative validity of a newly developed FFQ to be used as a dietary assessment tool in epidemiological studies. Design: Validity was evaluated by comparing the FFQ and a 4-day weighed food record (4-d FR) at nutrient and food group levels; Spearman's correlations, Bland-Altman analysis and Wilcoxon rank sum tests were used. Fifty-six participants completed a paper-format FFQ and a 4-d FR within 4 weeks. Results: Corrected correlations between the two instruments ranged from 0.27 (carbohydrates) to 0.55 (protein), and at food group level from 0.09 (soup) to 0.92 (alcohol). Nine out of 25 food groups showed correlations > 0.5, indicating moderate validity. More than half the food groups were overestimated in the FFQ, especially vegetables (82.8%) and fruits (56.3%). Water, tea and coffee were underestimated (-14.0%). Conclusions: The FFQ showed moderate relative validity for protein and the food groups fruits, egg, meat, sausage, nuts, salty snacks and beverages. This study supports the use of the FFQ as an acceptable tool for assessing nutrition as a health determinant in large epidemiological studies.
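The two agreement statistics named in the design are easy to sketch; this toy version assumes no tied values in the ranks and uses the conventional 1.96·SD limits of agreement:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation; this toy version assumes no tied values."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx = rx - rx.mean()
    ry = ry - ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx * rx) * np.sum(ry * ry)))

def bland_altman(m1, m2):
    """Bland-Altman bias and 95% limits of agreement between two methods."""
    diff = np.asarray(m1) - np.asarray(m2)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa
```

Applied to FFQ and 4-d FR intakes per nutrient or food group, these reproduce the kind of correlation and over/underestimation figures reported above.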

  3. Validation of an efficient visual method for estimating leaf area index ...

    African Journals Online (AJOL)

    This study aimed to evaluate the accuracy and applicability of a visual method for estimating LAI in clonal Eucalyptus grandis × E. urophylla plantations and to compare it with hemispherical photography, ceptometer and LAI-2000® estimates. Destructive sampling for direct determination of the actual LAI was performed in ...

  4. Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM

    Science.gov (United States)

    Sheng, Hanlin; Zhang, Tianhong

    2017-08-01

    In view of the need for a highly precise and reliable thrust estimator to achieve direct thrust control of an aircraft engine, a GSA-LSSVM-based thrust estimator design is proposed, building on support vector regression (SVR): the least squares support vector machine (LSSVM) is combined with a new optimization algorithm, the gravitational search algorithm (GSA), for integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and yields a model with better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfils the need for direct thrust control of the aircraft engine.
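The LSSVM core reduces to a single linear system in the dual variables; a minimal numpy sketch with an RBF kernel is given below (the hyperparameters are fixed by hand here, standing in for the values GSA would search for):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=1000.0, sigma=1.0):
    """Solve the LS-SVM dual linear system:
        [0      1^T     ] [b    ]   [0]
        [1  K + I/gamma ] [alpha] = [y]
    """
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

In the paper's setting, GSA would search over (gamma, sigma) to minimize a validation error of this regressor on thrust data.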

  5. Design of Virtual Crank Angle Sensor based on Torque Estimation

    OpenAIRE

    Roswall, Tobias

    2016-01-01

    The topic of this thesis is the estimation of the crank angle based on pulse signals from an induction sensor placed on the flywheel. The engine management system performs many calculations in the crank angle domain, which means that good accuracy is needed for this measurement. To estimate the crank angle, the torque balance on the crankshaft, based on Newton's 2nd law, is used. The resulting acceleration is integrated to give engine speed and crank angle. This approach is made for two crankshaft ...
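The torque-balance step described above, Newton's 2nd law for rotation followed by integration to speed and angle, can be sketched with forward Euler integration (the inertia J and the net-torque trace below are illustrative, not values from the thesis):

```python
import numpy as np

def integrate_crank_angle(torque_net, J, dt, omega0=0.0, theta0=0.0):
    """Integrate J * domega/dt = T_net (Newton's 2nd law for rotation) with
    forward Euler to obtain engine speed omega and crank angle theta over time."""
    omega = np.empty(len(torque_net) + 1)
    theta = np.empty(len(torque_net) + 1)
    omega[0], theta[0] = omega0, theta0
    for k, T_net in enumerate(torque_net):
        alpha = T_net / J                   # angular acceleration
        omega[k + 1] = omega[k] + alpha * dt
        theta[k + 1] = theta[k] + omega[k] * dt
    return omega, theta
```

In the virtual-sensor setting, the integrated theta would be corrected against the flywheel pulse edges whenever they arrive.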

  6. Convolution-based estimation of organ dose in tube current modulated CT

    Science.gov (United States)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_Organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution with the organ dose coefficients (h_Organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled. The
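In one dimension, the regional metric reduces to weighting the (CTDIvol-normalized) TCM dose profile by the organ's longitudinal distribution; a toy sketch of that final multiplication (h_Organ and the arrays below are made-up numbers, not values from the study):

```python
import numpy as np

def estimate_organ_dose(organ_dist, dose_profile, h_organ):
    """Toy 1-D convolution-based estimate: weight the TCM dose profile by the
    organ's normalized longitudinal distribution to obtain a regional
    (CTDIvol)_organ,convolution value, then scale by the coefficient h_Organ."""
    w = np.asarray(organ_dist, float)
    w = w / w.sum()                          # normalized organ distribution
    ctdi_conv = float(np.dot(w, dose_profile))  # organ-weighted regional dose field
    return h_organ * ctdi_conv
```

The actual framework does this per organ, with h_Organ taken from the size-matched phantom library.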

  7. CANDU radiotoxicity inventories estimation: A calculated experiment cross-check for data verification and validation

    International Nuclear Information System (INIS)

    Pavelescu, Alexandru Octavian; Cepraga, Dan Gabriel

    2007-01-01

    This paper is related to the Clearance Potential Index, Ingestion and Inhalation Hazard Factors of the nuclear spent fuel and radioactive wastes. This study required a complex activity that consisted of various phases, such as the acquisition, setting up, validation and application of procedures, codes and libraries. The paper reflects the validation phase of this study. Its objective was to compare the measured inventories of selected actinide and fission product radionuclides in an element from a Pickering CANDU reactor with inventories predicted using a recent version of the ORIGEN-ARP from SCALE 5 coupled with the time-dependent cross sections library, CANDU 28.lib, produced by the sequence SAS2H of SCALE 4.4a. In this way, the procedures, codes and libraries for the characterization of radioactive material in terms of radioactive inventories, clearance, and biological hazard factors are being qualified and validated, in support of the safety management of the radioactive wastes. (authors)

  8. Age estimation based on aspartic acid racemization in human sclera.

    Science.gov (United States)

    Klumb, Karolin; Matzenauer, Christian; Reckert, Alexandra; Lehmann, Klaus; Ritz-Timme, Stefanie

    2016-01-01

    Age estimation based on racemization of aspartic acid residues (AAR) in permanent proteins has been established in forensic medicine for years. While dentine is the tissue of choice for this molecular method of age estimation, teeth are not always available, which leads to the need to identify other suitable tissues. We examined the suitability of total tissue samples of human sclera for the estimation of age at death. Sixty-five samples of scleral tissue were analyzed. The samples were hydrolyzed and, after derivatization, the extent of aspartic acid racemization was determined by gas chromatography. The degree of AAR increased with age. In samples from younger individuals, the correlation of age and D-aspartic acid content was closer than in samples from older individuals. The age-dependent racemization in total tissue samples proves that permanent or at least long-living proteins are present in scleral tissue. The correlation of AAR in human sclera and age at death is close enough to serve as a basis for age estimation. However, the precision of age estimation by this method is lower than that of age estimation based on the analysis of dentine, which is due to molecular inhomogeneities of total tissue samples of sclera. Nevertheless, the approach may serve as a valuable alternative or addition in exceptional cases.

  9. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, the setting-up of initial and boundary conditions, and the formation of difference equations when the forward solution is obtained numerically. Gaussian-process-based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  10. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship

    Science.gov (United States)

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate bench press 1 repetition maximum (1 RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a bench press lifting task in random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab's software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p < 0.001) but largely different (bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1 RM. It was concluded that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but also that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow the estimation of 1 RM from the force-velocity relationship. These estimations are valid; however, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach in monitoring training-induced adaptations. PMID:24149641
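The extrapolation behind such estimates is a straight load-velocity line pushed down to an assumed minimal velocity at 1 RM; a minimal sketch follows (the 0.17 m/s default is an assumption sometimes used for bench press, not a value from this study):

```python
import numpy as np

def estimate_1rm(loads, velocities, v_min=0.17):
    """Fit the individual linear load-velocity relationship and extrapolate
    to the velocity assumed at 1 RM (v_min, in m/s)."""
    slope, intercept = np.polyfit(velocities, loads, 1)  # load = slope*v + intercept
    return slope * v_min + intercept
```

The study's bias and limits-of-agreement analysis then compares this extrapolated value against the actually lifted 1 RM.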

  11. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, among which recharge is of high relevance. Since the amount of extractable aquifer water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is highly affected by water loss mechanisms, the major one of which is actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the ETa impact on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable, so the results could not serve for calibrating root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent the recharge estimation in the first layer. The highest estimated recharge flux was for autumn
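The seasonal balance the recharge estimate rests on is simple bookkeeping: recharge is precipitation minus actual evapotranspiration, runoff and the change in storage. A one-line sketch (the numbers in the usage test are illustrative, not Salland results):

```python
def recharge_water_balance(precip_mm, eta_mm, runoff_mm, delta_storage_mm):
    """Water-balance recharge estimate: R = P - ETa - Q - dS (all in mm per season)."""
    return precip_mm - eta_mm - runoff_mm - delta_storage_mm
```

Swapping the in-situ ETa term for a SEBAL-derived one is exactly the substitution whose effect on R the study evaluates.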

  12. In Vivo Validation of a Blood Vector Velocity Estimator with MR Angiography

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten

    2009-01-01

    Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound beam direction. This implies that a Doppler angle under examination close to 90° results in unreliable information about the true blood direction and blood velocity. The novel method...... indicate that reliable vector velocity estimates can be obtained in vivo using the presented angle-independent 2-D vector velocity method. The TO method can be a useful alternative to conventional Doppler systems by avoiding the angle artifact, thus giving quantitative velocity information....

  13. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with the U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on the most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
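
A minimal, illustrative sketch of the MCMC idea behind such tools (not the BBMD implementation): a random-walk Metropolis sampler for a logistic dose-response model, producing posterior draws of the benchmark dose at 10% extra risk. The data, initial values, step sizes, and flat priors are all hypothetical simplifications.

```python
import math
import random

def loglik(a, b, doses, n, k):
    """Binomial log-likelihood for the logistic dose-response model
    p(d) = 1 / (1 + exp(-(a + b*d)))."""
    ll = 0.0
    for d, ni, ki in zip(doses, n, k):
        p = 1.0 / (1.0 + math.exp(-(a + b * d)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard the logs
        ll += ki * math.log(p) + (ni - ki) * math.log(1.0 - p)
    return ll

def bmd_posterior(doses, n, k, bmr=0.1, iters=20000, seed=1):
    """Random-walk Metropolis over (a, b); returns posterior draws of the
    benchmark dose at extra risk `bmr` (flat improper priors, no step
    tuning: a toy stand-in for a full MCMC dose-response system)."""
    random.seed(seed)
    a, b = -2.0, 0.1
    ll = loglik(a, b, doses, n, k)
    draws = []
    for i in range(iters):
        a2 = a + random.gauss(0, 0.2)
        b2 = b + random.gauss(0, 0.02)
        ll2 = loglik(a2, b2, doses, n, k)
        if math.log(random.random()) < ll2 - ll:   # Metropolis accept step
            a, b, ll = a2, b2, ll2
        if i >= iters // 2 and b > 0:              # keep post-burn-in draws
            p0 = 1.0 / (1.0 + math.exp(-a))
            target = p0 + bmr * (1.0 - p0)         # extra-risk definition
            # invert the logistic at the target response to get the BMD
            draws.append((math.log(target / (1.0 - target)) - a) / b)
    return draws

# Hypothetical quantal data: 50 animals per dose group.
draws = bmd_posterior([0, 10, 50, 100], [50, 50, 50, 50], [6, 8, 19, 37])
bmd_median = sorted(draws)[len(draws) // 2]
```

With informative priors and convergence diagnostics added, the same loop yields the distributional quantities the abstract emphasizes, e.g. a BMDL taken as a lower posterior quantile of the draws.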

  14. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    Science.gov (United States)

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...

  15. Validation of traffic-related air pollution exposure estimates for long-term studies

    NARCIS (Netherlands)

    Van Roosbroeck, S.

    2007-01-01

    This thesis describes a series of studies that investigate the validity of using outdoor concentrations and/or traffic-related indicator exposure variables as a measure for exposure assessment in epidemiological studies on the long-term effect of traffic-related air pollution. A pilot study was

  16. Reliability and validity of food portion size estimation from images using manual flexible digital virtual meshes

    Science.gov (United States)

    The eButton takes frontal images at 4 second intervals throughout the day. A three-dimensional (3D) manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional (2D) images. This paper reports a test of the interrater reliability and validity of use...

  17. Online Synchrophasor-Based Dynamic State Estimation using Real-Time Digital Simulator

    DEFF Research Database (Denmark)

    Khazraj, Hesam; Adewole, Adeyemi Charles; Udaya, Annakkage

    2018-01-01

    Dynamic state estimation is a very important control center application used in the dynamic monitoring of state variables. This paper presents and validates a time-synchronized phasor measurement unit (PMU)-based method for dynamic state estimation by the unscented Kalman filter (UKF) using the real-time digital simulator (RTDS). The dynamic state variables of the system are the rotor angle and speed of the generators. The performance of the UKF method is tested with PMU measurements as inputs using the IEEE 14-bus test system. This test system was modeled in the RSCAD software and tested in real time using the RTDS. The dynamic state variables of multi-machine systems are monitored and measured for the study on the transient behavior of power systems.

  18. Single event upset threshold estimation based on local laser irradiation

    International Nuclear Information System (INIS)

    Chumakov, A.I.; Egorov, A.N.; Mavritsky, O.B.; Yanenko, A.V.

    1999-01-01

    An approach for estimating the ion-induced SEU threshold based on local laser irradiation is presented. Comparative experiments and software simulations were performed at various pulse durations and spot sizes. A correlation of the single event threshold LET with the upset threshold laser energy under local irradiation was found. A computer analysis of local laser irradiation of IC structures was developed for SEU threshold LET estimation. Two estimation techniques are suggested. The first is based on the determination of the local laser threshold dose, taking into account the ratio of the sensitive area to the locally irradiated area. The second uses the photocurrent peak value instead of this ratio. The agreement between the predicted and experimental results demonstrates the applicability of this approach. (authors)

  19. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
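
A hedged, minimal illustration of the state-space idea (not the authors' semi-parametric model): treat the air-exchange rate as a random-walk state and filter noisy log-differences of CO concentration with a scalar Kalman filter. All numbers below are synthetic.

```python
import math
import random

def kalman_ventilation(co, dt, q=1e-4, r=0.01):
    """Scalar Kalman filter: model the air-exchange rate lam as a
    random-walk state observed through log-differences of CO,
    y_t = -(ln C_t - ln C_{t-1}) / dt ~ lam_t + noise.
    The random-walk state lets the filter follow a time-varying rate."""
    lam, p = 1.0, 1.0           # initial state estimate and variance
    est = []
    for t in range(1, len(co)):
        y = -(math.log(co[t]) - math.log(co[t - 1])) / dt
        p += q                  # predict step (random-walk state)
        g = p / (p + r)         # Kalman gain
        lam += g * (y - lam)    # measurement update
        p *= (1.0 - g)
        est.append(lam)
    return est

# Synthetic CO decay: true rate 0.8 air changes per hour, 0.1 h steps,
# 0.5% multiplicative measurement noise.
random.seed(0)
true_lam, dt, c = 0.8, 0.1, 1000.0
co = []
for _ in range(200):
    co.append(c * (1.0 + random.gauss(0, 0.005)))
    c *= math.exp(-true_lam * dt)
est = kalman_ventilation(co, dt)
```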

  20. Validation databases for simulation models: aboveground biomass and net primary productivity (NPP) estimation using eastwide FIA data

    Science.gov (United States)

    Jennifer C. Jenkins; Richard A. Birdsey

    2000-01-01

    As interest grows in the role of forest growth in the carbon cycle, and as simulation models are applied to predict future forest productivity at large spatial scales, the need for reliable and field-based data for evaluation of model estimates is clear. We created estimates of potential forest biomass and annual aboveground production for the Chesapeake Bay watershed...

  1. Ekman estimates of upwelling at cape columbine based on ...

    African Journals Online (AJOL)

    Ekman estimates of upwelling at cape columbine based on measurements of longshore wind from a 35-year time-series. AS Johnson, G Nelson. Abstract. Cape Columbine is a prominent headland on the south-west coast of Africa at approximately 32°50´S, where there is a substantial upwelling tongue, enhancing the ...

  2. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...

  3. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  4. A model-based approach to estimating forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2006-01-01

    A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...

  5. Estimating population cause-specific mortality fractions from in-hospital mortality: validation of a new method.

    Directory of Open Access Journals (Sweden)

    Christopher J L Murray

    2007-11-01

    Cause-of-death data for many developing countries are not available. Information on deaths in hospital by cause is available in many low- and middle-income countries but is not a representative sample of deaths in the population. We propose a method to estimate population cause-specific mortality fractions (CSMFs) using data already collected in many middle-income and some low-income developing nations, yet rarely used: in-hospital death records. For a given cause of death, a community's hospital deaths are equal to total community deaths multiplied by the proportion of deaths occurring in hospital. If we can estimate the proportion dying in hospital, we can estimate the proportion dying in the population using deaths in hospital. We propose to estimate the proportion of deaths for an age, sex, and cause group that die in hospital from the subset of the population where vital registration systems function or from another population. We evaluated our method using nearly complete vital registration (VR) data from Mexico 1998-2005, which records whether a death occurred in a hospital. In this validation test, we used 45 disease categories. We validated our method in two ways: nationally and between communities. First, we investigated how the method's accuracy changes as we decrease the amount of Mexican VR used to estimate the proportion of each age, sex, and cause group dying in hospital. Decreasing VR data used for this first step from 100% to 9% produces only a 12% maximum relative error between estimated and true CSMFs. Even if Mexico collected full VR information only in its capital city with 9% of its population, our estimation method would produce an average relative error in CSMFs across the 45 causes of just over 10%. Second, we used VR data for the capital zone (Distrito Federal and Estado de Mexico) and estimated CSMFs for the three lowest-development states. Our estimation method gave an average relative error of 20%, 23%, and 31% for
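
The core arithmetic of the proposed method can be sketched in a few lines (counts and proportions are hypothetical; the paper's real contribution is estimating the in-hospital proportions from a VR subpopulation):

```python
def estimate_csmf(hospital_deaths, p_in_hospital):
    """Population cause-specific mortality fractions (CSMFs) from
    in-hospital death counts, given externally estimated proportions
    p_in_hospital[c] of deaths from cause c that occur in hospital."""
    # Implied total deaths per cause: hospital deaths / share in hospital.
    implied = {c: hospital_deaths[c] / p_in_hospital[c] for c in hospital_deaths}
    total = sum(implied.values())
    return {c: v / total for c, v in implied.items()}

# Hypothetical community: true population deaths A=500, B=300, C=200,
# of which 50%, 30% and 80% respectively occur in hospital.
csmf = estimate_csmf({"A": 250, "B": 90, "C": 160},
                     {"A": 0.5, "B": 0.3, "C": 0.8})
```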

  6. Optical Tracking Data Validation and Orbit Estimation for Sparse Observations of Satellites by the OWL-Net.

    Science.gov (United States)

    Choi, Jin; Jo, Jung Hyun; Yim, Hong-Suh; Choi, Eun-Jung; Cho, Sungki; Park, Jang-Hyun

    2018-06-07

    An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining Korean low Earth orbit (LEO) satellites' orbital ephemeris. The OWL-Net consists of five optical tracking stations. Brightness signals of reflected sunlight of the targets were detected by a charged coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, maximum 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated with precise orbital ephemeris such as Consolidated Prediction File (CPF) data and precise orbit determination result with onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, an average observation span for a single arc of 11 LEO observation targets was about 5 min, while an average optical observation separation time was 5 h. We estimated the position and velocity with an atmospheric drag coefficient of LEO observation targets using a sequential-batch orbit estimation technique after multi-arc batch orbit estimation. Post-fit residuals for the multi-arc batch orbit estimation and sequential-batch orbit estimation were analyzed for the optical measurements and reference orbit (CPF and GPS data). The post-fit residuals with reference show errors of a few tens of meters in the in-track direction for the multi-arc batch and sequential-batch orbit estimation results.

  7. Optical Tracking Data Validation and Orbit Estimation for Sparse Observations of Satellites by the OWL-Net

    Directory of Open Access Journals (Sweden)

    Jin Choi

    2018-06-01

    An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining Korean low Earth orbit (LEO) satellites' orbital ephemeris. The OWL-Net consists of five optical tracking stations. Brightness signals of reflected sunlight of the targets were detected by a charged coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, maximum 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated with precise orbital ephemeris such as Consolidated Prediction File (CPF) data and precise orbit determination result with onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, an average observation span for a single arc of 11 LEO observation targets was about 5 min, while an average optical observation separation time was 5 h. We estimated the position and velocity with an atmospheric drag coefficient of LEO observation targets using a sequential-batch orbit estimation technique after multi-arc batch orbit estimation. Post-fit residuals for the multi-arc batch orbit estimation and sequential-batch orbit estimation were analyzed for the optical measurements and reference orbit (CPF and GPS data). The post-fit residuals with reference show errors of a few tens of meters in the in-track direction for the multi-arc batch and sequential-batch orbit estimation results.

  8. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship.

    Science.gov (United States)

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate bench press 1 repetition maximum (1 RM) from the force-velocity relationship. Twenty seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab's software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p < 0.001). These results suggest that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but also that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow estimation of 1 RM from the force-velocity relationship. These estimations are valid. However, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach in the monitoring of training-induced adaptations.
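
The force-velocity extrapolation used by such software can be sketched as an ordinary least-squares fit of the load-velocity line, extrapolated to a minimal-velocity threshold (the threshold value and the trial data are hypothetical; Musclelab's actual algorithm is not public):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def estimate_1rm(loads_kg, mean_velocities, v_1rm=0.15):
    """Extrapolate the (assumed linear) load-velocity relationship down
    to a minimal-velocity threshold v_1rm (m/s), a common 1 RM proxy."""
    a, b = fit_line(loads_kg, mean_velocities)   # velocity = a + b*load, b < 0
    return (v_1rm - a) / b

# Hypothetical submaximal trials lying on v = 1.7 - 0.02 * load:
est = estimate_1rm([30.0, 45.0, 60.0, 75.0], [1.10, 0.80, 0.50, 0.20])
```

For these noise-free trials the fit is exact and the extrapolated 1 RM is (0.15 - 1.7) / (-0.02) = 77.5 kg; real bar-velocity data would scatter around the line.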

  9. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  10. Fine-tuning satellite-based rainfall estimates

    Science.gov (United States)

    Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.

    2018-05-01

    Rainfall datasets are available from various sources, including satellite estimates and ground observation. The locations of ground observations are scattered sparsely. Therefore, the use of satellite estimates is advantageous, because satellite estimates can provide data in places where ground observations are not present. In general, however, satellite estimates contain bias, since they are products of algorithms that transform the sensors' response into rainfall values. Another cause may be the limited number of ground observations used by the algorithms as the reference in determining the rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations that had not been used before by the algorithm. The bias correction was performed by applying the Quantile Mapping procedure between ground observation data and satellite estimates data. Since Quantile Mapping requires the mean and standard deviation of both the reference data and the data being corrected, the Inverse Distance Weighting scheme was applied beforehand to the mean and standard deviation of the observation data in order to provide a spatial composition of them, which were originally scattered. Therefore, it was possible to provide a reference data point at the same location as that of the satellite estimates. The results show that the new dataset represents the rainfall values recorded by the ground observations statistically better than the previous dataset.
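
The two steps described above can be sketched as follows: IDW interpolation of a station statistic to the satellite pixel, then a two-moment (Gaussian) quantile mapping that rescales the satellite series to the interpolated mean and standard deviation. Coordinates and rainfall values are hypothetical.

```python
import math
import statistics

def idw(stations, target, power=2):
    """Inverse Distance Weighting of a station statistic (e.g. the mean
    observed rainfall) to an ungauged grid point `target`."""
    num = den = 0.0
    for (x, y), value in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return value            # target coincides with a station
        w = d ** -power
        num += w * value
        den += w
    return num / den

def gaussian_quantile_map(sat, obs_mean, obs_sd):
    """Two-moment quantile mapping: shift and rescale the satellite
    series so its mean and standard deviation match the observations."""
    m = statistics.mean(sat)
    s = statistics.stdev(sat)
    return [obs_mean + (v - m) * obs_sd / s for v in sat]

# Hypothetical monthly rainfall: two stations bracketing the pixel.
obs_mean = idw([((0.0, 0.0), 120.0), ((10.0, 0.0), 100.0)], (5.0, 0.0))
corrected = gaussian_quantile_map([80.0, 90.0, 100.0, 110.0], obs_mean, 20.0)
```

Full empirical quantile mapping would match the whole distribution rather than just two moments; the two-moment form mirrors the mean/standard-deviation requirement stated in the abstract.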

  11. Research on Bridge Sensor Validation Based on Correlation in Cluster

    Directory of Open Access Journals (Sweden)

    Huang Xiaowei

    2016-01-01

    In order to avoid false alarms and missed alarms caused by sensor malfunction or failure, it is critical to diagnose faults and analyze failures of the sensor measuring systems in major infrastructures. Based on the real-time monitoring of bridges and a study of the correlation probability distribution between the multiple sensors adopted in the fault diagnosis system, a clustering algorithm based on k-medoids is proposed that divides sensors of the same type into k clusters. Meanwhile, the value of k is optimized by a specially designed evaluation function. Building on a further study of the correlation of sensors within the same cluster, this paper presents the definition and corresponding calculation algorithm of a sensor's validation. The algorithm is applied to the analysis of sensor data from an actual health monitoring system. The result reveals that the algorithm can not only accurately measure the degree of failure and locate the malfunction in the time domain but also quantitatively evaluate the performance of sensors and eliminate diagnosis errors caused by the failure of the reference sensor.
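
A toy version of the clustering step (exhaustive k-medoids is tractable for a handful of sensors; the paper's evaluation function for choosing k and its validation measure are not reproduced). Distances use 1 - |Pearson correlation|, and all sensor readings are hypothetical.

```python
import itertools
import math

def pearson(a, b):
    """Pearson correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def cluster_sensors(series, k):
    """Exhaustive k-medoids over all medoid subsets, using
    1 - |Pearson correlation| as the inter-sensor distance."""
    n = len(series)
    d = [[1.0 - abs(pearson(series[i], series[j])) for j in range(n)]
         for i in range(n)]
    best, best_cost = None, float("inf")
    for medoids in itertools.combinations(range(n), k):
        cost = sum(min(d[i][m] for m in medoids) for i in range(n))
        if cost < best_cost:
            best, best_cost = medoids, cost
    labels = [min(best, key=lambda m: d[i][m]) for i in range(n)]
    return best, labels

# Hypothetical readings: sensors 0-1 track each other, as do 2-3.
series = [[1.0, 2.0, 3.0, 4.0],
          [2.1, 4.0, 6.2, 8.0],
          [5.0, 1.0, 4.0, 2.0],
          [9.9, 2.1, 8.0, 4.0]]
medoids, labels = cluster_sensors(series, 2)
```

For realistic sensor counts a PAM-style iterative swap search would replace the exhaustive enumeration.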

  12. Estimation of monthly-mean daily global solar radiation based on MODIS and TRMM products

    International Nuclear Information System (INIS)

    Qin, Jun; Chen, Zhuoqi; Yang, Kun; Liang, Shunlin; Tang, Wenjun

    2011-01-01

    Global solar radiation (GSR) is required in a large number of fields. Many parameterization schemes are developed to estimate it using routinely measured meteorological variables, since GSR is directly measured at a limited number of stations. Even so, meteorological stations are sparse, especially, in remote areas. Satellite signals (radiance at the top of atmosphere in most cases) can be used to estimate continuous GSR in space. However, many existing remote sensing products have a relatively coarse spatial resolution and these inversion algorithms are too complicated to be mastered by experts in other research fields. In this study, the artificial neural network (ANN) is utilized to build the mathematical relationship between measured monthly-mean daily GSR and several high-level remote sensing products available for the public, including Moderate Resolution Imaging Spectroradiometer (MODIS) monthly averaged land surface temperature (LST), the number of days in which the LST retrieval is performed in 1 month, MODIS enhanced vegetation index, Tropical Rainfall Measuring Mission satellite (TRMM) monthly precipitation. After training, GSR estimates from this ANN are verified against ground measurements at 12 radiation stations. Then, comparisons are performed among three GSR estimates, including the one presented in this study, a surface data-based estimate, and a remote sensing product by Japan Aerospace Exploration Agency (JAXA). Validation results indicate that the ANN-based method presented in this study can estimate monthly-mean daily GSR at a spatial resolution of about 5 km with high accuracy.

  13. Relative Validity and Reproducibility of a Food-Frequency Questionnaire for Estimating Food Intakes among Flemish Preschoolers

    Directory of Open Access Journals (Sweden)

    Inge Huybrechts

    2009-01-01

    The aims of this study were to assess the relative validity and reproducibility of a semi-quantitative food-frequency questionnaire (FFQ) applied in a large region-wide survey among 2.5-6.5 year-old children for estimating food group intakes. Parents/guardians were used as a proxy. Estimated diet records (3d) were used as the reference method and reproducibility was measured by repeated FFQ administrations five weeks apart. In total 650 children were included in the validity analyses and 124 in the reproducibility analyses. Comparing median FFQ1 to FFQ2 intakes, almost all evaluated food groups showed median differences within a range of ± 15%. However, for median vegetables, fruit and cheese intake, FFQ1 was > 20% higher than FFQ2. For most foods a moderate correlation (0.5-0.7) was obtained between FFQ1 and FFQ2. For cheese, sugared drinks and fruit juice intakes correlations were even > 0.7. For median differences between the 3d EDR and the FFQ, six food groups (potatoes & grains; vegetables; fruit; cheese; meat, game, poultry and fish; and sugared drinks) gave a difference > 20%. The largest corrected correlations (> 0.6) were found for the intake of potatoes and grains, fruit, milk products, cheese, sugared drinks, and fruit juice, while the lowest correlations (< 0.4) were found for bread and meat products. The proportion of subjects classified within one quartile (in the same/adjacent category) by FFQ and EDR ranged from 67% (for meat products) to 88% (for fruit juice). Extreme misclassification into the opposite quartiles was for all food groups < 10%. The results indicate that our newly developed FFQ gives reproducible estimates of food group intake. Overall, moderate levels of relative validity were observed for estimates of food group intake.
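
The quartile cross-classification statistics reported above can be computed as follows (hypothetical intakes for eight subjects):

```python
def quartiles(values):
    """Rank-based quartile (0-3) for each subject's intake estimate."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    q = [0] * len(values)
    for rank, i in enumerate(order):
        q[i] = min(3, rank * 4 // len(values))
    return q

def cross_classification(method_a, method_b):
    """Share of subjects placed in the same or an adjacent quartile by
    both methods, and share grossly misclassified into opposite quartiles."""
    qa, qb = quartiles(method_a), quartiles(method_b)
    n = len(qa)
    same_adjacent = sum(abs(x - y) <= 1 for x, y in zip(qa, qb)) / n
    opposite = sum(abs(x - y) == 3 for x, y in zip(qa, qb)) / n
    return same_adjacent, opposite

# Hypothetical FFQ vs 3d diet-record intakes (g/day) for 8 subjects:
ffq = [10, 20, 30, 40, 50, 60, 70, 80]
edr = [12, 18, 33, 38, 55, 49, 72, 90]
same_adj, opp = cross_classification(ffq, edr)
```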

  14. Vision-based stress estimation model for steel frame structures with rigid links

    Science.gov (United States)

    Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan

    2017-07-01

    This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), which is a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid link ranges in the frame structure are identified. The radius of the curvature of the structural member to be monitored is calculated using the estimated deformed shape and is employed to estimate stress. Using MCS in the presented model, the safety of a structure can be assessed without attaching gauges. In addition, because the stress is directly extracted from the radius of the curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can consider the influences of the stiffness of the connection and support on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests for a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
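
A minimal sketch of the final two steps, assuming the deformed shape has already been estimated from the MCS displacements: fit a circle through measured points to recover the radius of curvature, then apply the Euler-Bernoulli relation sigma = E*y/rho. The material and geometry values are hypothetical, and a real implementation would fit the curvature over the whole estimated shape rather than three points.

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three points on the deformed member
    (a minimal stand-in for a fitted deformed shape)."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return math.hypot(ax - ux, ay - uy)   # distance center-to-point

def bending_stress(E, y_extreme, rho):
    """Euler-Bernoulli bending stress at the extreme fiber: sigma = E*y/rho."""
    return E * y_extreme / rho

# Three points on a circle of radius 100 m; steel (E = 200 GPa),
# extreme fiber 50 mm from the neutral axis (100 mm deep section).
rho = circumradius((100.0, 0.0), (0.0, 100.0), (-100.0, 0.0))
sigma = bending_stress(200e9, 0.05, rho)
```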

  15. Verification and Validation of Embedded Knowledge-Based Software Systems

    National Research Council Canada - National Science Library

    Santos, Eugene

    1999-01-01

    .... We pursued this by carefully examining the nature of uncertainty and information semantics and developing intelligent tools for verification and validation that provides assistance to the subject...

  16. Dictionary-based fiber orientation estimation with improved spatial consistency.

    Science.gov (United States)

    Ye, Chuyang; Prince, Jerry L

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that

  17. Engineering C-integral estimates for generalised creep behaviour and finite element validation

    International Nuclear Information System (INIS)

    Kim, Yun-Jae; Kim, Jin-Su; Huh, Nam-Su; Kim, Young-Jin

    2002-01-01

    This paper proposes an engineering method to estimate the creep C-integral for realistic creep laws to assess defective components operating at elevated temperatures. The proposed estimation method is mainly for the steady-state C*-integral, but a suggestion is also given for estimating the transient C(t)-integral. The reference stress approach is the basis of the proposed equation, but an enhancement in terms of accuracy is made through the definition of the reference stress. The proposed estimation equations are compared with extensive elastic-creep FE results employing various creep-deformation constitutive laws for six different geometries, including two-dimensional, axi-symmetric and three-dimensional geometries. Overall good agreement between the proposed method and the FE results provides confidence in the use of the proposed method for defect assessment of components at elevated temperatures. Moreover, it is shown that for surface cracks the proposed method can be used to estimate C* at any location along the crack front.
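
The reference stress approach underlying the proposal is conventionally written as follows (standard form only; the paper's enhanced definition of the reference stress is its contribution and is not reproduced here):

```latex
% Reference stress approximation of the steady-state creep C*-integral:
%   \sigma_{ref} : reference stress for applied load P and limit load P_L,
%   \dot{\varepsilon}_{ref} : creep strain rate evaluated at \sigma_{ref},
%   K : elastic stress intensity factor.
C^{*} \simeq \sigma_{\mathrm{ref}}\,
             \dot{\varepsilon}_{\mathrm{ref}}
             \left( \frac{K}{\sigma_{\mathrm{ref}}} \right)^{2},
\qquad
\sigma_{\mathrm{ref}} = \frac{P}{P_{\mathrm{L}}}\,\sigma_{y}
```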

  18. Using Clinical Factors and Mammographic Breast Density to Estimate Breast Cancer Risk: Development and Validation of a New Predictive Model

    Science.gov (United States)

    Tice, Jeffrey A.; Cummings, Steven R.; Smith-Bindman, Rebecca; Ichikawa, Laura; Barlow, William E.; Kerlikowske, Karla

    2009-01-01

    Background: Current models for assessing breast cancer risk are complex and do not include breast density, a strong risk factor for breast cancer that is routinely reported with mammography. Objective: To develop and validate an easy-to-use breast cancer risk prediction model that includes breast density. Design: Empirical model based on Surveillance, Epidemiology, and End Results incidence, and relative hazards from a prospective cohort. Setting: Screening mammography sites participating in the Breast Cancer Surveillance Consortium. Patients: 1 095 484 women undergoing mammography who had no previous diagnosis of breast cancer. Measurements: Self-reported age, race or ethnicity, family history of breast cancer, and history of breast biopsy. Community radiologists rated breast density by using 4 Breast Imaging Reporting and Data System categories. Results: During 5.3 years of follow-up, invasive breast cancer was diagnosed in 14 766 women. The breast density model was well calibrated overall (expected–observed ratio, 1.03 [95% CI, 0.99 to 1.06]) and in racial and ethnic subgroups. It had modest discriminatory accuracy (concordance index, 0.66 [CI, 0.65 to 0.67]). Women with low-density mammograms had 5-year risks less than 1.67% unless they had a family history of breast cancer and were older than age 65 years. Limitation: The model has only modest ability to discriminate between women who will develop breast cancer and those who will not. Conclusion: A breast cancer prediction model that incorporates routinely reported measures of breast density can estimate 5-year risk for invasive breast cancer. Its accuracy needs to be further evaluated in independent populations before it can be recommended for clinical use. PMID:18316752
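
The concordance index quoted above (0.66) measures pairwise discrimination; a minimal implementation over hypothetical predicted risks and outcomes:

```python
def concordance_index(risk, event):
    """Concordance (c) statistic: the probability that a randomly chosen
    case was assigned a higher predicted risk than a randomly chosen
    non-case; ties count one half. O(n^2) pairwise form."""
    num = den = 0.0
    for ri, ei in zip(risk, event):
        for rj, ej in zip(risk, event):
            if ei == 1 and ej == 0:     # one comparable case/non-case pair
                den += 1.0
                if ri > rj:
                    num += 1.0
                elif ri == rj:
                    num += 0.5
    return num / den

# Hypothetical 5-year risks with observed outcomes (1 = cancer diagnosed):
c = concordance_index([0.9, 0.8, 0.2, 0.1], [1, 0, 1, 0])
```

Here one of the four comparable pairs is discordant, so c = 3/4; a c of 0.5 would mean the risks discriminate no better than chance.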

  19. Mathematical modeling for corrosion environment estimation based on concrete resistivity measurement directly above reinforcement

    International Nuclear Information System (INIS)

    Lim, Young-Chul; Lee, Han-Seung; Noguchi, Takafumi

    2009-01-01

This study formulates a resistivity model whereby the concrete resistivity characterising the environment of steel reinforcement can be directly estimated and evaluated from measurements taken immediately above the reinforcement, as a method of evaluating corrosion deterioration in reinforced concrete structures. It also aims to provide a theoretical basis for the feasibility of durability evaluation by electric non-destructive techniques without chipping of the cover concrete. The Resistivity Estimation Model (REM), a mathematical model using the mirror method, combines conventional four-electrode resistivity measurement with geometric parameters including cover depth, bar diameter, and electrode intervals. The model was verified by comparing estimates obtained with it at areas directly above reinforcement against resistivity measurements at areas unaffected by reinforcement. The two sets of results were strongly correlated, supporting the validity of the model. It is expected to be applicable to laboratory study and field diagnosis of reinforcement corrosion. (author)

  20. Estimation of tool wear during CNC milling using neural network-based sensor fusion

    Science.gov (United States)

    Ghosh, N.; Ravi, Y. B.; Patra, A.; Mukhopadhyay, S.; Paul, S.; Mohanty, A. R.; Chattopadhyay, A. B.

    2007-01-01

Cutting tool wear degrades product quality in manufacturing processes. Monitoring the tool wear value online is therefore needed to prevent degradation in machining quality. Unfortunately there is no direct way of measuring tool wear online, so one must adopt an indirect method wherein the tool wear is estimated from several sensors measuring related process variables. In this work, a neural network-based sensor fusion model has been developed for tool condition monitoring (TCM). Features extracted from a number of machining zone signals, namely cutting forces, spindle vibration, spindle current, and sound pressure level, have been fused to estimate the average flank wear of the main cutting edge. Novel strategies such as signal-level segmentation for temporal registration, feature space filtering, outlier removal, and estimation space filtering have been proposed. The proposed approach has been validated by both laboratory and industrial implementations.

  1. A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations

    Directory of Open Access Journals (Sweden)

    Aranzazu Berbey Alvarez

    2015-04-01

Passenger flow modeling and station dwelling time estimation are significant elements of railway mass transit planning, but system operators usually have limited information with which to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied to estimate the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers' preferences to assign a level of congestion to each car of the train as a function of the properties of the station platforms. The approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.

  2. Estimating the operator's performance time of emergency procedural tasks based on a task complexity measure

    International Nuclear Information System (INIS)

    Jung, Won Dae; Park, Jink Yun

    2012-01-01

    It is important to understand the amount of time required to execute an emergency procedural task in a high-stress situation for managing human performance under emergencies in a nuclear power plant. However, the time to execute an emergency procedural task is highly dependent upon expert judgment due to the lack of actual data. This paper proposes an analytical method to estimate the operator's performance time (OPT) of a procedural task, which is based on a measure of the task complexity (TACOM). The proposed method for estimating an OPT is an equation that uses the TACOM as a variable, and the OPT of a procedural task can be calculated if its relevant TACOM score is available. The validity of the proposed equation is demonstrated by comparing the estimated OPTs with the observed OPTs for emergency procedural tasks in a steam generator tube rupture scenario.

  3. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require an up-to-date base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used to filter the noisy measured waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
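The estimation scheme in the record above reduces to minimising a mean-square-error objective over model parameters. A minimal sketch in Python, using a toy exponential-decay "waveform" and a crude random search standing in for the global stage of the hybrid minimiser (the model form, bounds, and iteration count are illustrative assumptions, not taken from the paper):

```python
import math
import random

def mse(params, t_grid, measured, model):
    """Mean square error between measured samples and model output."""
    return sum((measured[i] - model(params, t)) ** 2
               for i, t in enumerate(t_grid)) / len(t_grid)

def random_search(model, t_grid, measured, bounds, iters=2000, seed=1):
    """Crude global stage of a hybrid minimiser: uniform sampling in bounds."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(iters):
        cand = [rng.uniform(lo, hi) for lo, hi in bounds]
        err = mse(cand, t_grid, measured, model)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# Demo: recover (a, b) of a * exp(-b * t) from a synthetic "measured" waveform.
model = lambda p, t: p[0] * math.exp(-p[1] * t)
t_grid = [0.5 * i for i in range(10)]
measured = [model((2.0, 0.5), t) for t in t_grid]
best, best_err = random_search(model, t_grid, measured,
                               bounds=[(0.0, 5.0), (0.0, 2.0)])
```

In practice a local gradient-based stage would refine the best random candidate, which is what makes such algorithms "hybrid".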

  4. Fault Severity Estimation of Rotating Machinery Based on Residual Signals

    Directory of Open Access Journals (Sweden)

    Fan Jiang

    2012-01-01

Fault severity estimation is an important part of a condition-based maintenance system, which can monitor the performance of an operating machine and enhance its level of safety. In this paper, a novel method based on statistical properties and residual signals is developed for estimating the fault severity of rotating machinery. In the first stage, the fast Fourier transform (FFT) is applied to extract the so-called multifrequency-band energy (MFBE) from the vibration signals of rotating machinery at different fault severity levels. Because these features differ in their sensitivity to the working conditions, a sensitive-feature selection algorithm is defined in the second stage to construct the feature matrix and calculate the statistical parameter (mean). In the last stage, the residual signals computed by the zero space vector are used to estimate the fault severity. Simulation and experimental results reveal that the proposed method based on statistics and residual signals is effective and feasible for estimating the severity of a rotating machine fault.
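The multifrequency-band energy features described above amount to summing spectral power over contiguous frequency bands of the vibration spectrum. A rough sketch (a naive DFT is used for brevity where the paper uses the FFT, and the equal-width band layout is an assumption):

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum; adequate for short frames."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def multiband_energy(frame, n_bands):
    """Split the one-sided spectrum into equal bands; sum |X[k]|^2 per band."""
    mags = dft_magnitudes(frame)
    width = len(mags) // n_bands
    return [sum(m * m for m in mags[b * width:(b + 1) * width])
            for b in range(n_bands)]

# A 4-cycle sine in a 64-sample frame concentrates its energy in the lowest band.
frame = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
bands = multiband_energy(frame, 4)
```

Feature vectors like `bands`, collected at different severity levels, are the raw material for the sensitivity-based selection step the abstract mentions.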

  5. Validation of a food quantification picture book and portion sizes estimation applying perception and memory methods.

    Science.gov (United States)

    Szenczi-Cseh, J; Horváth, Zs; Ambrus, Á

    2017-12-01

We tested the applicability of the EPIC-SOFT food picture series in the context of a Hungarian food consumption survey gathering data for exposure assessment, and investigated errors in food portion estimation resulting from visual perception and conceptualisation-memory. Sixty-two participants in three age groups (10 to foods. The results were considered acceptable if the relative difference between the average estimated and actual weight obtained through the perception method was ≤25%, and the relative standard deviation of the individual weight estimates was food items were rated acceptable. Small portion sizes tended to be overestimated, and large ones tended to be underestimated. Portions of boiled potato and creamed spinach were all overestimated and underestimated, respectively. Recalling the portion sizes resulted in overestimation with larger differences (up to 60.7%).

  6. In-vivo validation of fast spectral velocity estimation techniques – preliminary results

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Gran, Fredrik; Pedersen, Mads Møller

    2008-01-01

Spectral Doppler is a common way to estimate blood velocities in medical ultrasound (US). The standard way of estimating spectrograms is by using Welch's method (WM). WM depends on a long observation window (OW) (about 100 transmissions) to produce spectrograms with sufficient spectral resolution and contrast. Two adaptive filterbank methods have been suggested to circumvent this problem: the Blood spectral Power Capon method (BPC) and the Blood Amplitude and Phase Estimation method (BAPES). Previously, simulations and flow rig experiments have indicated that the two adaptive methods can produce useful spectrograms with a shorter OW. ... was scanned using the experimental ultrasound scanner RASMUS and a B-K Medical 5 MHz linear array transducer with an angle of insonation not exceeding 60 degrees. All 280 spectrograms were then randomised and presented to a radiologist blinded to method and OW for visual evaluation: useful or not useful. WMbw...
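Welch's method, the baseline the record above compares against, averages periodograms of windowed, overlapping signal segments. A minimal pure-Python sketch (the Hann window and 50% overlap are conventional choices, not taken from the paper):

```python
import cmath
import math

def welch_psd(x, seg_len, overlap=0.5):
    """Welch's method: average periodograms of Hann-windowed, overlapping segments."""
    step = max(1, int(seg_len * (1 - overlap)))
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (seg_len - 1))
           for n in range(seg_len)]
    psd = [0.0] * (seg_len // 2)
    segments = 0
    for start in range(0, len(x) - seg_len + 1, step):
        seg = [x[start + n] * win[n] for n in range(seg_len)]
        for k in range(seg_len // 2):
            X = sum(seg[n] * cmath.exp(-2j * cmath.pi * k * n / seg_len)
                    for n in range(seg_len))
            psd[k] += abs(X) ** 2
        segments += 1
    return [p / segments for p in psd]

# A sine with 4 samples per period peaks at bin 8 of a 32-point segment.
x = [math.sin(2 * math.pi * t / 4) for t in range(128)]
spectrum = welch_psd(x, seg_len=32)
```

The averaging over segments is exactly why WM needs a long observation window: fewer transmissions mean fewer segments and a noisier, lower-contrast spectrogram.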

  7. HZETRN radiation transport validation using balloon-based experimental data

    Science.gov (United States)

    Warner, James E.; Norman, Ryan B.; Blattnig, Steve R.

    2018-05-01

The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff.

  8. Validity of Standing Posture Eight-electrode Bioelectrical Impedance to Estimate Body Composition in Taiwanese Elderly

    Directory of Open Access Journals (Sweden)

    Ling-Chun Lee

    2014-09-01

    Conclusion: The results of this study showed that the impedance index and LST in the whole body, upper limbs, and lower limbs derived from DXA findings were highly correlated. The LST and BF% estimated by BIA8 in whole body and various body segments were highly correlated with the corresponding DXA results; however, BC-418 overestimates the participants' appendicular LST and underestimates whole body BF%. Therefore, caution is needed when interpreting the results of appendicular LST and whole body BF% estimated for elderly adults.

  9. [Prognostic estimation in critical patients. Validation of a new and very simple system of prognostic estimation of survival in an intensive care unit].

    Science.gov (United States)

    Abizanda, R; Padron, A; Vidal, B; Mas, S; Belenguer, A; Madero, J; Heras, A

    2006-04-01

To validate a new system for the prognostic estimation of survival in critical patients (EPEC) in a multidisciplinary intensive care unit (ICU). Prospective analysis of a cohort of patients seen in the 19-bed ICU of the multidisciplinary Intensive Medicine Service of a referral teaching hospital. Four hundred eighty-four patients admitted consecutively over 6 months in 2003. A basic minimum data set was collected, including patient identification data (gender, age), reason for admission and origin, and prognostic estimation of survival by EPEC, MPM II 0 and SAPS II (the latter two considered the gold standard). Mortality was evaluated at hospital discharge. EPEC validation comprised analysis of its discriminatory capacity (ROC curve), calibration of its prognostic capacity (Hosmer-Lemeshow C test), and resolution of 2 x 2 contingency tables around different probability values (20%, 50%, 70% and the mean value of the prognostic estimation). The standardized mortality ratio (SMR) was calculated for each method. Linear regression of EPEC against MPM II 0 and SAPS II was established, and concordance analyses (Bland-Altman test) of the mortality predictions of the three systems were performed. Despite an apparently good linear correlation, similar accuracy of prediction and similar discrimination capacity, EPEC is not well calibrated (no likelihood of death greater than 50%), and the concordance analyses show that more than 10% of the pairs fell outside the 95% confidence interval. Despite its ease of application and calculation, and its incorporation of the delay of ICU admission as a variable, EPEC offers no predictive advantage over MPM II 0 or SAPS II, and its predictions fit reality less well.

  10. Towards Validating Risk Indicators Based on Measurement Theory (Extended version)

    NARCIS (Netherlands)

    Morali, A.; Wieringa, Roelf J.

Due to the lack of quantitative information and for cost-efficiency, most risk assessment methods use partially ordered values (e.g. high, medium, low) as risk indicators. In practice it is common to validate risk indicators by asking stakeholders whether they make sense.

  11. Template-Based Estimation of Time-Varying Tempo

    Directory of Open Access Journals (Sweden)

    Peeters Geoffroy

    2007-01-01

We present a novel approach to the automatic estimation of tempo over time. This method aims at detecting tempo at the tactus level for percussive and nonpercussive audio. The front-end of our system is based on a proposed reassigned spectral energy flux for the detection of musical events. The dominant periodicities of this flux are estimated by a proposed combination of discrete Fourier transform and frequency-mapped autocorrelation function. The most likely meter, beat, and tatum over time are then estimated jointly using proposed meter/beat subdivision templates and a Viterbi decoding algorithm. The performance of our system has been evaluated on four different test sets, three of which were used during the ISMIR 2004 tempo induction contest. The results obtained are close to the best results of that contest.
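Periodicity estimation from the autocorrelation of an onset/energy flux, one ingredient of the system above, can be sketched as follows (the impulse-train envelope, frame rate, and tempo range are illustrative; the paper's reassigned flux, frequency-mapped ACF, and Viterbi stages are not reproduced):

```python
def tempo_from_envelope(envelope, frame_rate, min_bpm=60, max_bpm=180):
    """Pick the autocorrelation peak lag within a plausible tempo range."""
    n = len(envelope)
    mean = sum(envelope) / n
    x = [v - mean for v in envelope]          # remove DC before correlating

    def acf(lag):
        return sum(x[t] * x[t + lag] for t in range(n - lag))

    lo = int(frame_rate * 60 / max_bpm)       # shortest beat period, in frames
    hi = int(frame_rate * 60 / min_bpm)       # longest beat period, in frames
    best_lag = max(range(lo, hi + 1), key=acf)
    return 60.0 * frame_rate / best_lag

# An onset impulse every 50 frames at 100 frames/s corresponds to 120 BPM.
bpm = tempo_from_envelope([1.0 if t % 50 == 0 else 0.0 for t in range(1000)],
                          frame_rate=100)
```

Restricting the search to a tempo range is a simple stand-in for the template/Viterbi machinery that resolves octave ambiguities (60 vs. 120 vs. 240 BPM) in the actual system.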

  12. Event-based state estimation a stochastic perspective

    CERN Document Server

    Shi, Dawei; Chen, Tongwen

    2016-01-01

    This book explores event-based estimation problems. It shows how several stochastic approaches are developed to maintain estimation performance when sensors perform their updates at slower rates only when needed. The self-contained presentation makes this book suitable for readers with no more than a basic knowledge of probability analysis, matrix algebra and linear systems. The introduction and literature review provide information, while the main content deals with estimation problems from four distinct angles in a stochastic setting, using numerous illustrative examples and comparisons. The text elucidates both theoretical developments and their applications, and is rounded out by a review of open problems. This book is a valuable resource for researchers and students who wish to expand their knowledge and work in the area of event-triggered systems. At the same time, engineers and practitioners in industrial process control will benefit from the event-triggering technique that reduces communication costs ...

  13. Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations.

    Science.gov (United States)

    Kopka, Ryszard

    2017-12-22

In this paper, new results on using only voltage measurements at the supercapacitor terminals to estimate the accumulated energy are presented. For this purpose, a study based on the application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. Parameter estimates of the models are then used to assess the amount of energy accumulated in the supercapacitor. The obtained results are compared with the energy determined experimentally by measuring the voltage and current at the supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. The very high consistency between the estimated and experimental results fully confirms the suitability of the proposed approach, and thus the applicability of fractional calculus to the modelling of supercapacitor energy storage.

  14. Longitudinal tire force estimation based on sliding mode observer

    Energy Technology Data Exchange (ETDEWEB)

El Hadri, A.; Cadiou, J.C.; M'Sirdi, N.K. [Versailles Univ., Paris (France). Lab. de Robotique]; Beurier, G.; Delanne, Y. [Lab. Central des Ponts, Centre de Nantes (France)

    2001-07-01

This paper presents an estimation method for vehicle longitudinal dynamics, particularly the tractive/braking force. The estimate can be used to detect a critical driving situation to improve safety, and also in several vehicle control systems. The main characteristics of the vehicle longitudinal dynamics were taken into account in the model used for observer design and computer simulations. The state variables are the angular wheel velocity, the vehicle velocity and the longitudinal tire force. The proposed differential equation for the tractive/braking force is derived using the concept of relaxation length. The observer is designed with a sliding mode approach and uses only the angular wheel velocity measurement. The proposed estimation method is verified through a one-wheel simulation model with a ''Magic formula'' tire model. Simulation results show an excellent reconstruction of the tire force. (orig.)
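The observer idea above, reconstructing the tire force from the angular wheel velocity alone via a switching term, can be illustrated with a toy one-wheel model (all dynamics, gains, and numbers below are invented for illustration; the paper's relaxation-length force dynamics are simplified to a constant true force):

```python
def simulate(T_drive=300.0, F_true=1000.0, J=1.0, r=0.3,
             k1=500.0, k2=2000.0, dt=1e-3, steps=5000):
    """Sliding-mode observer estimating tire force from wheel speed only.

    True plant:   J * d(omega)/dt = T_drive - r * F
    Observer:     omega_hat copies the plant with F_hat, plus k1 * sign(e);
                  F_hat is driven by the switching term -k2 * sign(e),
                  where e = omega - omega_hat is the measurable innovation.
    """
    sign = lambda v: (v > 0) - (v < 0)
    omega, omega_hat, F_hat = 50.0, 50.0, 0.0
    for _ in range(steps):
        e = omega - omega_hat
        omega += dt * (T_drive - r * F_true) / J          # true wheel dynamics
        omega_hat += dt * ((T_drive - r * F_hat) / J + k1 * sign(e))
        F_hat += dt * (-k2 * sign(e))                     # force reconstruction
    return F_hat
```

On the sliding surface the average of `sign(e)` acts as an equivalent control proportional to the force error, so `F_hat` converges exponentially toward `F_true` up to a small chattering band.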

  15. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
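RSS-based source localisation of the kind described above typically inverts a log-distance path-loss model. A hedged sketch with a simple grid search (the path-loss exponent, reference power, and node layout are assumptions; the cognitive-relay specifics and CRLB analysis of the paper are not modelled):

```python
import math

def rss_model(src, node, p0=-40.0, n=3.0):
    """Log-distance path loss: RSS (dBm) at 'node' from a source at 'src'."""
    d = math.hypot(src[0] - node[0], src[1] - node[1])
    return p0 - 10.0 * n * math.log10(max(d, 1e-9))

def locate(nodes, rss_obs, grid=50, span=10.0):
    """Grid search for the source position minimising squared RSS residuals."""
    best, best_err = None, float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            cand = (span * i / grid, span * j / grid)
            err = sum((rss_obs[k] - rss_model(cand, nd)) ** 2
                      for k, nd in enumerate(nodes))
            if err < best_err:
                best, best_err = cand, err
    return best

# Four receivers at the corners of a 10 m square; source at (4, 6).
nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
observed = [rss_model((4.0, 6.0), nd) for nd in nodes]
```

With noisy observations the same residual minimisation applies, and the CRLB mentioned in the abstract lower-bounds the variance of any such unbiased estimate.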

  16. Estimating spacecraft attitude based on in-orbit sensor measurements

    DEFF Research Database (Denmark)

    Jakobsen, Britt; Lyn-Knudsen, Kevin; Mølgaard, Mathias

    2014-01-01

of 2014/15. To better evaluate the performance of the payload, it is desirable to couple the payload data with the satellite's orientation. With AAUSAT3 already in orbit it is possible to collect data directly from space in order to evaluate the performance of the attitude estimation. An extended Kalman filter (EKF) is used for quaternion-based attitude estimation. A Simulink simulation environment developed for AAUSAT3, containing a "truth model" of the satellite and the orbit environment, is used to test the performance. The performance is tested using different sensor noise parameters obtained both from a controlled environment on Earth and in orbit. By using sensor noise parameters obtained on Earth as the expected parameters in the attitude estimation, and simulating the environment using the sensor noise parameters from space, it is possible to assess whether the EKF can be designed...

  17. Sulphur levels in saliva as an estimation of sulphur status in cattle: a validation study

    NARCIS (Netherlands)

    Dermauw, V.; Froidmont, E.; Dijkstra, J.; Boever, de J.L.; Vyverman, W.; Debeer, A.E.; Janssens, G.P.J.

    2012-01-01

Effective assessment of sulphur (S) status in cattle is important for optimal health, yet remains difficult. Rumen fluid S concentrations are preferred, but are difficult to sample under practical conditions. This study aimed to evaluate the salivary S concentration as an estimator of S status in cattle.

  18. Validating diagnoses from hospital discharge registers change risk estimates for acute coronary syndrome

    DEFF Research Database (Denmark)

    Joensen, Albert Marni; Schmidt, E.B.; Dethlefsen, Claus

    2007-01-01

We examined whether validation of acute coronary syndrome (ACS) diagnoses identified in a hospital discharge register changed the relative risk estimates of well-established risk factors for ACS. Methods: All first-time ACS diagnoses (n=1138) in the Danish National Patient Registry were identified among male participants in the Danish...

  19. Validation of the Ejike-Ijeh equations for the estimation of body fat ...

    African Journals Online (AJOL)

    The Ejike-Ijeh equations for the estimation of body fat percentage makes it possible for the body fat content of individuals and populations to be determined without the use of costly equipment. However, because the equations were derived using data from a young-adult (18-29 years old) Nigerian population, it is important ...

  20. Development and validation of a method to estimate body weight in ...

    African Journals Online (AJOL)

    Mid-arm circumference (MAC) has previously been used as a surrogate indicator of habitus, and the objective of this study was to determine whether MAC cut-off values could be used to predict habitus scores (HSs) to create an objective and standardised weight estimation methodology, the PAWPER XL-MAC method.

  1. Simultaneous Validation of Seven Physical Activity Questionnaires Used in Japanese Cohorts for Estimating Energy Expenditure: A Doubly Labeled Water Study.

    Science.gov (United States)

    Sasai, Hiroyuki; Nakata, Yoshio; Murakami, Haruka; Kawakami, Ryoko; Nakae, Satoshi; Tanaka, Shigeho; Ishikawa-Takata, Kazuko; Yamada, Yosuke; Miyachi, Motohiko

    2018-04-28

Physical activity questionnaires (PAQs) used in large-scale Japanese cohorts have rarely been simultaneously validated against the gold-standard doubly labeled water (DLW) method. This study examined the validity of seven PAQs used in Japan for estimating energy expenditure against the DLW method. Twenty healthy Japanese adults (9 men; mean age, 32.4 [standard deviation {SD}, 9.4] years; mainly researchers and students) participated in this study. Total energy expenditure (TEE) over 15 days and basal metabolic rate (BMR) were measured using the DLW method and a metabolic chamber, respectively. Activity energy expenditure (AEE) was calculated as TEE - BMR - 0.1 × TEE. Seven PAQs were self-administered to estimate TEE and AEE. The mean measured values of TEE and AEE were 2,294 (SD, 318) kcal/day and 721 (SD, 161) kcal/day, respectively. All of the PAQs showed moderate-to-strong correlations with the DLW method for TEE (rho = 0.57-0.84). Two PAQs (Japan Public Health Center Study [JPHC]-PAQ Short and JPHC-PAQ Long) showed significant equivalence in TEE and moderate intra-class correlation coefficients (ICC). None of the PAQs showed significantly equivalent AEE estimates, with differences ranging from -547 to 77 kcal/day. Correlations and ICCs for AEE were mostly weak or fair (rho = 0.02-0.54, and ICC = 0.00-0.44). Only JPHC-PAQ Short provided significant and fair agreement with the DLW method. TEE estimated by the PAQs showed moderate or strong correlations with the results of DLW. Two PAQs showed equivalent TEE and moderate agreement. None of the PAQs showed AEE estimation equivalent to the gold standard, with weak-to-fair correlations and agreement. Further studies with larger sample sizes are needed to confirm these findings.
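The AEE definition quoted above (AEE = TEE - BMR - 0.1 × TEE, with 10% of TEE attributed to diet-induced thermogenesis) is direct to compute:

```python
def activity_energy_expenditure(tee_kcal, bmr_kcal):
    """AEE = TEE - BMR - 0.1 * TEE; the 0.1 * TEE term is the assumed
    diet-induced thermogenesis (10% of TEE), as in the study above."""
    return tee_kcal - bmr_kcal - 0.1 * tee_kcal
```

With the cohort means above (TEE 2,294 kcal/day, AEE 721 kcal/day), this relation implies a mean BMR of roughly 1,344 kcal/day.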

  2. ON ESTIMATING FORCE-FREENESS BASED ON OBSERVED MAGNETOGRAMS

    International Nuclear Information System (INIS)

    Zhang, X. M.; Zhang, M.; Su, J. T.

    2017-01-01

It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free, whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on the effect of limited FOV or instrument sensitivity, our calculation shows that measurement error alone can significantly influence the estimates of force-freeness, because measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of the formula for estimating force-freeness, can result in wrong judgments: a truly force-free field may be mistakenly estimated as non-force-free, and a truly non-force-free field may be estimated as force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and also suggests that the true photospheric magnetic field may be further from force-free than it currently appears to be.
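The force-freeness test discussed above is commonly computed from the net Lorentz force over the magnetogram, normalised by the total magnetic pressure force. The sketch below follows the widely used Metcalf-style formulation; treating that as this paper's exact formula is an assumption:

```python
def force_freeness(bx, by, bz):
    """Net Lorentz force components from a vector magnetogram, normalised by
    the total magnetic pressure force.  All three ratios must be small
    (conventionally < 0.1) for the field to be judged force-free.
    bx, by, bz are flattened lists of pixel values."""
    sp = sum(x * x + y * y + z * z for x, y, z in zip(bx, by, bz))
    rx = 2.0 * abs(sum(x * z for x, z in zip(bx, bz))) / sp
    ry = 2.0 * abs(sum(y * z for y, z in zip(by, bz))) / sp
    rz = abs(sum(z * z - x * x - y * y for x, y, z in zip(bx, by, bz))) / sp
    return rx, ry, rz
```

Because the horizontal components `bx` and `by` enter every ratio, the tenfold-larger measurement errors on horizontal fields noted in the abstract propagate directly into these estimates.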

  3. Validation Study on the MCC-based Technology

    International Nuclear Information System (INIS)

    Park, Sungkeun; Lee, Dowhan; Kang, Shincheul; Choi, Hyunwoo; Chai, Jangbom

    2006-01-01

KEPRI and M&D Corporation have developed a methodology, called NEST I (Non-intrusive Evaluation of Stem Thrust), for determining the stem thrust of a Motor Operated Valve (MOV) based on the motor torque and the stem displacement. The motor torque is determined using another method, called NEET (Non-intrusive Evaluation of Electric Torque), which uses the voltage and current data from three phases to obtain the motor torque. The stem displacement is obtained from the voltage and current data along with the nameplate information of the motor, actuator and stem. The motor data (voltage, current and coil current) are measured using MOVIDS (Motor Operated Valve Intelligent Diagnostic System). The motor torque is determined using the NEET algorithm and the stem thrust is calculated using the NEST I method. The goal of this testing was to obtain data from the operation of an MOV and to compare the actual measured thrust with the thrust calculated using the NEET/NEST I methods, thereby validating the NEET/NEST I methods

  4. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    Science.gov (United States)

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

A new tool for skeletal sex estimation based on measurements of the human os coxae is presented, using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more widely disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxae of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both validity and reliability, a target sample consisting of two series of adult ossa coxae of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on linear discriminant analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. These results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimation corresponds to the traditional sectioning point used in osteological studies. DSP2 replaces the former version, which should no longer be used. DSP2 is a robust, reliable, and user-friendly technique for sexing adult ossa coxae. © 2017 Wiley Periodicals, Inc.
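The 0.95 posterior-probability rule described above, with estimates below the threshold reported as indeterminate, can be sketched as follows (the logistic mapping from two discriminant scores to a posterior is a simplification of the actual DSP2 LDA computation, and the function name is illustrative):

```python
import math

def classify_sex(score_f, score_m, threshold=0.95):
    """Abstaining two-class decision: return a sex estimate only when the
    posterior probability clears the threshold; otherwise 'indeterminate'."""
    # For equal priors, the two discriminant scores map to a posterior
    # through a logistic function of their difference.
    p_female = 1.0 / (1.0 + math.exp(score_m - score_f))
    if p_female >= threshold:
        return "F"
    if 1.0 - p_female >= threshold:
        return "M"
    return "indeterminate"
```

Abstaining below the threshold trades coverage (the percentage of individuals classified) for accuracy, which is exactly the trade-off the abstract reports as a function of the number of dimensions.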

  5. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Winn, W.G.

    1994-01-01

Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε0 of the base sample of volume V0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V0, and vice versa. Extrapolation of high and low efficiency estimates εh and εL provides an average estimate of ε = 1/2 [εh + εL] ± 1/2 [εh - εL] (general), where the uncertainty Δε = 1/2 [εh - εL] brackets the limits of the maximum possible error. Both εh and εL diverge from ε0 as V deviates from V0, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
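The bracketing rule in the record above (the average of the high and low extrapolated efficiencies, with half their difference as the maximum-error bound) translates directly into code:

```python
def efficiency_bracket(eff_high, eff_low):
    """Average efficiency estimate and its maximum-error bound from the
    high and low extrapolations for the new sample volume."""
    estimate = 0.5 * (eff_high + eff_low)
    delta = 0.5 * (eff_high - eff_low)   # half-width of the credible range
    return estimate, delta

est, delta = efficiency_bracket(0.012, 0.008)  # illustrative efficiencies
```

As the new volume V moves away from the calibrated volume V0, `eff_high` and `eff_low` spread apart and `delta` grows, which is the behaviour the abstract describes.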

  6. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
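
    A minimal sketch of the sliding-window idea, assuming a dissimilarity (inverse-proximity) matrix and a generalised-median decision rule; the paper's nonlinear operators and learned matrices are more elaborate:

```python
import numpy as np

def proximity_correct(labels, D, half_window=2):
    """Sliding-window correction of nominal class labels.
    D[i, j] is a dissimilarity (inverse proximity) between classes i and j;
    each label is replaced by the class minimising the total dissimilarity
    to all labels in its window (a generalised median filter). The matrix
    D and window size here are illustrative assumptions.
    """
    labels = np.asarray(labels)
    n = len(labels)
    out = labels.copy()
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = labels[lo:hi]
        costs = D[:, window].sum(axis=1)   # cost of each candidate class
        out[i] = int(np.argmin(costs))
    return out

# Toy example: 3 classes, 0/1 dissimilarity; the isolated outlier is fixed
D = 1.0 - np.eye(3)
print(proximity_correct([0, 0, 2, 0, 0], D))  # → [0 0 0 0 0]
```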

  7. The Validity of Value-Added Estimates from Low-Stakes Testing Contexts: The Impact of Change in Test-Taking Motivation and Test Consequences

    Science.gov (United States)

    Finney, Sara J.; Sundre, Donna L.; Swain, Matthew S.; Williams, Laura M.

    2016-01-01

    Accountability mandates often prompt assessment of student learning gains (e.g., value-added estimates) via achievement tests. The validity of these estimates has been questioned when performance on tests is low stakes for students. To assess the effects of motivation on value-added estimates, we assigned students to one of three test consequence…

  8. Validating GPM-based Multi-satellite IMERG Products Over South Korea

    Science.gov (United States)

    Wang, J.; Petersen, W. A.; Wolff, D. B.; Ryu, G. H.

    2017-12-01

    Accurate precipitation estimates derived from space-borne satellite measurements are critical for a wide variety of applications such as water budget studies, and prevention or mitigation of natural hazards caused by extreme precipitation events. This study validates the near-real-time Early Run, Late Run and the research-quality Final Run Integrated Multi-Satellite Retrievals for GPM (IMERG) using Korean Quantitative Precipitation Estimation (QPE). The Korean QPE data are at a 1-hour temporal resolution and 1-km by 1-km spatial resolution, and were developed by the Korea Meteorological Administration (KMA) from a Real-time ADjusted Radar-AWS (Automatic Weather Station) Rainrate (RAD-RAR) system utilizing eleven radars over the Republic of Korea. The validation is conducted by comparing Version-04A IMERG (Early, Late and Final Runs) with Korean QPE over the area (124.5E-130.5E, 32.5N-39N) at various spatial and temporal scales during March 2014 through November 2016. The comparisons demonstrate the reasonably good ability of Version-04A IMERG products in estimating precipitation over South Korea's complex topography, which consists mainly of hills and mountains as well as large coastal plains. Based on these data, the Early Run, Late Run and Final Run IMERG precipitation estimates higher than 0.1 mm h-1 are about 20.1%, 7.5% and 6.1% higher than Korean QPE at 0.1° and 1-hour resolutions. Detailed comparison results are available at https://wallops-prf.gsfc.nasa.gov/KoreanQPE.V04/index.html
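
    One of the comparison statistics described above, relative bias over a rain/no-rain threshold of 0.1 mm/h, can be sketched as follows (the arrays are toy values, not the Korean QPE data, and the full validation also uses correlation, RMSE, and categorical scores):

```python
import numpy as np

def relative_bias(sat, ref, threshold=0.1):
    """Mean relative bias (%) of satellite rain rates vs. reference QPE,
    restricted to pairs where both exceed a rain/no-rain threshold (mm/h)."""
    sat, ref = np.asarray(sat, float), np.asarray(ref, float)
    mask = (sat >= threshold) & (ref >= threshold)
    return 100.0 * (sat[mask].sum() - ref[mask].sum()) / ref[mask].sum()

# Toy hourly rain rates (mm/h); the 0.05 mm/h pair is screened out
sat = [0.5, 1.2, 0.05, 3.0]
ref = [0.4, 1.0, 0.5, 2.6]
print(f"{relative_bias(sat, ref):.1f}%")  # → 17.5%
```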

  9. MODIS Based Estimation of Forest Aboveground Biomass in China

    Science.gov (United States)

    Sun, Yan; Wang, Tao; Zeng, Zhenzhong; Piao, Shilong

    2015-01-01

    Accurate estimation of forest biomass C stock is essential to understand carbon cycles. However, current estimates of Chinese forest biomass are mostly based on inventory-based timber volumes and empirical conversion factors at the provincial scale, which could introduce large uncertainties in forest biomass estimation. Here we provide a data-driven estimate of Chinese forest aboveground biomass from 2001 to 2013 at a spatial resolution of 1 km by integrating a recently reviewed plot-level ground-measured forest aboveground biomass database with geospatial information from 1-km Moderate-Resolution Imaging Spectroradiometer (MODIS) dataset in a machine learning algorithm (the model tree ensemble, MTE). We show that Chinese forest aboveground biomass is 8.56 Pg C, which is mainly contributed by evergreen needle-leaf forests and deciduous broadleaf forests. The mean forest aboveground biomass density is 56.1 Mg C ha−1, with high values observed in temperate humid regions. The responses of forest aboveground biomass density to mean annual temperature are closely tied to water conditions; that is, negative responses dominate regions with mean annual precipitation less than 1300 mm y−1 and positive responses prevail in regions with mean annual precipitation higher than 2800 mm y−1. During the 2000s, the forests in China sequestered C by 61.9 Tg C y−1, and this C sink is mainly distributed in north China and may be attributed to warming climate, rising CO2 concentration, N deposition, and growth of young forests. PMID:26115195

  10. MODIS Based Estimation of Forest Aboveground Biomass in China.

    Science.gov (United States)

    Yin, Guodong; Zhang, Yuan; Sun, Yan; Wang, Tao; Zeng, Zhenzhong; Piao, Shilong

    2015-01-01

    Accurate estimation of forest biomass C stock is essential to understand carbon cycles. However, current estimates of Chinese forest biomass are mostly based on inventory-based timber volumes and empirical conversion factors at the provincial scale, which could introduce large uncertainties in forest biomass estimation. Here we provide a data-driven estimate of Chinese forest aboveground biomass from 2001 to 2013 at a spatial resolution of 1 km by integrating a recently reviewed plot-level ground-measured forest aboveground biomass database with geospatial information from 1-km Moderate-Resolution Imaging Spectroradiometer (MODIS) dataset in a machine learning algorithm (the model tree ensemble, MTE). We show that Chinese forest aboveground biomass is 8.56 Pg C, which is mainly contributed by evergreen needle-leaf forests and deciduous broadleaf forests. The mean forest aboveground biomass density is 56.1 Mg C ha-1, with high values observed in temperate humid regions. The responses of forest aboveground biomass density to mean annual temperature are closely tied to water conditions; that is, negative responses dominate regions with mean annual precipitation less than 1300 mm y-1 and positive responses prevail in regions with mean annual precipitation higher than 2800 mm y-1. During the 2000s, the forests in China sequestered C by 61.9 Tg C y-1, and this C sink is mainly distributed in north China and may be attributed to warming climate, rising CO2 concentration, N deposition, and growth of young forests.

  11. MODIS Based Estimation of Forest Aboveground Biomass in China.

    Directory of Open Access Journals (Sweden)

    Guodong Yin

    Full Text Available Accurate estimation of forest biomass C stock is essential to understand carbon cycles. However, current estimates of Chinese forest biomass are mostly based on inventory-based timber volumes and empirical conversion factors at the provincial scale, which could introduce large uncertainties in forest biomass estimation. Here we provide a data-driven estimate of Chinese forest aboveground biomass from 2001 to 2013 at a spatial resolution of 1 km by integrating a recently reviewed plot-level ground-measured forest aboveground biomass database with geospatial information from the 1-km Moderate-Resolution Imaging Spectroradiometer (MODIS) dataset in a machine learning algorithm (the model tree ensemble, MTE). We show that Chinese forest aboveground biomass is 8.56 Pg C, which is mainly contributed by evergreen needle-leaf forests and deciduous broadleaf forests. The mean forest aboveground biomass density is 56.1 Mg C ha-1, with high values observed in temperate humid regions. The responses of forest aboveground biomass density to mean annual temperature are closely tied to water conditions; that is, negative responses dominate regions with mean annual precipitation less than 1300 mm y-1 and positive responses prevail in regions with mean annual precipitation higher than 2800 mm y-1. During the 2000s, the forests in China sequestered C by 61.9 Tg C y-1, and this C sink is mainly distributed in north China and may be attributed to warming climate, rising CO2 concentration, N deposition, and growth of young forests.

  12. Observer Based Fault Detection and Moisture Estimating in Coal Mill

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Mataji, Babak

    2008-01-01

    In this paper an observer-based method for detecting faults and estimating moisture content in the coal in coal mills is presented. Handling of faults and operation under special conditions, such as high moisture content in the coal, are of growing importance due to the increasing requirements to the general performance of power plants. Detection of faults and moisture content estimation are consequently of high interest in the handling of the problems caused by faults and moisture content. The coal flow out of the mill is the obvious variable to monitor when detecting non-intended drops in the coal flow out of the coal mill. However, this variable is not measurable. Another estimated variable is the moisture content, which is only "measurable" during steady-state operations of the coal mill. Instead, this paper suggests a method where these unknown variables are estimated based on a simple energy...

  13. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs.

    Science.gov (United States)

    McCoy, A B; Wright, A; Krousel-Wood, M; Thomas, E J; McCoy, J A; Sittig, D F

    2015-01-01

    Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes.
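
    A plausible reading of the link frequency / link ratio computation described above, with hypothetical encounter data; the published definitions may differ in detail (here the link ratio is taken as pair frequency divided by the medication's total frequency):

```python
from collections import Counter

def link_stats(encounters):
    """Compute per-pair (link_frequency, link_ratio) from encounter data.
    Each encounter is a (problems, medications) pair entered together
    during routine care. Simplified sketch of the crowdsourcing approach."""
    pair_freq, med_freq = Counter(), Counter()
    for problems, meds in encounters:
        for m in meds:
            med_freq[m] += 1
            for p in problems:
                pair_freq[(p, m)] += 1
    return {pair: (f, f / med_freq[pair[1]]) for pair, f in pair_freq.items()}

# Hypothetical encounters
encounters = [({"hypertension"}, {"lisinopril"}),
              ({"hypertension", "diabetes"}, {"lisinopril", "metformin"})]
stats = link_stats(encounters)
print(stats[("hypertension", "lisinopril")])  # → (2, 1.0)
```

    Pairs whose link ratio clears a clinician-reviewed threshold would then be admitted to the knowledge base.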

  14. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs

    Science.gov (United States)

    Wright, A.; Krousel-Wood, M.; Thomas, E. J.; McCoy, J. A.; Sittig, D. F.

    2015-01-01

    Summary Background Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. Objective We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. Methods We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. Results The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. Conclusions We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes. PMID:26171079

  15. Gradient HPLC method development and validation for Simultaneous estimation of Rosiglitazone and Gliclazide.

    Directory of Open Access Journals (Sweden)

    Uttam Singh Baghel

    2012-10-01

    Full Text Available Objective: The aim of the present work was to develop a gradient RP-HPLC method for simultaneous analysis of rosiglitazone and gliclazide in a tablet dosage form. Method: The chromatographic system was optimized using a Hypersil C18 (250 mm x 4.6 mm, 5 µm) column with potassium dihydrogen phosphate (pH 7.0) and acetonitrile in the ratio of 60:40 as mobile phase, at a flow rate of 1.0 ml/min. Detection was carried out at 225 nm by a SPD-20A Prominence UV/Vis detector. Result: Rosiglitazone and gliclazide were eluted with retention times of 17.36 and 7.06 min, respectively. The Beer-Lambert law was obeyed over the concentration ranges of 5 to 70 µg/ml and 2 to 12 µg/ml for rosiglitazone and gliclazide, respectively. Conclusion: The high recovery and low coefficients of variation confirm the suitability of the method for simultaneous analysis of both drugs in a tablet dosage form. Statistical analysis proves that the method is sensitive and significant for the analysis of rosiglitazone and gliclazide in pure form and in pharmaceutical dosage form without any interference from the excipients. The method was validated in accordance with ICH guidelines. Validation revealed the method is specific, rapid, accurate, precise, reliable, and reproducible.

  16. Validation of energy intake estimated from a food frequency questionnaire: a doubly labelled water study.

    Science.gov (United States)

    Andersen, L Frost; Tomten, H; Haggarty, P; Løvø, A; Hustvedt, B-E

    2003-02-01

    The validation of dietary assessment methods is critical in the evaluation of the relation between dietary intake and health. The aim of this study was to assess the validity of a food frequency questionnaire by comparing energy intake with energy expenditure measured with the doubly labelled water method. Total energy expenditure was measured with the doubly labelled water (DLW) method during a 10 day period. Furthermore, the subjects filled in the food frequency questionnaire about 18-35 days after the DLW phase of the study was completed. Twenty-one healthy, non-pregnant females volunteered to participate in the study; only 17 subjects completed it. The group energy intake was on average 10% lower than the energy expenditure, but the difference was not statistically significant. However, there was a wide range in reporting accuracy: seven subjects were identified as acceptable reporters, eight as under-reporters and two as over-reporters. The 95% limits of agreement in a Bland and Altman plot for energy intake and energy expenditure ranged from -5 to 3 MJ. The data showed that there was substantial variability in the accuracy of the food frequency questionnaire at the individual level. Furthermore, the results showed that the questionnaire was more accurate for groups than for individuals.
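
    The Bland and Altman limits of agreement mentioned above are the mean intake-expenditure difference plus or minus 1.96 standard deviations of the differences; a sketch with hypothetical intake/expenditure pairs (MJ/day):

```python
import numpy as np

def bland_altman_limits(intake, expenditure):
    """95% limits of agreement between reported energy intake and DLW
    energy expenditure: mean difference +/- 1.96 * SD of differences."""
    d = np.asarray(intake, float) - np.asarray(expenditure, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)   # sample SD of the differences
    return bias - half_width, bias + half_width

# Hypothetical paired measurements (MJ/day), not the study's data
low, high = bland_altman_limits([9.1, 10.4, 8.2, 11.0],
                                [10.0, 10.9, 9.5, 10.5])
print(f"limits of agreement: {low:.2f} to {high:.2f} MJ")
```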

  17. CSNI Integral Test Facility Matrices for Validation of Best-Estimate Thermal-Hydraulic Computer Codes

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Internationally agreed Integral Test Facility (ITF) matrices for validation of realistic thermal hydraulic system computer codes were established. ITF development is mainly for Pressurised Water Reactors (PWRs) and Boiling Water Reactors (BWRs). A separate activity was for Russian Pressurised Water-cooled and Water-moderated Energy Reactors (WWER). Firstly, the main physical phenomena that occur during considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. In this paper some specific examples from the ITF matrices will also be provided. The matrices will be a guide for code validation, will be a basis for comparisons of code predictions performed with different system codes, and will contribute to the quantification of the uncertainty range of code model predictions. In addition to this objective, the construction of such a matrix is an attempt to record information which has been generated around the world over the last years, so that it is more accessible to present and future workers in that field than would otherwise be the case.

  18. A Validated RP-HPLC Method for Simultaneous Estimation of Atenolol and Indapamide in Pharmaceutical Formulations

    Directory of Open Access Journals (Sweden)

    G. Tulja Rani

    2011-01-01

    Full Text Available A simple, fast, precise, selective and accurate RP-HPLC method was developed and validated for the simultaneous determination of atenolol and indapamide from bulk and formulations. Chromatographic separation was achieved isocratically on a Waters C18 column (250×4.6 mm, 5 µ particle size using a mobile phase, methanol and water (adjusted to pH 2.7 with 1% orthophosphoric acid in the ratio of 80:20. The flow rate was 1 mL/min and effluent was detected at 230 nm. The retention time of atenolol and indapamide were 1.766 min and 3.407 min. respectively. Linearity was observed in the concentration range of 12.5-150 µg/mL for atenolol and 0.625-7.5 µg/mL for indapamide. Percent recoveries obtained for both the drugs were 99.74-100.06% and 98.65-99.98% respectively. The method was validated according to the ICH guidelines with respect to specificity, linearity, accuracy, precision and robustness. The method developed can be used for the routine analysis of atenolol and indapamide from their combined dosage form.

  19. Validating novel air pollution sensors to improve exposure estimates for epidemiological analyses and citizen science.

    Science.gov (United States)

    Jerrett, Michael; Donaire-Gonzalez, David; Popoola, Olalekan; Jones, Roderic; Cohen, Ronald C; Almanza, Estela; de Nazelle, Audrey; Mead, Iq; Carrasco-Turigas, Glòria; Cole-Hunter, Tom; Triguero-Mas, Margarita; Seto, Edmund; Nieuwenhuijsen, Mark

    2017-10-01

    Low cost, personal air pollution sensors may reduce exposure measurement errors in epidemiological investigations and contribute to citizen science initiatives. Here we assess the validity of a low cost personal air pollution sensor. Study participants were drawn from two ongoing epidemiological projects in Barcelona, Spain. Participants repeatedly wore the pollution sensor - which measured carbon monoxide (CO), nitric oxide (NO), and nitrogen dioxide (NO2). We also compared personal sensor measurements to those from more expensive instruments. Our personal sensors had moderate to high correlations with government monitors with averaging times of 1-h and 30-min epochs (r ~ 0.38-0.8) for NO and CO, but had low to moderate correlations with NO2 (~0.04-0.67). Correlations between the personal sensors and more expensive research instruments were higher than with the government monitors. The sensors were able to detect high and low air pollution levels in agreement with expectations (e.g., high levels on or near busy roadways and lower levels in background residential areas and parks). Our findings suggest that the low cost, personal sensors have potential to reduce exposure measurement error in epidemiological studies and provide valid data for citizen science studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.
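
    For contrast with the FE workflow, the simple phenomenological (single-exponential) cooling model mentioned at the start can be inverted for the time since death in a few lines; the values of T0 and k below are illustrative assumptions, not forensic constants:

```python
import math

def death_time_newton(T_measured, T_ambient, T0=37.2, k=0.063):
    """Newtonian cooling: T(t) = T_amb + (T0 - T_amb) * exp(-k * t).
    Solving for t gives the elapsed time since death in hours.
    T0 (initial body temperature, deg C) and k (cooling constant, 1/h)
    are illustrative placeholder values."""
    return -math.log((T_measured - T_ambient) / (T0 - T_ambient)) / k

# Hypothetical measurement: 30 deg C body in a 20 deg C room
t = death_time_newton(T_measured=30.0, T_ambient=20.0)
print(f"estimated time since death: {t:.1f} h")
```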

  1. Validity of the WHO VAW study instrument for estimating gender-based violence against women

    Directory of Open Access Journals (Sweden)

    Lilia Blima Schraiber

    2010-08-01

    less favorable outcomes, with the exception of suicide attempts in São Paulo. CONCLUSIONS: The instrument was shown to be adequate for estimating gender-based violence against women perpetrated by intimate partners and can be used in studies on this subject. It has high internal consistency and a capacity to discriminate between different forms of violence (psychological, physical and sexual perpetrated in different social contexts. The instrument also characterizes the female victim and her relationship with the aggressor, thereby facilitating gender analysis.

  2. Parameters estimation for reactive transport: A way to test the validity of a reactive model

    Science.gov (United States)

    Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme

    The chemical parameters used in reactive transport models are not known accurately due to the complexity and heterogeneous conditions of a real domain. We present an efficient algorithm for estimating the chemical parameters using a Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.
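
    The Monte-Carlo optimisation idea can be sketched as a simple random search over parameter bounds; the misfit function and bounds below are toys standing in for the reactive transport model:

```python
import random

def monte_carlo_fit(misfit, bounds, n_draws=5000, seed=1):
    """Random-search Monte-Carlo estimation: draw parameter sets
    uniformly within bounds and keep the one minimising the data misfit.
    A minimal sketch of the idea; the paper's algorithm is more elaborate."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_draws):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        cost = misfit(params)
        if cost < best_cost:
            best, best_cost = params, cost
    return best, best_cost

# Toy misfit with known optimum at (2, -1), standing in for the
# model-vs-experiment residual of the sorption parameters
misfit = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
params, cost = monte_carlo_fit(misfit, bounds=[(-5, 5), (-5, 5)])
```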

  3. Theoretical estimation and validation of radiation field in alkaline hydrolysis plant

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Sanjay; Krishnamohanan, T.; Gopalakrishnan, R.K., E-mail: singhs@barc.gov.in [Radiation Safety Systems Division, Bhabha Atomic Research Centre, Mumbai (India); Anand, S. [Health Physics Division, Bhabha Atomic Research Centre, Mumbai (India); Pancholi, K. C. [Waste Management Division, Bhabha Atomic Research Centre, Mumbai (India)

    2014-07-01

    Spent organic solvent (30% TBP + 70% n-dodecane) from the reprocessing facility is treated at ETP in the Alkaline Hydrolysis Plant (AHP) and Organic Waste Incineration (ORWIN) Facility. In AHP-ORWIN, there are three horizontal cylindrical tanks of 2.0 m³ operating capacity used for waste storage and transfer: the Aqueous Waste Tank (AWT), the Waste Receiving Tank (WRT) and the Dodecane Waste Tank (DWT). These tanks are housed in a shielded room in this facility. The Monte Carlo N-Particle (MCNP) radiation transport code was used to estimate ambient radiation field levels when the storage tanks contain hold-up volumes at the desired specific activity levels. In this paper the theoretically estimated radiation field values are compared with the actual measured dose.

  4. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship

    OpenAIRE

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate Bench press 1 repetition maximum (1RM) from the force - velocity relationship. Twenty seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force - velocity test using a Bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15...

  5. Validation of real-time zenith tropospheric delay estimation with TOMION software within WAGNSS networks

    OpenAIRE

    Graffigna, Victoria

    2017-01-01

    The TOmographic Model of the IONospheric electron content (TOMION) software implements a simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) emulating real time and its performance is evaluated through a comparative analysis with a built-in GIPSY estima...

  6. Data Based Parameter Estimation Method for Circular-scanning SAR Imaging

    Directory of Open Access Journals (Sweden)

    Chen Gong-bo

    2013-06-01

    Full Text Available The circular-scanning Synthetic Aperture Radar (SAR) is a novel working mode whose image quality is closely related to the accuracy of the imaging parameters, especially given the inaccuracy of the real speed of the motion. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocity of the radar platform and the scanning angle of the radar antenna is proposed in this paper. By referring to the basic concept of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first improved. The optimal parameter approximation based on the least square criterion is then realized in solving the equations derived from the data processing. The simulation results verify the validity of the proposed scheme.
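
    The least-square criterion named above reduces to an ordinary linear least-squares problem once the measurement model is linearised; the design matrix and observations below are illustrative, not the actual Doppler geometry:

```python
import numpy as np

def estimate_parameters(A, b):
    """Least-squares parameter estimation: find x minimising ||A x - b||^2.
    The matrix linking Doppler measurements to platform velocity and scan
    angle is problem-specific; A and b here are illustrative."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Toy overdetermined system: fit intercept and slope to 3 noisy samples
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([0.9, 2.1, 3.0])
print(estimate_parameters(A, b))
```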

  7. Vehicle Sideslip Angle Estimation Based on Hybrid Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jing Li

    2016-01-01

    Full Text Available Vehicle sideslip angle is essential for active safety control systems. This paper presents a new hybrid Kalman filter to estimate vehicle sideslip angle based on a 3-DoF nonlinear vehicle dynamic model combined with the Magic Formula tire model. The hybrid Kalman filter is realized by combining a square-root cubature Kalman filter (SCKF), which has quick convergence and numerical stability, with a square-root cubature based receding horizon Kalman FIR filter (SCRHKF), which has robustness against model uncertainty and temporary noise. Moreover, SCKF and SCRHKF work in parallel, and the estimation outputs of the two filters are merged by an interacting multiple model (IMM) approach. Experimental results show the accuracy and robustness of the hybrid Kalman filter.
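
    The IMM-style merging of the two parallel filter outputs can be sketched as a likelihood-weighted combination; the weights and one-dimensional sideslip state below are illustrative (in practice the weights come from each filter's measurement likelihood):

```python
import numpy as np

def imm_merge(x1, P1, x2, P2, w1, w2):
    """Merge two filter estimates (e.g. SCKF and SCRHKF) IMM-style:
    a probability-weighted mean, with the merged covariance inflated by
    the spread between the individual estimates."""
    x = w1 * x1 + w2 * x2
    P = (w1 * (P1 + np.outer(x1 - x, x1 - x))
         + w2 * (P2 + np.outer(x2 - x, x2 - x)))
    return x, P

# Hypothetical sideslip estimates (rad) and variances from two filters
x, P = imm_merge(np.array([0.10]), np.eye(1) * 0.01,
                 np.array([0.14]), np.eye(1) * 0.04, 0.7, 0.3)
```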

  8. Validating CDIAC's population-based approach to the disaggregation of within-country CO2 emissions

    International Nuclear Information System (INIS)

    Cushman, R.M.; Beauchamp, J.J.; Brenkert, A.L.

    1998-01-01

    The Carbon Dioxide Information Analysis Center produces and distributes a data base of CO2 emissions from fossil-fuel combustion and cement production, expressed as global, regional, and national estimates. CDIAC also produces a companion data base, expressed on a one-degree latitude-longitude grid. To do this gridding, emissions within each country are spatially disaggregated according to the distribution of population within that country. Previously, the lack of within-country emissions data prevented a validation of this approach. But emissions inventories are now becoming available for most US states. An analysis of these inventories confirms that population distribution explains most, but not all, of the variance in the distribution of CO2 emissions within the US. Additional sources of variance (coal production, non-carbon energy sources, and interstate electricity transfers) are explored, with the hope that the spatial disaggregation of emissions can be improved.
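
    The population-based disaggregation being validated is, at its core, a proportional split of the national total across grid cells; a minimal sketch with a hypothetical four-cell country:

```python
def disaggregate_emissions(national_total, cell_population):
    """Disaggregate a national CO2 emissions total onto grid cells in
    proportion to each cell's share of national population — the gridding
    approach the study validates against US state inventories."""
    total_pop = sum(cell_population)
    return [national_total * p / total_pop for p in cell_population]

# Hypothetical country: 100 units of emissions, four cells of population
print(disaggregate_emissions(100.0, [10, 30, 40, 20]))
# → [10.0, 30.0, 40.0, 20.0]
```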

  9. Validity of rapid estimation of erythrocyte volume in the diagnosis of polycytemia vera

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, S.; Roedbro, P.

    1989-01-01

    In the diagnosis of polycytemia vera, estimation of erythrocyte volume (EV) from plasma volume (PV) and venous hematocrit (Hct_v) is usually thought unadvisable, because the ratio of whole body hematocrit to venous hematocrit (f ratio) is higher in patients with splenomegaly than in normal subjects, and varies considerably between individuals. We determined the mean f ratio in 232 consecutive patients suspected of polycytemia vera (mean f = 0.967; SD 0.048) and used it with each patient's PV and Hct_v to calculate an estimated normalised EV_n. With measured EV as a reference value, EV_n was investigated as a diagnostic test. By means of two cut-off levels the EV_n values could be divided into EV_n elevated, EV_n not elevated (both with high predictive values), and an EV_n borderline group. The size of the borderline EV_n group ranged from 5% to 46% depending on the position of the cut-off levels, i.e. on the efficiency demanded from the diagnostic test. EV can safely and rapidly be estimated from PV and Hct_v, if the mean f is determined from the relevant population, and if the results in an easily definable borderline range of EV_n values are supplemented by direct EV determination.
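
    Using the standard dilution relation, whole-body hematocrit H = f × Hct_v, total blood volume = PV / (1 − H), and hence EV_n = PV · H / (1 − H); a sketch under that assumption (the paper's exact computation may differ in detail, and the patient values below are hypothetical):

```python
def estimated_ev(pv, hct_v, f_mean=0.967):
    """Normalised erythrocyte volume EV_n (litres) from plasma volume PV
    (litres) and venous hematocrit, using the population mean f ratio.
    Whole-body hematocrit H = f * Hct_v, so EV_n = PV * H / (1 - H)."""
    h = f_mean * hct_v
    return pv * h / (1.0 - h)

# Hypothetical patient: PV = 3.0 L, venous hematocrit 0.45
ev = estimated_ev(3.0, 0.45)
print(f"EV_n = {ev:.2f} L")
```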

  10. Adaptive algorithm for mobile user positioning based on environment estimation

    Directory of Open Access Journals (Sweden)

    Grujović Darko

    2014-01-01

    Full Text Available This paper analyzes the challenges of realizing an infrastructure-independent, low-cost positioning method in cellular networks based on the RSS (Received Signal Strength) parameter, an auxiliary timing parameter, and environment estimation. The proposed algorithm has been evaluated using field measurements collected from a GSM (Global System for Mobile Communications) network, but it is technology independent and can also be applied in UMTS (Universal Mobile Telecommunications System) and LTE (Long-Term Evolution) networks.

  11. Comparison of physically based catchment models for estimating Phosphorus losses

    OpenAIRE

    Nasr, Ahmed Elssidig; Bruen, Michael

    2003-01-01

    As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with a potential for use in estimating phosphorus losses, for use in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and were implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...

  12. Estimation of the flow resistances exerted in coronary arteries using a vessel length-based method.

    Science.gov (United States)

    Lee, Kyung Eun; Kwon, Soon-Sung; Ji, Yoon Cheol; Shin, Eun-Seok; Choi, Jin-Ho; Kim, Sung Joon; Shim, Eun Bo

    2016-08-01

    Flow resistances exerted in the coronary arteries are the key parameters for the image-based computer simulation of coronary hemodynamics. The resistances depend on the anatomical characteristics of the coronary system. A simple and reliable estimation of the resistances is a compulsory procedure to compute the fractional flow reserve (FFR) of stenosed coronary arteries, an important clinical index of coronary artery disease. The cardiac muscle volume reconstructed from computed tomography (CT) images has been used to assess the resistance of the feeding coronary artery (muscle volume-based method). In this study, we estimate the flow resistances exerted in coronary arteries by using a novel method. Based on a physiological observation that longer coronary arteries have more daughter branches feeding a larger mass of cardiac muscle, the method measures the vessel lengths from coronary angiogram or CT images (vessel length-based method) and predicts the coronary flow resistances. The underlying equations are derived from the physiological relation among flow rate, resistance, and vessel length. To validate the present estimation method, we calculate the coronary flow division over coronary major arteries for 50 patients using the vessel length-based method as well as the muscle volume-based one. These results are compared with the direct measurements in a clinical study. Further proving the usefulness of the present method, we compute the coronary FFR from the images of optical coherence tomography.
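The physiological assumption above (longer arteries feed more muscle, so flow divides in proportion to vessel length) can be sketched as a proportional flow split, with a lumped outlet resistance following from a perfusion pressure. Lengths, flows and pressures below are hypothetical, not the paper's formulation:

```python
def flow_split_by_length(total_flow, lengths):
    """Split total coronary flow across major arteries in proportion
    to measured vessel length, per the physiological observation
    described above.  Illustrative, not the paper's exact equations."""
    total_len = sum(lengths)
    return [total_flow * length / total_len for length in lengths]

def outlet_resistance(perfusion_pressure, flow):
    """Lumped resistance consistent with a target branch flow: R = dP / Q."""
    return perfusion_pressure / flow

# Hypothetical lengths (mm) of three major arteries, total flow 4.0 mL/s
flows = flow_split_by_length(4.0, [110.0, 60.0, 30.0])
print([round(q, 2) for q in flows], round(outlet_resistance(90.0, flows[0]), 1))
```

The longest branch receives the largest flow share and therefore the smallest outlet resistance, which is the behaviour the FFR simulation needs at each coronary outlet.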

  13. Experimental and Analytical Studies on Improved Feedforward ML Estimation Based on LS-SVR

    Directory of Open Access Journals (Sweden)

    Xueqian Liu

    2013-01-01

    Full Text Available The maximum likelihood (ML) algorithm is the most common and effective parameter estimation method. However, when dealing with small samples and low signal-to-noise ratio (SNR), threshold effects arise and estimation performance degrades greatly. It has been proved that the support vector machine (SVM) is suitable for small samples. Consequently, we employ the linear relationship between the inputs and outputs of least squares support vector regression (LS-SVR) and regard the LS-SVR process as a time-varying linear filter to increase the input SNR of received signals and decrease the threshold value of the mean square error (MSE) curve. Furthermore, taking single-tone sinusoidal frequency estimation as an example and integrating data analysis with experimental validation, it is verified that if the LS-SVR's parameters are set appropriately, the LS-SVR process not only preserves the single-tone sinusoid and additive white Gaussian noise (AWGN) channel characteristics of the original signals, but also improves the frequency estimation performance. In the simulations, the LS-SVR process is applied to two common and representative single-tone sinusoidal ML frequency estimation algorithms, the DFT-based frequency-domain periodogram (FDP) and the phase-based Kay algorithm, and the threshold values of their MSE curves are decreased by 0.3 dB and 1.2 dB, respectively, which clearly exhibits the advantage of the proposed algorithm.

  14. (Re)conceptualizing validity in (outcomes-based) assessment

    African Journals Online (AJOL)

    Erna Kinsey

    how the construct validity has evolved within social research discourses. Third, we invoke particular ..... understanding and, ideally, self-determination through research participation. .... Handbook of classroom assessment. San Diego, CA: ...

  15. Development and Validation of UV Spectrophotometric Method For Estimation of Dolutegravir Sodium in Tablet Dosage Form

    International Nuclear Information System (INIS)

    Balasaheb, B.G.

    2015-01-01

    A simple, rapid, precise and accurate spectrophotometric method has been developed for quantitative analysis of Dolutegravir sodium in tablet formulations. The initial stock solution of Dolutegravir sodium was prepared in methanol solvent and subsequent dilution was done in water. The standard solution of Dolutegravir sodium in water showed maximum absorption at wavelength 259.80 nm. The drug obeyed the Beer-Lambert law in the concentration range of 5-40 μg/mL with a coefficient of correlation (R²) of 0.9992. The method was validated as per the ICH guidelines. The developed method can be adopted in routine analysis of Dolutegravir sodium in bulk or tablet dosage form and it involves relatively low cost solvents and no complex extraction techniques. (author)
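The linearity check behind a Beer-Lambert calibration is an ordinary least-squares line of absorbance against concentration, inverted to read off an unknown concentration. A self-contained sketch with hypothetical data (not the paper's measurements):

```python
# Least-squares calibration line (absorbance vs. concentration), the
# computation behind a Beer-Lambert linearity check.  Toy data only.

def fit_line(x, y):
    """Ordinary least squares slope/intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

conc = [5, 10, 20, 30, 40]                   # ug/mL, hypothetical standards
absorbance = [0.11, 0.22, 0.44, 0.66, 0.88]  # perfectly linear toy readings
slope, intercept = fit_line(conc, absorbance)
unknown = (0.50 - intercept) / slope         # invert the line for an unknown
print(round(slope, 3), round(unknown, 2))
```

In a real validation the same fit supplies R², and the range over which the line holds defines the reportable concentration range.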

  16. Development and Validation of Liquid Chromatographic Method for Estimation of Naringin in Nanoformulation

    Directory of Open Access Journals (Sweden)

    Kranti P. Musmade

    2014-01-01

    Full Text Available A simple, precise, accurate, rapid, and sensitive reverse phase high performance liquid chromatography (RP-HPLC) method with UV detection has been developed and validated for quantification of naringin (NAR) in novel pharmaceutical formulation. NAR is a polyphenolic flavonoid present in most of the citrus plants having variety of pharmacological activities. Method optimization was carried out by considering the various parameters such as effect of pH and column. The analyte was separated by employing a C18 (250.0 × 4.6 mm, 5 μm) column at ambient temperature in isocratic conditions using phosphate buffer pH 3.5: acetonitrile (75 : 25% v/v) as mobile phase pumped at a flow rate of 1.0 mL/min. UV detection was carried out at 282 nm. The developed method was validated according to ICH guidelines Q2(R1). The method was found to be precise and accurate on statistical evaluation with a linearity range of 0.1 to 20.0 μg/mL for NAR. The intra- and interday precision studies showed good reproducibility with coefficients of variation (CV) less than 1.0%. The mean recovery of NAR was found to be 99.33 ± 0.16%. The proposed method was found to be highly accurate, sensitive, and robust. The proposed liquid chromatographic method was successfully employed for the routine analysis of said compound in developed novel nanopharmaceuticals. The presence of excipients did not show any interference on the determination of NAR, indicating method specificity.

  17. Validity of transcobalamin II-based radioassay for the determination of serum vitamin B12 concentrations

    International Nuclear Information System (INIS)

    Paltridge, G.; Rudzki, Z.; Ryall, R.G.

    1980-01-01

    A valid radioassay for the estimation of serum vitamin B12 in the presence of naturally occurring vitamin B12 (= cobalamin) analogues can be operated if serum transcobalamin II (TC II) is used as the binding protein. Serum samples that gave diagnostically discrepant results when their vitamin B12 content was analysed (i) by a commercial radioassay known to be susceptible to interference from cobalamin analogues, and (ii) by microbiological assay, were further analysed by an alternative radioassay which uses the transcobalamins (principally TC II) of diluted normal serum as the assay binding protein. Concordance between the results from microbiological assay and the TC II-based radioassay was found in all cases. In an extended study over a three-year period, all routine serum samples sent for vitamin B12 analysis that had a vitamin B12 content of less than 320 ng/l by the TC II-based radioassay (reference range 200-850 ng/l) were reanalysed using an established microbiological method. Over 1000 samples were thus analysed. The data are presented to demonstrate the validity of the TC II-based radioassay results in this group of patients, serum samples from which are most likely to produce diagnostically erroneous vitamin B12 results when analysed by a radioassay that is less specific for cobalamins. (author)

  18. Validation of Smartphone Based Retinal Photography for Diabetic Retinopathy Screening.

    Science.gov (United States)

    Rajalakshmi, Ramachandran; Arulmalar, Subramanian; Usha, Manoharan; Prathiba, Vijayaraghavan; Kareemuddin, Khaji Syed; Anjana, Ranjit Mohan; Mohan, Viswanathan

    2015-01-01

    To evaluate the sensitivity and specificity of the "fundus on phone" (FOP) camera, a smartphone based retinal imaging system, as a screening tool for diabetic retinopathy (DR) detection and DR severity in comparison with 7-standard field digital retinal photography. Single-site, prospective, comparative, instrument validation study. 301 patients (602 eyes) with type 2 diabetes underwent standard seven-field digital fundus photography with both Carl Zeiss fundus camera and indigenous FOP at a tertiary care diabetes centre in South India. Grading of DR was performed by two independent retina specialists using the modified Early Treatment of Diabetic Retinopathy Study grading system. Sight threatening DR (STDR) was defined by the presence of proliferative DR (PDR) or diabetic macular edema. The sensitivity, specificity and image quality were assessed. The mean age of the participants was 53.5 ± 9.6 years and mean duration of diabetes 12.5 ± 7.3 years. The Zeiss camera showed that 43.9% had non-proliferative DR (NPDR) and 15.3% had PDR while the FOP camera showed that 40.2% had NPDR and 15.3% had PDR. The sensitivity and specificity for detecting any DR by FOP were 92.7% (95% CI 87.8-96.1) and 98.4% (95% CI 94.3-99.8) respectively and the kappa (κ) agreement was 0.90 (95% CI 0.85-0.95, p < 0.001), while for STDR the sensitivity was 87.9% (95% CI 83.2-92.9), specificity 94.9% (95% CI 89.7-98.2) and κ agreement was 0.80 (95% CI 0.71-0.89, p < 0.001), compared to conventional photography. Retinal photography using the FOP camera is effective for screening and diagnosis of DR and STDR with high sensitivity and specificity and has substantial agreement with conventional retinal photography.
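The reported sensitivity, specificity and κ agreement are standard functions of the 2×2 table of FOP grading versus the reference camera. A sketch with hypothetical counts (not the study's data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 table
    (screening test vs. reference standard).  Standard definitions;
    the counts passed in below are hypothetical."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                   # true positive rate
    spec = tn / (tn + fp)                   # true negative rate
    po = (tp + tn) / n                      # observed agreement
    # chance agreement from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

sens, spec, kappa = screening_metrics(tp=90, fp=5, fn=10, tn=95)
print(round(sens, 2), round(spec, 2), round(kappa, 2))
```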

  19. Validation of Smartphone Based Retinal Photography for Diabetic Retinopathy Screening.

    Directory of Open Access Journals (Sweden)

    Ramachandran Rajalakshmi

    Full Text Available To evaluate the sensitivity and specificity of the "fundus on phone" (FOP) camera, a smartphone based retinal imaging system, as a screening tool for diabetic retinopathy (DR) detection and DR severity in comparison with 7-standard field digital retinal photography. Single-site, prospective, comparative, instrument validation study. 301 patients (602 eyes) with type 2 diabetes underwent standard seven-field digital fundus photography with both Carl Zeiss fundus camera and indigenous FOP at a tertiary care diabetes centre in South India. Grading of DR was performed by two independent retina specialists using the modified Early Treatment of Diabetic Retinopathy Study grading system. Sight threatening DR (STDR) was defined by the presence of proliferative DR (PDR) or diabetic macular edema. The sensitivity, specificity and image quality were assessed. The mean age of the participants was 53.5 ± 9.6 years and mean duration of diabetes 12.5 ± 7.3 years. The Zeiss camera showed that 43.9% had non-proliferative DR (NPDR) and 15.3% had PDR while the FOP camera showed that 40.2% had NPDR and 15.3% had PDR. The sensitivity and specificity for detecting any DR by FOP was 92.7% (95% CI 87.8-96.1) and 98.4% (95% CI 94.3-99.8) respectively and the kappa (κ) agreement was 0.90 (95% CI 0.85-0.95, p < 0.001), while for STDR the sensitivity was 87.9% (95% CI 83.2-92.9), specificity 94.9% (95% CI 89.7-98.2) and κ agreement was 0.80 (95% CI 0.71-0.89, p < 0.001), compared to conventional photography. Retinal photography using the FOP camera is effective for screening and diagnosis of DR and STDR with high sensitivity and specificity and has substantial agreement with conventional retinal photography.

  20. VALIDITY OF A COMMERCIAL LINEAR ENCODER TO ESTIMATE BENCH PRESS 1 RM FROM THE FORCE-VELOCITY RELATIONSHIP

    Directory of Open Access Journals (Sweden)

    Laurent Bosquet

    2010-09-01

    Full Text Available The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate bench press 1 repetition maximum (1 RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a bench press lifting task in a random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab's software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p < 0.001) but largely different (bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1 RM. It was concluded that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training induced adaptations, but also that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level.
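The force-velocity approach fits a linear load-velocity relationship over submaximal lifts and extrapolates it to a minimal velocity to predict the 1 RM load. The sketch below is one common formulation, not necessarily the commercial software's exact method; the loads, velocities and the 1 RM velocity threshold are assumptions:

```python
def estimate_1rm_from_fv(loads_kg, velocities, v_1rm=0.15):
    """Fit a linear load-velocity relationship by least squares and
    extrapolate to an assumed minimal 1 RM velocity (m/s).  One common
    approach; the Musclelab software's exact method is not public here."""
    n = len(loads_kg)
    mv = sum(velocities) / n
    ml = sum(loads_kg) / n
    svv = sum((v - mv) ** 2 for v in velocities)
    svl = sum((v - mv) * (l - ml) for v, l in zip(velocities, loads_kg))
    slope = svl / svv                    # kg per (m/s); negative
    intercept = ml - slope * mv          # extrapolated load at zero velocity
    return intercept + slope * v_1rm     # load at the assumed 1 RM velocity

# Hypothetical submaximal lifts: heavier loads move slower
one_rm = estimate_1rm_from_fv([20, 30, 40, 50], [1.2, 0.9, 0.6, 0.3])
print(round(one_rm, 1))
```

The bias and limits of agreement reported above quantify how far such an extrapolated estimate sits from a measured 1 RM.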

  1. Ratio-based estimators for a change point in persistence.

    Science.gov (United States)

    Halunga, Andreea G; Osborn, Denise R

    2012-11-01

    We study estimation of the date of change in persistence, from [Formula: see text] to [Formula: see text] or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from [Formula: see text] to [Formula: see text]. A Monte Carlo study confirms the large sample downward bias and also finds substantial biases in moderate sized samples, partly due to properties at the end points of the search interval.

  2. Estimation of Sideslip Angle Based on Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Yupeng Huang

    2017-01-01

    Full Text Available The sideslip angle plays an extremely important role in vehicle stability control, but in production cars it cannot be obtained directly from a sensor because of cost; it is therefore essential to estimate the sideslip angle indirectly from other vehicle motion parameters, and an estimation algorithm that is both real-time and accurate is critical. Traditional estimation methods based on the Kalman filter are valid in the linear operating region of the vehicle; however, on low-adhesion roads vehicles exhibit obvious nonlinear characteristics. In this paper, an extended Kalman filtering algorithm is put forward to account for the nonlinear characteristics of the tire, and it was verified by Carsim and Simulink joint simulation, including double-lane-change maneuvers on wet cement and on ice and snow roads. To test and verify the effect of the extended Kalman filtering estimation algorithm, a real vehicle test was carried out on a limit test field. The experimental results show that the accuracy of the vehicle sideslip angle acquired by the extended Kalman filtering algorithm is obviously higher than that acquired by Kalman filtering in the nonlinear region.
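The extended Kalman filter cycle the paper relies on can be sketched generically: propagate the state through the nonlinear process model, then correct it with a linearised measurement update. The scalar-state toy below is an assumption for illustration only, not the paper's vehicle/tire model:

```python
import math

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of a scalar extended Kalman filter.
    f/h are the (possibly nonlinear) process and measurement models,
    F/H their derivatives, Q/R the process and measurement noise."""
    # Predict through the process model
    x_pred = f(x, u)
    P_pred = F(x, u) * P * F(x, u) + Q
    # Update with the linearised measurement model
    y = z - h(x_pred)                       # innovation
    S = H(x_pred) * P_pred * H(x_pred) + R  # innovation variance
    K = P_pred * H(x_pred) / S              # Kalman gain
    return x_pred + K * y, (1 - K * H(x_pred)) * P_pred

# Toy nonlinear system: x' = 0.9*x + u, measurement z = sin(x)
f = lambda x, u: 0.9 * x + u
F = lambda x, u: 0.9
h = lambda x: math.sin(x)
H = lambda x: math.cos(x)

x, P = 0.0, 1.0
x, P = ekf_step(x, P, u=0.1, z=math.sin(0.12), f=f, F=F, h=h, H=H, Q=0.01, R=0.04)
print(round(x, 3))
```

In the sideslip application the state would include the sideslip angle and yaw rate, with the nonlinear tire model entering through f and h.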

  3. A novel ULA-based geometry for improving AOA estimation

    Directory of Open Access Journals (Sweden)

    Akbari Farida

    2011-01-01

    Full Text Available Due to relatively simple implementation, Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have a uniform performance in all directions and Angle of Arrival (AOA) estimation performance degrades considerably in the angles close to endfire. In this article, a new configuration is proposed which can solve this problem. Proposed Array (PA) configuration adds two elements to the ULA in top and bottom of the array axis. By extending signal model of the ULA to the new proposed ULA-based array, AOA estimation performance has been compared in terms of angular accuracy and resolution threshold through two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, Root Mean Square Error (RMSE) of the detected angles descends as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry introduces uniform accurate performance and higher resolution in middle angles as well as border ones. The PA also presents less RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for the border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries: L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.

  4. A novel ULA-based geometry for improving AOA estimation

    Science.gov (United States)

    Shirvani-Moghaddam, Shahriar; Akbari, Farida

    2011-12-01

    Due to relatively simple implementation, Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have a uniform performance in all directions and Angle of Arrival (AOA) estimation performance degrades considerably in the angles close to endfire. In this article, a new configuration is proposed which can solve this problem. Proposed Array (PA) configuration adds two elements to the ULA in top and bottom of the array axis. By extending signal model of the ULA to the new proposed ULA-based array, AOA estimation performance has been compared in terms of angular accuracy and resolution threshold through two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, Root Mean Square Error (RMSE) of the detected angles descends as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry introduces uniform accurate performance and higher resolution in middle angles as well as border ones. The PA also presents less RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for the border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries: L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.
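AOA estimation on a ULA rests on the steering-vector model that MUSIC and MVDR share: element k sees the source with a phase shift proportional to k·sin(θ). The sketch below uses a simpler conventional (Bartlett) beamformer scan rather than MUSIC or MVDR, purely to illustrate that ULA machinery; the array size, half-wavelength spacing and source angle are hypothetical:

```python
import cmath, math

def steering(theta_deg, m, d=0.5):
    """ULA steering vector for m elements with spacing d in wavelengths."""
    phase = 2 * math.pi * d * math.sin(math.radians(theta_deg))
    return [cmath.exp(-1j * phase * k) for k in range(m)]

def beamformer_aoa(snapshot, m, d=0.5):
    """Scan candidate angles with a conventional (Bartlett) beamformer
    and return the angle maximising |a(theta)^H x|."""
    best, best_p = None, -1.0
    for theta in range(-90, 91):
        a = steering(theta, m, d)
        p = abs(sum(ai.conjugate() * xi for ai, xi in zip(a, snapshot)))
        if p > best_p:
            best, best_p = theta, p
    return best

# Noise-free snapshot from a single source at 30 degrees, 8-element ULA
x = steering(30, m=8)
print(beamformer_aoa(x, m=8))
```

Near endfire (θ → ±90°) sin(θ) flattens, so neighbouring angles produce nearly identical steering vectors; that loss of angular sensitivity is exactly the ULA weakness the proposed PA geometry targets.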

  5. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Full Text Available Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS) based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva's model, a radial basis function neural network (RBFNN) based model and a support vector regression (SVR) based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.

  6. Left ventricular strain and its pattern estimated from cine CMR and validation with DENSE

    International Nuclear Information System (INIS)

    Gao, Hao; Luo, Xiaoyu; Allan, Andrew; McComb, Christie; Berry, Colin

    2014-01-01

    Measurement of local strain provides insight into the biomechanical significance of viable myocardium. We attempted to estimate myocardial strain from cine cardiovascular magnetic resonance (CMR) images by using a b-spline deformable image registration method. Three healthy volunteers and 41 patients with either recent or chronic myocardial infarction (MI) were studied at 1.5 Tesla with both cine and DENSE CMR. Regional circumferential and radial left ventricular strains were estimated from cine and DENSE acquisitions. In all healthy volunteers, there was no difference for peak circumferential strain (− 0.18 ± 0.04 versus − 0.18 ± 0.03, p = 0.76) between cine and DENSE CMR, however peak radial strain was overestimated from cine (0.84 ± 0.37 versus 0.49 ± 0.2, p < 0.01). In the patient study, the peak strain patterns predicted by cine were similar to the patterns from DENSE, including the strain evolution related to recovery time and strain patterns related to MI scar extent. Furthermore, cine-derived strain disclosed different strain patterns in MI and non-MI regions, and regions with transmural and non-transmural MI as DENSE. Although there were large variations with radial strain measurements from cine CMR images, useful circumferential strain information can be obtained from routine clinical CMR imaging. Cine strain analysis has potential to improve the diagnostic yield from routine CMR imaging in clinical practice. (paper)

  7. Left ventricular strain and its pattern estimated from cine CMR and validation with DENSE.

    Science.gov (United States)

    Gao, Hao; Allan, Andrew; McComb, Christie; Luo, Xiaoyu; Berry, Colin

    2014-07-07

    Measurement of local strain provides insight into the biomechanical significance of viable myocardium. We attempted to estimate myocardial strain from cine cardiovascular magnetic resonance (CMR) images by using a b-spline deformable image registration method. Three healthy volunteers and 41 patients with either recent or chronic myocardial infarction (MI) were studied at 1.5 Tesla with both cine and DENSE CMR. Regional circumferential and radial left ventricular strains were estimated from cine and DENSE acquisitions. In all healthy volunteers, there was no difference for peak circumferential strain (-0.18 ± 0.04 versus -0.18 ± 0.03, p = 0.76) between cine and DENSE CMR, however peak radial strain was overestimated from cine (0.84 ± 0.37 versus 0.49 ± 0.2, p < 0.01). In the patient study, the peak strain patterns predicted by cine were similar to the patterns from DENSE, including the strain evolution related to recovery time and strain patterns related to MI scar extent. Furthermore, cine-derived strain disclosed different strain patterns in MI and non-MI regions, and regions with transmural and non-transmural MI as DENSE. Although there were large variations with radial strain measurements from cine CMR images, useful circumferential strain information can be obtained from routine clinical CMR imaging. Cine strain analysis has potential to improve the diagnostic yield from routine CMR imaging in clinical practice.

  8. Empirical models validation to estimate global solar irradiance on a horizontal plan in Ouargla, Algeria

    Science.gov (United States)

    Gougui, Abdelmoumen; Djafour, Ahmed; Khelfaoui, Narimane; Boutelli, Halima

    2018-05-01

    In this paper, a comparison between three models for predicting the total solar flux falling on a horizontal surface has been carried out: the Capderou, Perrin & Brichambaut and Hottel models used to estimate global solar radiation. The models are identified and evaluated in the MATLAB environment. The recorded data have been obtained from a small weather station installed at the LAGE laboratory of Ouargla University, Algeria. Solar radiation data have been recorded on four sample days, the 15th day of each month (March, April, May and October). The Root Mean Square Error (RMSE), Correlation Coefficient (CC) and Mean Absolute Percentage Error (MAPE) have also been calculated to test the reliability of the proposed models, and comparisons between the measured and the calculated values have been made. The results obtained in this study show that the Perrin & Brichambaut and Capderou models are more effective for estimating the total solar intensity on a horizontal surface under a clear sky over Ouargla city (latitude 31.95° N, longitude 5° 24' E, altitude 0.141 km above mean sea level); these models are derived from meteorological parameters, geographical location and the number of days since the first of January. The Perrin & Brichambaut and Capderou models give the best fit, with a CC of 0.985-0.999 and 0.932-0.995 respectively, while the Hottel model gives a CC of 0.617-0.942.
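The three scores used to rank the models are standard and easy to reproduce. A self-contained sketch with hypothetical irradiance values (W/m², not the Ouargla measurements):

```python
def validation_stats(measured, modelled):
    """RMSE, Pearson correlation coefficient (CC) and MAPE (%), the
    three scores used above to compare clear-sky irradiance models."""
    n = len(measured)
    rmse = (sum((m - p) ** 2 for m, p in zip(measured, modelled)) / n) ** 0.5
    mm = sum(measured) / n
    mp = sum(modelled) / n
    cov = sum((m - mm) * (p - mp) for m, p in zip(measured, modelled))
    sm = sum((m - mm) ** 2 for m in measured) ** 0.5
    sp = sum((p - mp) ** 2 for p in modelled) ** 0.5
    cc = cov / (sm * sp)
    mape = 100.0 / n * sum(abs((m - p) / m) for m, p in zip(measured, modelled))
    return rmse, cc, mape

# Hypothetical measured vs. modelled global irradiance samples
rmse, cc, mape = validation_stats([500, 700, 900], [480, 710, 880])
print(round(rmse, 1), round(cc, 4), round(mape, 2))
```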

  9. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visual light communications

    Science.gov (United States)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and deteriorate communication performance heavily. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visual light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
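The LS baseline the paper compares against estimates the channel as H[k] = Y[k]/X[k] at pilot subcarriers and interpolates in between. A real-valued toy sketch of that baseline (a real OFDM system uses complex symbols; the pilot layout and channel are assumptions):

```python
# Least-squares pilot channel estimation, the baseline the proposed
# parametric (ZCC-based) method is compared against.  Toy example.

def ls_channel_estimate(pilot_idx, tx_pilots, rx_pilots, n_sub):
    """LS estimate H = Y/X at pilot subcarriers, then linear
    interpolation to the data subcarriers in between."""
    h_p = [y / x for x, y in zip(tx_pilots, rx_pilots)]
    h = [0.0] * n_sub
    pairs = zip(zip(pilot_idx, h_p), zip(pilot_idx[1:], h_p[1:]))
    for (i0, h0), (i1, h1) in pairs:
        for k in range(i0, i1 + 1):
            t = (k - i0) / (i1 - i0)
            h[k] = (1 - t) * h0 + t * h1
    return h

# Hypothetical channel falling linearly from 1.0 to 0.4 over 9 subcarriers
true_h = [1.0 - 0.075 * k for k in range(9)]
pilots = [0, 4, 8]
tx = [1.0, 1.0, 1.0]                       # unit pilot symbols
rx = [true_h[k] * 1.0 for k in pilots]     # noise-free received pilots
est = ls_channel_estimate(pilots, tx, rx, 9)
print([round(v, 3) for v in est])
```

With a smoothly varying (non-sparse) channel like this, the LS-plus-interpolation baseline already does well; the paper's parametric method targets the cases where the frequency response is harder to track.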

  10. Development and Validation of RP-HPLC Method for Simultaneous Estimation of Aspirin and Esomeprazole Magnesium in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    Dipali Patel

    2013-01-01

    Full Text Available A simple, specific, precise, and accurate reversed-phase HPLC method was developed and validated for simultaneous estimation of aspirin and esomeprazole magnesium in tablet dosage forms. The separation was achieved by a HyperChrom ODS-BP C18 column (200 mm × 4.6 mm; 5.0 μm) using acetonitrile : methanol : 0.05 M phosphate buffer at pH 3 adjusted with orthophosphoric acid (25 : 25 : 50, v/v) as eluent, at a flow rate of 1 mL/min. Detection was carried out at wavelength 230 nm. The retention times of aspirin and esomeprazole magnesium were 4.29 min and 6.09 min, respectively. The linearity was established over the concentration ranges of 10–70 μg/mL and 10–30 μg/mL with correlation coefficients (r²) of 0.9986 and 0.9973 for aspirin and esomeprazole magnesium, respectively. The mean recoveries were found to be in the ranges of 99.80–100.57% and 99.70–100.83% for aspirin and esomeprazole magnesium, respectively. The proposed method has been validated as per ICH guidelines and successfully applied to the estimation of aspirin and esomeprazole magnesium in their combined tablet dosage form.

  11. Automated mode shape estimation in agent-based wireless sensor networks

    Science.gov (United States)

    Zimmerman, Andrew T.; Lynch, Jerome P.

    2010-04-01

    Recent advances in wireless sensing technology have made it possible to deploy dense networks of sensing transducers within large structural systems. Because these networks leverage the embedded computing power and agent-based abilities integral to many wireless sensing devices, it is possible to analyze sensor data autonomously and in-network. In this study, market-based techniques are used to autonomously estimate mode shapes within a network of agent-based wireless sensors. Specifically, recent work in both decentralized Frequency Domain Decomposition and market-based resource allocation is leveraged to create a mode shape estimation algorithm derived from free-market principles. This algorithm allows an agent-based wireless sensor network to autonomously shift emphasis between improving mode shape accuracy and limiting the consumption of certain scarce network resources: processing time, storage capacity, and power consumption. The developed algorithm is validated by successfully estimating mode shapes using a network of wireless sensor prototypes deployed on the mezzanine balcony of Hill Auditorium, located on the University of Michigan campus.

  12. A fuel-based approach to estimating motor vehicle exhaust emissions

    Science.gov (United States)

    Singer, Brett Craig

    in California appear to understate total exhaust CO and VOC emissions, while overstating the importance of cold start emissions. The fuel-based approach yields robust, independent, and accurate estimates of on-road vehicle emissions. Fuel-based estimates should be used to validate or adjust official vehicle emission inventories before society embarks on new, more costly air pollution control programs.

  13. Model validation and error estimation of tsunami runup using high resolution data in Sadeng Port, Gunungkidul, Yogyakarta

    Science.gov (United States)

    Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo

    2017-07-01

    A tsunami model using high-resolution geometric data is indispensable in efforts toward tsunami mitigation, especially in tsunami-prone areas, since such data are one of the factors that determine the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami originating from a seismic gap. This paper discusses validation and error estimation of a tsunami model created using high-resolution geometric data in Sadeng Port. The model is validated against the wave height of the 2006 Pangandaran tsunami recorded by the tide gauge of Sadeng, and it accommodates earthquake-tsunami parameters derived from the seismic gap. The validation results using the Student's t-test show that the tsunami heights from the model and the tide gauge observations are statistically equal at the 95% confidence level; the RMSE and NRMSE values are 0.428 m and 22.12%, and the difference in tsunami travel time is 12 minutes.

  14. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  15. Marker-based estimation of genetic parameters in genomics.

    Directory of Open Access Journals (Sweden)

    Zhiqiu Hu

    Linear mixed model (LMM) analysis has been recently used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While the LMM analysis is computationally less intensive than the Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS) as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on the least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match well with the precision of estimation by the LMM methods for data sets with large sample sizes. Its major advantage is that with larger and larger samples, it continues to work with the increasing precision of estimation while the commonly used LMM methods are no longer able to work under our current typical computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative particularly when analyzing 'big' genomic data sets.

  16. Simulation-based seismic loss estimation of seaport transportation system

    International Nuclear Information System (INIS)

    Ung Jin Na; Shinozuka, Masanobu

    2009-01-01

    The seaport transportation system is one of the major lifeline systems in modern society, and its reliable operation is crucial for the well-being of the public. However, past experience has shown that earthquake damage to port components can severely disrupt terminal operation and thus negatively impact the regional economy. The main purpose of this study is to provide a methodology for estimating the effects of an earthquake on the performance of the operation system of a container terminal in seaports. To evaluate the economic loss of the damaged system, an analytical framework is developed by integrating simulation models of terminal operation and fragility curves of port components in the context of seismic risk analysis. For this purpose, a computerized simulation model is developed and verified against actual terminal operation records. Based on the analytical procedure to assess the seismic performance of the terminal, system fragility curves are also developed. This simulation-based loss estimation methodology can be used not only for estimating the seismically induced revenue loss but can also serve as a decision-making tool for selecting a specific seismic retrofit technique on the basis of benefit-cost analysis.

  17. Development and validation of analytical method for the estimation of nateglinide in rabbit plasma

    Directory of Open Access Journals (Sweden)

    Nihar Ranjan Pani

    2012-12-01

    Nateglinide has been widely used in the treatment of type-2 diabetes as an insulin secretagogue. A reliable, rapid, simple and sensitive reversed-phase high performance liquid chromatography (RP-HPLC) method was developed and validated for the determination of nateglinide in rabbit plasma. The method was developed on a Hypersil BDS C-18 column (250 mm × 4.6 mm, 5 µm) using a mobile phase of 10 mM phosphate buffer (pH 2.5) and acetonitrile (35:65, v/v). The eluate was monitored with a UV-vis detector at 210 nm at a flow rate of 1 mL/min. The calibration curve was linear over the concentration range of 25-2000 ng/mL. The retention times of nateglinide and the internal standard (gliclazide) were 9.608 min and 11.821 min, respectively. The developed RP-HPLC method can be successfully applied to the determination of pharmacokinetic parameters of nateglinide in a rabbit model. Keywords: HPLC, Nateglinide, Rabbit plasma, Pharmacokinetics
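A calibration curve like the one described (linear over 25-2000 ng/mL) is typically fitted by least squares and then inverted to quantify unknowns. A minimal sketch with hypothetical peak areas (the response factors here are invented, not the study's):

```python
import numpy as np

# Hypothetical standards: concentration (ng/mL) vs. detector peak area.
conc = np.array([25.0, 100.0, 500.0, 1000.0, 2000.0])
area = np.array([1.2, 4.9, 24.6, 49.1, 98.0])

slope, intercept = np.polyfit(conc, area, 1)  # linear calibration fit

# Invert the calibration to estimate an unknown sample's concentration.
unknown_area = 36.5
estimated_conc = (unknown_area - intercept) / slope
```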

  18. HPLC method development and validation for the estimation of axitinibe in rabbit plasma

    Directory of Open Access Journals (Sweden)

    Achanta Suneetha

    2017-10-01

    A rapid, sensitive, and accurate high performance liquid chromatography method for the determination of axitinibe (AN) in rabbit plasma is developed using crizotinibe as an internal standard (IS). Axitinibe is a tyrosine kinase inhibitor, used in the treatment of advanced kidney cancer, which works by slowing or stopping the growth of cancer cells. The chromatographic separation was performed on a Waters 2695 system with a Kromosil (150 mm × 4.6 mm, 5 µm) column using a mobile phase containing buffer (pH 4.6) and acetonitrile in the ratio of 65:35 v/v at a flow rate of 1 mL/min. The analyte and internal standard were extracted using liquid-liquid extraction with acetonitrile. The elution was detected by a photodiode array detector at 320 nm. The total chromatographic runtime is 10.0 min, with retention times for axitinibe and the IS of 5.685 and 3.606 min, respectively. The method was validated over a dynamic linear range of 0.002-0.2 µg/mL for axitinibe with a correlation coefficient (r^2) of 0.999.

  19. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    Science.gov (United States)

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)^2/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R^2 = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of body segments in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
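The prediction equations described above are simple regressions of measured FFM on the BI index, (length)^2/Z. A minimal sketch with invented calibration values (units and magnitudes are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical arm-segment calibration data: segment length (cm),
# impedance (ohm), and DXA-measured fat-free mass (kg).
length = np.array([60.0, 62.0, 65.0, 70.0, 72.0])
Z = np.array([580.0, 560.0, 540.0, 500.0, 480.0])
ffm = np.array([10.1, 10.9, 12.0, 13.8, 14.6])

bi_index = length ** 2 / Z              # the (length)^2/Z predictor
a, b = np.polyfit(bi_index, ffm, 1)     # simple linear regression
predicted = a * bi_index + b            # predicted FFM per subject
```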

  20. Estimation and Validation of Land Surface Temperatures from Chinese Second-Generation Polar-Orbit FY-3A VIRR Data

    Directory of Open Access Journals (Sweden)

    Bo-Hui Tang

    2015-03-01

    This work estimated and validated the land surface temperature (LST) from thermal-infrared Channels 4 (10.8 µm) and 5 (12.0 µm) of the Visible and Infrared Radiometer (VIRR) onboard the second-generation Chinese polar-orbiting FengYun-3A (FY-3A) meteorological satellite. The LST, mean emissivity and atmospheric water vapor content (WVC) were divided into several tractable sub-ranges with little overlap to improve the fitting accuracy. The experimental results showed that the root mean square errors (RMSEs) were proportional to the viewing zenith angles (VZAs) and WVC. The RMSEs were below 1.0 K for VZA sub-ranges less than 30°, or for VZA sub-ranges less than 60° with WVC less than 3.5 g/cm^2, provided that the land surface emissivities were known. A preliminary validation using independently simulated data showed that the estimated LSTs were quite consistent with the actual inputs, with a maximum RMSE below 1 K for all VZAs. An inter-comparison using the Moderate Resolution Imaging Spectroradiometer (MODIS)-derived LST product MOD11_L2 showed that the minimum RMSE was 1.68 K for grass, and the maximum RMSE was 3.59 K for barren or sparsely vegetated surfaces. In situ measurements at the Hailar field site in northeastern China from October 2013 to September 2014 were used to validate the proposed method. The result showed that the RMSE between the LSTs calculated from the ground measurements and those derived from the VIRR data was 1.82 K.

  1. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J. [Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS foundation Trust, Sutton, London SM2 5PT (United Kingdom)

    2016-01-15

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer led to an improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has the potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking accuracy.
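The ABST method pairs a similarity threshold with an α–β state observer; the observer itself is a fixed-gain, two-state (position/velocity) filter. A generic sketch follows, with gains and sampling interval chosen for illustration rather than taken from the paper:

```python
def alpha_beta_filter(measurements, dt=1.0, alpha=0.85, beta=0.005):
    """Fixed-gain alpha-beta observer smoothing a 1-D displacement trace."""
    x, v = measurements[0], 0.0      # initial position and velocity
    smoothed = []
    for z in measurements:
        x_pred = x + v * dt          # predict position from current state
        r = z - x_pred               # innovation (measurement residual)
        x = x_pred + alpha * r       # correct position
        v = v + beta * r / dt        # correct velocity
        smoothed.append(x)
    return smoothed
```

In the paper's setting, each measurement z would be the template-matching displacement estimate for a frame; frames failing the similarity threshold would lean more heavily on the prediction.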

  2. Validation of estimating food intake in gray wolves by 22Na turnover

    Science.gov (United States)

    DelGiudice, G.D.; Duquette, L.S.; Seal, U.S.; Mech, L.D.

    1991-01-01

    We studied sodium-22 (22Na) turnover as a means of estimating food intake in 6 captive, adult gray wolves (Canis lupus) (2 F, 4 M) over a 31-day feeding period. Wolves were fed white-tailed deer (Odocoileus virginianus) meat only. The mean mass-specific exchangeable Na pool was 44.8 ± 0.7 mEq/kg; there was no difference between males and females. Total exchangeable Na was related (r^2 = 0.85) to food consumption (g/kg/day) in wolves over a 32-day period. Sampling blood and weighing wolves every 1-4 days permitted identification of several potential sources of error, including changes in the size of exchangeable Na pools, exchange of 22Na with gastrointestinal and bone Na, and rapid loss of the isotope by urinary excretion.

  3. A Geometrical-Based Model for Cochannel Interference Analysis and Capacity Estimation of CDMA Cellular Systems

    Directory of Open Access Journals (Sweden)

    Konstantinos B. Baltzis

    2008-10-01

    A common assumption in cellular communications is the circular-cell approximation. In this paper, an alternative analysis based on the hexagonal shape of the cells is presented. A geometrical-based stochastic model is proposed to describe the angle of arrival of the interfering signals in the reverse link of a cellular system. Explicit closed-form expressions are derived, and the simulations performed exhibit the characteristics and validate the accuracy of the proposed model. Applications to the capacity estimation of WCDMA cellular networks are presented. The dependence of system capacity on cell sectorization and the base station antenna radiation pattern is explored. Comparisons with data in the literature validate the accuracy of the proposed model. The degree of error of the hexagonal and circular-cell approaches has been investigated, indicating the validity of the proposed model. Results have also shown that, in many cases, the two approaches give similar results when the radius of the circle equals the hexagon inradius. A brief discussion of how the proposed technique may be applied to broadband access networks is finally given.
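The closing observation, that circular and hexagonal cell models agree when the circle radius equals the hexagon inradius, is easy to check numerically (elementary geometry, not code from the paper):

```python
import math

R = 1.0                              # hexagon circumradius (center to vertex)
inradius = R * math.sqrt(3) / 2      # center-to-edge distance

hex_area = 3 * math.sqrt(3) / 2 * R ** 2   # regular hexagon area
circle_area = math.pi * inradius ** 2      # circle matched to the inradius

# The inradius-matched circle covers about 91% of the hexagonal cell's area.
coverage = circle_area / hex_area
```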

  4. External Force Estimation for Teleoperation Based on Proprioceptive Sensors

    Directory of Open Access Journals (Sweden)

    Enrique del Sol

    2014-03-01

    This paper establishes an approach to external force estimation for telerobotic control in radioactive environments by the use of an identified manipulator model and pressure sensors, without employing a force/torque sensor. The advantages of, and need for, force feedback have been well established in the field of telerobotics, where electrical and back-drivable manipulators have traditionally been used. This research proposes a methodology employing hydraulic robots for telerobotic tasks based on a model identification scheme. Comparative results of a force sensor and the proposed approach using a hydraulic telemanipulator are presented under different conditions. This approach not only presents a cost-effective solution but also a methodology for force estimation in radioactive environments, where the dose rates limit the use of electronic devices such as sensing equipment.

  5. Optimization-based particle filter for state and parameter estimation

    Institute of Scientific and Technical Information of China (English)

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from that distribution. The algorithm is applied to a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
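For contrast with the optimization-based proposal described above, a plain bootstrap particle filter (which samples from the transition prior rather than a steepest-descent proposal) can be sketched in a few lines; the 1-D random-walk model and noise levels here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=500, q=0.5, r=0.5):
    """Minimal bootstrap particle filter for a 1-D random-walk state model."""
    particles = rng.normal(0.0, 1.0, n_particles)   # initial particle cloud
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)            # likelihood
        w /= w.sum()
        estimates.append(np.sum(w * particles))   # weighted posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)          # resample
        particles = particles[idx]
    return np.array(estimates)
```

An optimization-based variant in the spirit of the paper would additionally move each particle a few steepest-descent steps toward the likelihood peak before weighting.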

  6. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  7. Gradient-based stochastic estimation of the density matrix

    Science.gov (United States)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with the distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/2d), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.

  8. Validity of energy intake estimated by digital photography plus recall in overweight and obese young adults.

    Science.gov (United States)

    Ptomey, Lauren T; Willis, Erik A; Honas, Jeffery J; Mayo, Matthew S; Washburn, Richard A; Herrmann, Stephen D; Sullivan, Debra K; Donnelly, Joseph E

    2015-09-01

    Recent reports have questioned the adequacy of self-report measures of dietary intake as the basis for scientific conclusions regarding the associations of dietary intake and health, and have recommended the development and evaluation of better methods for the assessment of dietary intake in free-living individuals. We developed a procedure that used pre- and post-meal digital photographs in combination with dietary recalls (DP+R) to assess energy intake during ad libitum eating in a cafeteria setting. Our objective was to compare the mean daily energy intake of overweight and obese young adults assessed by the DP+R method with the mean total daily energy expenditure assessed by doubly labeled water (TDEE(DLW)). Energy intake was assessed using the DP+R method in 91 overweight and obese young adults (age = 22.9±3.2 years, body mass index [BMI; calculated as kg/m^2] = 31.2±5.6, female = 49%) over 7 days of ad libitum eating in a university cafeteria. Foods consumed outside the cafeteria (i.e., snacks, non-cafeteria meals) were assessed using multiple-pass recall procedures, using food models and standardized, neutral probing questions. TDEE(DLW) was assessed in all participants over the 14-day period. The mean energy intakes estimated by DP+R and TDEE(DLW) were not significantly different (DP+R = 2912±661 kcal/d; TDEE(DLW) = 2849±748 kcal/d, P = 0.42). The DP+R method overestimated TDEE(DLW) by 63±750 kcal/d (6.8±28%). Results suggest that the DP+R method provides estimates of energy intake comparable to those obtained by TDEE(DLW). Copyright © 2015 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  9. A History-based Estimation for LHCb job requirements

    Science.gov (United States)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. LHCb's Workload Management System provides no mechanisms to automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. Particularly in the context of multicore jobs this presents a major problem, since single- and multicore jobs must share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for moving to multicore jobs is the reduction of the overall memory footprint; therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm is developed for history-based prediction. The aim is to learn over time how jobs' runtime and memory consumption evolve under changes in experiment conditions and software versions. It is shown that the estimation can be notably improved if experiment conditions are taken into account.
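The simplest history-based estimator is a running mean of past runtimes keyed by the features that matter (for example software version and experiment conditions), falling back to a fixed default when no history exists, mirroring today's arbitrary defaults. A toy sketch, with keys and default value purely hypothetical:

```python
from collections import defaultdict

class HistoryEstimator:
    """Running per-category mean of observed job runtimes (seconds)."""
    def __init__(self, default=3600.0):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)
        self.default = default          # fallback when no history exists

    def update(self, key, runtime):
        self.sums[key] += runtime
        self.counts[key] += 1

    def estimate(self, key):
        if self.counts[key] == 0:
            return self.default
        return self.sums[key] / self.counts[key]
```

A supervised model as in the paper would replace the per-key mean with a learned function of the job features, so it can generalize to unseen feature combinations.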

  10. Verification and validation of computer based systems for PFBR

    International Nuclear Information System (INIS)

    Thirugnanamurthy, D.

    2017-01-01

    The Verification and Validation (V and V) process is essential to build quality into a system. Verification is the process of evaluating a system to determine whether the products of each development phase satisfy the requirements imposed by the previous phase. Validation is the process of evaluating a system at the end of the development process to ensure compliance with the functional, performance and interface requirements. This presentation elaborates the V and V process followed, document submission requirements in each stage, V and V activities, the checklists used for reviews in each stage, and the resulting reports.

  11. Differences in characteristics of raters who use the visual estimation method in hospitals based on their training experiences.

    Science.gov (United States)

    Kawasaki, Yui; Tamaura, Yuki; Akamatsu, Rie; Sakai, Masashi; Fujiwara, Keiko

    2018-02-07

    Despite a clinical need, only a few studies have provided information concerning visual estimation training for raters to improve the validity of their evaluations. This study aims to describe the differences in the characteristics of raters who evaluated patients' dietary intake in hospitals using the visual estimation method, based on their training experiences. We collected data from three hospitals in Tokyo from August to September 2016. The participants were 199 nursing staff members, who completed a self-administered questionnaire on demographic data; working career; training in the visual estimation method; knowledge, attitude, and practice associated with nutritional care; and self-evaluation of the method validity of and their skills in visual estimation. We classified participants into two groups, experienced and inexperienced, based on whether they had received training. The chi-square test, Mann-Whitney U test, and univariable and multivariable logistic regression analyses were used to describe the differences between these two groups in terms of their characteristics; knowledge, attitude, and practice associated with nutritional care; and self-evaluation of method validity and tips used in the visual estimation method. Of the 158 staff members (79.4%) (118 nurses and 40 nursing assistants) who agreed to participate in the analysis, 33 (20.9%) were trained in the visual estimation method. Participants who had received training had better knowledge (2.70 ± 0.81, score range 1-5) than those who had not (2.34 ± 0.74, p = 0.03). The self-evaluated validity score of the visual estimation method was also higher in the experienced group (3.78 ± 0.61, score range 1-5) than in the inexperienced group (3.40 ± 0.66). In the regression analysis, trained participants were more likely to have adequate knowledge (OR: 2.78, 95% CI: 1.05-7.35) and to frequently use tips in visual estimation (OR: 1.85, 95% CI: 1.26-2.73). Trained participants thus had more of the required knowledge and made more frequent use of visual estimation tips.
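For a single binary predictor, an unadjusted odds ratio like those reported above reduces to the cross-product ratio of a 2x2 table (the multivariable ORs additionally adjust for covariates). A minimal sketch with made-up counts:

```python
def odds_ratio(exp_pos, exp_neg, unexp_pos, unexp_neg):
    """Cross-product odds ratio for a 2x2 exposure/outcome table."""
    return (exp_pos * unexp_neg) / (exp_neg * unexp_pos)

# Hypothetical counts: trained raters with/without adequate knowledge
# vs. untrained raters with/without adequate knowledge.
or_knowledge = odds_ratio(20, 13, 40, 85)
```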

  12. Particle-filtering-based estimation of maximum available power state in Lithium-Ion batteries

    International Nuclear Information System (INIS)

    Burgos-Mellado, Claudio; Orchard, Marcos E.; Kazerani, Mehrdad; Cárdenas, Roberto; Sáez, Doris

    2016-01-01

    Highlights: • Approach to estimate the state of maximum power available in a Lithium-Ion battery. • The optimisation problem is formulated on the basis of a non-linear dynamic model. • Solutions of the optimisation problem are functions of state of charge estimates. • State of charge estimates are computed using particle filter algorithms. - Abstract: Battery Energy Storage Systems (BESS) are important for applications related to both microgrids and electric vehicles. If BESS are used as the main energy source, then adequate procedures for the estimation of critical variables such as the State of Charge (SoC) and the State of Health (SoH) must be included in the design of Battery Management Systems (BMS). Furthermore, in applications where batteries are exposed to high charge and discharge rates, it is also desirable to estimate the State of Maximum Power Available (SoMPA). In this regard, this paper presents a novel approach to the estimation of SoMPA in Lithium-Ion batteries. The method formulates an optimisation problem for the battery power based on a non-linear dynamic model, where the resulting solutions are functions of the SoC. In the battery model, the polarisation resistance is modelled using fuzzy rules that are functions of both the SoC and the discharge (charge) current. Particle filtering algorithms are used as an online estimation technique, mainly because these algorithms allow approximating the probability density functions of the SoC and SoMPA even in the case of non-Gaussian sources of uncertainty. The proposed method for SoMPA estimation is validated using experimental data obtained from a setup designed for charging and discharging Lithium-Ion batteries.

  13. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach

    Directory of Open Access Journals (Sweden)

    Nan Xu

    2017-05-01

    Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the time-series correlation between blood-oxygen-level dependent (BOLD) signals from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods; (2) on simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs; (3) on experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole-brain interregional connectivity at the single-subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by correlation analysis, but also performs well in the estimation of causal information flow in the brain.
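One simplified reading of prediction correlation: fit a causal (FIR) model that predicts one time series from current and past samples of another, then correlate the prediction with the target. The sketch below is an illustrative reconstruction, not the authors' implementation:

```python
import numpy as np

def prediction_correlation(x, y, order=3):
    """Correlation between y and a causal least-squares prediction of y from x."""
    n = len(y)
    # Design matrix of causal lags of x (lag 0 .. order-1).
    X = np.column_stack(
        [np.concatenate([np.zeros(k), x[:n - k]]) for k in range(order)]
    )
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ coef                      # prediction of y driven by x
    return np.corrcoef(y_hat, y)[0, 1]
```

Unlike plain correlation, this score is asymmetric: prediction_correlation(x, y) and prediction_correlation(y, x) can differ, which is what conveys the direction of information flow.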

  14. Vce-based methods for temperature estimation of high power IGBT modules during power cycling - A comparison

    DEFF Research Database (Denmark)

    Amoiridis, Anastasios; Anurag, Anup; Ghimire, Pramod

    2015-01-01

    Temperature estimation is of great importance for the performance and reliability of IGBT power modules in converter operation as well as in active power cycling tests. It is commonly estimated through Thermo-Sensitive Electrical Parameters such as the forward voltage drop (Vce) of the chip. This experimental work evaluates the validity and accuracy of two Vce-based methods applied to high power IGBT modules during power cycling tests. The first method estimates the chip temperature when a low sense current is applied, and the second method when the normal load current is present. Finally, a correction factor …
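Both Vce-based methods rest on the same calibration idea: at a fixed sense current, Vce falls approximately linearly with junction temperature, so temperature is recovered by inverting that line. A sketch with hypothetical calibration constants (real coefficients are device-specific, typically around -2 mV/K):

```python
V_REF = 0.55    # V, Vce measured at T_REF under the sense current (hypothetical)
T_REF = 25.0    # degC, calibration reference temperature
K = -2.0e-3     # V/K, assumed linear temperature coefficient

def junction_temperature(vce):
    """Invert the linear Vce(T) calibration to estimate chip temperature."""
    return T_REF + (vce - V_REF) / K
```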

  15. Relative validity of a web-based food frequency questionnaire for Danish adolescents.

    Science.gov (United States)

    Bjerregaard, Anne A; Halldorsson, Thorhallur I; Kampmann, Freja B; Olsen, Sjurdur F; Tetens, Inge

    2018-01-12

    With increased focus on dietary intake among youth and the risk of disease later in life, it is important, prior to assessing diet-disease relationships, to examine the validity of the dietary assessment tool. This study's objective was to evaluate the relative validity of a self-administered web-based FFQ among Danish children aged 12 to 15 years. From a nested sub-cohort within the Danish National Birth Cohort, 124 adolescents participated. Four weeks after completion of the FFQ, adolescents were invited to complete three telephone-based 24HRs, administered 4 weeks apart. Mean or median intakes of nutrients and food groups estimated from the FFQ were compared with the mean of the 3x24HRs. To assess the level of ranking, we calculated the proportion correctly classified into the same quartile and the proportion misclassified (into the opposite quartile). Spearman's correlation coefficients and de-attenuated coefficients were calculated to assess agreement between the FFQ and 24HRs. The mean percentage of adolescents classified into the same and opposite quartile across all food groups was 35 and 7.5%, respectively. Mean Spearman's correlation was 0.28 for food groups and 0.35 for nutrients. Adjustment for energy and within-person variation in the 24HRs had little effect on the magnitude of the correlations for food groups and nutrients. We found overestimation by the FFQ compared with the 24HRs for fish, fruits, vegetables, oils and dressing, and underestimation by the FFQ for meat/poultry and sweets. Median intake of beverages, dairy, bread, and cereals, and the mean total energy and carbohydrate intake did not differ significantly between the two methods. The relative validity of the FFQ compared with the 3x24HRs showed that the ranking ability differed across food groups and nutrients, with the best ranking for estimated intake of dairy, fruits, and oils and dressing; larger variation was observed for fish, sweets and vegetables. For nutrients, the ranking …
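The quartile cross-classification used above is mechanical to compute: bin both instruments' intakes into quartiles and count agreement. A sketch (the binning convention is an assumption; the abstract does not specify boundary handling):

```python
import numpy as np

def quartile_agreement(a, b):
    """Fractions classified into the same and into the opposite quartile."""
    qa = np.searchsorted(np.quantile(a, [0.25, 0.5, 0.75]), a)
    qb = np.searchsorted(np.quantile(b, [0.25, 0.5, 0.75]), b)
    same = float(np.mean(qa == qb))
    opposite = float(np.mean(np.abs(qa - qb) == 3))
    return same, opposite
```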

  16. Ecosystem services - from assessments of estimations to quantitative, validated, high-resolution, continental-scale mapping via airborne LIDAR

    Science.gov (United States)

    Zlinszky, András; Pfeifer, Norbert

    2016-04-01

    service potential" which is the ability of the local ecosystem to deliver various functions (water retention, carbon storage, etc.), but cannot quantify how much of these are actually used by humans or what their estimated monetary value is. Due to its ability to measure both terrain relief and vegetation structure in high resolution, airborne LIDAR supports direct quantification of the properties of an ecosystem that enable it to deliver a given service (such as biomass, water retention, microclimate regulation or habitat diversity). In addition, its high resolution allows direct calibration with field measurements: routine harvesting-based ecological measurements, local biodiversity indicator surveys or microclimate recordings all take place at the human scale and can be directly linked to the local value of LIDAR-based indicators at meter resolution. Therefore, if some field measurements with standard ecological methods are performed on site, the accuracy of LIDAR-based ecosystem service indicators can be rigorously validated. With this conceptual and technical approach, high-resolution ecosystem service assessments can be made with well-established credibility. These would consolidate the concept of ecosystem services and support both scientific research and evidence-based environmental policy at local and - as data coverage is continually increasing - continental scale.
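
    The plot-level validation step described above amounts to regressing field measurements against a co-located LIDAR-derived indicator. A minimal sketch, with entirely hypothetical plot values (the indicator, units, and coefficients are assumptions for illustration):

    ```python
    # Ordinary least squares fit of field-measured biomass against a
    # LIDAR-derived mean canopy height, reporting R^2 as the validation
    # statistic. All numbers are hypothetical.

    def ols_fit(x, y):
        """Return (slope, intercept) of the least-squares line y ~ x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((a - mx) ** 2 for a in x)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        slope = sxy / sxx
        return slope, my - slope * mx

    def r_squared(x, y, slope, intercept):
        """Coefficient of determination of the fitted line."""
        my = sum(y) / len(y)
        ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
        ss_tot = sum((b - my) ** 2 for b in y)
        return 1 - ss_res / ss_tot

    # Hypothetical plots: LIDAR mean canopy height (m) vs harvested biomass (t/ha)
    lidar_height  = [4.2, 7.8, 10.1, 13.5, 16.0, 19.3]
    field_biomass = [35, 62, 81, 108, 126, 155]

    m, b = ols_fit(lidar_height, field_biomass)
    r2 = r_squared(lidar_height, field_biomass, m, b)
    print(f"biomass ~ {m:.1f} * height + {b:.1f}  (R^2 = {r2:.3f})")
    ```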

  17. Concurrent validity and reliability of torso-worn inertial measurement unit for jump power and height estimation.

    Science.gov (United States)

    Rantalainen, Timo; Gastin, Paul B; Spangler, Rhys; Wundersitz, Daniel

    2018-09-01

    The purpose of the present study was to evaluate the concurrent validity and test-retest repeatability of torso-worn IMU-derived power and jump height in a counter-movement jump test. Twenty-seven healthy, recreationally active males (age 21.9 [SD 2.0] y, height 1.76 [0.07] m, mass 73.7 [10.3] kg) wore an IMU and completed three counter-movement jumps a week apart. A force platform and a 3D motion analysis system were used to concurrently measure the jumps and subsequently derive power and jump height (based on take-off velocity and flight time). The IMU significantly overestimated power (mean difference = 7.3 W/kg; P < 0.05). Take-off-velocity-derived jump heights exhibited poorer concurrent validity (ICC = 0.72 to 0.78) and repeatability (ICC = 0.68) than flight-time-derived jump heights, which exhibited excellent validity (ICC = 0.93 to 0.96) and reliability (ICC = 0.91). Since jump height and power are closely related, and flight-time-derived jump height exhibits excellent concurrent validity and reliability, flight-time-derived jump height could provide a more desirable measure than power when assessing athletic performance in a counter-movement jump with IMUs.
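
    The flight-time method mentioned above conventionally uses the standard ballistic relation h = g·t²/8 (not spelled out in the abstract): during a flight time t, the body rises for t/2 and falls for t/2, so the apex height is g·(t/2)²/2. A minimal sketch:

    ```python
    # Jump height from flight time under the usual ballistic assumptions
    # (take-off and landing occur at the same body configuration).

    G = 9.81  # gravitational acceleration, m/s^2

    def jump_height_from_flight_time(t_flight):
        """Jump height (m) from flight time (s): h = g * t^2 / 8."""
        return G * t_flight ** 2 / 8.0

    # A 0.5 s flight time corresponds to roughly a 0.31 m jump:
    print(round(jump_height_from_flight_time(0.5), 3))
    ```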

  18. Development and validation of a noncontact spectroscopic device for hemoglobin estimation at point-of-care

    Science.gov (United States)

    Sarkar, Probir Kumar; Pal, Sanchari; Polley, Nabarun; Aich, Rajarshi; Adhikari, Aniruddha; Halder, Animesh; Chakrabarti, Subhananda; Chakrabarti, Prantar; Pal, Samir Kumar

    2017-05-01

    Anemia severely and adversely affects human health and socioeconomic development. Measuring hemoglobin with the minimal involvement of human and financial resources has always been challenging. We describe a translational spectroscopic technique for noncontact hemoglobin measurement at low-resource point-of-care settings in human subjects, independent of their skin color, age, and sex, by measuring the optical spectrum of the blood flowing in the vascular bed of the bulbar conjunctiva. We developed software on the LabVIEW platform for automatic data acquisition and interpretation by nonexperts. The device is calibrated by comparing the differential absorbance of light of wavelength 576 and 600 nm with the clinical hemoglobin level of the subject. Our proposed method is consistent with the results obtained using the current gold standard, the automated hematology analyzer. The proposed noncontact optical device for hemoglobin estimation is highly efficient, inexpensive, feasible, and extremely useful in low-resource point-of-care settings. The device output correlates with the different degrees of anemia with absolute and trending accuracy similar to those of widely used invasive methods. Moreover, the device can instantaneously transmit the generated report to a medical expert through e-mail, text messaging, or mobile apps.
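
    The calibration idea described above, mapping a differential absorbance at 576 nm and 600 nm to a hemoglobin level, can be sketched as a simple linear model. The coefficients below are hypothetical placeholders, not the device's actual calibration:

    ```python
    # Linear calibration sketch: differential absorbance -> hemoglobin (g/dL).
    # Slope and intercept are illustrative assumptions, not the paper's values.

    def differential_absorbance(a_576, a_600):
        """Difference of absorbances measured at 576 nm and 600 nm."""
        return a_576 - a_600

    def hemoglobin_g_dl(delta_a, slope=18.0, intercept=2.5):
        """Hypothetical linear calibration from differential absorbance."""
        return slope * delta_a + intercept

    delta = differential_absorbance(0.82, 0.21)  # absorbance units (illustrative)
    print(f"estimated Hb: {hemoglobin_g_dl(delta):.1f} g/dL")
    ```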

  19. A Priori Implementation Effort Estimation for HW Design Based on Independent-Path Analysis

    DEFF Research Database (Denmark)

    Abildgren, Rasmus; Diguet, Jean-Philippe; Bomel, Pierre

    2008-01-01

    This paper presents a metric-based approach for estimating the hardware implementation effort (in terms of time) for an application in relation to the number of linearly independent paths of its algorithms. We exploit the relation between the number of edges and linearly independent paths in an algorithm and the corresponding implementation effort. We propose an adaptation of the concept of cyclomatic complexity, complemented with a correction function to take designers' learning curve and experience into account. Our experimental results, composed of a training and a validation phase, show that with the proposed approach it is possible to estimate the hardware implementation effort. This approach, part of our light design space exploration concept, is implemented in our framework "Design-Trotter" and offers a new type of tool that can help designers and managers to reduce the time-to-market factor.
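
    The effort metric above builds on McCabe's cyclomatic complexity, which counts the linearly independent paths of a control-flow graph as V(G) = E − N + 2P. A minimal sketch (the paper's correction function for designer experience is not reproduced here):

    ```python
    # Cyclomatic complexity of a control-flow graph with E edges, N nodes
    # and P connected components: V(G) = E - N + 2P.

    def cyclomatic_complexity(edges, nodes, components=1):
        return edges - nodes + 2 * components

    # Example CFG: a loop whose body contains one if/else.
    # Nodes: entry, loop test, if test, then, else, join, exit  -> 7 nodes
    # Edges: entry->test, test->if, test->exit, if->then, if->else,
    #        then->join, else->join, join->test                 -> 8 edges
    print(cyclomatic_complexity(edges=8, nodes=7))  # 3 independent paths
    ```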

  20. Inertial Measurement Units-Based Probe Vehicles: Automatic Calibration, Trajectory Estimation, and Context Detection

    KAUST Repository

    Mousa, Mustafa

    2017-12-06

    Most probe vehicle data is generated using satellite navigation systems, such as the Global Positioning System (GPS), Globalnaya navigatsionnaya sputnikovaya Sistema (GLONASS), or Galileo systems. However, because of their high cost, relatively high position uncertainty in cities, and low sampling rate, a large quantity of satellite positioning data is required to estimate traffic conditions accurately. To address this issue, we introduce a new type of traffic monitoring system based on inexpensive inertial measurement units (IMUs) as probe sensors. IMUs as traffic probes pose unique challenges in that they need to be precisely calibrated, do not generate absolute position measurements, and their position estimates are subject to accumulating errors. In this paper, we address each of these challenges and demonstrate that the IMUs can reliably be used as traffic probes. After discussing the sensing technique, we present an implementation of this system using a custom-designed hardware platform, and validate the system with experimental data.
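
    The accumulating-error problem noted above is easy to demonstrate: position comes from double-integrating acceleration, so even a small constant accelerometer bias grows quadratically in the position estimate. A minimal dead-reckoning sketch with hypothetical numbers:

    ```python
    # Double integration of acceleration samples to position. A stationary
    # sensor with a small constant bias appears to move farther and farther.

    def dead_reckon(accels, dt):
        """Integrate acceleration samples (m/s^2) to a 1-D position (m)."""
        v = x = 0.0
        for a in accels:
            v += a * dt
            x += v * dt
        return x

    dt = 0.01                # 100 Hz sampling
    bias = 0.05              # m/s^2 accelerometer bias (hypothetical)
    samples = [bias] * 6000  # 60 s of a truly stationary sensor

    # Close to the continuous-time drift a*t^2/2 = 0.5 * 0.05 * 60^2 = 90 m.
    print(f"apparent drift after 60 s: {dead_reckon(samples, dt):.1f} m")
    ```

    This is why IMU-only probes need the calibration and periodic correction the paper addresses, rather than raw integration.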