WorldWideScience

Sample records for valid estimates based

  1. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  2. An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking

    Directory of Open Access Journals (Sweden)

    Sujay Saha

    2016-01-01

    Most of the gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods which impute missing data values accurately. A number of imputation algorithms exist to estimate those missing values. This work starts with a microarray dataset containing multiple missing values. We first apply a modified version of the existing fuzzy-theory-based method LRFDVImpute to impute multiple missing values of time series gene expression data, and then validate the result of imputation by a genetic algorithm (GA) based gene ranking methodology along with standard statistical validation techniques such as RMSE. To the best of our knowledge, gene ranking has not previously been used to validate the result of missing value estimation. The proposed method was first tested on the widely used Spellman dataset, and results show that error margins are drastically reduced compared to some previous works, which indirectly validates the statistical significance of the proposed method. It was then applied to four other 2-class benchmark datasets, the Colorectal Cancer tumours dataset (GDS4382), the Breast Cancer dataset (GSE349-350), the Prostate Cancer dataset, and DLBCL-FL (Leukaemia), for both missing value estimation and gene ranking; the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates its biological significance.
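The RMSE check described in this record can be sketched as follows. This is not LRFDVImpute itself, only an illustration of validating imputed entries against known ground-truth values; the row-mean imputer and toy matrix are hypothetical stand-ins:

```python
import numpy as np

def impute_rmse(original, imputed, mask):
    """RMSE between true and imputed entries at the masked positions."""
    diff = original[mask] - imputed[mask]
    return np.sqrt(np.mean(diff ** 2))

# Toy expression matrix: hide two entries, impute with row means.
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
mask = np.zeros_like(X, dtype=bool)
mask[0, 1] = mask[1, 2] = True

X_missing = X.copy()
X_missing[mask] = np.nan
row_means = np.nanmean(X_missing, axis=1, keepdims=True)
X_imputed = np.where(mask, row_means, X_missing)

print(round(impute_rmse(X, X_imputed, mask), 4))  # → 1.0607
```

A lower RMSE on deliberately hidden entries indicates a better imputer; the paper's contribution is to add gene ranking as a second, biologically motivated check on top of this statistical one.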

  3. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from inertial confinement fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the...

  4. Validating estimates of problematic drug use in England

    Directory of Open Access Journals (Sweden)

    Heatlie Heath

    2007-10-01

    Background UK Government expenditure on combatting drug abuse is based on estimates of illicit drug users, yet the validity of these estimates is unknown. This study aims to assess the face validity of problematic drug use (PDU) and injecting drug use (IDU) estimates for all English Drug Action Teams (DATs) in 2001. The estimates were derived from a statistical model using the Multiple Indicator Method (MIM). Methods Questionnaire study, in which the 149 English Drug Action Teams were asked to evaluate the MIM estimates for their DAT. Results The response rate was 60% and there were no indications of selection bias. Of responding DATs, 64% thought the PDU estimates were about right or did not dispute them, while 27% thought they were too low and 9% too high. The corresponding figures for the IDU estimates were 52% (about right), 44% (too low) and 3% (too high). Conclusion This is the first UK study to assess the validity of estimates of problematic and injecting drug misuse. The results highlight the need to consider criterion and face validity when evaluating estimates of the number of drug users.

  5. A stepwise validation of a wearable system for estimating energy expenditure in field-based research

    International Nuclear Information System (INIS)

    Rumo, Martin; Mäder, Urs; Amft, Oliver; Tröster, Gerhard

    2011-01-01

    Regular physical activity (PA) is an important contributor to a healthy lifestyle. Currently, standard sensor-based methods to assess PA in field-based research rely on a single accelerometer mounted near the body's center of mass. This paper introduces a wearable system that estimates energy expenditure (EE) based on seven recognized activity types. The system was developed with data from 32 healthy subjects and consists of a chest mounted heart rate belt and two accelerometers attached to a thigh and dominant upper arm. The system was validated with 12 other subjects under restricted lab conditions and simulated free-living conditions against indirect calorimetry, as well as in subjects' habitual environments for 2 weeks against the doubly labeled water method. Our stepwise validation methodology gradually trades reference information from the lab against realistic data from the field. The average accuracy for EE estimation was 88% for restricted lab conditions, 55% for simulated free-living conditions, and 87% and 91% for the estimation of average daily EE over periods of 1 and 2 weeks, respectively.

  6. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen

    Purpose: Design and validation of an MR-based method that allows calculation of the water content of cartilage tissue. Methods and Materials: Cartilage tissue T1-map-based water content MR sequences were used on a system stabilized at 37 °C. The T1 map intensity signal was analyzed on 6...... cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis, a T1 intensity signal map software analyzer was used. Finally, the method was validated by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained...... map-based water content sequences can provide information that, after being analyzed using T1-map analysis software, can be interpreted as the water contained inside cartilage tissue. The amount of water estimated using this method was similar to that obtained with the dry-freeze procedure...

  7. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    International Nuclear Information System (INIS)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K.

    2016-01-01

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)
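The modelling step in this record, ordinary linear regression of bone age on fibular shaft length evaluated by root-mean-square and mean absolute error on a held-out set, can be sketched as below. The data and coefficients here are synthetic stand-ins, not the study's values:

```python
import numpy as np

# Hypothetical data: fibular shaft length (mm) vs. chronological age (days).
rng = np.random.default_rng(0)
length = rng.uniform(60, 120, size=120)            # fibular length, mm
age = 4.5 * length - 200 + rng.normal(0, 30, 120)  # simulated ground truth

# Split into a development set and a held-out test set, as in the study design.
train, test = slice(0, 80), slice(80, 120)
coef = np.polyfit(length[train], age[train], deg=1)  # slope, intercept
pred = np.polyval(coef, length[test])

rmse = np.sqrt(np.mean((pred - age[test]) ** 2))
mae = np.mean(np.abs(pred - age[test]))
print(f"RMSE {rmse:.1f} days, MAE {mae:.1f} days")
```

Reporting both RMSE and MAE, as the authors do (36-37 days and 28-31 days respectively), separates overall error magnitude from sensitivity to occasional large misses.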

  8. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K. [Boston Children's Hospital, Harvard Medical School, Department of Radiology, Boston, MA (United States)

    2016-03-15

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)

  9. The validity and reproducibility of food-frequency questionnaire–based total antioxidant capacity estimates in Swedish women

    Science.gov (United States)

    Total antioxidant capacity (TAC) provides an assessment of antioxidant activity and synergistic interactions of redox molecules in foods and plasma. We investigated the validity and reproducibility of food frequency questionnaire (FFQ)–based TAC estimates assessed by oxygen radical absorbance capaci...

  10. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Background To estimate a classifier's error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier's generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier's generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
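A minimal sketch of the k-fold CV misclassification-error estimate that the study recommends; the nearest-centroid classifier and toy data are illustrative, not from the paper:

```python
import numpy as np

def kfold_error(X, y, fit, predict, k=5, seed=0):
    """k-fold CV estimate of misclassification error (sampling w/o replacement)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        errs.append(np.mean(predict(model, X[test]) != y[test]))
    return float(np.mean(errs))

# Toy two-class problem with a nearest-centroid classifier.
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = np.array(sorted(model))
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return classes[np.argmin(d, axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)
print(kfold_error(X, y, fit, predict))
```

The key property the study exploits is that each observation is tested exactly once, on a model that never saw it; bootstrap CV resamples with replacement and so can leak training points into test sets, which is one route to the negative bias reported here.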

  11. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    Science.gov (United States)

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  12. Type-specific human papillomavirus biological features: validated model-based estimates.

    Directory of Open Access Journals (Sweden)

    Iacopo Baussano

    Infection with high-risk (hr) human papillomavirus (HPV) is considered the necessary cause of cervical cancer. Vaccination against the HPV16 and 18 types, which are responsible for about 75% of cervical cancers worldwide, is expected to have a major global impact on cervical cancer occurrence. Valid estimates of the parameters that regulate the natural history of hrHPV infections are crucial to drawing reliable projections of the impact of vaccination. We devised a mathematical model to estimate the probability of infection transmission, the rate of clearance, and the patterns of immune response following clearance of infection for 13 hrHPV types. To test the validity of our estimates, we fitted the same transmission model to two large independent datasets from Italy and Sweden and assessed the consistency of the findings. The two populations, both unvaccinated, differed substantially in sexual behaviour, age distribution, and study setting (screening for cervical cancer or Chlamydia trachomatis infection). Estimated transmission probabilities of hrHPV types (80% for HPV16, 73%-82% for HPV18, and above 50% for most other types), clearance rates decreasing as a function of time since infection, and partial protection against re-infection with the same hrHPV type (approximately 20% for HPV16 and 50% for the other types) were similar in the two countries. The model could accurately predict the HPV16 prevalence observed in Italy among women who were not infected three years before. In conclusion, our models inform on biological parameters that cannot at the moment be measured directly from any empirical data but are essential to forecasting the impact of HPV vaccination programmes.

  13. Low-cost extrapolation method for maximal lte radio base station exposure estimation: Test and validation

    International Nuclear Information System (INIS)

    Verloock, L.; Joseph, W.; Gati, A.; Varsier, N.; Flach, B.; Wiart, J.; Martens, L.

    2013-01-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. (authors)

  14. Online cross-validation-based ensemble learning.

    Science.gov (United States)

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
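The core idea of online cross-validation, as described here, is to score each candidate on every incoming batch *before* updating it on that batch, then track cumulative out-of-sample loss and select the best candidate. A sketch with two hypothetical toy learners (not the paper's estimator library):

```python
import numpy as np

class OnlineMean:
    """Candidate online estimator: predicts the running mean of y."""
    def __init__(self): self.n, self.mu = 0, 0.0
    def predict(self, x): return np.full(len(x), self.mu)
    def update(self, x, y):
        for yi in y:
            self.n += 1
            self.mu += (yi - self.mu) / self.n

class OnlineLinear:
    """Candidate: one-dimensional linear model updated by SGD."""
    def __init__(self, lr=0.05): self.w, self.b, self.lr = 0.0, 0.0, lr
    def predict(self, x): return self.w * x + self.b
    def update(self, x, y):
        for xi, yi in zip(x, y):
            err = self.w * xi + self.b - yi
            self.w -= self.lr * err * xi
            self.b -= self.lr * err

def online_cv_select(batches, learners):
    """Score each learner on every new batch before training on it."""
    loss = np.zeros(len(learners))
    for x, y in batches:
        for i, m in enumerate(learners):
            loss[i] += np.mean((m.predict(x) - y) ** 2)  # out-of-sample loss
            m.update(x, y)                               # then absorb the batch
    return int(np.argmin(loss))

rng = np.random.default_rng(0)
batches = []
for _ in range(50):
    x = rng.uniform(-1, 1, 20)
    batches.append((x, 2.0 * x + rng.normal(0, 0.1, 20)))

best = online_cv_select(batches, [OnlineMean(), OnlineLinear()])
print(best)  # → 1 (the linear learner wins on cumulative online-CV loss)
```

Because every batch is scored before it is used for training, the cumulative loss is an honest out-of-sample comparison, which is what licenses the oracle-style guarantee stated in the abstract.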

  15. Towards valid 'serious non-fatal injury' indicators for international comparisons based on probability of admission estimates

    DEFF Research Database (Denmark)

    Cryer, Colin; Miller, Ted R; Lyons, Ronan A

    2017-01-01

    in regions of Canada, Denmark, Greece, Spain and the USA. International Classification of Diseases (ICD)-9 or ICD-10 4-digit/character injury diagnosis-specific ED attendance and inpatient admission counts were provided, based on a common protocol. Diagnosis-specific and region-specific PrAs with 95% CIs...... diagnoses with high estimated PrAs. These diagnoses can be used as the basis for more valid international comparisons of life-threatening injury, based on hospital discharge data, for countries with well-developed healthcare and data collection systems....

  16. Validity of Edgeworth expansions for realized volatility estimators

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Veliyev, Bezirgen

    (2009). Second, we show that the validity of the Edgeworth expansions for realized volatility may not cover the optimal two-point distribution wild bootstrap proposed by Gonçalves and Meddahi (2009). Then, we propose a new optimal nonlattice distribution which ensures the second-order correctness...... of the bootstrap. Third, in the presence of microstructure noise, based on our Edgeworth expansions, we show that the new optimal choice proposed in the absence of noise is still valid in noisy data for the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). Finally, we show how...

  17. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    Science.gov (United States)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the

  18. Validation of differential gene expression algorithms: Application comparing fold-change estimation to hypothesis testing

    Directory of Open Access Journals (Sweden)

    Bickel David R

    2010-01-01

    Background Sustained research on the problem of determining which genes are differentially expressed on the basis of microarray data has yielded a plethora of statistical algorithms, each justified by theory, simulation, or ad hoc validation, and yet differing in practical results from equally justified algorithms. Recently, a concordance method that measures agreement among gene lists has been introduced to assess various aspects of differential gene expression detection. This method has the advantage of basing its assessment solely on the results of real data analyses, but as it requires examining gene lists of given sizes, it may be unstable. Results Two methodologies for assessing predictive error are described: a cross-validation method and a posterior predictive method. As a nonparametric method of estimating prediction error from observed expression levels, cross-validation provides an empirical approach to assessing algorithms for detecting differential gene expression that is fully justified for large numbers of biological replicates. Because it leverages the knowledge that only a small portion of genes are differentially expressed, the posterior predictive method is expected to provide more reliable estimates of algorithm performance, allaying concerns about limited biological replication. In practice, the posterior predictive method can assess when its approximations are valid and when they are inaccurate. Under conditions in which its approximations are valid, it corroborates the results of cross-validation. Both comparison methodologies are applicable to both single-channel and dual-channel microarrays. For the data sets considered, estimating prediction error by cross-validation demonstrates that empirical Bayes methods based on hierarchical models tend to outperform algorithms based on selecting genes by their fold changes or by non-hierarchical model-selection criteria. (The latter two approaches have comparable...

  19. Assessing the external validity of model-based estimates of the incidence of heart attack in England: a modelling study

    Directory of Open Access Journals (Sweden)

    Peter Scarborough

    2016-11-01

    Background The DisMod II model is designed to estimate epidemiological parameters for diseases where measured data are incomplete, and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, and total mortality rates and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95% confidence intervals were derived around estimates from the external dataset and the DisMod II estimates based on sampling variance and reported uncertainty in prevalence estimates, respectively. Results Estimates of the incidence rate for the whole population were higher in the DisMod II results than in the external dataset (+54% for men and +26% for women). Age-specific results showed that the DisMod II results over-estimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion By comparison with AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence of similar magnitude to measured estimates. The model should be used with caution when estimating age-specific incidence rates.

  20. Validity of food frequency questionnaire-based estimates of long-term long-chain n-3 polyunsaturated fatty acid intake.

    Science.gov (United States)

    Wallin, Alice; Di Giuseppe, Daniela; Burgaz, Ann; Håkansson, Niclas; Cederholm, Tommy; Michaëlsson, Karl; Wolk, Alicja

    2014-01-01

    To evaluate how long-term dietary intake of long-chain n-3 polyunsaturated fatty acids (LCn-3 PUFAs), estimated by repeated food frequency questionnaires (FFQs) over 15 years, is correlated with LCn-3 PUFAs in adipose tissue (AT). Subcutaneous adipose tissue was obtained in 2003-2004 (AT-03) from 239 randomly selected women, aged 55-75 years, after completion of a 96-item FFQ (FFQ-03). All participants had previously returned an identical FFQ in 1997 (FFQ-97) and a 67-item version in 1987-1990 (FFQ-87). Pearson product-moment correlations were used to evaluate associations between intake of total and individual LCn-3 PUFAs as estimated by the three FFQ assessments and AT-03 content (% of total fatty acids). FFQ-estimated mean relative intake of LCn-3 PUFAs (% of total fat intake) increased between all three assessments (FFQ-87, 0.55 ± 0.34; FFQ-97, 0.74 ± 0.64; FFQ-03, 0.88 ± 0.56). Validity, in terms of Pearson correlations between FFQ-03 estimates and AT-03 content, was 0.41 (95% CI 0.30-0.51) for total LCn-3 PUFA and ranged from 0.29 to 0.48 for individual fatty acids; lower correlation was observed among participants with higher percentage body fat. With regard to long-term intake estimates, past dietary intake was also correlated with AT-03 content, with correlation coefficients in the range of 0.21-0.33 and 0.21-0.34 for FFQ-97 and FFQ-87, respectively. The correlations were improved by using average estimates from two or more FFQ assessments. Exclusion of fish oil supplement users (14%) did not alter the correlations. These data indicate reasonable validity of FFQ-based estimates of long-term (up to 15 years) LCn-3 PUFA intake, justifying their use in studies of diet-disease associations.
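The validation statistic in this record is a Pearson correlation between FFQ-estimated intake and adipose tissue content. It can be sketched with simulated data (none of the numbers below are from the study; the Fisher z-transform CI is a standard choice, not necessarily the authors'):

```python
import numpy as np

def pearson_with_ci(x, y):
    """Pearson r with an approximate 95% CI via the Fisher z-transform."""
    r = np.corrcoef(x, y)[0, 1]
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(len(x) - 3)
    return r, np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

# Simulated stand-in: FFQ-estimated LCn-3 PUFA intake vs. adipose tissue
# content for n = 239 women (matching the study's sample size only).
rng = np.random.default_rng(0)
ffq = rng.normal(0.9, 0.5, 239)
tissue = 0.4 * ffq + rng.normal(0, 0.5, 239)

r, lo, hi = pearson_with_ci(ffq, tissue)
print(f"r = {r:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Averaging repeated FFQ assessments, as the authors report, reduces within-person measurement error in the intake estimate and so tends to raise this correlation.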

  1. Validity and practicability of smartphone-based photographic food records for estimating energy and nutrient intake.

    Science.gov (United States)

    Kong, Kaimeng; Zhang, Lulu; Huang, Lisu; Tao, Yexuan

    2017-05-01

    Image-assisted dietary assessment methods are frequently used to record individual eating habits. This study tested the validity of a smartphone-based photographic food recording approach by comparing the results obtained with those of a weighed food record. We also assessed the practicality of the method by using it to measure the energy and nutrient intake of college students. The experiment was implemented in two phases, each lasting 2 weeks. In the first phase, a labelled menu and a photograph database were constructed. The energy and nutrient content of 31 randomly selected dishes in three different portion sizes were then estimated by the photograph-based method and compared with a weighed food record. In the second phase, we combined the smartphone-based photographic method with the WeChat smartphone application and applied this to 120 randomly selected participants to record their energy and nutrient intake. The Pearson correlation coefficients for energy, protein, fat, and carbohydrate content between the weighed and the photographic food record were 0.997, 0.936, 0.996, and 0.999, respectively. Bland-Altman plots showed good agreement between the two methods. The estimated protein, fat, and carbohydrate intake by participants was in accordance with values in the Chinese Residents' Nutrition and Chronic Disease report (2015). Participants expressed satisfaction with the new approach and the compliance rate was 97.5%. The smartphone-based photographic dietary assessment method combined with the WeChat instant messaging application was effective and practical for use by young people.
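The Bland-Altman agreement analysis used in this record can be sketched as follows, with simulated dish-level energy values rather than the study's measurements:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Simulated energy values (kcal): weighed record vs. photograph-based record
# for 31 dishes (matching the study's count; values are illustrative only).
rng = np.random.default_rng(0)
weighed = rng.uniform(150, 800, 31)
photo = weighed + rng.normal(0, 15, 31)  # small random disagreement

bias, lower, upper = bland_altman(photo, weighed)
print(f"bias {bias:.1f} kcal, limits of agreement [{lower:.1f}, {upper:.1f}]")
```

Unlike the Pearson correlations also reported here, which measure linear association, the Bland-Altman limits quantify how far an individual photographic estimate may plausibly deviate from the weighed reference.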

  2. Model-based PSF and MTF estimation and validation from skeletal clinical CT images.

    Science.gov (United States)

    Pakdel, Amirreza; Mainprize, James G; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M

    2014-01-01

    A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread function (PSF). This study validates the accuracy of the PSFs estimated with this method from various clinical CT images featuring cortical bones. Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and along the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates depended on whether the images were reconstructed with a standard kernel (Toshiba's FC68: mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high-resolution bone kernel (Toshiba's FC81: PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high-resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge of the scanner-specific parameters.
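Because the estimated PSFs are Gaussian, the PSF FWHM (mm) and the MTF levels (cycles/mm) quoted above are linked in closed form; a sketch of that relationship under the Gaussian assumption (the σ value is illustrative):

```python
import math

def fwhm_from_sigma(sigma):
    """FWHM (mm) of a Gaussian PSF with standard deviation sigma (mm)."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

def mtf(f, sigma):
    """MTF of a Gaussian PSF at spatial frequency f (cycles/mm)."""
    return math.exp(-2.0 * (math.pi * sigma * f) ** 2)

def freq_at_mtf(level, sigma):
    """Spatial frequency (cycles/mm) where the MTF falls to a given level."""
    return math.sqrt(-math.log(level)) / (math.pi * sigma * math.sqrt(2.0))

sigma = 0.4  # illustrative in-plane PSF width, mm
print(round(fwhm_from_sigma(sigma), 2))    # → 0.94
print(round(freq_at_mtf(0.50, sigma), 2))  # 50% MTF frequency → 0.47
```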

  5. Solar radiation estimation based on the insolation

    International Nuclear Information System (INIS)

    Assis, F.N. de; Steinmetz, S.; Martins, S.R.; Mendez, M.E.G.

    1998-01-01

    A series of daily global solar radiation data measured by an Eppley pyranometer was used to test PEREIRA and VILLA NOVA's (1997) model for estimating the potential radiation based on the instantaneous values measured at solar noon. The model also allows estimation of the parameters of PRESCOTT's equation (1940) assuming a = 0.29 cos φ. The results demonstrated the model's validity for the studied conditions. Simultaneously, the hypothesis of generalizing the use of insolation-based radiation estimation formulas, using K = Ko(0.29 cos φ + 0.50 n/N), was analysed and confirmed.
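The generalized Prescott form K = Ko(0.29 cos φ + 0.50 n/N) can be sketched directly; the latitude and sunshine-duration values below are illustrative, not from the study:

```python
import math

def prescott_ratio(lat_deg, n_hours, big_n_hours):
    """K/Ko = 0.29*cos(latitude) + 0.50*(n/N), where n is the measured sunshine
    duration and N is the astronomically possible sunshine duration."""
    return 0.29 * math.cos(math.radians(lat_deg)) + 0.50 * (n_hours / big_n_hours)

# A day with 8 h of sunshine out of a possible 12 h at latitude 31.8°:
print(round(prescott_ratio(31.8, 8.0, 12.0), 2))  # fraction of Ko reaching the surface → 0.58
```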

  6. Estimating and validating ground-based timber harvesting production through computer simulation

    Science.gov (United States)

    Jingxin Wang; Chris B. LeDoux

    2003-01-01

    Estimating ground-based timber harvesting systems production with an object oriented methodology was investigated. The estimation model developed generates stands of trees, simulates chain saw, drive-to-tree feller-buncher, swing-to-tree single-grip harvester felling, and grapple skidder and forwarder extraction activities, and analyzes costs and productivity. It also...

  7. Validity of eyeball estimation for range of motion during the cervical flexion rotation test compared to an ultrasound-based movement analysis system.

    Science.gov (United States)

    Schäfer, Axel; Lüdtke, Kerstin; Breuel, Franziska; Gerloff, Nikolas; Knust, Maren; Kollitsch, Christian; Laukart, Alex; Matej, Laura; Müller, Antje; Schöttker-Königer, Thomas; Hall, Toby

    2018-08-01

    Headache is a common and costly health problem. Although the pathogenesis of headache is heterogeneous, one reported contributing factor is dysfunction of the upper cervical spine. The flexion rotation test (FRT) is a commonly used diagnostic test to detect upper cervical movement impairment. The aim of this cross-sectional study was to investigate concurrent validity of detecting high cervical ROM impairment during the FRT by comparing measurements established by an ultrasound-based system (gold standard) with eyeball estimation. The secondary aim was to investigate intra-rater reliability of FRT ROM eyeball estimation. The examiner (6 years of experience) was blinded to the data from the ultrasound-based device and to the symptoms of the patients. The FRT test result (positive or negative) was based on visual estimation of a range of rotation less than 34° to either side. Concurrently, range of rotation was evaluated using the ultrasound-based device. A total of 43 subjects with headache (79% female), mean age 35.05 years (SD 13.26), were included. According to the International Headache Society Classification, 23 subjects had migraine, 4 tension-type headache, and 16 multiple headache forms. Sensitivity and specificity were 0.96 and 0.89 for combined rotation, indicating good concurrent validity. The area under the ROC curve was 0.95 (95% CI 0.91-0.98) for rotation to both sides. Intra-rater reliability for eyeball estimation was excellent, with Fleiss' kappa of 0.79 for both right and left rotation. The results of this study indicate that the FRT is a valid and reliable test to detect impairment of upper cervical ROM in patients with headache.
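Sensitivity 0.96 and specificity 0.89 are simple ratios from a 2×2 table; a sketch with hypothetical counts chosen to reproduce those values (the abstract does not report the raw table):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 24 of 25 impaired sides flagged positive,
# 16 of 18 unimpaired sides flagged negative.
sens, spec = sens_spec(tp=24, fn=1, tn=16, fp=2)
print(round(sens, 2), round(spec, 2))  # → 0.96 0.89
```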

  8. Estimating activity energy expenditure: how valid are physical activity questionnaires?

    Science.gov (United States)

    Neilson, Heather K; Robson, Paula J; Friedenreich, Christine M; Csizmadi, Ilona

    2008-02-01

    Activity energy expenditure (AEE) is the modifiable component of total energy expenditure (TEE) derived from all activities, both volitional and nonvolitional. Because AEE may affect health, there is interest in its estimation in free-living people. Physical activity questionnaires (PAQs) could be a feasible approach to AEE estimation in large populations, but it is unclear whether or not any PAQ is valid for this purpose. Our aim was to explore the validity of existing PAQs for estimating usual AEE in adults, using doubly labeled water (DLW) as a criterion measure. We reviewed 20 publications that described PAQ-to-DLW comparisons, summarized study design factors, and appraised criterion validity using mean differences (AEE(PAQ) - AEE(DLW), or TEE(PAQ) - TEE(DLW)), 95% limits of agreement, and correlation coefficients (AEE(PAQ) versus AEE(DLW) or TEE(PAQ) versus TEE(DLW)). Only 2 of 23 PAQs assessed most types of activity over the past year and indicated acceptable criterion validity, with mean differences (TEE(PAQ) - TEE(DLW)) of 10% and 2% and correlation coefficients of 0.62 and 0.63, respectively. At the group level, neither overreporting nor underreporting was more prevalent across studies. We speculate that, aside from reporting error, discrepancies between PAQ and DLW estimates may be partly attributable to 1) PAQs not including key activities related to AEE, 2) PAQs and DLW ascertaining different time periods, or 3) inaccurate assignment of metabolic equivalents to self-reported activities. Small sample sizes, use of correlation coefficients, and limited information on individual validity were problematic. Future research should address these issues to clarify the true validity of PAQs for estimating AEE.

  9. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about instruments. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits and the set of items that represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 of 13 articles documented content validity, doing so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item based on relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of 38 items, those with CVI over 0.75 remained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
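The item-screening rule described (retain items with CVI over 0.75) can be sketched as follows; the expert ratings below are invented for illustration:

```python
def item_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4
    on a 4-point relevance scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Hypothetical ratings from three experts for four candidate items:
items = {"item1": [4, 4, 3], "item2": [2, 3, 2], "item3": [3, 4, 4], "item4": [4, 2, 3]}
retained = [name for name, r in items.items() if item_cvi(r) > 0.75]
print(retained)  # items kept under the CVI > 0.75 rule → ['item1', 'item3']
```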

  10. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
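The core idea, estimating residual variance from differences so that the smooth nonparametric component cancels, can be illustrated with the classical first-order (Rice, 1984) difference estimator; the paper's estimator, a regression over several difference-based estimators, refines this sketch:

```python
import math
import random

def rice_variance(y):
    """First-order difference-based residual variance estimate:
    sum((y[i+1]-y[i])^2) / (2*(n-1)). Differencing removes a smooth trend,
    leaving (approximately) twice the noise variance per squared difference."""
    n = len(y)
    return sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1)) / (2.0 * (n - 1))

random.seed(0)
n = 2000
x = [i / n for i in range(n)]
# Smooth signal plus Gaussian noise with true variance 0.5^2 = 0.25:
y = [math.sin(2 * math.pi * t) + random.gauss(0, 0.5) for t in x]
print(round(rice_variance(y), 2))  # close to the true 0.25
```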

  12. View Estimation Based on Value System

    Science.gov (United States)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

    Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward, that is, he/she updates the model of his/her own estimated view during behavior imitated from observation of the caregiver, based on minimizing the estimation error of the reward during the behavior. From this view, this paper shows a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver, is discussed.
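The update rule described, parameters adjusted in proportion to the TD error, has the familiar linear form; a minimal sketch (names and values are ours, not the paper's implementation):

```python
def td_update(w, features, reward, v_next, v_now, alpha=0.1, gamma=0.95):
    """One temporal-difference step: compute the TD error and move the
    parameter vector along the feature vector in proportion to it."""
    td_error = reward + gamma * v_next - v_now
    w_new = [wi + alpha * td_error * fi for wi, fi in zip(w, features)]
    return w_new, td_error

# A single update from a zero-initialized estimator after receiving reward 1:
w, err = td_update([0.0, 0.0], [1.0, 0.0], reward=1.0, v_next=0.0, v_now=0.0)
print(w, err)  # → [0.1, 0.0] 1.0
```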

  13. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).

  14. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    Science.gov (United States)

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. 
In addition, to assess the accuracy of each method in estimating

  15. Observer-Based Human Knee Stiffness Estimation.

    Science.gov (United States)

    Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen

    2017-05-01

    We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and the torques of the muscle-tendon complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of dominating model nonlinearities and nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and in a test bench, specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation, even in cases of knee stiffening due to antagonistic coactivation. We have shown the principle function of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.

  16. Valid and efficient manual estimates of intracranial volume from magnetic resonance images

    International Nuclear Information System (INIS)

    Klasson, Niklas; Olsson, Erik; Rudemo, Mats; Eckerström, Carl; Malmgren, Helge; Wallin, Anders

    2015-01-01

    Manual segmentations of the whole intracranial vault in high-resolution magnetic resonance images are often regarded as very time-consuming. Therefore it is common to only segment a few linearly spaced intracranial areas to estimate the whole volume. The purpose of the present study was to evaluate how the validity of intracranial volume estimates is affected by the chosen interpolation method, orientation of the intracranial areas and the linear spacing between them. Intracranial volumes were manually segmented on 62 participants from the Gothenburg MCI study using 1.5 T T1-weighted magnetic resonance images. Estimates of the intracranial volumes were then derived using subsamples of linearly spaced coronal, sagittal or transversal intracranial areas from the same volumes. The subsamples of intracranial areas were interpolated into volume estimates by three different interpolation methods. The linear spacing between the intracranial areas ranged from 2 to 50 mm and the validity of the estimates was determined by comparison with the entire intracranial volumes. A progressive decrease in intra-class correlation and an increase in percentage error could be seen with increased linear spacing between intracranial areas. With small linear spacing (≤15 mm), orientation of the intracranial areas and interpolation method had negligible effects on the validity. With larger linear spacing, the best validity was achieved using cubic spline interpolation with either coronal or sagittal intracranial areas. Even at a linear spacing of 50 mm, cubic spline interpolation on either coronal or sagittal intracranial areas had a mean absolute agreement intra-class correlation with the entire intracranial volumes above 0.97. Cubic spline interpolation in combination with linearly spaced sagittal or coronal intracranial areas overall resulted in the most valid and robust estimates of intracranial volume. Using this method, valid ICV estimates could be obtained in less than five
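The effect of sampling spaced slice areas and interpolating can be sketched on a synthetic "head" (a 75 mm sphere); linear interpolation keeps the sketch short, whereas the study favours cubic splines:

```python
import math

def sphere_area(z, r=75.0):
    """Cross-sectional area (mm^2) of a sphere of radius r at height z (mm)."""
    return math.pi * max(r * r - z * z, 0.0)

slice_mm = 1.0
zs = [z * slice_mm for z in range(-75, 76)]
full_volume = sum(sphere_area(z) * slice_mm for z in zs)  # "segment every slice"

# Estimate from areas sampled every 15 mm, linearly interpolated (trapezoids):
spacing = 15
sampled = zs[::spacing]
areas = [sphere_area(z) for z in sampled]
est = sum((areas[i] + areas[i + 1]) / 2.0 * (sampled[i + 1] - sampled[i])
          for i in range(len(sampled) - 1))

pct_err = 100.0 * abs(est - full_volume) / full_volume
print(round(pct_err, 1))  # percentage error of the spaced-slice estimate → 1.0
```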

  17. Validation of limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring - is a separate validation group required?

    NARCIS (Netherlands)

    Proost, J. H.

    Objective: Limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring are usually validated in a separate group of patients, according to published guidelines. The aim of this study is to evaluate the validation of LSM by comparing independent validation with cross-validation

  18. MR-based Water Content Estimation in Cartilage: Design and Validation of a Method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sofie; Ringgaard, Steffen

    2012-01-01

    Objective Design and validation of an MR-based method that allows the calculation of the water content in cartilage tissue. Material and Methods We modified and adapted to cartilage tissue T1 map based water content MR sequences commonly used in the neurology field. Using a 37 Celsius degree stable...... was customized and programmed. Finally, we validated the method after measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained data were analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique allows removal of all the water that a tissue...... contains) and we measured the water they contained. Results We could reproduce twice the 37 Celsius degree system and could perform the measurements in a similar way. We found that the MR T1 map based water content sequences can provide information that, after being analyzed with special software, can

  19. Well-founded cost estimation validated by experience

    International Nuclear Information System (INIS)

    LaGuardia, T.S.

    2005-01-01

    Full text: Reliable cost estimating is one of the most important elements of decommissioning planning. Alternative technologies may be evaluated and compared based on their efficiency and effectiveness, and measured against a baseline cost as to the feasibility and benefits derived from the technology. When the plan is complete, those cost considerations ensure that it is economically sound and practical for funding. Estimates of decommissioning costs have been performed and published by many organizations for many different applications. The results often vary because of differences in the work scope: labor force cost, monetary considerations, oversight costs, the specific contaminated materials involved, the waste stream and peripheral costs associated with that type of waste, or applicable environmental compliance requirements. Many of these differences are unavoidable since a reasonable degree of reliability and accuracy can only be achieved by developing decommissioning cost estimates on a case-by-case, site-specific basis. This paper describes the estimating methodology and process applied to develop decommissioning cost estimates. A major effort has been made to standardize these methodologies, and to understand the assumptions and bases that drive the costs. However, estimates are only as accurate as the information available from which to derive the costs. This information includes the assumptions of scope of the work, labor cost inputs, inflationary effects, and financial analyses that project these costs to the year of expenditure. Attempts at comparison of estimates for two facilities of similar design and size must clearly identify the assumptions used in developing the estimate, and comparison of actual costs versus estimated costs must reflect these same assumptions. For the nuclear industry to grow, decommissioning estimating tools must improve to keep pace with changing technology, regulations and stakeholder issues.
The decommissioning industry needs

  20. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  1. A mathematical method for verifying the validity of measured information about the flows of energy resources based on the state estimation theory

    Science.gov (United States)

    Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.

    2015-11-01

    Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex should begin with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements to reduce payment for energy resources on the consumer's side, which leads to commercial losses of energy resources. The article presents a universal mathematical method for verifying the validity of measurement information in networks for transporting energy resources, such as electricity and heat, petroleum, gas, etc., based on state estimation theory. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of the energy resource flows for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy all state equations describing the energy resource transportation network. The state equations written in terms of the calculated estimates are free from residuals. The difference between a measurement and its calculated analog (estimate) is called a residual in estimation theory. Large residuals are an indicator of gross errors in particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the transportation network observability, to eliminate
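On a single node the state-estimation idea reduces to least-squares reconciliation of measured flows under a conservation constraint; a toy sketch with equal measurement weights and invented metered values:

```python
def reconcile_node(m_in, m_out1, m_out2):
    """Least-squares reconciliation of three equally weighted flow measurements
    around one node, subject to conservation: in = out1 + out2.
    Returns the estimates and the balance residual used to screen bad meters."""
    r = m_in - m_out1 - m_out2          # node imbalance of the raw measurements
    est = (m_in - r / 3.0, m_out1 + r / 3.0, m_out2 + r / 3.0)
    return est, r

# Hypothetical metered flows (MW): the inflow meter over-reads by ~3 MW.
(e_in, e1, e2), residual = reconcile_node(103.0, 60.0, 40.0)
print(residual, e_in, e1, e2)  # → 3.0 102.0 61.0 41.0
```

The estimates satisfy the state equation exactly (e_in == e1 + e2), while the residual of 3 MW flags the suspect measurement, mirroring how large residuals indicate gross measurement errors in the full network formulation.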

  2. Validation of equations for pleural effusion volume estimation by ultrasonography.

    Science.gov (United States)

    Hassan, Maged; Rizk, Rana; Essam, Hatem; Abouelnour, Ahmed

    2017-12-01

    To validate the accuracy of previously published equations that estimate pleural effusion volume using ultrasonography. Only equations using simple measurements were tested. Three measurements were taken at the posterior axillary line for each case with effusion: lateral height of the effusion (H), distance between the collapsed lung and the chest wall (C), and distance between the lung and the diaphragm (D). Cases whose effusion was aspirated to dryness were included, and the drained volume was recorded. The intra-class correlation coefficient (ICC) was used to determine the predictive accuracy of five equations against the actual volume of aspirated effusion. 46 cases with effusion were included. The most accurate equation in predicting effusion volume was (H + D) × 70 (ICC 0.83). The simplest yet still accurate equation was H × 100 (ICC 0.79). Pleural effusion height measured by ultrasonography gives a reasonable estimate of effusion volume. Incorporating the distance between the lung base and the diaphragm into the estimation improves accuracy from 79% with the simpler method to 83% with the latter.
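The two best-performing equations above are simple enough to state as plain functions; the example measurements are hypothetical, and the cm-in, mL-out units are the customary convention assumed here.

```python
# The two best-performing equations above, as plain functions.  Inputs are
# ultrasound distances in cm; outputs are volumes in mL (assumed units).

def volume_h_plus_d(h_cm, d_cm):
    """Most accurate equation in the study: (H + D) x 70, ICC 0.83."""
    return (h_cm + d_cm) * 70.0

def volume_h_only(h_cm):
    """Simplest equation in the study: H x 100, ICC 0.79."""
    return h_cm * 100.0

# Hypothetical measurements: H = 6 cm, D = 2 cm
print(volume_h_plus_d(6.0, 2.0))   # → 560.0
print(volume_h_only(6.0))          # → 600.0
```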

  3. Validity and reliability of Nike + Fuelband for estimating physical activity energy expenditure.

    Science.gov (United States)

    Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A

    2015-01-01

    The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants (n = 24) completed two 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13 %; SWA = 18 ± 18 %). Test-retest reliability for PAEE indicated good stability for the Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike + Fuelband provided valid and reliable estimates of PAEE, similar to the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate- and vigorous-intensity physical activity.
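The error metric quoted above, mean absolute percent error of device estimates against the criterion, can be sketched directly; the kcal values below are invented illustrations, not study data.

```python
# Mean absolute percent error (MAPE) of device PAEE estimates against the
# criterion measure; the kcal values here are invented illustrations.

def mape(estimates, criterion):
    errors = [abs(e - c) / c * 100.0 for e, c in zip(estimates, criterion)]
    return sum(errors) / len(errors)

device = [250.0, 230.0, 260.0]    # device PAEE, kcal (invented)
oxycon = [243.0, 243.0, 243.0]    # criterion PAEE, kcal (invented)
print(round(mape(device, oxycon), 1))   # → 5.1
```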

  4. Parameter extraction and estimation based on the PV panel outdoor ...

    African Journals Online (AJOL)

    The experimental data obtained are validated and compared with the estimated results obtained through simulation based on the manufacturer's data sheet. The simulation is based on the Newton-Raphson iterative method in the MATLAB environment. This approach aids the computation of the PV module's parameters at any ...
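A minimal Newton-Raphson sketch of the single-diode PV equation the record alludes to (the study used MATLAB with datasheet values; every parameter value below is an illustrative assumption, not a manufacturer figure).

```python
import math

# Newton-Raphson on the single-diode equation
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1)
# solved for the module current I at a given voltage V.  All parameter
# values are illustrative assumptions, not datasheet values.

def pv_current(v, iph=8.0, i0=1e-9, rs=0.01, n=1.3, vt=0.0259,
               iters=50, tol=1e-10):
    i = iph                                   # start from the photocurrent
    for _ in range(iters):
        arg = (v + i * rs) / (n * vt)
        f = iph - i0 * (math.exp(arg) - 1.0) - i
        df = -i0 * math.exp(arg) * rs / (n * vt) - 1.0
        step = f / df
        i -= step
        if abs(step) < tol:
            break
    return i

print(pv_current(0.0))   # short-circuit current, close to Iph = 8 A
print(pv_current(0.5))   # slightly lower once the diode conducts
```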

  5. Online Internal Temperature Estimation for Lithium-Ion Batteries Based on Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jinlei Sun

    2015-05-01

    Estimating the internal temperature of a battery is important for thermal safety in applications, because the internal temperature is hard to measure directly. In this work, an online internal temperature estimation method based on a simplified thermal model using a Kalman filter is proposed. As an improvement, the influences of entropy change and overpotential on heat generation are analyzed quantitatively. The model parameters are identified through a current pulse test. Charge/discharge experiments under different current rates are carried out on the same battery to verify the estimation results. The internal and surface temperatures are measured with thermocouples for result validation and model construction. The accuracy of the estimated result is validated, with a maximum estimation error of around 1 K.
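A scalar Kalman-filter sketch in the spirit of this method: a lumped thermal model drives the predict step and noisy temperature readings correct it. The thermal parameters and noise levels are assumed for illustration, not identified from a real cell as in the paper.

```python
import random

# Scalar Kalman-filter sketch: lumped thermal model (assumed parameters)
# in the predict step, noisy temperature readings in the update step.

def kalman_temperature(measurements, dt=1.0, t_amb=25.0,
                       q_gen=2.0, c_th=100.0, r_th=200.0,
                       q_var=0.01, r_var=0.25):
    t_est, p = t_amb, 1.0                      # initial state and covariance
    history = []
    for z in measurements:
        # predict: generated heat warms the cell, r_th relaxes it to ambient
        t_est += dt * (q_gen / c_th - (t_est - t_amb) / r_th)
        p += q_var
        # update: blend the prediction with the measurement
        k = p / (p + r_var)
        t_est += k * (z - t_est)
        p *= (1.0 - k)
        history.append(t_est)
    return history

random.seed(0)
truth = [25.0 + 0.02 * k for k in range(100)]          # slow warm-up, degC
noisy = [t + random.gauss(0.0, 0.5) for t in truth]    # sensor noise
est = kalman_temperature(noisy)
print(round(est[-1], 2))   # tracks the ~27 degC end temperature
```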

  6. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    Science.gov (United States)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility in use at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-altitude helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  7. Validity of a self-administered food frequency questionnaire (FFQ) and its generalizability to the estimation of dietary folate intake in Japan

    Directory of Open Access Journals (Sweden)

    Iso Hiroyasu

    2005-10-01

    Abstract Background In an epidemiological study, it is essential to test the validity of the food frequency questionnaire (FFQ) for its ability to estimate dietary intake. The objectives of our study were to 1) validate a FFQ for estimating folate intake, and 2) identify the foods that contribute to inter-individual variation of folate intake in the Japanese population. Methods Validity of the FFQ was evaluated using 28-day weighed dietary records (DRs) as the gold standard in two groups independently. In the group for which the FFQ was developed, validity was evaluated by Spearman's correlation coefficients (CCs), and linear regression analysis was used to identify foods with large inter-individual variation. The cumulative mean intake of these foods was compared with total intake estimated by the DR. The external validity of the FFQ and intake from foods on the same list were evaluated in the other group to verify generalizability. Subjects were a subsample from the Japan Public Health Center-based Prospective Study who volunteered to participate in the FFQ validation study. Results CCs for the internal validity of the FFQ were 0.49 for men and 0.29 for women, while CCs for external validity were 0.33 for men and 0.42 for women. CCs for cumulative folate intake from 33 foods selected by regression analysis were also applicable to an external population. Conclusion Our FFQ was valid for and generalizable to the estimation of folate intake. Foods identified as predictors of inter-individual variation in folate intake were also generalizable in Japanese populations. The FFQ with 138 foods was valid for the estimation of folate intake, while that with 33 foods might be useful for estimating inter-individual variation and ranking of individual folate intake.
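The validity statistic used above is Spearman's rank correlation between FFQ-estimated and dietary-record intakes. A minimal tie-free implementation, with invented intake values:

```python
# Spearman's rank correlation between FFQ and dietary-record folate
# intakes (tie-free sketch; intake values are invented, in ug/day).

def ranks(values):
    """1-based ranks, assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

ffq    = [210.0, 180.0, 350.0, 260.0, 300.0]   # FFQ estimates (invented)
record = [230.0, 170.0, 240.0, 330.0, 310.0]   # dietary records (invented)
print(round(spearman(ffq, record), 2))   # → 0.6
```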

  8. Validity of a self-administered food frequency questionnaire (FFQ) and its generalizability to the estimation of dietary folate intake in Japan

    Science.gov (United States)

    Ishihara, Junko; Yamamoto, Seiichiro; Iso, Hiroyasu; Inoue, Manami; Tsugane, Shoichiro

    2005-01-01

    Background In an epidemiological study, it is essential to test the validity of the food frequency questionnaire (FFQ) for its ability to estimate dietary intake. The objectives of our study were to 1) validate a FFQ for estimating folate intake, and 2) identify the foods that contribute to inter-individual variation of folate intake in the Japanese population. Methods Validity of the FFQ was evaluated using 28-day weighed dietary records (DRs) as the gold standard in two groups independently. In the group for which the FFQ was developed, validity was evaluated by Spearman's correlation coefficients (CCs), and linear regression analysis was used to identify foods with large inter-individual variation. The cumulative mean intake of these foods was compared with total intake estimated by the DR. The external validity of the FFQ and intake from foods on the same list were evaluated in the other group to verify generalizability. Subjects were a subsample from the Japan Public Health Center-based Prospective Study who volunteered to participate in the FFQ validation study. Results CCs for the internal validity of the FFQ were 0.49 for men and 0.29 for women, while CCs for external validity were 0.33 for men and 0.42 for women. CCs for cumulative folate intake from 33 foods selected by regression analysis were also applicable to an external population. Conclusion Our FFQ was valid for and generalizable to the estimation of folate intake. Foods identified as predictors of inter-individual variation in folate intake were also generalizable in Japanese populations. The FFQ with 138 foods was valid for the estimation of folate intake, while that with 33 foods might be useful for estimating inter-individual variation and ranking of individual folate intake. PMID:16202175

  9. Online Synchrophasor-Based Dynamic State Estimation using Real-Time Digital Simulator

    DEFF Research Database (Denmark)

    Khazraj, Hesam; Adewole, Adeyemi Charles; Udaya, Annakkage

    2018-01-01

    Dynamic state estimation is a very important control center application used in the dynamic monitoring of state variables. This paper presents and validates a time-synchronized phasor measurement unit (PMU)-based method for dynamic state estimation with the unscented Kalman filter (UKF), using the real-time digital simulator (RTDS). The dynamic state variables of the system are the rotor angles and speeds of the generators. The performance of the UKF method is tested with PMU measurements as inputs using the IEEE 14-bus test system. This test system was modeled in the RSCAD software and tested in real time using the RTDS. The dynamic state variables of multi-machine systems are monitored and measured for the study of the transient behavior of power systems.

  10. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    Science.gov (United States)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of it over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still carries many uncertainties arising from the model mechanism, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) with known certainty is required but difficult. As a result, it is indispensable to develop validation methods that quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as the water-balanced evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared to multi-scale evapotranspiration measurements from the EC and LAS, respectively, with the footprint model over three typical landscapes. Although some …

  11. Development and validation of a CFD based methodology to estimate the pressure loss of flow through perforated plates

    International Nuclear Information System (INIS)

    Barros Filho, Jose A.; Navarro, Moyses A.; Santos, Andre A.C. dos; Jordao, E.

    2011-01-01

    In spite of the recent great development of Computational Fluid Dynamics (CFD), there are still open questions about how to assess its accuracy. This work presents the validation of a CFD methodology devised to estimate the pressure drop of water flow through perforated plates similar to the ones used in some reactor core components. This was accomplished by comparing the results of CFD simulations against experimental data for 5 perforated plates with different geometric characteristics. The proposed methodology correlates the experimental data within a range of ± 7.5%. The validation procedure recommended by the ASME Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer (V&V 20) is also evaluated; the conclusion is that it is not adequate for this specific use. (author)
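The quantity being correlated is a loss coefficient relating pressure drop to the dynamic pressure in the plate holes, Δp = K·ρ·v_h²/2. A sketch with an assumed placeholder K (the study's fitted values are not reproduced here):

```python
# Pressure loss through a perforated plate: Delta_p = K * rho * v_h**2 / 2,
# with v_h the velocity in the holes from continuity.  K = 2.0 is an
# assumed placeholder loss coefficient, not the study's fitted value.

def pressure_drop(v_approach, porosity, k=2.0, rho=998.0):
    """Pressure drop in Pa; porosity = open area / total plate area."""
    v_hole = v_approach / porosity       # continuity: flow squeezes into holes
    return k * rho * v_hole ** 2 / 2.0

print(pressure_drop(v_approach=1.0, porosity=0.4))   # → 6237.5 Pa
```

Note the quadratic dependence on velocity: doubling the approach velocity quadruples the predicted pressure drop.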

  12. Validation of statistical models for estimating hospitalization associated with influenza and other respiratory viruses.

    Directory of Open Access Journals (Sweden)

    Lin Yang

    BACKGROUND: Reliable estimates of the disease burden associated with respiratory viruses are key to the deployment of preventive strategies such as vaccination and to resource allocation. Such estimates are particularly needed in tropical and subtropical regions, where some methods commonly used in temperate regions are not applicable. While a number of alternative approaches to assess the influenza-associated disease burden have recently been reported, none of these models has been validated with virologically confirmed data. Even fewer methods have been developed for other common respiratory viruses such as respiratory syncytial virus (RSV), parainfluenza, and adenovirus. METHODS AND FINDINGS: We had recently conducted a prospective population-based study of virologically confirmed hospitalization for acute respiratory illnesses in persons <18 years residing in Hong Kong Island. Here we used this dataset to validate two commonly used models for estimation of influenza disease burden, namely the rate difference model and the Poisson regression model, and also explored the applicability of these models to estimate the disease burden of other respiratory viruses. The Poisson regression models with different link functions all yielded estimates well correlated with the virologically confirmed influenza-associated hospitalization, especially in children older than two years. The disease burden estimates for RSV, parainfluenza, and adenovirus were less reliable, with wide confidence intervals. The rate difference model was not applicable to RSV, parainfluenza, and adenovirus, and grossly underestimated the true burden of influenza-associated hospitalization. CONCLUSION: The Poisson regression model generally produced satisfactory estimates of the disease burden of respiratory viruses in a subtropical region such as Hong Kong.
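The rate-difference model mentioned above reduces to simple arithmetic: excess hospitalizations = (epidemic-period rate − baseline rate) × number of epidemic weeks. The weekly counts below are invented for illustration.

```python
# Rate-difference model: influenza-attributed hospitalizations equal the
# epidemic-period rate minus the baseline rate, times the epidemic length.
# Weekly admission counts below are invented.

def rate_difference_burden(epidemic_weeks, baseline_weeks):
    epi_rate = sum(epidemic_weeks) / len(epidemic_weeks)
    base_rate = sum(baseline_weeks) / len(baseline_weeks)
    return (epi_rate - base_rate) * len(epidemic_weeks)

epidemic = [40, 55, 62, 48]             # weeks with influenza circulating
baseline = [20, 18, 25, 21, 19, 23]     # non-epidemic weeks
print(rate_difference_burden(epidemic, baseline))   # → 121.0
```

The validation above found this simple subtraction underestimates the influenza burden, since influenza can elevate admissions even outside defined epidemic weeks.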

  13. A comparative study of soft sensor design for lipid estimation of microalgal photobioreactor system with experimental validation.

    Science.gov (United States)

    Yoo, Sung Jin; Jung, Dong Hwi; Kim, Jung Hun; Lee, Jong Min

    2015-03-01

    This study examines the applicability of various nonlinear estimators for online estimation of the lipid concentration in a microalgae cultivation system. Lipid is a useful bio-product with many applications, including biofuels and bioactives. However, improving lipid productivity through real-time monitoring and control with experimental validation is limited because measuring lipid in microalgae is a difficult and time-consuming task. In this study, estimation of lipid concentration from other measurable sources, such as biomass or glucose sensors, was studied. The extended Kalman filter (EKF), unscented Kalman filter (UKF), and particle filter (PF) were compared in various cases for their applicability to photobioreactor systems. Furthermore, simulation studies to identify appropriate types of sensors for estimating lipid were also performed. Based on the case studies, the most effective case was validated with experimental data, and it was found that the UKF and PF with time-varying system noise covariance are effective for the microalgal photobioreactor system. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Fetal QRS detection and heart rate estimation: a wavelet-based approach

    International Nuclear Information System (INIS)

    Almeida, Rute; Rocha, Ana Paula; Gonçalves, Hernâni; Bernardes, João

    2014-01-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. (paper)

  15. Experimental study on the plant state estimation for the condition-based maintenance

    International Nuclear Information System (INIS)

    Harada, J. I.; Takahashi, M.; Kitamura, M.; Wakabayashi, T.

    2006-01-01

    A framework for a maintenance support system based on plant state estimation using diverse methods has been proposed, and the validity of the plant state estimation methods has been experimentally evaluated. The focus has been set on the construction of the BN for an objective system with scale and complexity comparable to real-world systems. Another focus has been set on the other functions of the maintenance support system, such as the signal processing tool and similarity matching. The validity of the proposed inference method has been confirmed through numerical experiments. (authors)

  16. Enhanced closed loop State of Charge estimator for lithium-ion batteries based on Extended Kalman Filter

    International Nuclear Information System (INIS)

    Pérez, Gustavo; Garmendia, Maitane; Reynaud, Jean François; Crego, Jon; Viscarret, Unai

    2015-01-01

    Highlights: • Based on a general model valid over the full range of SOC, considering varied dynamics. • Integration of an accurate OCV model in the EKF, taking the hysteresis effect into account. • Experimental validation with different current profiles: pulses, EV and lift. • Validated with a specifically designed profile demanding accurate OCV modeling. - Abstract: Accurate State of Charge (SOC) estimation in a Li-ion battery requires a suitable model of the cell behavior. In this work an enhanced closed-loop estimator based on the Extended Kalman Filter (EKF) is proposed, considering a precise model of the cell dynamics valid for different current profiles and SOCs, and a complete model of the Open Circuit Voltage (OCV) which takes the hysteresis influence into account. The employed model and proposed estimator are validated with experimental results obtained from the response of a 40 Ah NMC Li-ion cell to several current profiles. These tests include current pulses, FUDS driving cycles, residential lift profiles, and specially designed profiles which demand accurate modeling of the transitions between OCV boundaries. In each case, it is demonstrated that the enhanced model can reduce the estimation error by nearly half compared to an estimator that ignores the hysteresis effect. Furthermore, the good performance of the cell dynamics model allows an accurate and stable estimation over different conditions.

  17. Development and validation of satellite-based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2016-02-01

    A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5 % for classifying clear (V ≥ 30 km), moderate (10 km ≤ V < 30 km), and V < 10 km conditions … United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.
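The statistical core of the retrieval is an ordinary multiple linear regression from satellite and NWP predictors to ASOS visibility. A self-contained least-squares sketch with invented predictor columns (intercept, aerosol optical depth, low-cloud probability):

```python
# Ordinary least squares via the normal equations, solved by Gaussian
# elimination; suitable only for tiny, well-conditioned problems.
# Predictor and visibility values below are invented.

def ols(X, y):
    rows, cols = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(rows)) for j in range(cols)]
         for i in range(cols)]                                 # A = X^T X
    b = [sum(X[r][i] * y[r] for r in range(rows)) for i in range(cols)]
    for i in range(cols):                                      # elimination
        p = max(range(i, cols), key=lambda r: abs(A[r][i]))    # partial pivot
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, cols):
            f = A[r][i] / A[i][i]
            for c in range(i, cols):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * cols
    for i in reversed(range(cols)):                            # back-substitute
        s = sum(A[i][c] * beta[c] for c in range(i + 1, cols))
        beta[i] = (b[i] - s) / A[i][i]
    return beta

# columns: intercept, aerosol optical depth, fog/low-cloud probability
X = [[1.0, 0.1, 0.0], [1.0, 0.5, 0.2], [1.0, 0.9, 0.8], [1.0, 0.3, 0.1]]
y = [35.0, 20.0, 5.0, 28.0]                                    # visibility, km
beta = ols(X, y)
print([round(v, 2) for v in beta])
```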

  18. Development and validation of satellite based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2015-10-01

    A satellite based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5% for classifying Clear (V ≥ 30 km), Moderate (10 km ≤ V < 30 km), and V < 10 km conditions … United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network, and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.

  19. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    Science.gov (United States)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but offering the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, showing also the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  20. Validation of risk-based performance indicators: Safety system function trends

    International Nuclear Information System (INIS)

    Boccio, J.L.; Vesely, W.E.; Azarm, M.A.; Carbonaro, J.F.; Usher, J.L.; Oden, N.

    1989-10-01

    This report describes and applies a process for validating a model for a risk-based performance indicator. The purpose of the risk-based indicator evaluated, Safety System Function Trend (SSFT), is to monitor the unavailability of selected safety systems. Interim validation of this indicator is based on three aspects: a theoretical basis, an empirical basis relying on statistical correlations, and case studies employing 25 plant-years of historical data collected from five plants for a number of safety systems. Results using the SSFT model are encouraging. Application of the model through case studies dealing with the performance of important safety systems shows that statistically significant trends in, and levels of, system performance can be discerned, thereby providing leading indications of degrading and/or improving performance. Methods for developing system performance tolerance bounds are discussed and applied to aid in the interpretation of the trends in this risk-based indicator. Some additional characteristics of the SSFT indicator, learned through the data-collection efforts and subsequent data analyses, are also discussed. The usefulness and practicality of other data sources for validation purposes are explored. Further validation of this indicator is noted. Also, additional research is underway to develop a more detailed estimator of system unavailability. 9 refs., 18 figs., 5 tabs

  1. Validation of generic cost estimates for construction-related activities at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Simion, G.; Sciacca, F.; Claiborne, E.; Watlington, B.; Riordan, B.; McLaughlin, M.

    1988-05-01

    This report presents a validation study of the cost methodologies and quantitative factors derived in Labor Productivity Adjustment Factors and Generic Methodology for Estimating the Labor Cost Associated with the Removal of Hardware, Materials, and Structures From Nuclear Power Plants. This cost methodology was developed to support NRC analysts in determining generic estimates of removal, installation, and total labor costs for construction-related activities at nuclear generating stations. In addition to the validation discussion, this report reviews the generic cost analysis methodology employed. It also discusses each of the individual cost factors used in estimating the costs of physical modifications at nuclear power plants. The generic estimating approach presented uses the "greenfield" (new plant construction) installation costs compiled in the Energy Economic Data Base (EEDB) as a baseline. These baseline costs are then adjusted to account for labor productivity, radiation fields, learning curve effects, and impacts on ancillary systems or components. For comparisons of estimated vs. actual labor costs, approximately four dozen actual cost data points (as reported by 14 nuclear utilities) were obtained. Detailed background information was collected on each individual data point to give the best understanding possible, so that the labor productivity factors, removal factors, etc., could be chosen judiciously. This study concludes that cost estimates typically within 40% of the actual values can be generated by prudent use of the methodologies and cost factors investigated herein.
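The adjustment chain described above is multiplicative: a greenfield labor estimate is scaled by productivity, radiation-field, and learning-curve factors. All factor values and the wage rate below are placeholders, not the report's figures.

```python
# Multiplicative adjustment of a "greenfield" labor estimate.  Factor
# values and the wage rate are placeholders, not the report's figures.

def adjusted_labor_cost(greenfield_hours, productivity_factor,
                        radiation_factor, learning_factor, rate_per_hour):
    hours = (greenfield_hours * productivity_factor
             * radiation_factor * learning_factor)
    return hours * rate_per_hour

# 1000 baseline hours, operating-plant productivity penalty 1.8x,
# radiation-field penalty 1.25x, learning-curve credit 0.9x, $40/h rate
print(adjusted_labor_cost(1000.0, 1.8, 1.25, 0.9, 40.0))
```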

  2. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
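The positivity problem is easy to illustrate for a single qubit: a naively reconstructed 2×2 matrix can have a negative eigenvalue. The sketch below repairs such a matrix by clipping negative eigenvalues and renormalizing the trace; this is a simple projection, not the full maximum-likelihood estimation implemented in the paper, and the matrix entries are invented.

```python
import math

# Clip-and-renormalize repair of an unphysical single-qubit "density
# matrix" [[a, c], [conj(c), d]] (entries invented).  A simple projection
# onto physical states, not the paper's maximum-likelihood estimation.

def fix_qubit_state(a, d, re_c, im_c):
    m = (a + d) / 2.0
    r = math.sqrt(((a - d) / 2.0) ** 2 + re_c ** 2 + im_c ** 2)
    lam1, lam2 = m + r, m - r                    # exact 2x2 eigenvalues
    lam1, lam2 = max(lam1, 0.0), max(lam2, 0.0)  # clip negative "probability"
    s = lam1 + lam2                              # renormalize the trace to 1
    return lam1 / s, lam2 / s

# [[0.9, 0.45], [0.45, 0.1]] has eigenvalues ~1.10 and ~-0.10: not positive
print(fix_qubit_state(0.9, 0.1, 0.45, 0.0))   # → (1.0, 0.0)
```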

  3. Development and test validation of a computational scheme for high-fidelity fluence estimations of the Swiss BWRs

    International Nuclear Information System (INIS)

    Vasiliev, A.; Wieselquist, W.; Ferroukhi, H.; Canepa, S.; Heldt, J.; Ledergerber, G.

    2011-01-01

    One of the current objectives of reactor analysis related projects at the Paul Scherrer Institut is the establishment of a comprehensive computational methodology for fast neutron fluence (FNF) estimations of reactor pressure vessels (RPV) and internals for both PWRs and BWRs. In the recent past, such an integral calculational methodology based on the CASMO-4/SIMULATE-3/MCNPX system of codes was developed for PWRs and validated against RPV scraping tests. Based on the very satisfactory validation results, the methodology was recently applied for predictive FNF evaluations of a Swiss PWR to support the national nuclear safety inspectorate in the framework of lifetime estimations. Today, the focus at PSI is on developing a corresponding advanced methodology for high-fidelity FNF estimations of BWR reactors. In this paper, the preliminary steps undertaken in that direction are presented. To start, the concepts of the PWR computational scheme and its transfer/adaptation to BWRs are outlined. Then, the modelling of a Swiss BWR characterized by very heterogeneous core designs is presented, along with preliminary sensitivity studies carried out to assess the level of detail required for the complex core region. Finally, a first validation test case is presented on the basis of two dosimeter monitors irradiated during two recent cycles of the given BWR reactor. The achieved computational results show satisfactory agreement with measured dosimeter data and thereby illustrate the feasibility of applying the PSI FNF computational scheme to BWRs as well. Further sensitivity/optimization studies are nevertheless necessary in order to consolidate the scheme and to continuously increase the fidelity and reliability of the BWR FNF estimations. (author)

  4. Automated mode shape estimation in agent-based wireless sensor networks

    Science.gov (United States)

    Zimmerman, Andrew T.; Lynch, Jerome P.

    2010-04-01

    Recent advances in wireless sensing technology have made it possible to deploy dense networks of sensing transducers within large structural systems. Because these networks leverage the embedded computing power and agent-based abilities integral to many wireless sensing devices, it is possible to analyze sensor data autonomously and in-network. In this study, market-based techniques are used to autonomously estimate mode shapes within a network of agent-based wireless sensors. Specifically, recent work in both decentralized Frequency Domain Decomposition and market-based resource allocation is leveraged to create a mode shape estimation algorithm derived from free-market principles. This algorithm allows an agent-based wireless sensor network to autonomously shift emphasis between improving mode shape accuracy and limiting the consumption of certain scarce network resources: processing time, storage capacity, and power consumption. The developed algorithm is validated by successfully estimating mode shapes using a network of wireless sensor prototypes deployed on the mezzanine balcony of Hill Auditorium, located on the University of Michigan campus.
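The Frequency Domain Decomposition (FDD) step underlying this algorithm can be sketched briefly: near a resonant peak, the first singular vector of the cross-spectral density matrix approximates the mode shape. The snippet below is a minimal single-block illustration on synthetic data; the sensor count, sampling rate, modal frequency, and mode shape are assumptions for illustration, not values from the Hill Auditorium deployment.

```python
import numpy as np

# Frequency Domain Decomposition (FDD) sketch: at a modal peak, the first
# singular vector of the cross-spectral density (CSD) matrix approximates
# the mode shape. All signal parameters here are illustrative assumptions.
rng = np.random.default_rng(0)
fs, n = 256.0, 4096
t = np.arange(n) / fs
true_shape = np.array([1.0, 0.6, -0.8])   # assumed 3-sensor mode shape
f_mode = 12.0                             # assumed modal frequency (Hz)

# Each sensor records the modal response scaled by its shape ordinate.
x = np.outer(true_shape, np.sin(2 * np.pi * f_mode * t))
x += 0.05 * rng.standard_normal(x.shape)

X = np.fft.rfft(x, axis=1)
freqs = np.fft.rfftfreq(n, 1 / fs)
k = np.argmin(np.abs(freqs - f_mode))       # bin nearest the modal peak
G = X[:, k:k + 1] @ X[:, k:k + 1].conj().T  # rank-1 CSD estimate at the peak

u, _, _ = np.linalg.svd(G)                # first singular vector ~ mode shape
v = u[:, 0]
v = v / v[np.argmax(np.abs(v))]           # remove arbitrary phase and scale
shape_est = np.real(v)
```

In a decentralized deployment, each sensor would contribute its FFT column and the SVD would be computed on locally exchanged spectra; the single-block estimate above skips the spectral averaging a real implementation would use.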

  5. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research
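The grid-based boundary-strip adjustment mentioned above (Â adjusted by MMDM) reduces to simple arithmetic: expand the trapping grid by half the mean maximum distance moved before dividing abundance by the effective area. A sketch with hypothetical numbers, not the study's data:

```python
# Boundary-strip density estimate: effective area = grid expanded by MMDM/2.
# All values below are hypothetical, for illustration only.
grid_side_m = 90.0      # assumed square trapping grid side (m)
n_hat = 25.0            # assumed abundance estimate from program CAPTURE
mmdm_m = 40.0           # assumed species-specific full MMDM (m)

strip = mmdm_m / 2.0                      # boundary strip width
side_eff = grid_side_m + 2.0 * strip      # grid side plus strip on each edge
a_hat_ha = side_eff ** 2 / 10_000.0       # effective area in hectares
d_hat = n_hat / a_hat_ha                  # density, animals per hectare
```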

  6. Regional GRACE-based estimates of water mass variations over Australia: validation and interpretation

    Science.gov (United States)

    Seoane, L.; Ramillien, G.; Frappart, F.; Leblanc, M.

    2013-04-01

    Time series of regional 2°-by-2° GRACE solutions have been computed from 2003 to 2011 with a 10-day resolution by using an energy integral method over Australia [112° E 156° E; 44° S 10° S]. This approach uses the dynamical orbit analysis of GRACE Level 1 measurements, and especially accurate along-track K-Band Range Rate (KBRR) residuals (error level of 1 μm s-1), to estimate the total water mass over continental regions. The advantages of regional solutions are a significant reduction of GRACE aliasing errors (i.e. north-south stripes), providing a more accurate estimation of water mass balance for hydrological applications. In this paper, the validation of these regional solutions over Australia is presented, as well as their ability to describe water mass change in response to climate forcings such as El Niño. Principal component analysis of GRACE-derived total water storage maps shows spatial and temporal patterns that are consistent with independent datasets (e.g. rainfall, climate indices and in-situ observations). Regional TWS solutions show higher spatial correlations with in-situ water table measurements over the Murray-Darling drainage basin (80-90%), and they offer a better localization of hydrological structures than classical GRACE global solutions (i.e. Level 2 GRGS products and 400 km ICA solutions obtained as a linear combination of GFZ, CSR and JPL GRACE solutions).

  7. Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme

    DEFF Research Database (Denmark)

    Li, Changgang; Zhang, Yaping; Zhang, Hengxu

    2017-01-01

    Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter...
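Although the record is truncated, the core of measurement-based line parameter estimation can be illustrated with a short-line model: given synchronized terminal voltage and current phasors over many snapshots, the series impedance follows from a least-squares fit. The phasor data and the true R and X below are simulated assumptions, not the paper's method or data:

```python
import numpy as np

# Least-squares series-impedance estimate from terminal phasors, assuming a
# short line (shunt branch neglected). All quantities are simulated.
rng = np.random.default_rng(6)
z_true = 2.0 + 8.0j                        # assumed series R + jX (ohms)

n = 50
i1 = rng.uniform(0.5, 1.5, n) * np.exp(1j * rng.uniform(-0.3, 0.3, n))
v2 = 100.0 * np.exp(1j * rng.uniform(-0.1, 0.1, n))
v1 = v2 + z_true * i1                      # sending-end voltage, exact model
v1 += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # noise

# Least squares for z in (v1 - v2) = z * i1 over all snapshots.
dv = v1 - v2
z_est = np.vdot(i1, dv) / np.vdot(i1, i1)  # (i1^H dv) / (i1^H i1)
r_est, x_est = z_est.real, z_est.imag
```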

  8. Liver stiffness value-based risk estimation of late recurrence after curative resection of hepatocellular carcinoma: development and validation of a predictive model.

    Directory of Open Access Journals (Sweden)

    Kyu Sik Jung

    Full Text Available Preoperative liver stiffness (LS) measurement using transient elastography (TE) is useful for predicting late recurrence after curative resection of hepatocellular carcinoma (HCC). We developed and validated a novel LS value-based predictive model for late recurrence of HCC. Patients who were due to undergo curative resection of HCC between August 2006 and January 2010 were prospectively enrolled and TE was performed prior to operations by study protocol. The predictive model of late recurrence was constructed based on a multiple logistic regression model. Discrimination and calibration were used to validate the model. Among a total of 139 patients who were finally analyzed, late recurrence occurred in 44 patients, with a median follow-up of 24.5 months (range, 12.4-68.1). We developed a predictive model for late recurrence of HCC using LS value, activity grade II-III, presence of multiple tumors, and indocyanine green retention rate at 15 min (ICG R15), which showed fairly good discrimination capability with an area under the receiver operating characteristic curve (AUROC) of 0.724 (95% confidence intervals [CIs], 0.632-0.816). In the validation, using a bootstrap method to assess discrimination, the AUROC remained largely unchanged between iterations, with an average AUROC of 0.722 (95% CIs, 0.718-0.724). When we plotted a calibration chart for predicted and observed risk of late recurrence, the predicted risk of late recurrence correlated well with observed risk, with a correlation coefficient of 0.873 (P<0.001). A simple LS value-based predictive model could estimate the risk of late recurrence in patients who underwent curative resection of HCC.
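The bootstrap check of discrimination described above can be sketched as follows: compute the AUROC of a risk score, then recompute it over bootstrap resamples to gauge its stability. The scores and outcomes below are simulated, not the study's patient data:

```python
import numpy as np

# Bootstrap assessment of AUROC stability on a simulated risk score.
rng = np.random.default_rng(2)

def auroc(score, label):
    """Probability a random positive outscores a random negative (ties=0.5)."""
    pos, neg = score[label == 1], score[label == 0]
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

n = 300
label = (rng.random(n) < 0.3).astype(int)         # ~30% events (assumed)
score = label * 1.0 + rng.standard_normal(n)      # informative risk score

point = auroc(score, label)
boots = []
for _ in range(200):
    idx = rng.integers(0, n, n)                   # resample with replacement
    if label[idx].min() == label[idx].max():      # need both classes present
        continue
    boots.append(auroc(score[idx], label[idx]))
ci_lo, ci_hi = np.percentile(boots, [2.5, 97.5])
```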

  9. Robust Backlash Estimation for Industrial Drive-Train Systems—Theory and Validation

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2018-01-01

    Backlash compensation is used in modern machine-tool controls to ensure high-accuracy positioning. When wear of a machine causes deadzone width to increase, high-accuracy control may be maintained if the deadzone is accurately estimated. Deadzone estimation is also an important parameter to indica... ...state-of-the-art Siemens equipment. The experiments validate the theory and show that expected performance and robustness to parameter uncertainties are both achieved.

  10. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    Science.gov (United States)

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    To predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.
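The core UBCI idea, deriving measurement uncertainty from the signal and noise levels of an individual chromatogram, can be illustrated with first-order error propagation on a two-peak purity calculation. All numbers below are hypothetical, not from the paper:

```python
import math

# Purity uncertainty from signal and noise levels (illustrative sketch).
main_area = 980_000.0   # assumed main-peak area (arbitrary units)
imp_area = 20_000.0     # assumed impurity-peak area
noise_sd = 50.0         # assumed baseline noise (area units per point)
n_points = 400          # assumed points integrated under a peak

# Integrating white baseline noise over n points gives an area SD ~ sd*sqrt(n).
area_sd = noise_sd * math.sqrt(n_points)
total = main_area + imp_area
purity = 100.0 * main_area / total

# First-order propagation of equal, independent area errors to the purity %:
# dP/d(main) = 100*imp/total^2, dP/d(imp) = -100*main/total^2.
purity_sd = 100.0 * area_sd * math.sqrt(imp_area**2 + main_area**2) / total**2
```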

  11. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, Cetin; Williams, Brian; McClure, Patrick; Nelson, Ralph A.

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate models leading to predictive simulations. Cost-saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these emerging issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety-related environments. To achieve this goal requires the additional steps of estimating the domain of validation and quantification of uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M and S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for

  12. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory; Williams, Brian [Los Alamos National Laboratory; Mc Clure, Patrick [Los Alamos National Laboratory; Nelson, Ralph A [IDAHO NATIONAL LAB

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate models leading to predictive simulations. Cost-saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these emerging issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety-related environments. To achieve this goal requires the additional steps of estimating the domain of validation and quantification of uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M&S) capabilities for application to full-scale systems. 
The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost

  13. Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.

    Science.gov (United States)

    Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M

    2016-03-11

    Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate predictive equations of BIA for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets, aged 17-24 years, were included. The study used eight published predictive BIA equations, a specific equation in FFM estimation, and dual-energy X-ray absorptiometry (DXA) as a reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. Predictive BIA equations showed significant differences in FFM compared to DXA (p FFM variance. Specific BIA equations showed no significant differences in FFM compared to DXA values. Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equations developed in this study demonstrated validity for this sample, although they should be used with caution in samples with a large range of FFM.
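The Bland-Altman method used in this validation can be sketched in a few lines: compute the mean bias between paired BIA and DXA estimates and the 95% limits of agreement. The paired fat-free mass values below are simulated, not the cadet data:

```python
import numpy as np

# Bland-Altman agreement sketch on simulated paired FFM estimates (kg).
rng = np.random.default_rng(3)
ffm_dxa = rng.normal(60.0, 7.0, 120)              # reference method (DXA)
ffm_bia = ffm_dxa + rng.normal(1.5, 2.0, 120)     # method with bias + scatter

diff = ffm_bia - ffm_dxa
bias = diff.mean()                                 # mean difference
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits
```

In a Bland-Altman plot, `diff` is plotted against the pairwise means, with horizontal lines at `bias`, `loa_low`, and `loa_high`.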

  14. Validity of parent-reported weight and height of preschool children measured at home or estimated without home measurement: a validation study

    Directory of Open Access Journals (Sweden)

    Cox Bianca

    2011-07-01

    Full Text Available Abstract Background Parental reports are often used in large-scale surveys to assess children's body mass index (BMI). Therefore, it is important to know to what extent these parental reports are valid and whether it makes a difference if the parents measured their children's weight and height at home or whether they simply estimated these values. The aim of this study is to compare the validity of parent-reported height, weight and BMI values of preschool children (3-7 y-old), when measured at home or estimated by parents without actual measurement. Methods The subjects were 297 Belgian preschool children (52.9% male). Participation rate was 73%. A questionnaire including questions about height and weight of the children was completed by the parents. Nurses measured height and weight following standardised procedures. International age- and sex-specific BMI cut-off values were employed to determine categories of weight status and obesity. Results On the group level, no important differences in accuracy of reported height, weight and BMI were identified between parent-measured or estimated values. However, for all 3 parameters, the correlations between parental reports and nurse measurements were higher in the group of children whose body dimensions were measured by the parents. Sensitivity for underweight and overweight/obesity were respectively 73% and 47% when parents measured their child's height and weight, and 55% and 47% when parents estimated values without measurement. Specificity for underweight and overweight/obesity were respectively 82% and 97% when parents measured the children, and 75% and 93% with parent estimations. Conclusions Diagnostic measures were more accurate when parents measured their child's weight and height at home than when those dimensions were based on parental judgements. 
When parent-reported data on an individual level is used, the accuracy could be improved by encouraging the parents to measure weight and height
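The sensitivity and specificity figures above come from a standard 2x2 comparison of parent-reported versus nurse-measured weight status. A sketch with hypothetical counts (chosen only so the resulting percentages are of the same order as those reported, not taken from the study):

```python
# 2x2 diagnostic-accuracy sketch: parental classification vs. nurse reference.
# Counts are hypothetical.
tp, fn = 33, 37     # overweight/obese by nurse: detected / missed by parents
tn, fp = 211, 16    # non-overweight by nurse: correctly / wrongly classified

sensitivity = tp / (tp + fn)   # fraction of true positives detected
specificity = tn / (tn + fp)   # fraction of true negatives kept negative
```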

  15. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

    Full Text Available Independent component analysis combined with various strategies for cross-validation, uncertainty estimates by jack-knifing and critical Hotelling’s T² limits estimation, proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combined approach of methods used in this paper has not been previously applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near-infrared hyperspectral images. The results showed that the approach used here can be successfully used for unsupervised classification. A comparison of validation approaches, especially leave-one-out cross-validation and region-of-interest scheme validation, is also presented.
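The jack-knifing used here for uncertainty estimates follows the usual leave-one-out recipe; for the mean it reproduces the classical standard error exactly, which makes a convenient sanity check. The "statistic" below is a simple mean of simulated samples, standing in for an ICA loading:

```python
import numpy as np

# Jackknife standard error sketch: leave one observation out, recompute the
# statistic, and derive the SE from the spread of the leave-one-out values.
rng = np.random.default_rng(4)
x = rng.normal(5.0, 1.0, 40)              # 40 simulated samples (assumed)

n = len(x)
theta_full = x.mean()                     # full-sample statistic
theta_loo = np.array([np.delete(x, i).mean() for i in range(n)])
jack_se = np.sqrt((n - 1) / n * np.sum((theta_loo - theta_loo.mean()) ** 2))
```

For the mean, `jack_se` equals the familiar s/sqrt(n); for ICA loadings the same recipe applies with segments of the image left out instead of single samples.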

  16. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations.

    Science.gov (United States)

    Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris

    2013-01-23

    Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced) ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of layout. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current application. However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs and multi-level buildings. Based on these results, we discuss possible ways forward for the method's development.
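The estimator itself is simple multiplication: population ≈ (count of assumed residential structures) x (mean people per structure). A sketch combining hypothetical analyst counts and occupancy reports, not figures from the eleven study sites:

```python
# Imagery-based population sketch: structure count x mean occupancy.
# All figures are hypothetical.
counts = [412, 398, 425]          # structure counts by independent analysts
occupancy_reports = [4.6, 5.1]    # people per structure, from public reports

mean_count = sum(counts) / len(counts)
mean_occ = sum(occupancy_reports) / len(occupancy_reports)
pop_estimate = mean_count * mean_occ
```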

  17. Population-based absolute risk estimation with survey data

    Science.gov (United States)

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
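With a piecewise exponential baseline, the absolute risk of the event of interest in the presence of a competing event has a closed form per interval: the integral of the cause-specific hazard times overall survival. A two-interval sketch with hypothetical hazards and relative risk (not the NHANES models):

```python
import math

# Absolute risk with a competing event and piecewise-constant hazards.
# All hazards, cut points, and the relative risk are hypothetical.
rr = 1.8                                     # individualized relative risk
h_event = [rr * h for h in (0.010, 0.015)]   # cause hazard per interval
h_comp = [0.020, 0.025]                      # competing-event hazard
cuts = [(0.0, 5.0), (5.0, 10.0)]             # interval bounds (years)

risk, log_surv = 0.0, 0.0
for (a, b), h1, h2 in zip(cuts, h_event, h_comp):
    htot = h1 + h2
    s_a = math.exp(log_surv)                 # overall survival entering [a,b]
    # integral over [a,b] of h1 * S(u) du with constant hazards:
    risk += (h1 / htot) * s_a * (1.0 - math.exp(-htot * (b - a)))
    log_surv -= htot * (b - a)               # update cumulative hazard
```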

  18. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    Science.gov (United States)

    Guo, T. H.; Musgrave, J.

    1992-11-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed-loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network-based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto-associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto-associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using
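A linear stand-in conveys the auto-associative idea: learn the redundancy among correlated sensors (here with PCA rather than a neural network), reconstruct each reading from the low-dimensional subspace, and flag large residuals as an inconsistent sensor. The data, sensor gains, and injected fault below are simulated assumptions:

```python
import numpy as np

# PCA-based stand-in for auto-associative sensor validation: reconstruct
# each sample from the learned redundancy subspace; a large residual on one
# channel flags a faulty sensor. All data are simulated.
rng = np.random.default_rng(5)
t = rng.standard_normal(500)                       # common driving signal
mix = np.array([1.0, 0.8, -0.5, 1.2])              # assumed sensor gains
x = np.outer(t, mix) + 0.05 * rng.standard_normal((500, 4))

mean = x.mean(axis=0)
u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
basis = vt[:1]                                      # 1-D redundancy subspace

def reconstruct(sample):
    centered = sample - mean
    return mean + (centered @ basis.T) @ basis      # project onto subspace

good = x[0]
bad = good.copy()
bad[2] += 3.0                                       # inject a sensor fault
resid_good = np.abs(good - reconstruct(good))
resid_bad = np.abs(bad - reconstruct(bad))
faulty = int(np.argmax(resid_bad))                  # flagged channel
```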

  19. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time.

    Science.gov (United States)

    Martin, Corby K; Correa, John B; Han, Hongmei; Allen, H Raymond; Rood, Jennifer C; Champagne, Catherine M; Gunturk, Bahadir K; Bray, George A

    2012-04-01

    Two studies are reported: a pilot study to demonstrate feasibility, followed by a larger validity study. Study 1's objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake (EI) with the Remote Food Photography Method (RFPM) over 6 days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, EI estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n = 24) or Customized Prompts (n = 16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating EI when Standard (mean ± s.d. = -895 ± 770 kcal/day, P < 0.0001), but not Customized Prompts (-270 ± 748 kcal/day, P = 0.22) were used. Error (EI from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM's ability to accurately estimate EI in free-living adults (N = 50) over 6 days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living EI (-152 ± 694 kcal/day, P = 0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake.

  20. Validation of a spectrophotometer-based method for estimating daily sperm production and deferent duct transit.

    Science.gov (United States)

    Froman, D P; Rhoads, D D

    2012-10-01

    The objectives of the present work were 3-fold. First, a new method for estimating daily sperm production was validated. This method, in turn, was used to evaluate testis output as well as deferent duct throughput. Next, this analytical approach was evaluated in 2 experiments. The first experiment compared left and right reproductive tracts within roosters. The second experiment compared reproductive tract throughput in roosters from low and high sperm mobility lines. Standard curves were constructed from which unknown concentrations of sperm cells and sperm nuclei could be predicted from observed absorbance. In each case, the independent variable was based upon hemacytometer counts, and absorbance was a linear function of concentration. Reproductive tracts were excised, semen recovered from each duct, and the extragonadal sperm reserve determined by multiplying volume by sperm cell concentration. Testicular sperm nuclei were procured by homogenization of a whole testis, overlaying a 20-mL volume of homogenate upon 15% (wt/vol) Accudenz (Accurate Chemical and Scientific Corporation, Westbury, NY), and then washing nuclei by centrifugation through the Accudenz layer. Daily sperm production was determined by dividing the predicted number of sperm nuclei within the homogenate by 4.5 d (i.e., the time sperm with elongated nuclei spend within the testis). Sperm transit through the deferent duct was estimated by dividing the extragonadal reserve by daily sperm production. Neither the efficiency of sperm production (sperm per gram of testicular parenchyma per day) nor deferent duct transit differed between left and right reproductive tracts (P > 0.05). Whereas efficiency of sperm production did not differ (P > 0.05) between low and high sperm mobility lines, deferent duct transit differed between lines (P < 0.001). On average, this process required 2.2 and 1.0 d for low and high lines, respectively. In summary, we developed and then tested a method for quantifying male
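The workflow above (standard curve, inversion of absorbance to concentration, division by the 4.5-day testicular transit of elongated-nuclei sperm, and reserve/production for duct transit) can be sketched numerically. All readings and the reserve value below are hypothetical:

```python
import numpy as np

# Standard-curve sketch: fit absorbance vs. hemacytometer concentration,
# invert for an unknown, then derive daily sperm production (DSP) and
# deferent duct transit. All numbers are hypothetical.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # sperm nuclei, 1e9/mL
absb = np.array([0.06, 0.11, 0.22, 0.43, 0.85]) # measured absorbance

slope, intercept = np.polyfit(conc, absb, 1)     # absorbance = a*conc + b
unknown_abs = 0.30
pred_conc = (unknown_abs - intercept) / slope    # invert the standard curve

homogenate_ml = 40.0                             # assumed homogenate volume
total_nuclei = pred_conc * 1e9 * homogenate_ml
dsp = total_nuclei / 4.5                         # per-day production
extragonadal_reserve = 3.0e10                    # assumed reserve (sperm)
transit_days = extragonadal_reserve / dsp        # deferent duct transit
```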

  1. Evidence-based research: understanding the best estimate

    Directory of Open Access Journals (Sweden)

    Bauer JG

    2016-09-01

    Full Text Available Janet G Bauer,1 Sue S Spackman,2 Robert Fritz,2 Amanjyot K Bains,3 Jeanette Jetton-Rangel3 1Advanced Education Services, 2Division of General Dentistry, 3Center of Dental Research, Loma Linda University School of Dentistry, Loma Linda, CA, USA Introduction: Best estimates of intervention outcomes are used when uncertainties in decision making are evidenced. Best estimates are often, out of necessity, from a context of less than quality evidence or needing more evidence to provide accuracy. Purpose: The purpose of this article is to understand the best estimate behavior, so that clinicians and patients may have confidence in its quantification and validation. Methods: To discover best estimates and quantify uncertainty, critical appraisals of the literature, gray literature and its resources, or both are accomplished. Best estimates of pairwise comparisons are calculated using meta-analytic methods; multiple comparisons use network meta-analysis. Manufacturers provide margins of performance of proprietary material(s). Lower margin performance thresholds or requirements (functional failure of materials) are determined by a distribution of tests to quantify performance or clinical competency. The same is done for the high margin performance thresholds (estimated true value of success) and clinician-derived critical values (material failure to function clinically). This quantification of margins and uncertainties assists clinicians in determining if reported best estimates are progressing toward true value as new knowledge is reported. Analysis: The best estimate of outcomes focuses on evidence-centered care. In stochastic environments, we are not able to observe all events in all situations to know without uncertainty the best estimates of predictable outcomes. Point-in-time analyses of best estimates using quantification of margins and uncertainties do this. 
Conclusion: While study design and methodology are variables known to validate the quality of
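The pairwise meta-analytic pooling mentioned under Methods is, at its simplest, inverse-variance weighting: each study's effect is weighted by the reciprocal of its variance, and the pooled standard error follows from the summed weights. A fixed-effect sketch with hypothetical study effects:

```python
import math

# Fixed-effect inverse-variance pooling of study effects (hypothetical data).
effects = [0.42, 0.55, 0.30]      # per-study effect estimates
ses = [0.10, 0.15, 0.12]          # their standard errors

weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))   # SE of the pooled estimate
```

A random-effects pooling would inflate each variance by an estimated between-study component before weighting; network meta-analysis extends the same weighting logic across a graph of comparisons.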

  2. Development and validation of a two-dimensional fast-response flood estimation model

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R [Los Alamos National Laboratory; Mcpherson, Timothy N [Los Alamos National Laboratory; Burian, Steven J [UNIV OF UTAK

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
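A one-dimensional toy version conveys the upwind-type finite difference idea: solve the shallow water equations for a small dam break with a first-order Rusanov (local Lax-Friedrichs) flux, an upwind-biased monotone scheme. This is a sketch under assumed initial conditions, not the paper's 2-D model:

```python
import numpy as np

# 1-D shallow water dam break with a first-order Rusanov flux.
# Grid, initial depths, and CFL number are illustrative assumptions.
g = 9.81
nx, dx = 200, 1.0
h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)   # dam-break initial depth
hu = np.zeros(nx)                                  # initial discharge

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

t, t_end = 0.0, 5.0
while t < t_end:
    c = np.abs(hu / h) + np.sqrt(g * h)            # local wave speeds
    dt = 0.4 * dx / c.max()                        # CFL-limited time step
    q = np.array([h, hu])
    f = flux(h, hu)
    # Rusanov interface flux between cells i and i+1 (upwind-biased).
    a = np.maximum(c[:-1], c[1:])
    fi = 0.5 * (f[:, :-1] + f[:, 1:]) - 0.5 * a * (q[:, 1:] - q[:, :-1])
    q[:, 1:-1] -= dt / dx * (fi[:, 1:] - fi[:, :-1])
    h, hu = q[0], q[1]
    t += dt
```

Until the waves reach the boundaries, the scheme conserves mass exactly, which is a handy correctness check for this kind of solver.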

  3. An intercomparison and validation of satellite-based surface radiative energy flux estimates over the Arctic

    Science.gov (United States)

    Riihelä, Aku; Key, Jeffrey R.; Meirink, Jan Fokke; Kuipers Munneke, Peter; Palo, Timo; Karlsson, Karl-Göran

    2017-05-01

    Accurate determination of radiative energy fluxes over the Arctic is of crucial importance for understanding atmosphere-surface interactions, melt and refreezing cycles of the snow and ice cover, and the role of the Arctic in the global energy budget. Satellite-based estimates can provide comprehensive spatiotemporal coverage, but the accuracy and comparability of the existing data sets must be ascertained to facilitate their use. Here we compare radiative flux estimates from Clouds and the Earth's Radiant Energy System (CERES) Synoptic 1-degree (SYN1deg)/Energy Balanced and Filled, Global Energy and Water Cycle Experiment (GEWEX) surface energy budget, and our own experimental FluxNet / Satellite Application Facility on Climate Monitoring cLoud, Albedo and RAdiation (CLARA) data against in situ observations over Arctic sea ice and the Greenland Ice Sheet during summer of 2007. In general, CERES SYN1deg flux estimates agree best with in situ measurements, although with two particular limitations: (1) over sea ice the upwelling shortwave flux in CERES SYN1deg appears to be underestimated because of an underestimated surface albedo and (2) the CERES SYN1deg upwelling longwave flux over sea ice saturates during midsummer. The Advanced Very High Resolution Radiometer-based GEWEX and FluxNet-CLARA flux estimates generally show a larger range in retrieval errors relative to CERES, with contrasting tendencies relative to each other. The largest source of retrieval error in the FluxNet-CLARA downwelling shortwave flux is shown to be an overestimated cloud optical thickness. The results illustrate that satellite-based flux estimates over the Arctic are not yet homogeneous and that further efforts are necessary to investigate the differences in the surface and cloud properties which lead to disagreements in flux retrievals.

  4. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J. [Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, Sutton, London SM2 5PT (United Kingdom)]

    2016-01-15

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer led to an improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has the potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking
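The state-observer component of the ABST method builds on an α–β filter. A generic α–β tracking loop can be sketched as follows; the gains and time step below are illustrative values, not the paper's tuned parameters:

```python
import numpy as np

def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Minimal alpha-beta state observer: predict position from the
    current velocity estimate, then correct both states by fractions
    (alpha, beta) of the measurement residual. Returns the smoothed
    position estimates. Gains here are illustrative, not the paper's."""
    x, v = measurements[0], 0.0          # initial position / velocity state
    out = []
    for z in measurements:
        x_pred = x + v * dt              # predict
        r = z - x_pred                   # measurement residual
        x = x_pred + alpha * r           # position correction
        v = v + (beta / dt) * r          # velocity correction
        out.append(x)
    return np.array(out)
```

For a target moving at constant velocity, a stable α–β filter converges to zero steady-state tracking error, which is why it suits slowly varying respiratory drift.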

  5. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time

    Science.gov (United States)

    Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.

    2014-01-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server for food intake estimation. Consistent with EMA, prompts are sent to the Smartphones reminding participants to capture food images. During Study 1, energy intake estimated with the RFPM and the gold standard, doubly labeled water (DLW), were compared. Participants were assigned to receive Standard EMA Prompts (n=24) or Customized Prompts (n=16) (the latter received more reminders delivered at personalized meal times). The RFPM differed significantly from DLW at estimating energy intake when Standard (mean±SD = −895±770 kcal/day, p<.0001), but not Customized Prompts (−270±748 kcal/day, p=.22) were used. Error (energy intake from the RFPM minus that from DLW) was significantly smaller with Customized vs. Standard Prompts. The objectives of Study 2 included testing the RFPM’s ability to accurately estimate energy intake in free-living adults (N=50) over six days, and energy and nutrient intake in laboratory-based meals. The RFPM did not differ significantly from DLW at estimating free-living energy intake (−152±694 kcal/day, p=0.16). During laboratory-based meals, estimating energy and macronutrient intake with the RFPM did not differ significantly compared to directly weighed intake. PMID:22134199

  6. Validity of Bioelectrical Impedance Analysis to Estimate Fat-Free Mass in Army Cadets

    Directory of Open Access Journals (Sweden)

    Raquel D. Langer

    2016-03-01

    Full Text Available Background: Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate predictive BIA equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. Methods: A total of 396 male Brazilian Army cadets, aged 17–24 years, were included. The study used eight published predictive BIA equations, a specific equation for FFM estimation, and dual-energy X-ray absorptiometry (DXA) as the reference method. Student’s t-test (for paired samples), linear regression analysis, and the Bland–Altman method were used to test the validity of the BIA equations. Results: Predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland–Altman. Predictive BIA equations explained 68% to 88% of FFM variance. The specific BIA equation showed no significant differences in FFM compared to DXA values. Conclusion: Published predictive BIA equations showed poor accuracy in this sample. The specific BIA equation developed in this study demonstrated validity for this sample, although it should be used with caution in samples with a large range of FFM.
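The Bland–Altman limits of agreement used to judge the BIA equations against DXA can be computed as follows (a generic sketch; variable names are illustrative):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for two measurement methods
    (e.g. a BIA equation vs. the DXA reference): the mean bias and the
    95% limits of agreement, bias +/- 1.96 * SD of the differences."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)                    # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Wide limits of agreement, as reported here for the published equations, mean individual predictions can deviate substantially from DXA even when the mean bias is small.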

  7. Estimating incidence from prevalence in generalised HIV epidemics: methods and validation.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2008-04-01

    Full Text Available HIV surveillance of generalised epidemics in Africa primarily relies on prevalence at antenatal clinics, but estimates of incidence in the general population would be more useful. Repeated cross-sectional measures of HIV prevalence are now becoming available for general populations in many countries, and we aim to develop and validate methods that use these data to estimate HIV incidence. Two methods were developed that decompose observed changes in prevalence between two serosurveys into the contributions of new infections and mortality. Method 1 uses cohort mortality rates, and method 2 uses information on survival after infection. The performance of these two methods was assessed using simulated data from a mathematical model and actual data from three community-based cohort studies in Africa. Comparison with simulated data indicated that these methods can accurately estimate incidence rates and changes in incidence in a variety of epidemic conditions. Method 1 is simple to implement but relies on locally appropriate mortality data, whilst method 2 can make use of the same survival distribution in a wide range of scenarios. The estimates from both methods are within the 95% confidence intervals of almost all actual measurements of HIV incidence in adults and young people, and the patterns of incidence over age are correctly captured. It is possible to estimate incidence from cross-sectional prevalence data with sufficient accuracy to monitor the HIV epidemic. Although these methods will theoretically work in any context, we have been able to test them only in southern and eastern Africa, where HIV epidemics are mature and generalised. The choice of method will depend on the local availability of HIV mortality data.
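A deliberately simplified version of the decomposition behind method 1 can be written down directly. The formula below is an illustrative approximation (new infections ≈ prevalence change plus infections removed by mortality, divided by susceptible person-time), not the exact estimator of the paper:

```python
def incidence_from_prevalence(p1, p2, years, mort_pos):
    """Illustrative simplification of decomposing prevalence change:
    p1, p2   -- prevalence at the two serosurveys
    years    -- time between surveys
    mort_pos -- assumed annual mortality rate of HIV-positive individuals
    Returns an approximate annual incidence rate among susceptibles."""
    # infections needed to raise prevalence from p1 to p2 while the
    # infected group is depleted by mortality (crude approximation)
    new_infections = (p2 - p1) + p1 * mort_pos * years
    # approximate susceptible person-time over the interval
    susceptible_person_time = (1 - (p1 + p2) / 2) * years
    return new_infections / susceptible_person_time
```

The point of the sketch is the bookkeeping: if mortality among the infected is ignored, rising mortality can mask rising incidence as a flat prevalence trend.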

  8. A Novel Rules Based Approach for Estimating Software Birthmark

    Science.gov (United States)

    Binti Alias, Norma; Anwar, Sajid

    2015-01-01

    Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363
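A toy version of such a fuzzy rule can be sketched as follows; the triangular membership functions and the single min-AND rule below are invented for illustration and are not the rule base of the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def birthmark_quality(credibility, resilience):
    """Toy fuzzy rule in the spirit of the abstract:
    IF credibility is high AND resilience is high THEN quality is high,
    scored with a Mamdani-style min-AND over inputs in [0, 1].
    Membership parameters are illustrative assumptions."""
    high_c = tri(credibility, 0.4, 1.0, 1.6)   # 'high' ramps up toward 1
    high_r = tri(resilience, 0.4, 1.0, 1.6)
    return min(high_c, high_r)                 # fuzzy AND
```

A full system would aggregate many such rules and defuzzify the result; the sketch only shows how a graded (rather than crisp) score arises from the two birthmark properties.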

  9. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    Science.gov (United States)

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Development and Cross-Validation of Equation for Estimating Percent Body Fat of Korean Adults According to Body Mass Index

    Directory of Open Access Journals (Sweden)

    Hoyong Sung

    2017-06-01

    Full Text Available Background: Using BMI as an independent variable is the easiest way to estimate percent body fat. Thus far, few studies have investigated the development and cross-validation of an equation for estimating the percent body fat of Korean adults according to the BMI. The goals of this study were the development and cross-validation of an equation for estimating the percent fat of representative Korean adults using the BMI. Methods: Samples were obtained from the Korea National Health and Nutrition Examination Survey between 2008 and 2011. The samples from 2008-2009 and 2010-2011 were labeled as the validation group (n=10,624) and the cross-validation group (n=8,291), respectively. The percent fat was measured using dual-energy X-ray absorptiometry, and the body mass index, gender, and age were included as independent variables to estimate the measured percent fat. The coefficient of determination (R²), standard error of estimation (SEE), and total error (TE) were calculated to examine the accuracy of the developed equation. Results: The cross-validated R² was 0.731 for Model 1 and 0.735 for Model 2. The SEE was 3.978 for Model 1 and 3.951 for Model 2. The equations developed in this study are more accurate for estimating percent fat of the cross-validation group than those previously published by other researchers. Conclusion: The newly developed equations are comparatively accurate for the estimation of the percent fat of Korean adults.
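An equation of this form can be fitted by ordinary least squares, with R² and SEE computed from the residuals. The sketch below uses synthetic data and invented coefficients; it does not reproduce the published Korean equations:

```python
import numpy as np

def fit_percent_fat(bmi, male, age, pfat):
    """Least-squares fit of %fat = b0 + b1*BMI + b2*sex + b3*age,
    returning the coefficients, R-squared, and the standard error of
    estimate (SEE). A generic sketch, not the published equation."""
    X = np.column_stack([np.ones_like(bmi), bmi, male, age])
    coef, *_ = np.linalg.lstsq(X, pfat, rcond=None)
    resid = pfat - X @ coef
    r2 = 1 - resid.var() / pfat.var()
    # SEE: residual SD with degrees of freedom n - (number of predictors)
    see = float(np.sqrt((resid**2).sum() / (len(pfat) - X.shape[1])))
    return coef, r2, see
```

Cross-validation, as in the study, then means computing R² and SEE on a held-out sample rather than on the data used for fitting.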

  11. Validating fatty acid intake as estimated by an FFQ: how does the 24 h recall perform as reference method compared with the duplicate portion?

    Science.gov (United States)

    Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk

    2018-05-08

    To compare the performance of the commonly used 24 h recall (24hR) with the more distinct duplicate portion (DP) as reference method for validation of fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DP and by on average five 24hR and two FFQ. Plasma n-3 fatty acids and LA were used to objectively compare ranking of individuals based on DP and 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than those based on the 24hR as reference method. Furthermore, when using plasma fatty acids as reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0·33) and n-3:LA (0·34) than the 24hR (0·22 and 0·24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP; therefore, use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.

  12. Web-based Food Behaviour Questionnaire: validation with grades six to eight students.

    Science.gov (United States)

    Hanning, Rhona M; Royall, Dawna; Toews, Jenn E; Blashill, Lindsay; Wegener, Jessica; Driezen, Pete

    2009-01-01

    The web-based Food Behaviour Questionnaire (FBQ) includes a 24-hour diet recall, a food frequency questionnaire, and questions addressing knowledge, attitudes, intentions, and food-related behaviours. The survey has been revised since it was developed and initially validated. The current study was designed to obtain qualitative feedback and to validate the FBQ diet recall. "Think aloud" techniques were used in cognitive interviews with dietitian experts (n=11) and grade six students (n=21). Multi-ethnic students (n=201) in grades six to eight at urban southern Ontario schools completed the FBQ and, subsequently, one-on-one diet recall interviews with trained dietitians. Food group and nutrient intakes were compared. Users provided positive feedback on the FBQ. Suggestions included adding more foods, more photos for portion estimation, and online student feedback. Energy and nutrient intakes were positively correlated between FBQ and dietitian interviews, overall and by gender and grade (all p<0.001). Intraclass correlation coefficients were ≥0.5 for energy and macronutrients, although the web-based survey underestimated energy (-10.5%) and carbohydrate (-15.6%) intakes (p<0.05). Underestimation of rice and pasta portions on the web accounted for 50% of this discrepancy. The FBQ is valid, relative to 24-hour recall interviews, for dietary assessment in diverse populations of Ontario children in grades six to eight.

  13. Convolution-based estimation of organ dose in tube current modulated CT

    Science.gov (United States)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution by the organ dose coefficients (h_organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using a Monte Carlo program with TCM profiles explicitly modeled. The
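The final multiplication step can be illustrated as follows. The code simplifies the convolution to an organ-weighted average of a slice-wise CTDIvol profile; this is a sketch of the idea, not the paper's full method:

```python
import numpy as np

def organ_dose_convolution(organ_density, ctdi_profile, h_organ):
    """Sketch of the convolution-based dose estimate: the organ's
    normalized distribution along z is combined with the slice-wise
    CTDIvol profile of the TCM scan (here reduced to a weighted
    average), and the resulting regional CTDIvol is scaled by the
    organ dose coefficient h_organ. Inputs are illustrative."""
    w = np.asarray(organ_density, dtype=float)
    w = w / w.sum()                              # normalized organ distribution
    ctdi_conv = float(np.dot(w, ctdi_profile))   # organ-weighted regional CTDIvol
    return h_organ * ctdi_conv
```

For a constant tube current (flat profile), this reduces to h_organ times the scanner's CTDIvol, which is the conventional fixed-current organ dose estimate.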

  14. Improved best estimate plus uncertainty methodology, including advanced validation concepts, to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, C.; Williams, B.; Hemez, F.; Atamturktur, S.H.; McClure, P.

    2011-01-01

    Research highlights: → The best estimate plus uncertainty methodology (BEPU) is one option in the licensing of nuclear reactors. → The challenges for extending the BEPU method for fuel qualification for an advanced reactor fuel are primarily driven by schedule, the need for data, and the sufficiency of the data. → In this paper we develop an extended BEPU methodology that can potentially be used to address these new challenges in the design and licensing of advanced nuclear reactors. → The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. → The methodology includes a formalism to quantify an adequate level of validation (predictive maturity) with respect to existing data, so that required new testing can be minimized, saving cost by demonstrating that further testing will not enhance the quality of the predictive tools. - Abstract: Many evolving nuclear energy technologies use advanced predictive multiscale, multiphysics modeling and simulation (M&S) capabilities to reduce the cost and schedule of design and licensing. Historically, the role of experiments has been as a primary tool for the design and understanding of nuclear system behavior, while M&S played the subordinate role of supporting experiments. In the new era of multiscale, multiphysics computational-based technology development, this role has been reversed. The experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations for design and licensing. Minimizing the required number of validation experiments produces cost and time savings. The use of multiscale, multiphysics models introduces challenges in validating these predictive tools - traditional methodologies will have to be modified to address these challenges. This paper gives the basic aspects of a methodology that can potentially be used to address these new challenges in

  15. Data Based Parameter Estimation Method for Circular-scanning SAR Imaging

    Directory of Open Access Journals (Sweden)

    Chen Gong-bo

    2013-06-01

    Full Text Available The circular-scanning Synthetic Aperture Radar (SAR) is a novel working mode whose image quality is closely related to the accuracy of the imaging parameters, especially when the real speed of the platform motion is not accurately known. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocity of the radar platform and the scanning angle of the radar antenna is proposed in this paper. By referring to the basic concept of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first established. The optimal parameter approximation based on the least-squares criterion is then realized by solving the equations derived from the data processing. The simulation results verify the validity of the proposed scheme.
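The Doppler-navigation style estimation can be reduced to a one-parameter linear least-squares problem. The measurement model f_d = 2 v cos(θ)/λ and all variable names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def estimate_velocity(doppler, angles_rad, wavelength):
    """Least-squares estimate of platform speed v from Doppler
    measurements at known scan angles, assuming f_d = 2 v cos(theta)
    / wavelength (an illustrative Doppler-navigation style model).
    The closed form is the one-parameter normal-equation solution."""
    g = 2 * np.cos(angles_rad) / wavelength   # regressor for each look
    return float(np.dot(g, doppler) / np.dot(g, g))
```

With more unknowns (e.g. velocity components plus a scan-angle offset), the same normal-equation approach generalizes to a small linear system solved per data batch.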

  16. Validation and selection of ODE based systems biology models: how to arrive at more reliable decisions.

    Science.gov (United States)

    Hasdemir, Dicle; Hoefsloot, Huub C J; Smilde, Age K

    2015-07-08

    Most ordinary differential equation (ODE) based modeling studies in systems biology involve a hold-out validation step for model validation. In this framework a pre-determined part of the data is used as validation data and is therefore not used for estimating the parameters of the model. The model is assumed to be validated if the model predictions on the validation dataset show good agreement with the data. Model selection between alternative model structures can also be performed in the same setting, based on the predictive power of the model structures on the validation dataset. However, drawbacks associated with this approach are usually under-estimated. We have carried out simulations using a recently published High Osmolarity Glycerol (HOG) pathway model from S. cerevisiae to demonstrate these drawbacks. We have shown that it is very important how the data is partitioned and which part of the data is used for validation purposes. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Furthermore, finding sensible partitioning schemes that would lead to reliable decisions is heavily dependent on the biology and unknown model parameters, which turns the problem into a paradox. This brings the need for alternative validation approaches that offer flexible partitioning of the data. For this purpose, we have introduced a stratified random cross-validation (SRCV) approach that successfully overcomes these limitations. SRCV leads to more stable decisions for both validation and selection which are not biased by underlying biological phenomena. Furthermore, it is less dependent on the specific noise realization in the data. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.
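The stratified partitioning at the core of SRCV can be sketched as a round-robin deal of shuffled indices within each stratum, so every fold preserves the stratum proportions. This is an illustration of the concept, not the authors' implementation:

```python
import numpy as np

def stratified_folds(labels, k=5, seed=0):
    """Split sample indices into k folds, stratified by label:
    indices are shuffled within each stratum and dealt round-robin,
    so each fold keeps (approximately) the stratum proportions."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    labels = np.asarray(labels)
    for stratum in np.unique(labels):
        idx = np.flatnonzero(labels == stratum)
        rng.shuffle(idx)                       # random assignment within stratum
        for j, i in enumerate(idx):
            folds[j % k].append(int(i))
    return [sorted(f) for f in folds]
```

In the ODE setting the "strata" would be experimentally meaningful groupings of the time-series data (e.g. conditions or observables), so that no fold is dominated by a single regime of the dynamics.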

  17. Validation of a Dish-Based Semiquantitative Food Questionnaire in Rural Bangladesh

    Directory of Open Access Journals (Sweden)

    Pi-I. D. Lin

    2017-01-01

    Full Text Available A locally validated tool was needed to evaluate long-term dietary intake in rural Bangladesh. We assessed the validity of a 42-item dish-based semi-quantitative food frequency questionnaire (FFQ) using two 3-day food diaries (FDs). We selected a random subset of 47 families (190 participants) from a longitudinal arsenic biomonitoring study in Bangladesh to administer the FFQ. Two 3-day FDs were completed by the female head of each household, and we used an adult male equivalent method to estimate the FD intakes for the other participants. Food and nutrient intakes measured by FFQ and FD were compared using Pearson's and Spearman's correlations, paired t-tests, percent difference, cross-classification, weighted kappa, and Bland–Altman analysis. Results showed good validity for total energy intake (paired t-test, p > 0.05; percent difference < 10%), with no proportional bias (Bland–Altman correlation, p > 0.05). After energy adjustment and de-attenuation for within-person variation, macronutrient intakes had excellent correlations ranging from 0.55 to 0.70. Validity for micronutrients was mixed. High intraclass correlation coefficients (ICCs) were found for most nutrients between the two seasons, except vitamin A. This dish-based FFQ provided adequate validity to assess and rank long-term dietary intake in rural Bangladesh for most food groups and nutrients, and should be useful for studying dietary-disease relationships.
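The de-attenuation step mentioned above commonly uses the within-/between-person variance ratio of the reference method. The sketch below assumes the standard correction formula, which is not necessarily the exact variant used in this study:

```python
import math

def deattenuate(r_obs, lambda_ratio, n_reps):
    """De-attenuate an observed FFQ-vs-reference correlation for
    within-person variation in the reference method, assuming the
    standard correction r_true = r_obs * sqrt(1 + lambda/n), where
    lambda is the within- to between-person variance ratio of the
    reference and n its number of replicates per person."""
    return r_obs * math.sqrt(1 + lambda_ratio / n_reps)
```

Because day-to-day intake varies, a few replicate diary days understate the true long-term correlation; the correction inflates it accordingly, and the inflation shrinks as more replicate days are collected.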

  18. Relative validity of fruit and vegetable intake estimated by the food frequency questionnaire used in the Danish National Birth Cohort

    DEFF Research Database (Denmark)

    Mikkelsen, Tina B.; Olsen, Sjurdur F.; Rasmussen, Salka E.

    2007-01-01

    Objective: To validate the fruit and vegetable intake estimated from the Food Frequency Questionnaire (FFQ) used in the Danish National Birth Cohort (DNBC). Subjects and setting: The DNBC is a cohort of 101,042 pregnant women in Denmark, who received an FFQ by mail in gestation week 25. A validation study with 88 participants was made. A seven-day weighed food diary (FD) and three different biomarkers were employed as comparison methods. Results: Significant correlations between FFQ and FD-based estimates were found for fruit (r=0.66); vegetables (r=0.32); juice (r=0.52); fruit and vegetables (F&V) (r=0.57); and fruit, vegetables, and juice (F&V&J) (r=0.62). Sensitivities of correct classification by FFQ into the two lowest and the two highest quintiles of F&V&J intake were 58-67% and 50-74%, respectively, and specificities were 71-79% and 65-83%, respectively. F&V&J intake estimated from …

  19. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy.

    Science.gov (United States)

    Porto, Paolo; Walling, Des E

    2012-04-01

    Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by (137)Cs and (210)Pb(ex) measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by (137)Cs and (210)Pb(ex) measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of (137)Cs and (210)Pb(ex) measurements to estimate rates of soil loss from cultivated land could be undertaken. The results demonstrate a close consistency between the measured rates of soil
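Converting a (137)Cs inventory deficit into a soil loss rate is often done with a simple proportional model. The sketch below illustrates that common conversion and is not necessarily the calibration procedure used by these authors:

```python
def cs137_proportional_model(a_ref, a_point, plough_mass, years):
    """Proportional-model sketch for cultivated land: the fractional
    loss of the reference 137Cs inventory a_ref (Bq/m^2) at a sampling
    point is equated with the fraction of the plough-layer mass
    (kg/m^2) eroded since fallout, giving a mean soil loss rate in
    kg m^-2 yr^-1. All inputs are illustrative."""
    frac_lost = 1 - a_point / a_ref       # fraction of inventory removed
    return plough_mass * frac_lost / years
```

For example, a point inventory 20% below the reference value with a 260 kg/m² plough layer over 50 years implies roughly 1 kg m⁻² yr⁻¹ of soil loss; more refined calibration models account for tillage mixing and particle-size effects.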

  20. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    Science.gov (United States)

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, predictive values, positive and negative, were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group because of unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease who are typically used in validation studies may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.
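The reported predictive values follow from Bayes' rule applied to the sensitivity, specificity, and assumed prevalence. The sketch below reproduces the PPV of about 0.70 for sens 0.83, spec 0.96, prevalence 0.10 (the NPV computed this way comes out slightly higher than the reported 0.96):

```python
def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive values of a screening test from
    its sensitivity, specificity, and an assumed prevalence (Bayes' rule
    over the four outcome probabilities)."""
    tp = sens * prevalence                  # true positives
    fp = (1 - spec) * (1 - prevalence)      # false positives
    tn = spec * (1 - prevalence)            # true negatives
    fn = (1 - sens) * prevalence            # false negatives
    return tp / (tp + fp), tn / (tn + fn)
```

The strong dependence of PPV on prevalence is exactly why the abstract's caution about population-based validation matters: the same test screens very differently in a memory clinic than in the general elderly population.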

  1. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from … to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF …

  2. Validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 (137Cs) and excess lead-210 (210Pbex), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by 137Cs and 210Pbex measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by 137Cs and 210Pbex measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of 137Cs and 210Pbex measurements to estimate rates of soil loss from cultivated land could be undertaken. The results demonstrate a close consistency between the measured rates of soil loss and

  3. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
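
The bootstrap step of such an algorithm can be sketched as follows: resample the validation set, compute the area under the ROC curve via the rank-sum (Mann-Whitney) identity, and average across resamples. All names and data here are illustrative, not the authors' implementation:

```python
import random

def auc(pos, neg):
    """AUC via the Mann-Whitney identity: the probability that a random
    event outscores a random non-event, ties counted as half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc(scores, labels, n_boot=200, seed=1):
    """Mean AUC over bootstrap resamples of a validation set."""
    random.seed(seed)
    aucs = []
    for _ in range(n_boot):
        idx = [random.randrange(len(scores)) for _ in range(len(scores))]
        pos = [scores[i] for i in idx if labels[i] == 1]
        neg = [scores[i] for i in idx if labels[i] == 0]
        if pos and neg:  # skip degenerate resamples containing one class only
            aucs.append(auc(pos, neg))
    return sum(aucs) / len(aucs)

# Illustrative data: higher scores should indicate the event (label 1)
scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1]
print(round(bootstrap_auc(scores, labels), 2))
```

The full algorithm additionally tracks calibration across resamples (the estimated calibration index); the AUC piece above is just the discrimination ingredient.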

  4. Validating GPM-based Multi-satellite IMERG Products Over South Korea

    Science.gov (United States)

    Wang, J.; Petersen, W. A.; Wolff, D. B.; Ryu, G. H.

    2017-12-01

Accurate precipitation estimates derived from space-borne satellite measurements are critical for a wide variety of applications such as water budget studies, and prevention or mitigation of natural hazards caused by extreme precipitation events. This study validates the near-real-time Early Run, Late Run and the research-quality Final Run Integrated Multi-Satellite Retrievals for GPM (IMERG) using Korean Quantitative Precipitation Estimation (QPE). The Korean QPE data are at a 1-hour temporal resolution and 1-km by 1-km spatial resolution, and were developed by the Korea Meteorological Administration (KMA) from a Real-time ADjusted Radar-AWS (Automatic Weather Station) Rainrate (RAD-RAR) system utilizing eleven radars over the Republic of Korea. The validation is conducted by comparing Version-04A IMERG (Early, Late and Final Runs) with Korean QPE over the area (124.5E-130.5E, 32.5N-39N) at various spatial and temporal scales during March 2014 through November 2016. The comparisons demonstrate the reasonably good ability of Version-04A IMERG products in estimating precipitation over South Korea's complex topography, which consists mainly of hills and mountains, as well as large coastal plains. Based on these data, the Early Run, Late Run and Final Run IMERG precipitation estimates higher than 0.1 mm h-1 are about 20.1%, 7.5% and 6.1% higher than Korean QPE at 0.1° and 1-hour resolutions. Detailed comparison results are available at https://wallops-prf.gsfc.nasa.gov/KoreanQPE.V04/index.html
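
Bias percentages like those above reduce to a ratio of conditional sums over co-located rain-rate pairs. A toy sketch (the data and detection threshold are illustrative, not the study's values):

```python
def relative_bias(estimates, reference, threshold=0.1):
    """Mean relative bias (%) of satellite estimates vs. reference QPE,
    restricted to reference rain rates above a detection threshold (mm/h)."""
    pairs = [(e, r) for e, r in zip(estimates, reference) if r > threshold]
    return 100.0 * (sum(e for e, _ in pairs) / sum(r for _, r in pairs) - 1.0)

imerg = [1.2, 0.5, 3.1, 0.0, 2.2]   # hypothetical IMERG rates, mm/h
qpe   = [1.0, 0.5, 3.0, 0.0, 2.0]   # hypothetical radar-gauge QPE rates, mm/h
print(round(relative_bias(imerg, qpe), 1))  # → 7.7 (percent overestimate)
```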

  5. Validation of walk score for estimating neighborhood walkability: an analysis of four US metropolitan areas.

    Science.gov (United States)

    Duncan, Dustin T; Aldstadt, Jared; Whalen, John; Melly, Steven J; Gortmaker, Steven L

    2011-11-01

Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability based on GIS (objective) indicators of neighborhood walkability, using addresses from four US metropolitan areas and several street network buffer distances (i.e., 400-, 800-, and 1,600-meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5-11 years and their families participating in YMCA-administered, after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participants' residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated, as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators, such as density of retail destinations and intersection density. Correlations generally became stronger with a larger spatial scale, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure of estimating certain aspects of neighborhood walkability, particularly at the 1600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure of estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales.
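
Spearman's rank correlation, used above, is simply the Pearson correlation of the ranks. A self-contained sketch with hypothetical walkability data:

```python
def ranks(xs):
    """Average ranks (1-based); ties receive the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

walk_score = [34, 56, 71, 88, 90]     # hypothetical Walk Scores
intersections = [20, 45, 50, 80, 95]  # hypothetical intersection densities
print(round(spearman(walk_score, intersections), 2))  # → 1.0 (monotone data)
```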

  6. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo imaging of cancer. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies have generally been validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
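
The two ingredients the method combines, a k-means estimate of the background and a background-corrected threshold, can be sketched on a flat list of voxel intensities. The threshold fraction and the data below are illustrative, not the values calibrated on the NEMA IQ phantom:

```python
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; returns the lower centroid as the background."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]  # background cluster
        b = [v for v in values if abs(v - lo) > abs(v - hi)]   # lesion cluster
        if a:
            lo = sum(a) / len(a)
        if b:
            hi = sum(b) / len(b)
    return lo

def segment(voxels, frac=0.42):
    """Background-corrected fixed-fraction threshold: a voxel belongs to the
    MTV if it exceeds background + frac * (max - background).
    The fraction 0.42 is illustrative, not the paper's calibrated value."""
    bg = kmeans_1d(voxels)
    thr = bg + frac * (max(voxels) - bg)
    return [v > thr for v in voxels]

uptake = [0.2, 0.3, 0.25, 4.0, 5.1, 4.8, 0.4, 3.9]  # hypothetical voxel SUVs
print(sum(segment(uptake)))  # → 4 voxels inside the MTV
```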

  7. Variables influencing wearable sensor outcome estimates in individuals with stroke and incomplete spinal cord injury: a pilot investigation validating two research grade sensors.

    Science.gov (United States)

    Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun

    2018-03-13

Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms, based on healthy individuals, to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairment such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent, and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics was also studied. 28 participants (healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from gold standard measures. For verifying validity, a series of Kruskal-Wallis ANOVA tests (Games-Howell multiple comparison for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of outcome metrics estimated by each of the devices in comparison with the designated gold standard measurements. The sensor type, sensor location

  8. Observation-based Quantitative Uncertainty Estimation for Realtime Tsunami Inundation Forecast using ABIC and Ensemble Simulation

    Science.gov (United States)

    Takagawa, T.

    2016-12-01

An ensemble forecasting scheme for tsunami inundation is presented. The scheme consists of three elemental methods. The first is a hierarchical Bayesian inversion using Akaike's Bayesian Information Criterion (ABIC). The second is Monte Carlo sampling from a probability density function of a multidimensional normal distribution. The third is ensemble analysis of tsunami inundation simulations with multiple tsunami sources. Simulation-based validation of the model was conducted. A tsunami scenario of an M9.1 Nankai earthquake was chosen as the target of validation. Tsunami inundation around Nagoya Port was estimated by using synthetic tsunami waveforms at offshore GPS buoys. The error of estimation of the tsunami inundation area was about 10% even when only ten minutes of observation data were used. The estimation accuracy of waveforms on/off land and the spatial distribution of maximum tsunami inundation depth are demonstrated.
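
The second and third elemental methods can be sketched in a few lines: draw source parameters from a normal posterior, run each through a forward model, and summarize the resulting ensemble. The forward model, mean, and spread below are placeholders, not values from the Nankai scenario:

```python
import random

random.seed(0)

def forecast_depth(slip):
    """Toy forward model: inundation depth grows linearly with fault slip."""
    return 0.8 * slip + 0.5

# Monte Carlo sampling of a source parameter from its posterior (the paper
# draws from a multivariate normal fitted via ABIC; a single univariate
# parameter with illustrative mean/sd stands in here).
samples = [random.gauss(10.0, 1.5) for _ in range(1000)]

# Ensemble analysis of the simulated inundation depths
depths = [forecast_depth(s) for s in samples]
mean = sum(depths) / len(depths)
sd = (sum((d - mean) ** 2 for d in depths) / len(depths)) ** 0.5
print(round(mean, 2), round(sd, 2))
```

The ensemble mean gives the forecast and the spread gives the quantitative uncertainty estimate that a single deterministic simulation cannot provide.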

  9. A comparative study and validation of state estimation algorithms for Li-ion batteries in battery management systems

    International Nuclear Information System (INIS)

    Klee Barillas, Joaquín; Li, Jiahao; Günther, Clemens; Danzer, Michael A.

    2015-01-01

Highlights: • Description of state observers for estimating the battery's SOC. • Implementation of four estimation algorithms in a BMS. • Reliability and performance study of the BMS regarding the estimation algorithms. • Analysis of the robustness and code properties of the estimation approaches. • Guide to evaluate estimation algorithms to improve BMS performance. - Abstract: To increase lifetime, safety, and energy usage, battery management systems (BMS) for Li-ion batteries have to be capable of estimating the state of charge (SOC) of the battery cells with a very low estimation error. Accurate SOC estimation and real-time reliability are critical issues for a BMS. In general, an increasing complexity of the estimation methods leads to higher accuracy. On the other hand, it also leads to a higher computational load and may exceed the BMS limitations or increase its costs. An approach to evaluate and verify estimation algorithms is presented as a prerequisite prior to the release of the battery system. The approach consists of an analysis concerning the SOC estimation accuracy, the code properties, complexity, the computation time, and the memory usage. Furthermore, a study for estimation methods is proposed for their evaluation and validation with respect to convergence behavior, parameter sensitivity, initialization error, and performance. In this work, the introduced analysis is demonstrated with four of the most published model-based estimation algorithms: the Luenberger observer, sliding-mode observer, Extended Kalman Filter and Sigma-point Kalman Filter. Experiments under dynamic current conditions are used to verify the real-time functionality of the BMS. The results show that a simple estimation method like the sliding-mode observer can compete with the Kalman-based methods while requiring less computation time and memory. Depending on the battery system's application, the estimation algorithm has to be selected to fulfill the
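
Of the four algorithms compared, the Luenberger observer is the simplest: for a one-state coulomb-counting model with a linearized open-circuit-voltage measurement it reduces to a predict-then-correct loop. All parameters below are illustrative, not from the paper:

```python
Q = 3600.0            # capacity in ampere-seconds (1 Ah), illustrative
V0, ALPHA = 3.0, 1.0  # linearized OCV(soc) = V0 + ALPHA * soc, illustrative
L = 0.1               # observer gain, illustrative

def luenberger_soc(currents, voltages, dt=1.0, soc_hat=0.0):
    """Discrete-time Luenberger observer for a one-state SOC model:
    predict with coulomb counting, correct with the voltage residual."""
    for i, v in zip(currents, voltages):
        soc_hat = soc_hat - i * dt / Q               # prediction step
        soc_hat += L * (v - (V0 + ALPHA * soc_hat))  # measurement correction
    return soc_hat

# A cell at rest with true SOC 0.5: the observer converges from a wrong
# initial guess using the voltage measurement alone.
true_v = V0 + ALPHA * 0.5
est = luenberger_soc(currents=[0.0] * 100, voltages=[true_v] * 100, soc_hat=0.0)
print(round(est, 2))  # → 0.5
```

The Kalman-based variants replace the fixed gain L with one computed from the model and noise covariances at each step, which is where their extra computational cost comes from.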

  10. Are cannabis prevalence estimates comparable across countries and regions? A cross-cultural validation using search engine query data.

    Science.gov (United States)

    Steppan, Martin; Kraus, Ludwig; Piontek, Daniela; Siciliano, Valeria

    2013-01-01

    Prevalence estimation of cannabis use is usually based on self-report data. Although there is evidence on the reliability of this data source, its cross-cultural validity is still a major concern. External objective criteria are needed for this purpose. In this study, cannabis-related search engine query data are used as an external criterion. Data on cannabis use were taken from the 2007 European School Survey Project on Alcohol and Other Drugs (ESPAD). Provincial data came from three Italian nation-wide studies using the same methodology (2006-2008; ESPAD-Italia). Information on cannabis-related search engine query data was based on Google search volume indices (GSI). (1) Reliability analysis was conducted for GSI. (2) Latent measurement models of "true" cannabis prevalence were tested using perceived availability, web-based cannabis searches and self-reported prevalence as indicators. (3) Structure models were set up to test the influences of response tendencies and geographical position (latitude, longitude). In order to test the stability of the models, analyses were conducted on country level (Europe, US) and on provincial level in Italy. Cannabis-related GSI were found to be highly reliable and constant over time. The overall measurement model was highly significant in both data sets. On country level, no significant effects of response bias indicators and geographical position on perceived availability, web-based cannabis searches and self-reported prevalence were found. On provincial level, latitude had a significant positive effect on availability indicating that perceived availability of cannabis in northern Italy was higher than expected from the other indicators. Although GSI showed weaker associations with cannabis use than perceived availability, the findings underline the external validity and usefulness of search engine query data as external criteria. The findings suggest an acceptable relative comparability of national (provincial) prevalence

  11. Supporting Analogy-based Effort Estimation with the Use of Ontologies

    Directory of Open Access Journals (Sweden)

    Joanna Kowalska

    2014-06-01

The paper concerns effort estimation of software development projects, in particular at the level of product delivery stages. It proposes a new approach to modeling project data to support expert-supervised analogy-based effort estimation. The data is modeled using Semantic Web technologies, such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Moreover, in the paper, we define a method of supervised case-based reasoning. The method enables searching for similar project tasks at different levels of abstraction. For instance, instead of searching for a task performed by a specific person, one could look for tasks performed by people with similar capabilities. The proposed method relies on an ontology that defines the core concepts and relationships. However, it is possible to introduce new classes and relationships without the need to alter the search mechanisms. Finally, we implemented a prototype tool that was used to preliminarily validate the proposed approach. We observed that the proposed approach could potentially help experts in estimating non-trivial tasks that are often underestimated.

  12. Validity evidence based on test content.

    Science.gov (United States)

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  13. Validation of Temperature Histories for Structural Steel Welds Using Estimated Heat-Affected-Zone Edges

    Science.gov (United States)

    2016-10-12

Naval Research Laboratory, Washington, DC 20375-5320; Report NRL/MR/6394--16-9690; author S.G. Lambrakos. (No abstract is recoverable from the source, which contains only report form-field residue and reference fragments, e.g. O. Grong, Metallurgical Modelling of Welding, 2nd ed.)

  14. A gradient-based method for segmenting FDG-PET images: methodology and validation

    International Nuclear Information System (INIS)

    Geets, Xavier; Lee, John A.; Gregoire, Vincent; Bol, Anne; Lonneux, Max

    2007-01-01

    A new gradient-based method for segmenting FDG-PET images is described and validated. The proposed method relies on the watershed transform and hierarchical cluster analysis. To allow a better estimation of the gradient intensity, iteratively reconstructed images were first denoised and deblurred with an edge-preserving filter and a constrained iterative deconvolution algorithm. Validation was first performed on computer-generated 3D phantoms containing spheres, then on a real cylindrical Lucite phantom containing spheres of different volumes ranging from 2.1 to 92.9 ml. Moreover, laryngeal tumours from seven patients were segmented on PET images acquired before laryngectomy by the gradient-based method and the thresholding method based on the source-to-background ratio developed by Daisne (Radiother Oncol 2003;69:247-50). For the spheres, the calculated volumes and radii were compared with the known values; for laryngeal tumours, the volumes were compared with the macroscopic specimens. Volume mismatches were also analysed. On computer-generated phantoms, the deconvolution algorithm decreased the mis-estimate of volumes and radii. For the Lucite phantom, the gradient-based method led to a slight underestimation of sphere volumes (by 10-20%), corresponding to negligible radius differences (0.5-1.1 mm); for laryngeal tumours, the segmented volumes by the gradient-based method agreed with those delineated on the macroscopic specimens, whereas the threshold-based method overestimated the true volume by 68% (p = 0.014). Lastly, macroscopic laryngeal specimens were totally encompassed by neither the threshold-based nor the gradient-based volumes. The gradient-based segmentation method applied on denoised and deblurred images proved to be more accurate than the source-to-background ratio method. (orig.)

  15. A validated HPTLC method for estimation of moxifloxacin hydrochloride in tablets.

    Science.gov (United States)

    Dhillon, Vandana; Chaudhary, Alok Kumar

    2010-10-01

A simple HPTLC method with high accuracy, precision and reproducibility was developed for the routine estimation of moxifloxacin hydrochloride in tablets available on the market, and was validated for various parameters according to ICH guidelines. Moxifloxacin hydrochloride was estimated at 292 nm by densitometry using Silica gel 60 F254 as the stationary phase and a premix of methylene chloride:methanol:strong ammonia solution:acetonitrile (10:10:5:10) as the mobile phase. The method was found to be linear over the range 9-54 ng, with a correlation coefficient >0.99. The regression equation was: AUC = 65.57 × (amount in nanograms) + 163 (r² = 0.9908).
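
A calibration line like the one reported is obtained from raw densitometry points by ordinary least squares. The data points below are synthetic, constructed to lie near the published line AUC = 65.57 × amount + 163:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept, with r**2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2

# Hypothetical calibration points: amount spotted (ng) vs. densitometric AUC
amounts = [9, 18, 27, 36, 45, 54]
aucs = [760, 1335, 1940, 2520, 3140, 3700]
slope, intercept, r2 = fit_line(amounts, aucs)
print(round(slope, 2), round(intercept, 1), round(r2, 4))
```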

  16. An Estimator for Attitude and Heading Reference Systems Based on Virtual Horizontal Reference

    DEFF Research Database (Denmark)

    Wang, Yunlong; Soltani, Mohsen; Hussain, Dil muhammed Akbar

    2016-01-01

The output of attitude determination systems suffers from large errors in case of accelerometer malfunctions. In this paper, an attitude estimator based on Virtual Horizontal Reference (VHR) is designed for an Attitude Heading and Reference System (AHRS) to cope with this problem. The VHR makes it possible to correct the output of roll and pitch of the attitude estimator in situations without accelerometer measurements, which cannot be achieved by the conventional nonlinear attitude estimator. The performance of VHR is tested both in simulation and hardware environments to validate their estimation performance. Moreover, the hardware test results are compared with those of a high-precision commercial AHRS to verify the estimation results. The implemented algorithm has shown high accuracy of attitude estimation that makes the system suitable for many applications.
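
For context, the roll and pitch that an AHRS normally derives from the accelerometer come from the direction of the gravity vector; the VHR substitutes for exactly this reference when accelerometer data are unusable. The standard formulas, sketched:

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Roll and pitch (radians) from a static accelerometer reading,
    using the gravity vector as the horizontal reference."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Level sensor: gravity along +z only, so both angles are zero
r, p = roll_pitch_from_accel(0.0, 0.0, 9.81)
print(round(math.degrees(r), 1), round(math.degrees(p), 1))  # → 0.0 0.0
```

These formulas are only valid when the accelerometer measures gravity alone (no sustained linear acceleration), which is precisely the assumption that breaks down in the fault cases the paper addresses.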

  17. Differences in characteristics of raters who use the visual estimation method in hospitals based on their training experiences.

    Science.gov (United States)

    Kawasaki, Yui; Tamaura, Yuki; Akamatsu, Rie; Sakai, Masashi; Fujiwara, Keiko

    2018-02-07

Despite a clinical need, only a few studies have provided information concerning visual estimation training for raters to improve the validity of their evaluations. This study aims to describe the differences in the characteristics of raters who evaluated patients' dietary intake in hospitals using the visual estimation method, based on their training experiences. We collected data from three hospitals in Tokyo from August to September 2016. The participants were 199 nursing staff members, who completed a self-administered questionnaire on demographic data; working career; training in the visual estimation method; knowledge, attitude, and practice associated with nutritional care; and self-evaluation of the method validity of and skills in visual estimation. We classified participants into two groups, experienced and inexperienced, based on whether they had received training. The chi-square test, Mann-Whitney U test, and univariable and multivariable logistic regression analyses were used to describe the differences between these two groups in terms of their characteristics; knowledge, attitude, and practice associated with nutritional care; and self-evaluation of method validity and tips used in the visual estimation method. Of the 158 staff members (79.4%) (118 nurses and 40 nursing assistants) who agreed to participate in the analysis, thirty-three participants (20.9%) were trained in the visual estimation method. Participants who had received training had better knowledge (2.70 ± 0.81, score range 1-5) than those who had not received any training (2.34 ± 0.74, p = 0.03). The self-evaluated method validity score of the visual estimation method was higher in the experienced group (3.78 ± 0.61, score range 1-5) than in the inexperienced group (3.40 ± 0.66). Participants who were trained had adequate knowledge (OR: 2.78, 95% CI: 1.05-7.35) and frequently used tips in visual estimation (OR: 1.85, 95% CI: 1.26-2.73). Trained participants had more required knowledge and

  18. Validation of a Robust Neural Real-Time Voltage Estimator for Active Distribution Grids on Field Data

    DEFF Research Database (Denmark)

    Pertl, Michael; Douglass, Philip James; Heussen, Kai

    2018-01-01

The installation of measurements in distribution grids enables the development of data driven methods for the power system. However, these methods have to be validated in order to understand the limitations and capabilities for their use. This paper presents a systematic validation of a neural network approach for voltage estimation in active distribution grids by means of measured data from two feeders of a real low voltage distribution grid. The approach enables a real-time voltage estimation at locations in the distribution grid where otherwise only non-real-time measurements are available.

  19. Validating the InterVA model to estimate the burden of mortality from verbal autopsy data: a population-based cross-sectional study.

    Directory of Open Access Journals (Sweden)

    Sebsibe Tadesse

BACKGROUND: In countries with incomplete or no vital registration systems, verbal autopsy data are often reviewed by physicians in order to assign the probable cause of death. But in addition to being time- and energy-consuming, the method is liable to produce inconsistent results. The aim of this study is to validate the InterVA model for estimating the burden of mortality from verbal autopsy data by using physician review as a reference standard. METHODS AND FINDINGS: A population-based cross-sectional study was conducted from March to April, 2012. All adults aged ≥ 14 years who died between 01 January, 2010 and 15 February, 2012 were included in the study. The verbal autopsy interviews were reviewed by the InterVA model and physicians to estimate cause-specific mortality fractions. Cohen's kappa statistic, sensitivity, specificity, positive predictive value, and negative predictive value were applied to compare the agreement between the InterVA model and the physician review. A total of 408 adult deaths were studied. There was a general similarity, with just slight differences, between the InterVA model and the physicians in assigning cause-specific mortality. Both approaches showed an overall agreement in 298 (73%) cases [kappa = 0.49, 95% CI: 0.37-0.60]. The observed sensitivities and specificities across cause-of-death categories varied from 13.3% to 81.9% and 77.7% to 99.5%, respectively. CONCLUSIONS: In understanding the burden of disease and setting health intervention priorities in areas that lack reliable vital registration systems, an accurate analysis of verbal autopsies is essential. Therefore, users should be aware of the suboptimal performance of the InterVA model. Similar validation studies need to be undertaken considering the limitations of the physician review as a gold standard, since physicians may misinterpret some of the verbal autopsy data and reach a wrong conclusion about the cause of death.
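
The kappa statistic reported above measures chance-corrected agreement and is easy to compute from the cross-tabulation of the two methods' assignments. The 2×2 matrix below is hypothetical, not the study's data:

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square agreement matrix
    (rows: rater A's categories, columns: rater B's categories)."""
    n = sum(sum(row) for row in matrix)
    po = sum(matrix[i][i] for i in range(len(matrix))) / n  # observed agreement
    pe = sum(  # chance agreement from the marginal totals
        sum(matrix[i]) * sum(row[i] for row in matrix)
        for i in range(len(matrix))
    ) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical 2x2 agreement between InterVA and physician review
m = [[60, 15],
     [10, 15]]
print(round(cohens_kappa(m), 2))  # → 0.38
```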

  20. Validity of 20-metre multi stage shuttle run test for estimation of ...

    African Journals Online (AJOL)

Validity of the 20-metre multi stage shuttle run test for estimation of maximum oxygen uptake in Indian male university students. P Chatterjee, AK Banerjee, P Debnath, P Bas, B Chatterjee. Abstract. No Abstract. South African Journal for Physical, Health Education, Recreation and Dance, Vol. 12(4) 2006: pp. 461-467.

  1. Validity of Two New Brief Instruments to Estimate Vegetable Intake in Adults

    Directory of Open Access Journals (Sweden)

    Janine Wright

    2015-08-01

Cost-effective population-based monitoring tools are needed for nutritional surveillance and interventions. The aim was to evaluate the relative validity of two new brief instruments (three-item: VEG3, and five-item: VEG5) for estimating usual total vegetable intake in comparison to a 7-day dietary record (7DDR). Sixty-four Australian adult volunteers aged 30 to 69 years participated (30 males, mean age ± SD 56.3 ± 9.2 years, and 34 females, mean age ± SD 55.3 ± 10.0 years). Pearson correlations between the 7DDR and VEG3 and VEG5 were modest, at 0.50 and 0.56, respectively. VEG3 significantly (p < 0.001) underestimated mean vegetable intake compared to 7DDR measures (2.9 ± 1.3 vs. 3.6 ± 1.6 serves/day, respectively), whereas mean vegetable intake assessed by VEG5 did not differ from 7DDR measures (3.3 ± 1.5 vs. 3.6 ± 1.6 serves/day). VEG5 was also able to correctly identify 95%, 88% and 75% of those subjects not consuming five, four and three serves/day of vegetables, respectively, according to their 7DDR classification. VEG5, but not VEG3, can estimate the usual total vegetable intake of population groups, and had superior performance to VEG3 in identifying those not meeting different levels of vegetable intake. VEG5, a brief instrument, shows measurement characteristics useful for population-based monitoring and intervention targeting.

  2. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
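    The Monte Carlo construction of the covariance matrix and the 95% constant-probability contour described here can be sketched as follows; the two quantities of interest, the uncertainty magnitudes and the normality assumptions are all illustrative, not taken from the paper's example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation with two quantities of interest; each Monte Carlo
# trial perturbs the comparison error E = (experiment - model) by one
# systematic and one random error draw per quantity (standard uncertainties
# below are assumed values, not the paper's).
sigma_rand = np.array([0.5, 0.8])
sigma_sys = np.array([0.3, 0.4])

n_mc = 100_000
samples = (rng.normal(0.0, sigma_sys, size=(n_mc, 2))
           + rng.normal(0.0, sigma_rand, size=(n_mc, 2)))

cov = np.cov(samples, rowvar=False)  # covariance matrix of the comparison error

# A constant-probability contour of a bivariate normal encloses 95% of the
# probability where the squared Mahalanobis distance equals the chi-square
# 0.95 quantile with 2 degrees of freedom (about 5.991).
chi2_95 = 5.991
eigvals, eigvecs = np.linalg.eigh(cov)
semi_axes = np.sqrt(eigvals * chi2_95)  # semi-axes of the 95% ellipse
```

    In the paper's procedure the systematic draws would additionally induce correlation between repeated measurements; the sketch above only shows how the sampled errors are pooled into a covariance estimate and an elliptical contour.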

  3. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  4. Permeability Estimation of Rock Reservoir Based on PCA and Elman Neural Networks

    Science.gov (United States)

    Shi, Ying; Jian, Shaoyong

    2018-03-01

    An intelligent method based on fuzzy neural networks with a PCA algorithm is proposed to estimate the permeability of rock reservoirs. First, dimensionality reduction is applied to the input parameters by the principal component analysis method. Then, the mapping relationship between rock slice characteristic parameters and permeability is found through fuzzy neural networks. The estimation validity and reliability of this method were tested with practical data from the Yan'an region in the Ordos Basin. The results showed that the average relative error of permeability estimation for this method is 6.25%, and that the method converges faster and is more accurate than others. Therefore, by using cheap rock-slice information, the permeability of rock reservoirs can be estimated efficiently and accurately, with high reliability, practicability and application prospects.
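    The PCA dimensionality-reduction step can be sketched with a plain SVD; the input matrix below is a random stand-in for the rock-slice characteristic parameters, not data from the study:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a (samples x features) matrix onto its leading principal
    components via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt: PCs
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))  # stand-in for rock-slice characteristic parameters
Z = pca_reduce(X, 3)          # reduced inputs for the downstream network
```

    The reduced matrix `Z` would then be the input to the fuzzy neural network that learns the permeability mapping.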

  5. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)

  6. A fuel-based approach to estimating motor vehicle exhaust emissions

    Science.gov (United States)

    Singer, Brett Craig

    in California appear to understate total exhaust CO and VOC emissions, while overstating the importance of cold start emissions. The fuel-based approach yields robust, independent, and accurate estimates of on-road vehicle emissions. Fuel-based estimates should be used to validate or adjust official vehicle emission inventories before society embarks on new, more costly air pollution control programs.

  7. Validation of Walk Score® for Estimating Neighborhood Walkability: An Analysis of Four US Metropolitan Areas

    Science.gov (United States)

    Duncan, Dustin T.; Aldstadt, Jared; Whalen, John; Melly, Steven J.; Gortmaker, Steven L.

    2011-01-01

    Neighborhood walkability can influence physical activity. We evaluated the validity of Walk Score® for assessing neighborhood walkability based on GIS (objective) indicators of neighborhood walkability with addresses from four US metropolitan areas with several street network buffer distances (i.e., 400-, 800-, and 1,600-meters). Address data come from the YMCA-Harvard After School Food and Fitness Project, an obesity prevention intervention involving children aged 5–11 years and their families participating in YMCA-administered, after-school programs located in four geographically diverse metropolitan areas in the US (n = 733). GIS data were used to measure multiple objective indicators of neighborhood walkability. Walk Scores were also obtained for the participants' residential addresses. Spearman correlations between Walk Scores and the GIS neighborhood walkability indicators were calculated, as well as Spearman correlations accounting for spatial autocorrelation. There were many significant moderate correlations between Walk Scores and the GIS neighborhood walkability indicators, such as density of retail destinations and intersection density (p < 0.05). Correlations generally became stronger with a larger spatial scale, and there were some geographic differences. Walk Score® is free and publicly available for public health researchers and practitioners. Results from our study suggest that Walk Score® is a valid measure of estimating certain aspects of neighborhood walkability, particularly at the 1,600-meter buffer. As such, our study confirms and extends the generalizability of previous findings demonstrating that Walk Score is a valid measure of estimating neighborhood walkability in multiple geographic locations and at multiple spatial scales. PMID:22163200

  8. Observers for vehicle tyre/road forces estimation: experimental validation

    Science.gov (United States)

    Doumiati, M.; Victorino, A.; Lechner, D.; Baffet, G.; Charara, A.

    2010-11-01

    The motion of a vehicle is governed by the forces generated between the tyres and the road. Knowledge of these vehicle dynamic variables is important for vehicle control systems that aim to enhance vehicle stability and passenger safety. This study introduces a new estimation process for tyre/road forces. It presents many benefits over the existing state-of-the-art works within the dynamic estimation framework. One of its major contributions consists of discussing in detail the vertical and lateral tyre forces at each tyre. The proposed method is based on the dynamic response of a vehicle instrumented with potentially integrated sensors. The estimation process is separated into two principal blocks. The role of the first block is to estimate vertical tyre forces, whereas in the second block two observers are proposed and compared for the estimation of lateral tyre/road forces. The different observers are based on a prediction/estimation Kalman filter. The performance of this concept is tested and compared with real experimental data using a laboratory car. Experimental results show that the proposed approach is a promising technique to provide accurate estimation. Thus, it can be considered as a practical low-cost solution for calculating vertical and lateral tyre/road forces.

  9. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

    Several methods exist to estimate evaporation (E) and transpiration (T). The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation-transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS are validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit schemes is -0.03 mm/day. The paired t-test of the mean difference (p-value = 0.042, t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R2 value of 0.82.

  10. [A method to estimate the short-term fractal dimension of heart rate variability based on wavelet transform].

    Science.gov (United States)

    Zhonggang, Liang; Hong, Yan

    2006-10-01

    A new method for calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on the wavelet transform and filter banks. The implementation is as follows: first, the fractal component is extracted from the HRV signal using the wavelet transform; next, the power spectrum distribution of the fractal component is estimated using an auto-regressive model, and the spectral exponent gamma is estimated by the least-squares method; finally, the fractal dimension of the HRV signal is estimated according to the formula D = 2 - (gamma - 1)/2. To validate the stability and reliability of the proposed method, 24 fractal signals with a known fractal dimension of 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
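    The closing step, mapping a spectral exponent gamma to a fractal dimension via D = 2 - (gamma - 1)/2, can be sketched end-to-end. Here a simple periodogram slope fit stands in for the paper's wavelet/AR spectrum estimate, and the 1/f^gamma test signal is synthesised with a known exponent:

```python
import numpy as np

def fractal_dimension(x, fs=4.0):
    """Estimate D for a 1/f^gamma process: fit the log-log power-spectrum
    slope (-gamma) by least squares, then apply D = 2 - (gamma - 1)/2.
    (A periodogram stands in for the AR spectrum used in the paper.)"""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    slope, _ = np.polyfit(np.log(freqs[1:]), np.log(psd[1:]), 1)
    gamma = -slope
    return 2.0 - (gamma - 1.0) / 2.0

# Synthesise a 1/f^gamma signal with known gamma = 2.2, so D should be 1.4
rng = np.random.default_rng(42)
n, gamma_true = 4096, 2.2
spec = np.fft.rfft(rng.normal(size=n))
f = np.fft.rfftfreq(n)
spec[1:] *= f[1:] ** (-gamma_true / 2.0)  # shape the spectrum to a power law
spec[0] = 0.0
signal = np.fft.irfft(spec, n)
d_hat = fractal_dimension(signal)
```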

  11. Development and validation of GFR-estimating equations using diabetes, transplant and weight

    DEFF Research Database (Denmark)

    Stevens, L.A.; Schmid, C.H.; Zhang, Y.L.

    2009-01-01

    BACKGROUND: We have reported a new equation (CKD-EPI equation) that reduces bias and improves accuracy for GFR estimation compared to the MDRD study equation while using the same four basic predictor variables: creatinine, age, sex and race. Here, we describe the development and validation of this equation as well as other equations that incorporate diabetes, transplant and weight as additional predictor variables. METHODS: Linear regression was used to relate log-measured GFR (mGFR) to sex, race, diabetes, transplant, weight, various transformations of creatinine and age with and without interactions. Equations were developed in a pooled database of 10 studies [2/3 (N = 5504) for development and 1/3 (N = 2750) for internal validation], and final model selection occurred in 16 additional studies [external validation (N = 3896)]. RESULTS: The mean mGFR was 68, 67 and 68 ml/min/1.73 m² …

  12. On the validity of time-dependent AUC estimators.

    Science.gov (United States)

    Schmid, Matthias; Kestler, Hans A; Potapov, Sergej

    2015-01-01

    Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  13. Development of robust flexible OLED encapsulations using simulated estimations and experimental validations

    International Nuclear Information System (INIS)

    Lee, Chang-Chun; Shih, Yan-Shin; Wu, Chih-Sheng; Tsai, Chia-Hao; Yeh, Shu-Tang; Peng, Yi-Hao; Chen, Kuang-Jung

    2012-01-01

    This work analyses the overall stress/strain characteristics of flexible encapsulations with organic light-emitting diode (OLED) devices. A robust methodology composed of a mechanical model of multiple thin films under bending loads and related stress simulations based on nonlinear finite element analysis (FEA) is proposed and validated against related experimental data. With various geometrical combinations of cover plate, stacked thin films and plastic substrate, the position of the neutral axis (NA) plane, which is regarded as a key design parameter to minimize stress impact for the concerned OLED devices, is acquired using the present methodology. The results point out that both the thickness and the mechanical properties of the cover plate help in determining the NA location. In addition, several concave and convex radii are applied to examine the reliable mechanical tolerance and to provide insight into the estimated reliability of foldable OLED encapsulations. (paper)

  14. A novel body circumferences-based estimation of percentage body fat.

    Science.gov (United States)

    Lahav, Yair; Epstein, Yoram; Kedem, Ron; Schermann, Haggai

    2018-03-01

    Anthropometric measures of body composition are often used for rapid and cost-effective estimation of percentage body fat (%BF) in field research, serial measurements and screening. Our aim was to develop a validated estimate of %BF for the general population, based on simple body circumference measures. The study cohort consisted of two consecutive samples of health club members, designated as 'development' (n 476, 61% men, 39% women) and 'validation' (n 224, 50% men, 50% women) groups. All subjects underwent anthropometric measurements as part of their registration to a health club. A dual-energy X-ray absorptiometry (DEXA) scan was used as the 'gold standard' estimate of %BF. Linear regressions were used to construct the predictive equation (%BFcal). Bland-Altman statistics, Lin concordance coefficients and the percentage of subjects falling within 5% of the %BF estimate by DEXA were used to evaluate the accuracy and precision of the equation. The variance inflation factor was used to check multicollinearity. Two distinct equations were developed for men and women: %BFcal (men) = 10.1 - 0.239H + 0.8A - 0.5N; %BFcal (women) = 19.2 - 0.239H + 0.8A - 0.5N (H, height; A, abdomen; N, neck, all in cm). Bland-Altman differences were randomly distributed and showed no fixed bias. Lin concordance coefficients of %BFcal were 0.89 in men and 0.86 in women. About 79.5% of %BF predictions in both sexes were within ±5% of the DEXA value. The Durnin-Womersley skinfolds equation was less accurate in our study group for prediction of %BF than %BFcal. We conclude that %BFcal offers the advantage of obtaining a reliable estimate of %BF from simple measurements that require no sophisticated tools and only minimal prior training and experience.
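    The two published equations translate directly into code; the example measurements below are invented for illustration:

```python
def bf_cal(sex, height_cm, abdomen_cm, neck_cm):
    """Percentage body fat from the circumference-based equations:
    %BFcal = intercept - 0.239*H + 0.8*A - 0.5*N,
    with intercept 10.1 for men and 19.2 for women (H, A, N in cm)."""
    intercept = 10.1 if sex == "male" else 19.2
    return intercept - 0.239 * height_cm + 0.8 * abdomen_cm - 0.5 * neck_cm

bf_m = bf_cal("male", height_cm=180.0, abdomen_cm=90.0, neck_cm=40.0)
bf_f = bf_cal("female", height_cm=165.0, abdomen_cm=85.0, neck_cm=33.0)
```

    Note the two equations differ only in the intercept, so at identical measurements the female estimate sits a constant 9.1 percentage points above the male estimate.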

  15. A new validation technique for estimations of body segment inertia tensors: Principal axes of inertia do matter.

    Science.gov (United States)

    Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J

    2016-12-08

    The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of the simulations to changes in the magnitude of principal moments of inertia within ±10% and to changes in the orientation of principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors up to 10% in the magnitude of principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when lumped with errors of 10° in principal axes of inertia orientation. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. Principal axes of inertia orientation should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
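    The deviation-angle criterion between experimental and simulated angular velocity vectors reduces to a per-sample arccos of normalised dot products; the two short series below are toy inputs, not the cylinder data:

```python
import numpy as np

def deviation_angles(omega_exp, omega_sim):
    """Per-sample angle (degrees) between experimental and simulated
    angular-velocity vectors, and the root-mean-squared deviation angle."""
    omega_exp = np.asarray(omega_exp, dtype=float)
    omega_sim = np.asarray(omega_sim, dtype=float)
    dots = np.sum(omega_exp * omega_sim, axis=1)
    norms = np.linalg.norm(omega_exp, axis=1) * np.linalg.norm(omega_sim, axis=1)
    angles = np.degrees(np.arccos(np.clip(dots / norms, -1.0, 1.0)))
    rmsd = float(np.sqrt(np.mean(angles ** 2)))
    return angles, rmsd

angles, rmsd = deviation_angles([[1, 0, 0], [0, 1, 0]],
                                [[0, 1, 0], [0, 1, 0]])
```

    The clip guards against floating-point dot products marginally outside [-1, 1], which would otherwise make arccos return NaN.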

  16. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle, all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computer tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.

  17. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    Science.gov (United States)

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. 
    As a particular special…

  18. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma.

    Science.gov (United States)

    Yu, Jinhua; Shi, Zhifeng; Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan; Chen, Liang; Mao, Ying

    2017-08-01

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated to IDH1 states. • Area under the ROC curve of the proposed estimation method reached 0.86.

  19. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Science.gov (United States)

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln–Petersen mark–recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark–recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark–recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
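    Both abundance models evaluated here reduce to short closed-form estimators; a sketch using Chapman's bias-corrected form of the Lincoln–Petersen estimator and the standard two-pass removal estimator (the counts are invented):

```python
def lincoln_petersen(n_marked, n_second, n_recaptured):
    """Chapman's bias-corrected Lincoln-Petersen mark-recapture estimate."""
    return (n_marked + 1) * (n_second + 1) / (n_recaptured + 1) - 1

def two_pass_removal(c1, c2):
    """Two-pass removal estimate of abundance and capture probability.
    Assumes equal catchability on both passes -- the assumption this
    study found violated."""
    if c2 >= c1:
        raise ValueError("removal model requires a declining catch")
    p_hat = (c1 - c2) / c1
    n_hat = c1 ** 2 / (c1 - c2)
    return n_hat, p_hat

n_mr = lincoln_petersen(n_marked=50, n_second=60, n_recaptured=20)
n_rm, p_hat = two_pass_removal(c1=60, c2=20)
```

    The removal estimator makes the study's failure mode visible: if efficiency declines on later passes, the catch drops faster than the equal-catchability model expects, which inflates p_hat and biases n_hat low, exactly the pattern reported above.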

  20. Estimating Evapotranspiration Using an Observation Based Terrestrial Water Budget

    Science.gov (United States)

    Rodell, Matthew; McWilliams, Eric B.; Famiglietti, James S.; Beaudoing, Hiroko K.; Nigro, Joseph

    2011-01-01

    Evapotranspiration (ET) is difficult to measure at the scales of climate models and climate variability. While satellite retrieval algorithms do exist, their accuracy is limited by the sparseness of in situ observations available for calibration and validation, which themselves may be unrepresentative of 500m and larger scale satellite footprints and grid pixels. Here, we use a combination of satellite and ground-based observations to close the water budgets of seven continental scale river basins (Mackenzie, Fraser, Nelson, Mississippi, Tocantins, Danube, and Ubangi), estimating mean ET as a residual. For any river basin, ET must equal total precipitation minus net runoff minus the change in total terrestrial water storage (TWS), in order for mass to be conserved. We make use of precipitation from two global observation-based products, archived runoff data, and TWS changes from the Gravity Recovery and Climate Experiment satellite mission. We demonstrate that while uncertainty in the water budget-based estimates of monthly ET is often too large for those estimates to be useful, the uncertainty in the mean annual cycle is small enough that it is practical for evaluating other ET products. Here, we evaluate five land surface model simulations, two operational atmospheric analyses, and a recent global reanalysis product based on our results. An important outcome is that the water budget-based ET time series in two tropical river basins, one in Brazil and the other in central Africa, exhibit a weak annual cycle, which may help to resolve debate about the strength of the annual cycle of ET in such regions and how ET is constrained throughout the year. The methods described will be useful for water and energy budget studies, weather and climate model assessments, and satellite-based ET retrieval optimization.
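    The budget-closure step itself is a one-line residual; the monthly values below are invented, not basin data from the study:

```python
def et_residual(precip, runoff, dtws):
    """Evapotranspiration as the water-budget residual, ET = P - Q - dTWS.
    All terms must share consistent units (e.g. mm/month over the basin)."""
    return precip - runoff - dtws

et = et_residual(precip=80.0, runoff=25.0, dtws=10.0)  # 45.0 mm/month
```

    In practice each input carries its own uncertainty (gauge undercatch, GRACE retrieval error, runoff gaps), and those propagate directly into the residual, which is why the monthly estimates above are noisier than the mean annual cycle.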

  1. Global temperature estimates in the troposphere and stratosphere: a validation study of COSMIC/FORMOSAT-3 measurements

    Directory of Open Access Journals (Sweden)

    P. Kishore

    2009-02-01

    Full Text Available This paper mainly focuses on the validation of temperature estimates derived with the newly launched Constellation Observing System for Meteorology Ionosphere and Climate (COSMIC/Formosa Satellite 3 (FORMOSAT-3 system. The analysis is based on the radio occultation (RO data samples collected during the first year observation from April 2006 to April 2007. For the validation, we have used the operational stratospheric analyses including the National Centers for Environmental Prediction - Reanalysis (NCEP, the Japanese 25-year Reanalysis (JRA-25, and the United Kingdom Met Office (MetO data sets. Comparisons done in different formats reveal good agreement between the COSMIC and reanalysis outputs. Spatially, the largest deviations are noted in the polar latitudes, and height-wise, the tropical tropopause region noted the maximum differences (2–4 K. We found that among the three reanalysis data sets the NCEP data sets have the best resemblance with the COSMIC measurements.

  2. Validation of Core Temperature Estimation Algorithm

    Science.gov (United States)

    2016-01-20

    based on an extended Kalman filter, which was developed using field data from 17 young male U.S. Army soldiers with core temperatures ranging from... Fragments of the accompanying MATLAB implementation appear in the record:

    ...CTstart, v)
    %KFMODEL estimate core temperature from heart rate with Kalman filter
    % This version supports both batch mode (operate on entire HR time...
    CTstart = 37.1; % degrees Celsius
    end
    if nargin < 3
        v = 0;
    end
    %Extended Kalman Filter Parameters
    a = 1; gamma = 0.022^2; b_0 = -7887.1; b_1
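
    A runnable sketch of this kind of extended Kalman filter is given below. The state model parameters a and gamma and the starting values appear in the record's fragments; the remaining observation-model coefficients (b_1, b_2) and the observation noise variance are the widely published values of this algorithm (Buller et al.) and are assumptions here, not taken from this record.

```python
# Extended Kalman filter estimating core temperature (CT, deg C) from
# heart rate (HR, bpm). a, gamma, b_0, and the 37.1 start value appear
# in the record above; b_1, b_2, and sigma2 are the published values of
# the algorithm and are assumptions, not taken from this record.

def kf_core_temp(hr_series, ct_start=37.1, v_start=0.0):
    a, gamma = 1.0, 0.022 ** 2                # state model: random walk
    b0, b1, b2 = -7887.1, 384.4286, -4.5714   # quadratic HR observation model
    sigma2 = 18.88 ** 2                       # observation noise variance
    ct, v = ct_start, v_start
    out = []
    for hr in hr_series:
        # time update
        ct_pred = a * ct
        v_pred = a * v * a + gamma
        # linearize the observation model around the prediction
        c = 2.0 * b2 * ct_pred + b1
        k = v_pred * c / (c * v_pred * c + sigma2)   # Kalman gain
        # measurement update
        ct = ct_pred + k * (hr - (b2 * ct_pred ** 2 + b1 * ct_pred + b0))
        v = (1.0 - k * c) * v_pred
        out.append(ct)
    return out

estimates = kf_core_temp([110, 120, 130, 140])
print(all(37.0 < t < 39.5 for t in estimates))  # -> True
```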

  3. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pbex measurements

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

    Highlights: Environmental radionuclides (137Cs and 210Pbex) provide a valuable means of estimating medium- and longer-term soil erosion rates. It is, however, important that the basic assumptions involved in the use of mass balance models to estimate soil erosion rates from 137Cs and 210Pbex measurements are validated. The data provided by a set of small erosion plots located in southern Italy are used to validate several of the assumptions associated with the use of mass balance models to estimate soil erosion rates from 137Cs and 210Pbex measurements.

  4. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    Science.gov (United States)

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.
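
    A minimal sketch of the underlying idea: model the filter response of valid iris texture and of occlusions each with a density, then label each pixel by likelihood ratio. The real method uses full FJ-GMMs over multi-dimensional Gabor features; the single 1-D Gaussians and all numbers here are illustrative assumptions.

```python
import numpy as np

# Classify pixels as valid iris texture vs. occlusion by comparing the
# likelihood of a toy "Gabor response" feature under two Gaussian
# densities. A simplification of the FJ-GMM approach described above;
# all parameter values are illustrative.

def gaussian_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def iris_mask(features, valid_params, occluded_params):
    """Return True where a pixel looks like valid iris texture."""
    p_valid = gaussian_pdf(features, *valid_params)
    p_occl = gaussian_pdf(features, *occluded_params)
    return p_valid > p_occl

feats = np.array([0.1, 0.9, 0.2, 0.8])        # toy per-pixel features
mask = iris_mask(feats, valid_params=(0.0, 0.1), occluded_params=(1.0, 0.1))
print(mask.tolist())  # -> [True, False, True, False]
```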

  5. Development and prospective validation of a model estimating risk of readmission in cancer patients.

    Science.gov (United States)

    Schmidt, Carl R; Hefner, Jennifer; McAlearney, Ann S; Graham, Lisa; Johnson, Kristen; Moffatt-Bruce, Susan; Huerta, Timothy; Pawlik, Timothy M; White, Susan

    2018-02-26

    Hospital readmissions among cancer patients are common. While several models estimating readmission risk exist, models specific for cancer patients are lacking. A logistic regression model estimating risk of unplanned 30-day readmission was developed using inpatient admission data from a 2-year period (n = 18 782) at a tertiary cancer hospital. Readmission risk estimates derived from the model were then calculated prospectively over a 10-month period (n = 8616 admissions) and compared with the actual incidence of readmission. There were 2478 (13.2%) unplanned readmissions. Model factors associated with readmission included: emergency department visit within 30 days, >1 admission within 60 days, non-surgical admission, solid malignancy, gastrointestinal cancer, emergency admission, length of stay >5 days, and abnormal sodium, hemoglobin, or white blood cell count. The c-statistic for the model was 0.70. During the 10-month prospective evaluation, higher model estimates of readmission risk corresponded to higher actual readmission incidence, from 20.7% in the highest risk category to 9.6% in the lowest. An unplanned readmission risk model developed specifically for cancer patients performs well when validated prospectively. The specificity of the model for cancer patients, EMR incorporation, and prospective validation justify use of the model in future studies designed to reduce and prevent readmissions. © 2018 Wiley Periodicals, Inc.
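
    A sketch of how a logistic model of this kind turns binary risk factors into a readmission probability. The factor names follow the abstract, but every coefficient below is hypothetical; the paper's fitted values are not reported in this summary.

```python
import math

# Logistic risk scoring of the kind described above. All coefficients
# and the intercept are hypothetical illustrations, not the study's
# fitted values.

COEFS = {
    "ed_visit_30d": 0.6, "prior_admit_60d": 0.7, "nonsurgical": 0.4,
    "solid_malignancy": 0.3, "gi_cancer": 0.3, "emergency_admit": 0.5,
    "los_gt_5d": 0.4, "abnormal_labs": 0.3,
}
INTERCEPT = -3.0

def readmission_risk(factors):
    """Probability of unplanned 30-day readmission from binary factors."""
    z = INTERCEPT + sum(COEFS[f] for f in factors)
    return 1.0 / (1.0 + math.exp(-z))

low = readmission_risk([])           # no risk factors present
high = readmission_risk(list(COEFS)) # all risk factors present
print(low < 0.1 < 0.5 < high)  # -> True
```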

  6. Validation and Application of the Modified Satellite-Based Priestley-Taylor Algorithm for Mapping Terrestrial Evapotranspiration

    Directory of Open Access Journals (Sweden)

    Yunjun Yao

    2014-01-01

    Full Text Available Satellite-based vegetation indices (VIs) and Apparent Thermal Inertia (ATI) derived from temperature change provide valuable information for estimating evapotranspiration (LE) and detecting the onset and severity of drought. The modified satellite-based Priestley-Taylor (MS-PT) algorithm that we developed earlier, coupling both VI and ATI, is validated based on observed data from 40 flux towers distributed across the world on all continents. The validation results illustrate that the daily LE can be estimated with a Root Mean Square Error (RMSE) varying from 10.7 W/m² to 87.6 W/m², and with the square of the correlation coefficient (R²) from 0.41 to 0.89 (p < 0.01). Compared with the Priestley-Taylor-based LE (PT-JPL) algorithm, the MS-PT algorithm improves the LE estimates at most flux tower sites. Importantly, the MS-PT algorithm is also satisfactory in reproducing the inter-annual variability at flux tower sites with at least five years of data. The R² between measured and predicted annual LE anomalies is 0.42 (p = 0.02). The MS-PT algorithm is then applied to detect the variations of long-term terrestrial LE over the Three-North Shelter Forest Region of China and to monitor global land surface drought. The MS-PT algorithm described here demonstrates the ability to map regional terrestrial LE and identify global soil moisture stress, without requiring precipitation information.

  7. Validation of estimated glomerular filtration rate equations for Japanese children.

    Science.gov (United States)

    Gotoh, Yoshimitsu; Uemura, Osamu; Ishikura, Kenji; Sakai, Tomoyuki; Hamasaki, Yuko; Araki, Yoshinori; Hamda, Riku; Honda, Masataka

    2018-01-25

    The gold standard for evaluation of kidney function is renal inulin clearance (Cin). However, the methodology for Cin is complicated and difficult, especially for younger children and/or patients with bladder dysfunction. Therefore, we developed simpler methods for obtaining the estimated glomerular filtration rate (eGFR) using equations based on several biomarkers, i.e., serum creatinine (Cr), serum cystatin C (cystC), serum beta-2 microglobulin (β2MG), and creatinine clearance (Ccr). The purpose of the present study was to validate these equations with a new data set. To validate each equation, we used data from 140 patients with CKD and a clinical need for Cin, using the measured GFR (mGFR). We compared the results for each eGFR equation with the mGFR using mean error (ME), root mean square error (RMSE), P30, and Bland-Altman analysis. The ME of the Cr-, cystC-, β2MG-, and Ccr-based eGFR was 15.8 ± 13.0, 17.2 ± 16.5, 15.4 ± 14.3, and 10.6 ± 13.0 ml/min/1.73 m², respectively. The RMSE was 29.5, 23.8, 20.9, and 16.7, respectively. The P30 was 79.4, 71.1, 69.5, and 92.9%, respectively. The Bland-Altman analysis showed biases of 4.0 ± 18.6, 5.3 ± 16.8, 12.7 ± 17.0, and 2.5 ± 17.2 ml/min/1.73 m², respectively. The bias of each eGFR equation was not large; therefore, each eGFR equation could be used.
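
    The validation statistics used above can be computed on paired estimated/measured GFR values as sketched below; the data are toy numbers, not the study's measurements, and ME is taken here as mean absolute error (definitions of ME vary between papers).

```python
import math

# Validation statistics for paired eGFR/mGFR data: ME (here: mean
# absolute error), RMSE, P30 (share of estimates within +/-30% of the
# measured GFR), and Bland-Altman bias (mean of eGFR - mGFR).
# Toy data, not the study's measurements.

def validation_stats(egfr, mgfr):
    diffs = [e - m for e, m in zip(egfr, mgfr)]
    n = len(diffs)
    me = sum(abs(d) for d in diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    p30 = 100.0 * sum(abs(d) <= 0.3 * m for d, m in zip(diffs, mgfr)) / n
    bias = sum(diffs) / n
    return me, rmse, p30, bias

egfr = [55.0, 80.0, 40.0, 100.0]   # estimated GFR, ml/min/1.73 m^2
mgfr = [50.0, 90.0, 60.0, 95.0]    # measured (inulin) GFR
me, rmse, p30, bias = validation_stats(egfr, mgfr)
print(round(me, 2), round(rmse, 2), p30, round(bias, 2))  # -> 10.0 11.73 75.0 -5.0
```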

  8. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    Science.gov (United States)

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  9. Simulation of anthropogenic CO2 uptake in the CCSM3.1 ocean circulation-biogeochemical model: comparison with data-based estimates

    Directory of Open Access Journals (Sweden)

    S. Khatiwala

    2012-04-01

    Full Text Available The global ocean has taken up a large fraction of the CO2 released by human activities since the industrial revolution. Quantifying the oceanic anthropogenic carbon (Cant) inventory and its variability is important for predicting the future global carbon cycle. The detailed comparison of data-based and model-based estimates is essential for the validation and continued improvement of our prediction capabilities. So far, three global estimates of the oceanic Cant inventory that are "data-based" and independent of global ocean circulation models have been produced: one based on the ΔC* method, and two based on constraining surface-to-interior transport of tracers, the TTD method and a maximum entropy inversion method (GF). The GF method, in particular, is capable of reconstructing the history of the Cant inventory through the industrial era. In the present study we use forward model simulations of the Community Climate System Model (CCSM3.1) to estimate the Cant inventory and compare the results with the data-based estimates. We also use the simulations to test several assumptions of the GF method, including the assumption of constant climate and circulation, which is common to all the data-based estimates. Though the integrated estimates of global Cant inventories are consistent with each other, the regional estimates show discrepancies of up to 50%. The CCSM3 model underestimates the total Cant inventory, in part due to weak mixing and ventilation in the North Atlantic and Southern Ocean. Analyses of different simulation results suggest that key assumptions about ocean circulation and air-sea disequilibrium in the GF method are generally valid on the global scale, but may introduce errors in Cant estimates on regional scales. The GF method should also be used with caution when predicting future oceanic anthropogenic carbon uptake.

  10. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and k-nearest neighbor prediction model performance on the same data set.
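
    The core of analogy-based estimation can be sketched in a few lines: predict a new project's effort as the mean effort of its k most similar completed projects in feature space. The feature choices and all numbers below are illustrative assumptions, not NASA data or the paper's model.

```python
import math

# Minimal k-nearest-neighbor (analogy-based) effort estimation: a new
# project's effort is the mean effort of its k most similar completed
# projects. Features and efforts below are illustrative, not NASA data.

def knn_effort(history, features, k=2):
    """history: list of (feature_vector, effort) pairs."""
    dists = sorted(
        (math.dist(f, features), effort) for f, effort in history
    )
    return sum(e for _, e in dists[:k]) / k

past = [((10.0, 3.0), 120.0),   # (KSLOC, complexity) -> person-months
        ((12.0, 3.0), 150.0),
        ((50.0, 8.0), 900.0)]
print(knn_effort(past, (11.0, 3.0), k=2))  # -> 135.0
```

    In practice the feature space would be normalized first, since otherwise size dominates the distance; clustering can be layered on top to restrict the analogy pool.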

  11. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    Science.gov (United States)

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning and total ACT score and deductive reasoning were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher quality medical professionals.

  12. Development of a Reference Data Set (RDS) for dental age estimation (DAE) and testing of this with a separate Validation Set (VS) in a southern Chinese population.

    Science.gov (United States)

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2016-10-01

    Many countries have recently experienced a rapid increase in the demand for forensic age estimates of unaccompanied minors. Hong Kong is a major tourist and business center where there has been an increase in the number of people intercepted with false travel documents. An accurate estimation of age is only possible when the reference dataset for age estimation has been derived from the corresponding ethnic population. Thus, the aim of this study was to develop and validate a Reference Data Set (RDS) for dental age estimation for southern Chinese. A total of 2306 subjects were selected from the patient archives of a large dental hospital and the chronological age for each subject was recorded. This age was assigned to each specific stage of dental development for each tooth to create a RDS. To validate this RDS, a further 484 subjects were randomly chosen from the patient archives and their dental age was assessed based on the scores from the RDS. Dental age was estimated using a meta-analysis command corresponding to a random-effects statistical model. Chronological age (CA) and dental age (DA) were compared using the paired t-test. The overall difference between the chronological and dental age (CA-DA) was 0.05 years (2.6 weeks) for males and 0.03 years (1.6 weeks) for females. The paired t-test indicated that there was no statistically significant difference between the chronological and dental age (p > 0.05). The validated southern Chinese reference dataset based on dental maturation accurately estimated the chronological age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  13. Validation and Refinement of Prediction Models to Estimate Exercise Capacity in Cancer Survivors Using the Steep Ramp Test

    NARCIS (Netherlands)

    Stuiver, Martijn M.; Kampshoff, Caroline S.; Persoon, Saskia; Groen, Wim; van Mechelen, Willem; Chinapaw, Mai J. M.; Brug, Johannes; Nollet, Frans; Kersten, Marie-José; Schep, Goof; Buffart, Laurien M.

    2017-01-01

    Objective: To further test the validity and clinical usefulness of the steep ramp test (SRT) in estimating exercise tolerance in cancer survivors by external validation and extension of previously published prediction models for peak oxygen consumption (Vo2peak) and peak power output (Wpeak).

  14. Validating a mass balance accounting approach to using 7Be measurements to estimate event-based erosion rates over an extended period at the catchment scale

    Science.gov (United States)

    Porto, Paolo; Walling, Des E.; Cogliandro, Vanessa; Callegari, Giovanni

    2016-07-01

    Use of the fallout radionuclides cesium-137 and excess lead-210 offers important advantages over traditional methods of quantifying erosion and soil redistribution rates. However, both radionuclides provide information on longer-term (i.e., 50-100 years) average rates of soil redistribution. Beryllium-7, with its half-life of 53 days, can provide a basis for documenting short-term soil redistribution and it has been successfully employed in several studies. However, the approach commonly used introduces several important constraints related to the timing and duration of the study period. A new approach proposed by the authors that overcomes these constraints has been successfully validated using an erosion plot experiment undertaken in southern Italy. Here, a further validation exercise undertaken in a small (1.38 ha) catchment is reported. The catchment was instrumented to measure event sediment yields and beryllium-7 measurements were employed to document the net soil loss for a series of 13 events that occurred between November 2013 and June 2015. In the absence of significant sediment storage within the catchment's ephemeral channel system and of a significant contribution from channel erosion to the measured sediment yield, the estimates of net soil loss for the individual events could be directly compared with the measured sediment yields to validate the former. The close agreement of the two sets of values is seen as successfully validating the use of beryllium-7 measurements and the new approach to obtain estimates of net soil loss for a sequence of individual events occurring over an extended period at the scale of a small catchment.

  15. Order Tracking Based on Robust Peak Search Instantaneous Frequency Estimation

    International Nuclear Information System (INIS)

    Gao, Y; Guo, Y; Chi, Y L; Qin, S R

    2006-01-01

    Order tracking plays an important role in non-stationary vibration analysis of rotating machinery, especially during run-up or coast-down. An instantaneous frequency estimation (IFE) based order tracking method for rotating machinery is introduced, in which a peak search algorithm applied to the time-frequency spectrogram is employed to obtain the IFE of vibrations. An improvement to the peak search is proposed, which prevents strong non-order components or noise from disturbing the peak search. Compared with traditional methods of order tracking, IFE-based order tracking is simpler to apply and depends on software only. Tests verify the validity of the method. The method is an effective supplement to traditional methods, and its application in condition monitoring and diagnosis of rotating machinery is readily conceivable.
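
    The basic peak-search step can be sketched as follows: frame the signal, take the FFT magnitude of each frame (a simple spectrogram), and pick the strongest bin as the instantaneous frequency estimate. A linear chirp stands in for a run-up vibration signal; the robustness improvements described in the paper are not reproduced here.

```python
import numpy as np

# Instantaneous-frequency estimation by peak search in a spectrogram:
# split the signal into frames, FFT each frame, take the peak bin.
# A linear chirp stands in for a run-up vibration signal.

def if_by_peak_search(x, fs, frame=256):
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    window = np.hanning(frame)
    est = []
    for start in range(0, len(x) - frame + 1, frame):
        spectrum = np.abs(np.fft.rfft(x[start:start + frame] * window))
        est.append(freqs[np.argmax(spectrum)])  # peak bin -> IF estimate
    return np.array(est)

fs = 4096.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * (50.0 * t + 100.0 * t ** 2))  # IF sweeps 50 -> 250 Hz
f_inst = if_by_peak_search(x, fs)
print(bool(f_inst[-1] > f_inst[0]))  # -> True
```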

  16. Estimation of in-vivo neurotransmitter release by brain microdialysis: the issue of validity.

    Science.gov (United States)

    Di Chiara, G.; Tanda, G.; Carboni, E.

    1996-11-01

    Although microdialysis is commonly understood as a method of sampling low molecular weight compounds in the extracellular compartment of tissues, this definition appears insufficient to specifically describe brain microdialysis of neurotransmitters. In fact, transmitter overflow from the brain into dialysates is critically dependent upon the composition of the perfusing Ringer. Therefore, the dialysing Ringer not only recovers the transmitter from the extracellular brain fluid but is a main determinant of its in-vivo release. Two types of brain microdialysis are distinguished: quantitative microdialysis and conventional microdialysis. Quantitative microdialysis provides an estimate of neurotransmitter concentrations in the extracellular fluid in contact with the probe. However, this information might poorly reflect the kinetics of neurotransmitter release in vivo. Conventional microdialysis involves perfusion at a constant rate with a transmitter-free Ringer, resulting in the formation of a steep neurotransmitter concentration gradient extending from the Ringer into the extracellular fluid. This artificial gradient might be critical for the ability of conventional microdialysis to detect and resolve phasic changes in neurotransmitter release taking place in the implanted area. On the basis of these characteristics, conventional microdialysis of neurotransmitters can be conceptualized as a model of the in-vivo release of neurotransmitters in the brain. As such, the criteria of face-validity, construct-validity and predictive-validity should be applied to select the most appropriate experimental conditions for estimating neurotransmitter release in specific brain areas in relation to behaviour.

  17. Validation and Intercomparison of Ocean Color Algorithms for Estimating Particulate Organic Carbon in the Oceans

    Directory of Open Access Journals (Sweden)

    Hayley Evers-King

    2017-08-01

    Full Text Available Particulate Organic Carbon (POC) plays a vital role in the ocean carbon cycle. Though relatively small compared with other carbon pools, the POC pool is responsible for large fluxes and is linked to many important ocean biogeochemical processes. The satellite ocean-color signal is influenced by particle composition, size, and concentration and provides a way to observe variability in the POC pool at a range of temporal and spatial scales. Providing accurate estimates of POC concentration from satellite ocean color data requires algorithms that are well validated, with characterized uncertainties. Here, a number of algorithms to derive POC using different optical variables are applied to merged satellite ocean color data provided by the Ocean Color Climate Change Initiative (OC-CCI) and validated against the largest database of in situ POC measurements currently available. The results of this validation exercise indicate satisfactory levels of performance from several algorithms (the highest performance was observed for the algorithms of Loisel et al., 2002 and Stramski et al., 2008), with uncertainties that are within the requirements of the user community. Estimates of the standing stock of POC can be made by applying these algorithms, and yield an estimated mixed-layer integrated global stock of between 0.77 and 1.3 Pg C. Performance of the algorithms varies regionally, suggesting that blending region-specific algorithms may provide the best way forward for generating global POC products.
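
    Several of the algorithms compared are band-ratio power laws of the form POC = A·(Rrs(blue)/Rrs(green))^B. The sketch below uses coefficients of the kind published by Stramski et al. (2008) as best recalled here; treat them as assumptions and check the original paper before use.

```python
# Band-ratio power-law POC algorithm of the type evaluated above:
# POC = A * (Rrs(443)/Rrs(555))^B. The default coefficients follow the
# widely cited Stramski et al. (2008) form as recalled here and are
# assumptions to verify against the original paper.

def poc_band_ratio(rrs_443, rrs_555, a=203.2, b=-1.034):
    """POC concentration (mg m^-3) from a blue/green reflectance ratio."""
    return a * (rrs_443 / rrs_555) ** b

# Clearer (bluer) water -> higher blue/green ratio -> lower POC:
print(poc_band_ratio(0.008, 0.002) > poc_band_ratio(0.008, 0.001))  # -> True
```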

  18. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach to software engineering. MDE emphasizes the construction of models from which the implementation is derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits giving a syntactic structure to source and target models. However, semantic requirements also have to be imposed on source and target models. A given transformation is sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic programming based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and postconditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  19. Channel Estimation in DCT-Based OFDM

    Science.gov (United States)

    Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing

    2014-01-01

    This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparse property of the wireless channel into account. Simulation results show that CS-based channel estimation is expected to perform better than LS. However, MMSE can achieve optimal performance because of prior knowledge of the channel statistics. PMID:24757439
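
    The pilot-aided LS step can be sketched in a few lines: with known pilot symbols X and received pilots Y = H·X + noise, the LS estimate is simply H = Y/X per pilot subcarrier. The values are illustrative; an MMSE estimator would additionally weight this estimate by the channel and noise statistics.

```python
import numpy as np

# Pilot-aided least-squares (LS) channel estimation: per pilot
# subcarrier, H_hat = Y / X. Noise is omitted for clarity; MMSE would
# further weight the LS estimate by channel and noise statistics.

def ls_channel_estimate(rx_pilots, tx_pilots):
    return rx_pilots / tx_pilots

h_true = np.array([0.9 + 0.2j, 0.5 - 0.4j, 1.1 + 0.0j])  # channel response
tx = np.array([1.0 + 0j, -1.0 + 0j, 1.0 + 0j])            # BPSK pilots
rx = h_true * tx                                          # noise-free receive
h_hat = ls_channel_estimate(rx, tx)
print(np.allclose(h_hat, h_true))  # -> True
```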

  20. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach

    Directory of Open Access Journals (Sweden)

    Nan Xu

    2017-05-01

    Full Text Available Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signals from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods; (2) on simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs; (3) on experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to hundreds of ROIs, enabling it to assess whole-brain interregional connectivity at the single-subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by correlation analysis, but also performs well in the estimation of causal information flow in the brain.
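
    The idea can be sketched as follows: instead of corr(x, y), fit a causal linear system driven by x to predict y, then correlate the prediction with y. A one-tap lagged least-squares model stands in for the paper's causal system, and the signals are synthetic.

```python
import numpy as np

# "Prediction correlation" sketch: replace corr(x, y) with the
# correlation between y and a prediction of y from a causal system
# driven by x (here a single lagged least-squares tap, a simplifying
# assumption; the paper's causal model is richer).

def prediction_correlation(x, y, lag=1):
    x_past, y_now = x[:-lag], y[lag:]
    beta = np.dot(x_past, y_now) / np.dot(x_past, x_past)  # least squares
    y_hat = beta * x_past
    return np.corrcoef(y_hat, y_now)[0, 1]

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.empty_like(x)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + 0.2 * rng.standard_normal(499)  # x drives y, not vice versa

# Directionality: predicting y from x works; predicting x from y does not.
print(prediction_correlation(x, y) > prediction_correlation(y, x))  # -> True
```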

  1. Relative validity of a web-based food frequency questionnaire for Danish adolescents.

    Science.gov (United States)

    Bjerregaard, Anne A; Halldorsson, Thorhallur I; Kampmann, Freja B; Olsen, Sjurdur F; Tetens, Inge

    2018-01-12

    With increased focus on dietary intake among youth and risk of diseases later in life, it is important, prior to assessing diet-disease relationships, to examine the validity of the dietary assessment tool. This study's objective was to evaluate the relative validity of a self-administered web-based FFQ among Danish children aged 12 to 15 years. From a nested sub-cohort within the Danish National Birth Cohort, 124 adolescents participated. Four weeks after completion of the FFQ, adolescents were invited to complete three telephone-based 24-h dietary recalls (24HRs), administered 4 weeks apart. Mean or median intakes of nutrients and food groups estimated from the FFQ were compared with the mean of the three 24HRs. To assess the level of ranking, we calculated the proportion correctly classified into the same quartile and the proportion misclassified into the opposite quartile. Spearman's correlation coefficients and de-attenuated coefficients were calculated to assess agreement between the FFQ and 24HRs. Across all food groups, the mean percentage of adolescents classified into the same and opposite quartile was 35 and 7.5%, respectively. Mean Spearman's correlations were 0.28 for food groups and 0.35 for nutrients. Adjustment for energy and within-person variation in the 24HRs had little effect on the magnitude of the correlations for food groups and nutrients. We found overestimation by the FFQ compared with the 24HRs for fish, fruits, vegetables, oils and dressing, and underestimation by the FFQ for meat/poultry and sweets. Median intake of beverages, dairy, bread and cereals, and the mean total energy and carbohydrate intake, did not differ significantly between the two methods. The relative validity of the FFQ compared with the 3x24HRs showed that the ranking ability differed across food groups and nutrients, with the best ranking for estimated intake of dairy, fruits, and oils and dressing. Larger variation was observed for fish, sweets and vegetables. For nutrients, the ranking
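
    The two agreement measures used above, cross-classification into quartiles and Spearman rank correlation between FFQ and 24HR intakes, can be computed as sketched below on toy data (not the study's measurements).

```python
import numpy as np

# Quartile cross-classification and Spearman rank correlation between
# two dietary assessment methods, on toy intake data.

def quartile(values):
    ranks = np.argsort(np.argsort(values))   # 0..n-1 ranks
    return ranks * 4 // len(values)          # quartile index 0..3

def spearman(a, b):
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]         # Pearson on ranks

ffq  = np.array([10.0, 25.0, 40.0, 55.0, 70.0, 85.0, 90.0, 100.0])
hr24 = np.array([12.0, 20.0, 45.0, 50.0, 65.0, 95.0, 80.0, 110.0])
same = np.mean(quartile(ffq) == quartile(hr24)) * 100  # % in same quartile
print(round(spearman(ffq, hr24), 2), same)  # -> 0.98 75.0
```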

  2. Vce-based methods for temperature estimation of high power IGBT modules during power cycling - A comparison

    DEFF Research Database (Denmark)

    Amoiridis, Anastasios; Anurag, Anup; Ghimire, Pramod

    2015-01-01

    Temperature estimation is of great importance for the performance and reliability of IGBT power modules in converter operation as well as in active power cycling tests. The temperature is commonly estimated through Thermo-Sensitive Electrical Parameters such as the forward voltage drop (Vce) of the chip. This experimental work evaluates the validity and accuracy of two Vce-based methods applied on high power IGBT modules during power cycling tests. The first method estimates the chip temperature when a low sense current is applied and the second method when the normal load current is present. Finally, a correction factor…

  3. Competency-Based Training and Simulation: Making a "Valid" Argument.

    Science.gov (United States)

    Noureldin, Yasser A; Lee, Jason Y; McDougall, Elspeth M; Sweet, Robert M

    2018-02-01

    The use of simulation as an assessment tool is much more controversial than is its utility as an educational tool. However, without valid simulation-based assessment tools, the ability to objectively assess technical skill competencies in a competency-based medical education framework will remain challenging. The current literature in urologic simulation-based training and assessment uses a definition and framework of validity that is now outdated. This is probably due to the absence of awareness rather than an absence of comprehension. The following review article provides the urologic community an updated taxonomy on validity theory as it relates to simulation-based training and assessments and translates our simulation literature to date into this framework. While the old taxonomy considered validity as distinct subcategories and focused on the simulator itself, the modern taxonomy, for which we translate the literature evidence, considers validity as a unitary construct with a focus on interpretation of simulator data/scores.

  4. Validation and uncertainty estimation of fast neutron activation analysis method for Cu, Fe, Al, Si elements in sediment samples

    International Nuclear Information System (INIS)

    Sunardi; Samin Prihatin

    2010-01-01

    Validation and uncertainty estimation of the Fast Neutron Activation Analysis (FNAA) method for the Cu, Fe, Al, and Si elements in sediment samples has been conducted. The aim of the research was to confirm whether the FNAA method still complies with the ISO/IEC 17025:2005 standard. The research covered verification, performance assessment, validation of FNAA, and uncertainty estimation. The SRM 8704 standard and sediment samples were weighed to known masses, irradiated with 14 MeV fast neutrons, and then counted by gamma spectrometry. The validation results for the Cu, Fe, Al, and Si elements showed accuracies in the range of 95.89-98.68%, while the precisions were in the range of 1.13-2.29%. The estimated uncertainties for Cu, Fe, Al, and Si were 2.67, 1.46, 1.71 and 1.20%, respectively. From these data it can be concluded that the FNAA method is still reliable and valid for the analysis of element contents in samples, because the accuracy is above 95% and the precision is below 5%, while the uncertainties are relatively small and suitable for the 95% confidence level, where the maximum allowed uncertainty is 5%. (author)
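
    The acceptance criteria applied above (accuracy of at least 95%, precision below 5% RSD) can be checked mechanically from replicate measurements of a certified reference material. A minimal sketch with hypothetical replicate data:

    ```python
    from statistics import mean, stdev

    def accuracy_percent(measured, certified):
        """Accuracy expressed as percent recovery of the certified value."""
        return 100.0 * mean(measured) / certified

    def precision_percent(measured):
        """Precision expressed as relative standard deviation (RSD, %)."""
        return 100.0 * stdev(measured) / mean(measured)

    def method_valid(measured, certified, min_acc=95.0, max_rsd=5.0):
        """True when recovery lies within [min_acc, 200 - min_acc] percent
        (i.e. 95-105 % by default) and the RSD does not exceed max_rsd."""
        acc = accuracy_percent(measured, certified)
        return min_acc <= acc <= (200.0 - min_acc) and precision_percent(measured) <= max_rsd
    ```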

  5. Assessment of heat transfer correlations for supercritical water in the frame of best-estimate code validation

    International Nuclear Information System (INIS)

    Jaeger, Wadim; Espinoza, Victor H. Sanchez; Schneider, Niko; Hurtado, Antonio

    2009-01-01

    Within the frame of the Generation IV International Forum, six innovative reactor concepts are the subject of comprehensive investigations. In some projects supercritical water will be considered as coolant and moderator (as for the High Performance Light Water Reactor) or as secondary working fluid (one possible option for Liquid Metal-cooled Fast Reactors). Supercritical water is characterized by a pronounced change of the thermo-physical properties when crossing the pseudo-critical line, which goes hand in hand with a change in the heat transfer (HT) behavior. Hence, it is essential to estimate the heat transfer coefficient, and subsequently the wall temperature, in a proper way. The scope of this paper is to present and discuss the activities at the Institute for Reactor Safety (IRS) related to the implementation of correlations for wall-to-fluid HT at supercritical conditions in Best-Estimate codes like TRACE, as well as their validation. It is important to validate TRACE before applying it to safety analyses of the HPLWR or of other reactor systems. In the past three decades, various experiments have been performed all over the world to reveal the peculiarities of wall-to-fluid HT at supercritical conditions. Several different heat transfer phenomena, such as HT enhancement (due to higher Prandtl numbers in the vicinity of the pseudo-critical point) or HT deterioration (due to strong property variations), were observed. Since TRACE is a component-based system code with a finite volume method, the resolution capabilities are limited and not all physical phenomena can be modeled properly. But Best-Estimate system codes are nowadays the preferred option for safety-related investigations of full plants or other integral systems. Thus, increasing the confidence in such codes is of high priority. In this paper, the post-test analysis of experiments with supercritical parameters is presented. For that reason, various correlations for the HT, which consider the characteristics…

  6. A Geometrical-Based Model for Cochannel Interference Analysis and Capacity Estimation of CDMA Cellular Systems

    Directory of Open Access Journals (Sweden)

    Konstantinos B. Baltzis

    2008-10-01

    Full Text Available A common assumption in cellular communications is the circular-cell approximation. In this paper, an alternative analysis based on the hexagonal shape of the cells is presented. A geometrical-based stochastic model is proposed to describe the angle of arrival of the interfering signals in the reverse link of a cellular system. Explicit closed-form expressions are derived, and the simulations performed exhibit the characteristics and validate the accuracy of the proposed model. Applications to the capacity estimation of WCDMA cellular networks are presented. The dependence of system capacity on the sectorization of the cells and on the base station antenna radiation pattern is explored. Comparisons with data in the literature validate the accuracy of the proposed model. The degree of error of the hexagonal and circular-cell approaches has been investigated, indicating the validity of the proposed model. Results have also shown that, in many cases, the two approaches give similar results when the radius of the circle equals the hexagon inradius. A brief discussion on how the proposed technique may be applied to broadband access networks is finally given.
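
    The remark that the two cell shapes give similar results when the circle radius equals the hexagon inradius can be illustrated geometrically. The sketch below only compares cell areas, not interference or capacity, so it is an illustration of the geometry rather than the paper's analysis:

    ```python
    import math

    def hexagon_area(circumradius):
        """Area of a regular hexagon with the given circumradius (= side length)."""
        return 3.0 * math.sqrt(3.0) / 2.0 * circumradius ** 2

    def hexagon_inradius(circumradius):
        """Inradius (apothem) of a regular hexagon."""
        return math.sqrt(3.0) / 2.0 * circumradius

    def circle_area(radius):
        return math.pi * radius ** 2

    def area_error(circumradius):
        """Relative area error of the circular-cell approximation when the
        circle radius is matched to the hexagon inradius (about 9 %)."""
        r_in = hexagon_inradius(circumradius)
        hex_a = hexagon_area(circumradius)
        return abs(circle_area(r_in) - hex_a) / hex_a
    ```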

  7. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical, as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium-sized cities. For such cities, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve our confidence in the model results.
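
    The decomposition described above (city impact = urban increment + two additional components, one per assumption) can be written out explicitly. The concentration values below are hypothetical:

    ```python
    def city_impact_decomposition(c_urban, c_rural, c_urban_nocity, c_rural_nocity):
        """Decompose the city impact (concentration change caused by city
        emissions at the urban site) into the measurable urban increment
        plus two correction terms:

        impact    = c_urban - c_urban_nocity
        increment = c_urban - c_rural
        term1     = c_rural - c_rural_nocity        (city influence at the
                                                     rural site; zero under
                                                     assumption 1)
        term2     = c_rural_nocity - c_urban_nocity (background difference;
                                                     zero under assumption 2)
        """
        impact = c_urban - c_urban_nocity
        increment = c_urban - c_rural
        term1 = c_rural - c_rural_nocity
        term2 = c_rural_nocity - c_urban_nocity
        # The identity impact = increment + term1 + term2 holds by construction.
        assert abs(impact - (increment + term1 + term2)) < 1e-12
        return impact, increment, term1, term2
    ```

    When either correction term is nonzero, the measured increment misses part of the impact, which is exactly the underestimation reported for PM2.5.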

  8. Cell type specific DNA methylation in cord blood: A 450K-reference data set and cell count-based validation of estimated cell type composition.

    Science.gov (United States)

    Gervin, Kristina; Page, Christian Magnus; Aass, Hans Christian D; Jansen, Michelle A; Fjeldstad, Heidi Elisabeth; Andreassen, Bettina Kulle; Duijts, Liesbeth; van Meurs, Joyce B; van Zelm, Menno C; Jaddoe, Vincent W; Nordeng, Hedvig; Knudsen, Gunn Peggy; Magnus, Per; Nystad, Wenche; Staff, Anne Cathrine; Felix, Janine F; Lyle, Robert

    2016-09-01

    Epigenome-wide association studies of prenatal exposure to different environmental factors are becoming increasingly common. These studies are usually performed in umbilical cord blood. Since blood comprises multiple cell types with specific DNA methylation patterns, confounding caused by cellular heterogeneity is a major concern. This can be adjusted for using reference data consisting of DNA methylation signatures in cell types isolated from blood. However, the most commonly used reference data set is based on blood samples from adult males and is not representative of the cell type composition in neonatal cord blood. The aim of this study was to generate a reference data set from cord blood to enable correct adjustment for the cell type composition in samples collected at birth. The purity of the isolated cell types was very high for all samples (>97.1%), and clustering analyses showed distinct grouping of the cell types according to hematopoietic lineage. We explored whether using this cord blood reference data set rather than the adult peripheral blood reference data set affects the estimated cell type composition in cord blood samples from an independent birth cohort (MoBa, n = 1092). This revealed significant differences for all cell types. Importantly, comparison of the cell type estimates against matched cell counts, both in the cord blood reference samples (n = 11) and in another independent birth cohort (Generation R, n = 195), demonstrated moderate to high correlation of the data. This is the first cord blood reference data set with a comprehensive examination of the downstream application of the data through validation of estimated cell types against matched cell counts.

  9. Estimation and Validation of RapidEye-Based Time-Series of Leaf Area Index for Winter Wheat in the Rur Catchment (Germany

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2015-03-01

    Full Text Available Leaf Area Index (LAI) is an important variable for numerous processes in various disciplines of bio- and geosciences. In situ measurements are the most accurate source of LAI among the available measuring methods, but they have the limitation of being labor intensive and site specific. For spatially explicit applications (from regional to continental scales), satellite remote sensing is a promising source for obtaining LAI at different spatial resolutions. However, satellite-derived LAI measurements using empirical models require calibration and validation with in situ measurements. In this study, we attempted to validate a direct LAI retrieval method from remotely sensed images (RapidEye) with in situ LAI (LAIdestr). Remote sensing LAI values (LAIrapideye) were derived using different vegetation indices, namely SAVI (Soil Adjusted Vegetation Index) and NDVI (Normalized Difference Vegetation Index). Additionally, the applicability of the newly available red-edge band (RE) was analyzed through the Normalized Difference Red-Edge index (NDRE) and the Soil Adjusted Red-Edge index (SARE). The LAIrapideye obtained from the vegetation indices with the red-edge band showed better correlation with LAIdestr (r = 0.88 and Root Mean Square Deviation, RMSD = 1.01 and 0.92). This study also investigated the need to apply radiometric/atmospheric correction methods to the time series of RapidEye Level 3A data prior to LAI estimation. Analysis of the RapidEye Level 3A data set showed that application of the radiometric/atmospheric correction did not improve the correlation of the estimated LAI with in situ LAI.
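
    The vegetation indices named above have standard closed forms. The sketch below uses those standard definitions; the study's empirical LAI regressions themselves are not reproduced, and the soil adjustment factor L = 0.5 is the conventional default, not a value taken from the study:

    ```python
    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    def ndre(nir, red_edge):
        """Normalized Difference Red-Edge index (red band replaced by red edge)."""
        return (nir - red_edge) / (nir + red_edge)

    def savi(nir, red, soil_factor=0.5):
        """Soil Adjusted Vegetation Index with soil brightness factor L."""
        return (nir - red) * (1.0 + soil_factor) / (nir + red + soil_factor)

    def sare(nir, red_edge, soil_factor=0.5):
        """Soil Adjusted Red-Edge index (red-edge analogue of SAVI)."""
        return (nir - red_edge) * (1.0 + soil_factor) / (nir + red_edge + soil_factor)
    ```

    With L = 0, SAVI reduces to NDVI, which is why the soil factor only matters over sparse canopies.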

  10. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  11. Design of Model-based Controller with Disturbance Estimation in Steer-by-wire System

    Directory of Open Access Journals (Sweden)

    Jung Sanghun

    2018-01-01

    Full Text Available The steer-by-wire system is a next-generation steering control technology that has been actively studied because it has many advantages, such as fast response, space efficiency due to the removal of redundant mechanical elements, and high connectivity with vehicle chassis control such as active steering. The steer-by-wire system is subject to disturbance composed of tire friction torque and self-aligning torque. These disturbances vary widely with changes in weight or friction coefficient. Therefore, disturbance compensation logic is strongly required to obtain the desired performance. This paper proposes a model-based controller with disturbance compensation to achieve robust control performance. The targeted steer-by-wire system is identified through experiment and a system identification method. Moreover, a model-based controller is designed using the identified plant model. The disturbance of the targeted steer-by-wire system is estimated using a disturbance observer (DOB), and the estimated disturbance is compensated for in the control input. Experiments under various scenarios were conducted to validate the robust performance of the proposed model-based controller.
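
    A disturbance observer of the kind referred to above inverts a nominal plant model and low-pass filters the result. Below is a minimal discrete-time sketch for a hypothetical first-order plant, not the identified steer-by-wire model; the plant parameters, constant disturbance, and filter gain are all assumptions for illustration:

    ```python
    def simulate_dob(a, b, d_true, u_seq, alpha=0.5):
        """Simulate the first-order plant y[k+1] = a*y[k] + b*(u[k] + d)
        together with a disturbance observer that inverts the nominal
        model and low-pass filters the raw estimate (filter gain alpha)."""
        y = 0.0       # plant state
        d_hat = 0.0   # filtered disturbance estimate
        for u in u_seq:
            y_next = a * y + b * (u + d_true)              # plant with input disturbance
            d_raw = (y_next - a * y) / b - u               # nominal-model inversion
            d_hat = (1.0 - alpha) * d_hat + alpha * d_raw  # first-order low-pass
            y = y_next
        return d_hat
    ```

    In a controller, `d_hat` would be subtracted from the control input to cancel the disturbance, which is the compensation scheme the abstract describes.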

  12. Validation of Web-Based Physical Activity Measurement Systems Using Doubly Labeled Water

    Science.gov (United States)

    Yamaguchi, Yukio; Yamada, Yosuke; Tokushima, Satoru; Hatamoto, Yoichi; Sagayama, Hiroyuki; Kimura, Misaka; Higaki, Yasuki; Tanaka, Hiroaki

    2012-01-01

    Background Online or Web-based measurement systems have been proposed as convenient methods for collecting physical activity data. We developed two Web-based physical activity systems—the 24-hour Physical Activity Record Web (24hPAR WEB) and 7 days Recall Web (7daysRecall WEB). Objective To examine the validity of two Web-based physical activity measurement systems using the doubly labeled water (DLW) method. Methods We assessed the validity of the 24hPAR WEB and 7daysRecall WEB in 20 individuals, aged 25 to 61 years. The order of email distribution and subsequent completion of the two Web-based measurements systems was randomized. Each measurement tool was used for a week. The participants’ activity energy expenditure (AEE) and total energy expenditure (TEE) were assessed over each week using the DLW method and compared with the respective energy expenditures estimated using the Web-based systems. Results The mean AEE was 3.90 (SD 1.43) MJ estimated using the 24hPAR WEB and 3.67 (SD 1.48) MJ measured by the DLW method. The Pearson correlation for AEE between the two methods was r = .679 (P WEB and 3.80 (SD 1.36) MJ by the DLW method. The Pearson correlation for AEE between the two methods was r = .144 (P = .54). The Bland-Altman 95% limits of agreement ranged from –3.83 to 4.81 MJ between the two methods. The Pearson correlation for TEE between the two methods was r = .590 (P = .006). The average input times using terminal devices were 8 minutes and 10 seconds for the 24hPAR WEB and 6 minutes and 38 seconds for the 7daysRecall WEB. Conclusions Both Web-based systems were found to be effective methods for collecting physical activity data and are appropriate for use in epidemiological studies. Because the measurement accuracy of the 24hPAR WEB was moderate to high, it could be suitable for evaluating the effect of interventions on individuals as well as for examining physical activity behavior. PMID:23010345

  13. Validation of an elastic registration technique to estimate anatomical lung modification in Non-Small-Cell Lung Cancer Tomotherapy

    International Nuclear Information System (INIS)

    Faggiano, Elena; Cattaneo, Giovanni M; Ciavarro, Cristina; Dell'Oca, Italo; Persano, Diego; Calandrino, Riccardo; Rizzo, Giovanna

    2011-01-01

    The study of lung parenchyma anatomical modification is useful to estimate dose discrepancies during the radiation treatment of Non-Small-Cell Lung Cancer (NSCLC) patients. We propose and validate a method, based on free-form deformation and mutual information, to elastically register planning kVCT with daily MVCT images, in order to estimate lung parenchyma modification during Tomotherapy. We analyzed 15 registrations between the planning kVCT and 3 MVCT images for each of the 5 NSCLC patients. Image registration accuracy was evaluated by visual inspection and, quantitatively, by Correlation Coefficients (CC) and Target Registration Errors (TRE). Finally, a lung volume correspondence analysis was performed to specifically evaluate registration accuracy in the lungs. Results showed that elastic registration was always satisfactory, both qualitatively and quantitatively: TRE after elastic registration (average value of 3.6 mm) remained comparable to, and often smaller than, the voxel resolution. Lung volume variations were well estimated by elastic registration (average volume and centroid errors of 1.78% and 0.87 mm, respectively). Our results demonstrate that this method is able to estimate lung deformations in thorax MVCT with an average accuracy of 3.6 mm, comparable to or smaller than the voxel dimensions of the kVCT and MVCT images. It could be used to estimate lung parenchyma dose variations in thoracic Tomotherapy.

  14. Model-based state estimator for an intelligent tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A. P.; Schmeitz, A. J.C.; Besselink, I.; Nijmeijer, H.

    2017-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  15. Model-based State Estimator for an Intelligent Tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A.P.; Schmeitz, A.J.C.; Besselink, I.J.M.; Nijmeijer, H.

    2016-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  16. Is self-reporting workplace activity worthwhile? Validity and reliability of occupational sitting and physical activity questionnaire in desk-based workers.

    Science.gov (United States)

    Pedersen, Scott J; Kitic, Cecilia M; Bird, Marie-Louise; Mainsbridge, Casey P; Cooley, P Dean

    2016-08-19

    With the advent of workplace health and wellbeing programs designed to address prolonged occupational sitting, tools to measure behaviour change within this environment should derive from empirical evidence. In this study we measured aspects of validity and reliability for the Occupational Sitting and Physical Activity Questionnaire that asks employees to recount the percentage of work time they spend in the seated, standing, and walking postures during a typical workday. Three separate cohort samples (N = 236) were drawn from a population of government desk-based employees across several departmental agencies. These volunteers were part of a larger state-wide intervention study. Workplace sitting and physical activity behaviour was measured both subjectively against the International Physical Activity Questionnaire, and objectively against ActivPal accelerometers before the intervention began. Criterion validity and concurrent validity for each of the three posture categories were assessed using Spearman's rank correlation coefficients, and a bias comparison with 95 % limits of agreement. Test-retest reliability of the survey was reported with intraclass correlation coefficients. Criterion validity for this survey was strong for sitting and standing estimates, but weak for walking. Participants significantly overestimated the amount of walking they did at work. Concurrent validity was moderate for sitting and standing, but low for walking. Test-retest reliability of this survey proved to be questionable for our sample. Based on our findings we must caution occupational health and safety professionals about the use of employee self-report data to estimate workplace physical activity. While the survey produced accurate measurements for time spent sitting at work it was more difficult for employees to estimate their workplace physical activity.
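
    The bias comparison with 95 % limits of agreement mentioned above is the standard Bland-Altman calculation (bias ± 1.96 SD of the paired differences). A minimal sketch with hypothetical paired measurements:

    ```python
    from statistics import mean, stdev

    def bland_altman_limits(method_a, method_b):
        """Mean bias and 95 % limits of agreement between two measurement
        methods, computed as bias +/- 1.96 SD of the paired differences."""
        diffs = [a - b for a, b in zip(method_a, method_b)]
        bias = mean(diffs)
        spread = 1.96 * stdev(diffs)
        return bias, bias - spread, bias + spread
    ```

    Wide limits of agreement relative to the measured quantity, as reported for walking time, indicate that the two methods cannot be used interchangeably even when the mean bias is small.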

  17. Is self-reporting workplace activity worthwhile? Validity and reliability of occupational sitting and physical activity questionnaire in desk-based workers

    Directory of Open Access Journals (Sweden)

    Scott J. Pedersen

    2016-08-01

    Full Text Available Abstract Background With the advent of workplace health and wellbeing programs designed to address prolonged occupational sitting, tools to measure behaviour change within this environment should derive from empirical evidence. In this study we measured aspects of validity and reliability for the Occupational Sitting and Physical Activity Questionnaire that asks employees to recount the percentage of work time they spend in the seated, standing, and walking postures during a typical workday. Methods Three separate cohort samples (N = 236) were drawn from a population of government desk-based employees across several departmental agencies. These volunteers were part of a larger state-wide intervention study. Workplace sitting and physical activity behaviour was measured both subjectively against the International Physical Activity Questionnaire, and objectively against ActivPal accelerometers before the intervention began. Criterion validity and concurrent validity for each of the three posture categories were assessed using Spearman's rank correlation coefficients, and a bias comparison with 95 % limits of agreement. Test-retest reliability of the survey was reported with intraclass correlation coefficients. Results Criterion validity for this survey was strong for sitting and standing estimates, but weak for walking. Participants significantly overestimated the amount of walking they did at work. Concurrent validity was moderate for sitting and standing, but low for walking. Test-retest reliability of this survey proved to be questionable for our sample. Conclusions Based on our findings we must caution occupational health and safety professionals about the use of employee self-report data to estimate workplace physical activity. While the survey produced accurate measurements for time spent sitting at work it was more difficult for employees to estimate their workplace physical activity.

  18. Comparison of 3 estimation methods of mycophenolic acid AUC based on a limited sampling strategy in renal transplant patients.

    Science.gov (United States)

    Hulin, Anne; Blanchet, Benoît; Audard, Vincent; Barau, Caroline; Furlan, Valérie; Durrbach, Antoine; Taïeb, Fabrice; Lang, Philippe; Grimbert, Philippe; Tod, Michel

    2009-04-01

    A significant relationship between mycophenolic acid (MPA) area under the plasma concentration-time curve (AUC) and the risk for rejection has been reported. Based on 3 concentration measurements, 3 approaches have been proposed for the estimation of MPA AUC, involving either a multilinear regression approach model (MLRA) or a Bayesian estimation using either gamma absorption or zero-order absorption population models. The aim of the study was to compare the 3 approaches for the estimation of MPA AUC in 150 renal transplant patients treated with mycophenolate mofetil and tacrolimus. The population parameters were determined in 77 patients (learning study). The AUC estimation methods were compared in the learning population and in 73 patients from another center (validation study). In the latter study, the reference AUCs were estimated by the trapezoidal rule on 8 measurements. MPA concentrations were measured by liquid chromatography. The gamma absorption model gave the best fit. In the learning study, the AUCs estimated by both Bayesian methods were very similar, whereas the multilinear approach was highly correlated but yielded estimates about 20% lower than Bayesian methods. This resulted in dosing recommendations differing by 250 mg/12 h or more in 27% of cases. In the validation study, AUC estimates based on the Bayesian method with gamma absorption model and multilinear regression approach model were, respectively, 12% higher and 7% lower than the reference values. To conclude, the bicompartmental model with gamma absorption rate gave the best fit. The 3 AUC estimation methods are highly correlated but not concordant. For a given patient, the same estimation method should always be used.
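
    The reference AUCs above were obtained with the trapezoidal rule on 8 concentration measurements; the rule itself is only a few lines. The times and concentrations below are hypothetical:

    ```python
    def trapezoidal_auc(times, concentrations):
        """Area under the concentration-time curve by the linear
        trapezoidal rule over consecutive sampling times."""
        auc = 0.0
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            auc += dt * (concentrations[i] + concentrations[i - 1]) / 2.0
        return auc
    ```

    The limited-sampling approaches compared in the study replace this dense-sampling reference with estimates from only 3 concentration points.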

  19. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    Science.gov (United States)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying time-varying systems than for time-invariant systems. This paper presents a modal parameter estimator for linear time-varying structural systems based on a vector time-dependent autoregressive model and a least squares support vector machine, for the case of output-only measurements. To reduce the computational cost, Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adopted for the proposed estimator to replace the time-consuming n-fold cross-validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment further validates the proposed estimator.

  20. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE-self adaptive differential evolution with prediction based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance. - Highlights: • To estimate the states of the power system under dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
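
    Brown's double exponential smoothing, used above at the prediction step, cascades two simple exponential smoothers and extrapolates level plus trend. A minimal one-step-ahead sketch; the smoothing constant `alpha` is an assumption, since the paper's value is not given in the abstract:

    ```python
    def brown_forecast(series, alpha=0.5):
        """One-step-ahead prediction with Brown's double exponential
        smoothing: two cascaded simple exponential smoothers, combined
        into a level and a trend estimate."""
        s1 = s2 = series[0]
        for x in series:
            s1 = alpha * x + (1.0 - alpha) * s1   # first smoother
            s2 = alpha * s1 + (1.0 - alpha) * s2  # second smoother
        level = 2.0 * s1 - s2
        trend = alpha / (1.0 - alpha) * (s1 - s2)
        return level + trend
    ```

    For a linearly trending state, the forecast converges to the exact next value, which is what makes the method attractive as a predictor in dynamic state estimation.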

  1. Web-based questionnaires to assess perinatal outcome proved to be valid.

    Science.gov (United States)

    van Gelder, Marleen M H J; Vorstenbosch, Saskia; Derks, Lineke; Te Winkel, Bernke; van Puijenbroek, Eugène P; Roeleveld, Nel

    2017-10-01

    The objective of this study was to validate a Web-based questionnaire completed by the mother to assess perinatal outcome used in a prospective cohort study. For 882 women with an estimated date of delivery between February 2012 and February 2015 who participated in the PRegnancy and Infant DEvelopment (PRIDE) Study, we compared data on pregnancy outcome, including mode of delivery, plurality, gestational age, birth weight and length, head circumference, birth defects, and infant sex, from Web-based questionnaires administered to the mothers 2 months after delivery with data from obstetric records. For continuous variables, we calculated intraclass correlation coefficients (ICCs) with 95% confidence intervals (CIs), whereas sensitivity and specificity were determined for categorical variables. We observed only very small differences between the two methods of data collection for gestational age (ICC, 0.91; 95% CI, 0.90-0.92), birth weight (ICC, 0.96; 95% CI, 0.95-0.96), birth length (ICC, 0.90; 95% CI, 0.87-0.92), and head circumference (ICC, 0.88; 95% CI, 0.80-0.93). Agreement between the Web-based questionnaire and obstetric records was high as well, with sensitivity ranging between 0.86 (termination of pregnancy) and 1.00 (four outcomes) and specificity between 0.96 (term birth) and 1.00 (nine outcomes). Our study provides evidence that Web-based questionnaires could be considered as a valid complementary or alternative method of data collection. Copyright © 2017 Elsevier Inc. All rights reserved.
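
    The sensitivity and specificity figures reported above come from cross-tabulating each binary outcome in the questionnaire against the obstetric record. A minimal sketch with hypothetical boolean vectors:

    ```python
    def sensitivity_specificity(reported, reference):
        """Sensitivity and specificity of a binary outcome reported in a
        questionnaire, judged against a gold-standard reference record."""
        pairs = list(zip(reported, reference))
        tp = sum(1 for r, g in pairs if r and g)          # true positives
        tn = sum(1 for r, g in pairs if not r and not g)  # true negatives
        fn = sum(1 for r, g in pairs if not r and g)      # missed outcomes
        fp = sum(1 for r, g in pairs if r and not g)      # false reports
        return tp / (tp + fn), tn / (tn + fp)
    ```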

  2. Validation of equations and proposed reference values to estimate fat mass in Chilean university students.

    Science.gov (United States)

    Gómez Campos, Rossana; Pacheco Carrillo, Jaime; Almonacid Fierro, Alejandro; Urra Albornoz, Camilo; Cossío-Bolaños, Marco

    2018-03-01

    (i) To propose regression equations based on anthropometric measures to estimate fat mass (FM) using dual energy X-ray absorptiometry (DXA) as the reference method, and (ii) to establish population reference standards for equation-derived FM. A cross-sectional study on 6,713 university students (3,354 males and 3,359 females) from Chile aged 17.0 to 27.0 years. Anthropometric measures (weight, height, waist circumference) were taken in all participants. Whole body DXA was performed in 683 subjects. A total of 478 subjects were selected to develop regression equations, and 205 for their cross-validation. Data from 6,030 participants were used to develop reference standards for FM. Equations were generated using stepwise multiple regression analysis. Percentiles were developed using the LMS method. Equations for men were: (i) FM = -35,997.486 + 232.285 * Weight + 432.216 * CC (R² = 0.73, SEE = 4.1); (ii) FM = -37,671.303 + 309.539 * Weight + 66,028.109 * ICE (R² = 0.76, SEE = 3.8), while equations for women were: (iii) FM = -13,216.917 + 461.302 * Weight + 91.898 * CC (R² = 0.70, SEE = 4.6), and (iv) FM = -14,144.220 + 464.061 * Weight + 16,189.297 * ICE (R² = 0.70, SEE = 4.6). Percentiles proposed included p10, p50, p85, and p95. The developed equations provide valid and accurate estimation of FM in both sexes. The values obtained using the equations may be analyzed from percentiles that allow for categorizing body fat levels by age and sex. Copyright © 2017 SEEN y SED. Publicado por Elsevier España, S.L.U. All rights reserved.
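
For illustration, equation (i) can be applied directly. The units are assumptions here (weight in kg, waist circumference CC in cm, FM in grams), since the abstract does not state them, and the example subject is hypothetical:

```python
def fm_men_weight_cc(weight, cc):
    # Equation (i) from the abstract: fat mass for men from weight and
    # waist circumference (CC); reported R^2 = 0.73, SEE = 4.1
    return -35997.486 + 232.285 * weight + 432.216 * cc

# Hypothetical subject: 70 kg, 80 cm waist circumference
fm = fm_men_weight_cc(70.0, 80.0)   # about 14.8 kg of fat mass if FM is in grams
```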

  3. Estimation of tool wear during CNC milling using neural network-based sensor fusion

    Science.gov (United States)

    Ghosh, N.; Ravi, Y. B.; Patra, A.; Mukhopadhyay, S.; Paul, S.; Mohanty, A. R.; Chattopadhyay, A. B.

    2007-01-01

    Cutting tool wear degrades product quality in manufacturing processes. Online monitoring of tool wear is therefore needed to prevent degradation in machining quality. Unfortunately, there is no direct way of measuring tool wear online, so one has to adopt an indirect method in which tool wear is estimated from several sensors measuring related process variables. In this work, a neural network-based sensor fusion model has been developed for tool condition monitoring (TCM). Features extracted from a number of machining zone signals, namely cutting forces, spindle vibration, spindle current, and sound pressure level, have been fused to estimate the average flank wear of the main cutting edge. Novel strategies such as signal-level segmentation for temporal registration, feature-space filtering, outlier removal, and estimation-space filtering have been proposed. The proposed approach has been validated by both laboratory and industrial implementations.

  4. State of charge estimation of lithium-ion batteries based on an improved parameter identification method

    International Nuclear Information System (INIS)

    Xia, Bizhong; Chen, Chaoren; Tian, Yong; Wang, Mingwang; Sun, Wei; Xu, Zhihui

    2015-01-01

    The SOC (state of charge) is the most important index of the battery management systems. However, it cannot be measured directly with sensors and must be estimated with mathematical techniques. An accurate battery model is crucial to exactly estimate the SOC. In order to improve the model accuracy, this paper presents an improved parameter identification method. Firstly, the concept of polarization depth is proposed based on the analysis of polarization characteristics of the lithium-ion batteries. Then, the nonlinear least square technique is applied to determine the model parameters according to data collected from pulsed discharge experiments. The results show that the proposed method can reduce the model error as compared with the conventional approach. Furthermore, a nonlinear observer presented in the previous work is utilized to verify the validity of the proposed parameter identification method in SOC estimation. Finally, experiments with different levels of discharge current are carried out to investigate the influence of polarization depth on SOC estimation. Experimental results show that the proposed method can improve the SOC estimation accuracy as compared with the conventional approach, especially under the conditions of large discharge current. - Highlights: • The polarization characteristics of lithium-ion batteries are analyzed. • The concept of polarization depth is proposed to improve model accuracy. • A nonlinear least square technique is applied to determine the model parameters. • A nonlinear observer is used as the SOC estimation algorithm. • The validity of the proposed method is verified by experimental results.
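
The nonlinear least squares parameter-identification step can be illustrated with a toy first-order (Thevenin-type) pulse response. The model structure, parameter values, and units below are assumptions for the sketch, not the paper's actual battery model:

```python
import numpy as np

I, OCV = 2.0, 3.7   # pulse current [A] and open-circuit voltage [V] (assumed)

def pulse_voltage(t, r0, r1, tau):
    # Terminal voltage during a constant-current pulse: ohmic drop I*r0
    # plus a first-order polarization term with time constant tau
    return OCV - I * r0 - I * r1 * (1.0 - np.exp(-t / tau))

def fit_pulse(t, v, taus):
    # Separable nonlinear least squares: for each candidate tau the model
    # is linear in (r0, r1), so solve a small lstsq and keep the best tau
    y = OCV - v
    best = None
    for tau in taus:
        g = 1.0 - np.exp(-t / tau)
        A = np.column_stack([I * np.ones_like(t), I * g])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((A @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], tau)
    return best[1:]

# Synthetic "pulsed discharge experiment" with known parameters
t = np.linspace(0.0, 60.0, 121)
rng = np.random.default_rng(1)
v_meas = pulse_voltage(t, 0.05, 0.03, 12.0) + rng.normal(0, 1e-4, t.size)
r0, r1, tau = fit_pulse(t, v_meas, np.linspace(5.0, 20.0, 151))
```

On clean synthetic data the fit recovers the generating parameters, which is the same self-consistency check one would run before trusting the identification on real pulse data.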

  5. MODIS Observation of Aerosols over Southern Africa During SAFARI 2000: Data, Validation, and Estimation of Aerosol Radiative Forcing

    Science.gov (United States)

    Ichoku, Charles; Kaufman, Yoram; Remer, Lorraine; Chu, D. Allen; Mattoo, Shana; Tanre, Didier; Levy, Robert; Li, Rong-Rong; Kleidman, Richard; Lau, William K. M. (Technical Monitor)

    2001-01-01

    Aerosol properties, including optical thickness and size parameters, are retrieved operationally from the MODIS sensor onboard the Terra satellite launched on 18 December 1999. The predominant aerosol type over the Southern African region is smoke, which is generated from biomass burning on land and transported over the southern Atlantic Ocean. The SAFARI-2000 period experienced smoke aerosol emissions from the regular biomass burning activities as well as from the prescribed burns administered under the auspices of the experiment. The MODIS Aerosol Science Team (MAST) formulates and implements strategies for the retrieval of aerosol products from MODIS, as well as for validating and analyzing them in order to estimate aerosol effects in the radiative forcing of climate as accurately as possible. These activities are carried out not only from a global perspective, but also with a focus on specific regions identified as having interesting characteristics, such as the biomass burning phenomenon in southern Africa and the associated smoke aerosol, particulate, and trace gas emissions. Indeed, the SAFARI-2000 aerosol measurements from the ground and from aircraft, along with MODIS, provide excellent data sources for a more intensive validation and a closer study of the aerosol characteristics over Southern Africa. The SAFARI-2000 ground-based measurements of aerosol optical thickness (AOT) from both the automatic Aerosol Robotic Network (AERONET) and handheld Sun photometers have been used to validate MODIS retrievals, based on a sophisticated spatio-temporal technique. The average global monthly distribution of aerosol from MODIS has been combined with other data to calculate the southern African aerosol daily averaged (24 hr) radiative forcing over the ocean for September 2000. It is estimated that, on average, for cloud-free conditions over an area of 9 million square km, this predominantly smoke aerosol exerts a forcing of -30 W/square m, close to the terrestrial

  6. Validation Of Critical Knowledge-Based Systems

    Science.gov (United States)

    Duke, Eugene L.

    1992-01-01

    Report discusses approach to verification and validation of knowledge-based systems. Also known as "expert systems". Concerned mainly with development of methodologies for verification of knowledge-based systems critical to flight-research systems; e.g., fault-tolerant control systems for advanced aircraft. Subject matter also has relevance to knowledge-based systems controlling medical life-support equipment or commuter railroad systems.

  7. An assessment of the performance of global rainfall estimates without ground-based observations

    Directory of Open Access Journals (Sweden)

    C. Massari

    2017-09-01

    Full Text Available Satellite-based rainfall estimates over land have great potential for a wide range of applications, but their validation is challenging due to the scarcity of ground-based observations of rainfall in many areas of the planet. Recent studies have suggested the use of triple collocation (TC) to characterize uncertainties associated with rainfall estimates by using three collocated rainfall products. However, TC requires the simultaneous availability of three products with mutually uncorrelated errors, a requirement which is difficult to satisfy with current global precipitation data sets. In this study, a recently developed method for rainfall estimation from soil moisture observations, SM2RAIN, is demonstrated to facilitate the accurate application of TC within triplets containing two state-of-the-art satellite rainfall estimates and a reanalysis product. The validity of different TC assumptions is indirectly tested via a high-quality ground rainfall product over the contiguous United States (CONUS), showing that SM2RAIN can provide a truly independent source of rainfall accumulation information which uniquely satisfies the assumptions underlying TC. On this basis, TC is applied with SM2RAIN on a global scale in an optimal configuration to calculate, for the first time, reliable global correlations (vs. an unknown truth) of the aforementioned products without using a ground benchmark data set. The analysis is carried out during the period 2007–2012 using daily rainfall accumulation products obtained at 1° × 1° spatial resolution. Results convey the relatively high performance of the satellite rainfall estimates in eastern North and South America, southern Africa, southern and eastern Asia, eastern Australia, and southern Europe, as well as complementary performances between the reanalysis product and SM2RAIN, with the first performing reasonably well in the Northern Hemisphere and the second providing very good performance in the Southern Hemisphere.
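
The covariance form of (extended) triple collocation behind such correlation estimates can be sketched as follows. The simulated products are stand-ins for the two satellite estimates and SM2RAIN, and the sketch assumes the key TC requirement of mutually uncorrelated, truth-independent errors:

```python
import numpy as np

def tc_correlations(x, y, z):
    # Extended triple collocation: recover each product's correlation with
    # the unobserved truth from pairwise covariances, assuming mutually
    # uncorrelated errors (the configuration SM2RAIN is argued to enable)
    c = np.cov(np.vstack([x, y, z]))
    rx = np.sqrt(c[0, 1] * c[0, 2] / (c[0, 0] * c[1, 2]))
    ry = np.sqrt(c[0, 1] * c[1, 2] / (c[1, 1] * c[0, 2]))
    rz = np.sqrt(c[0, 2] * c[1, 2] / (c[2, 2] * c[0, 1]))
    return rx, ry, rz

# Simulated truth plus three products with independent errors
rng = np.random.default_rng(2)
truth = rng.normal(0, 1, 100_000)
sat1 = truth + rng.normal(0, 0.3, truth.size)
sat2 = truth + rng.normal(0, 0.5, truth.size)
sm2rain = truth + rng.normal(0, 0.8, truth.size)
r1, r2, r3 = tc_correlations(sat1, sat2, sm2rain)
```

Each recovered correlation matches the analytic value 1/sqrt(1 + error variance) without the truth ever being used, which is exactly what allows "correlations vs. an unknown truth" on a global scale.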

  8. Development, Validation, and Verification of a Self-Assessment Tool to Estimate Agnibala (Digestive Strength).

    Science.gov (United States)

    Singh, Aparna; Singh, Girish; Patwardhan, Kishor; Gehlot, Sangeeta

    2017-01-01

    According to Ayurveda, the traditional system of healthcare of Indian origin, Agni is the factor responsible for digestion and metabolism. Four functional states (Agnibala) of Agni have been recognized: regular, irregular, intense, and weak. The objective of the present study was to develop and validate a self-assessment tool to estimate Agnibala. The developed tool was evaluated for its reliability and validity by administering it to 300 healthy volunteers of either gender belonging to the 18- to 40-year age group. Besides confirming the statistical validity and reliability, the practical utility of the newly developed tool was also evaluated by recording serum lipid parameters of all the volunteers. The results show that the lipid parameters vary significantly according to the status of Agni. The tool, therefore, may be used to screen the normal population to look for possible susceptibility to certain health conditions. © The Author(s) 2016.

  9. Opportunities and challenges for evaluating precipitation estimates during GPM mission

    Energy Technology Data Exchange (ETDEWEB)

    Amitai, E. [George Mason Univ. and NASA Goddard Space Flight Center, Greenbelt, MD (United States); NASA Goddard Space Flight Center, Greenbelt, MD (United States); Llort, X.; Sempere-Torres, D. [GRAHI/Univ. Politecnica de Catalunya, Barcelona (Spain)

    2006-10-15

    Data assimilation in conjunction with numerical weather prediction and a variety of hydrologic applications now depend on satellite observations of precipitation. However, providing values of precipitation is not sufficient unless they are accompanied by the associated uncertainty estimates. The main approach of quantifying satellite precipitation uncertainties generally requires establishment of reliable uncertainty estimates for the ground validation rainfall products. This paper discusses several of the relevant validation concepts evolving from the tropical rainfall measuring mission (TRMM) era to the global precipitation measurement mission (GPM) era in the context of determining and reducing uncertainties of ground and space-based radar rainfall estimates. From comparisons of probability distribution functions of rain rates derived from TRMM precipitation radar and co-located ground based radar data - using the new NASA TRMM radar rainfall products (version 6) - this paper provides (1) a brief review of the importance of comparing pdfs of rain rate for statistical and physical verification of space-borne radar estimates of precipitation; (2) a brief review of how well the ground validation estimates compare to the TRMM radar retrieved estimates; and (3) discussion on opportunities and challenges to determine and reduce the uncertainties in space-based and ground-based radar estimates of rain rate distributions. (orig.)

  10. Improving satellite-based post-fire evapotranspiration estimates in semi-arid regions

    Science.gov (United States)

    Poon, P.; Kinoshita, A. M.

    2017-12-01

    Climate change and anthropogenic factors contribute to the increased frequency, duration, and size of wildfires, which can alter ecosystem and hydrological processes. The loss of vegetation canopy and ground cover reduces interception and alters evapotranspiration (ET) dynamics in riparian areas, which can impact rainfall-runoff partitioning. Previous research evaluated the spatial and temporal trends of ET based on burn severity and observed an annual decrease of 120 mm on average for three years after fire. Building upon these results, this research focuses on the Coyote Fire in San Diego, California (USA), which burned a total of 76 km² in 2003, to calibrate and improve satellite-based ET estimates in semi-arid regions affected by wildfire. The current work utilizes satellite-based products and techniques such as the Google Earth Engine application programming interface (API). Various ET models (e.g., the Operational Simplified Surface Energy Balance model, SSEBop) are compared to the latent heat flux from two AmeriFlux eddy covariance towers, Sky Oaks Young (US-SO3) and Old Stand (US-SO2), from 2000 to 2015. The Old Stand tower has a low burn severity and the Young Stand tower has a moderate to high burn severity. Both towers are used to validate spatial ET estimates. Furthermore, variables and indices such as the Enhanced Vegetation Index (EVI), Normalized Difference Moisture Index (NDMI), and Normalized Burn Ratio (NBR) are utilized to evaluate satellite-based ET through a multivariate statistical analysis at both sites. This point-scale study will help improve ET estimates in spatially diverse regions. Results from this research will contribute to the development of a post-wildfire ET model for semi-arid regions. Accurate estimates of post-fire ET will provide a better representation of vegetation and hydrologic recovery, which can be used to improve hydrologic models and predictions.

  11. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
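
The contrast between the two tests can be reproduced on random data. The nearest-centroid classifier and leave-one-out scheme below are illustrative choices, not the authors' pipeline; the point is that the binomial test treats the folds as independent coin flips while the permutation test re-runs the whole cross-validation under shuffled labels:

```python
import numpy as np
from math import comb

def loo_accuracy(X, y):
    # Leave-one-out CV with a nearest-centroid classifier
    idx = np.arange(len(y))
    hits = 0
    for i in idx:
        mask = idx != i
        m0 = X[mask & (y == 0)].mean(axis=0)
        m1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
        hits += pred == y[i]
    return hits / len(y)

rng = np.random.default_rng(42)
n = 20
X = rng.normal(size=(n, 5))            # pure-noise features
y = np.array([0] * 10 + [1] * 10)
acc = loo_accuracy(X, y)

# Binomial test: assumes n independent Bernoulli(0.5) trials, an
# assumption the shared training folds of cross-validation violate
k = round(acc * n)
p_binom = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

# Permutation test: repeat the entire CV under shuffled labels
perm = [loo_accuracy(X, rng.permutation(y)) for _ in range(500)]
p_perm = (1 + sum(a >= acc for a in perm)) / 501
```

Repeating this over many random datasets shows the null distribution of `acc` is wider than Binomial(n, 0.5), which is why the binomial p-value can be anti-conservative.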

  12. Estimation of dynamic rotor loads for the rotor systems research aircraft: Methodology development and validation

    Science.gov (United States)

    Duval, R. W.; Bahrami, M.

    1985-01-01

    The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify the parameters directly from the flight data. A maximum likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.

  13. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas, including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  14. Relative Validity and Reproducibility of a Food-Frequency Questionnaire for Estimating Food Intakes among Flemish Preschoolers

    Directory of Open Access Journals (Sweden)

    Inge Huybrechts

    2009-01-01

    Full Text Available The aims of this study were to assess the relative validity and reproducibility of a semi-quantitative food-frequency questionnaire (FFQ) applied in a large region-wide survey among 2.5-6.5 year-old children for estimating food group intakes. Parents/guardians were used as a proxy. Estimated diet records (3d) were used as the reference method, and reproducibility was measured by repeated FFQ administrations five weeks apart. In total, 650 children were included in the validity analyses and 124 in the reproducibility analyses. Comparing median FFQ1 to FFQ2 intakes, almost all evaluated food groups showed median differences within a range of ± 15%. However, for median vegetable, fruit and cheese intake, FFQ1 was > 20% higher than FFQ2. For most foods a moderate correlation (0.5-0.7) was obtained between FFQ1 and FFQ2. For cheese, sugared drinks and fruit juice intakes, correlations were even > 0.7. For median differences between the 3d EDR and the FFQ, six food groups (potatoes & grains; vegetables; fruit; cheese; meat, game, poultry and fish; and sugared drinks) gave a difference > 20%. The largest corrected correlations (>0.6) were found for the intake of potatoes and grains, fruit, milk products, cheese, sugared drinks, and fruit juice, while the lowest correlations (<0.4) were found for bread and meat products. The proportion of subjects classified within one quartile (in the same or adjacent category) by FFQ and EDR ranged from 67% (for meat products) to 88% (for fruit juice). Extreme misclassification into the opposite quartiles was < 10% for all food groups. The results indicate that our newly developed FFQ gives reproducible estimates of food group intake. Overall, moderate levels of relative validity were observed for estimates of food group intake.
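
The quartile cross-classification used for validation in this record can be sketched with hypothetical intake data (the distributions and noise level below are illustrative, not the survey's):

```python
import numpy as np

def quartile_crossclass(a, b):
    # Fraction of subjects placed in the same or an adjacent quartile by
    # the two instruments, and fraction in opposite (extreme) quartiles
    qa = np.searchsorted(np.quantile(a, [0.25, 0.5, 0.75]), a, side="right")
    qb = np.searchsorted(np.quantile(b, [0.25, 0.5, 0.75]), b, side="right")
    same_adjacent = (np.abs(qa - qb) <= 1).mean()
    extreme = (((qa == 0) & (qb == 3)) | ((qa == 3) & (qb == 0))).mean()
    return same_adjacent, extreme

# Hypothetical FFQ vs. diet-record intakes with moderate agreement
rng = np.random.default_rng(5)
edr = rng.gamma(4.0, 50.0, 650)            # "true" intakes, g/day
ffq = edr * rng.lognormal(0.0, 0.4, 650)   # noisy FFQ report
same_adj, opposite = quartile_crossclass(ffq, edr)
```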

  15. Experimental and Analytical Studies on Improved Feedforward ML Estimation Based on LS-SVR

    Directory of Open Access Journals (Sweden)

    Xueqian Liu

    2013-01-01

    Full Text Available Maximum likelihood (ML) is the most common and effective parameter estimation method. However, when dealing with small samples and low signal-to-noise ratio (SNR), threshold effects arise and estimation performance degrades greatly. It has been shown that the support vector machine (SVM) is well suited to small samples. Consequently, we exploit the linear relationship between the inputs and outputs of least squares support vector regression (LS-SVR) and regard the LS-SVR process as a time-varying linear filter that increases the input SNR of received signals and decreases the threshold value of the mean square error (MSE) curve. Furthermore, taking single-tone sinusoidal frequency estimation as an example and combining data analysis with experimental validation, we verify that if the LS-SVR parameters are set appropriately, the LS-SVR process not only preserves the single-tone sinusoid and additive white Gaussian noise (AWGN) channel characteristics of the original signals, but also improves frequency estimation performance. In simulations, the LS-SVR process is applied to two common and representative single-tone sinusoidal ML frequency estimation algorithms, the DFT-based frequency-domain periodogram (FDP) and the phase-based Kay algorithm, and the threshold values of their MSE curves are decreased by 0.3 dB and 1.2 dB, respectively, which clearly demonstrates the advantage of the proposed algorithm.
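
The baseline FDP estimator that such a pre-filter feeds into can be sketched as follows; sampling rate, tone frequency, and SNR are illustrative values, and the LS-SVR stage itself is omitted:

```python
import numpy as np

def fdp_estimate(x, fs, pad=8):
    # Single-tone ML frequency estimate: peak of a zero-padded periodogram
    n = len(x)
    spec = np.abs(np.fft.rfft(x, n=pad * n))
    k = int(np.argmax(spec))
    return k * fs / (pad * n)

# Single tone in AWGN at roughly 3 dB SNR
fs, f0, n = 1000.0, 123.4, 256
t = np.arange(n) / fs
rng = np.random.default_rng(7)
x = np.cos(2 * np.pi * f0 * t + 0.3) + rng.normal(0, 0.5, n)
f_hat = fdp_estimate(x, fs)
```

Above the threshold SNR the peak lands in the correct bin; the threshold effect the abstract describes is the sharp breakdown of this estimator once noise peaks start to outgrow the tone's spectral peak.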

  16. The Air Force Mobile Forward Surgical Team (MFST): Using the Estimating Supplies Program to Validate Clinical Requirement

    National Research Council Canada - National Science Library

    Nix, Ralph E; Onofrio, Kathleen; Konoske, Paula J; Galarneau, Mike R; Hill, Martin

    2004-01-01

    .... The primary objective of the study was to provide the Air Force with the ability to validate clinical requirements of the MFST assemblage, with the goal of using NHRC's Estimating Supplies Program (ESP...

  17. Vision-based stress estimation model for steel frame structures with rigid links

    Science.gov (United States)

    Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan

    2017-07-01

    This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), which is a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid links in the frame structure are identified. The radius of curvature of the structural member to be monitored is calculated using the estimated deformed shape and is employed to estimate stress. Using the MCS in the presented model, the safety of a structure can be assessed without attaching strain gauges. In addition, because the stress is directly extracted from the radius of curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can consider the influence of the stiffness of the connections and supports on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests on a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
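
The core relation, bending stress from the radius of curvature of the measured deformed shape, can be sketched with a three-point circumradius standing in for the paper's shape estimation. The modulus, fiber distance, and measured points are assumed values:

```python
import numpy as np

E = 205e9        # Young's modulus of steel [Pa] (assumed)
y_fiber = 0.15   # distance from neutral axis to extreme fiber [m] (assumed)

def circumradius(p1, p2, p3):
    # Radius of the circle through three points on the deformed shape;
    # R = abc / (4 * triangle area)
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    return a * b * c / (2.0 * abs(cross))

# Three hypothetical points from vision-based displacement measurement [m]
R = circumradius(np.array([0.0, 0.0]),
                 np.array([1.0, 0.005]),
                 np.array([2.0, 0.0]))
sigma = E * y_fiber / R   # bending stress at the extreme fiber [Pa]
```

A 5 mm mid-span deflection over a 2 m chord gives a radius near 100 m and an extreme-fiber stress of roughly 307 MPa under these assumed section properties, which shows why no load or boundary-condition information is needed once curvature is measured.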

  18. QbD-Based Development and Validation of a Stability-Indicating HPLC Method for Estimating Ketoprofen in Bulk Drug and Proniosomal Vesicular System.

    Science.gov (United States)

    Yadav, Nand K; Raghuvanshi, Ashish; Sharma, Gajanand; Beg, Sarwar; Katare, Om P; Nanda, Sanju

    2016-03-01

    The current studies entail systematic quality by design (QbD)-based development of a simple, precise, cost-effective and stability-indicating high-performance liquid chromatography method for estimation of ketoprofen. An analytical target profile was defined and critical analytical attributes (CAAs) were selected. Chromatographic separation was accomplished with isocratic, reversed-phase chromatography using a C-18 column, pH 6.8 phosphate buffer-methanol (50:50 v/v) as the mobile phase at a flow rate of 1.0 mL/min, and UV detection at 258 nm. Systematic optimization of the chromatographic method was performed using a central composite design, evaluating theoretical plates and peak tailing as the CAAs. The method was validated as per International Conference on Harmonization guidelines, demonstrating high sensitivity and specificity, linearity between 0.05 and 250 µg/mL, a detection limit of 0.025 µg/mL and a quantification limit of 0.05 µg/mL. Precision was demonstrated by a relative standard deviation of 1.21%. Stress degradation studies performed using acid, base, peroxide, thermal and photolytic methods helped in identifying the degradation products in the proniosome delivery systems. The results successfully demonstrated the utility of QbD for optimizing the chromatographic conditions and developing a highly sensitive liquid chromatographic method for ketoprofen. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Estimation of nonpaternity in the Mexican population of Nuevo Leon: a validation study with blood group markers.

    Science.gov (United States)

    Cerda-Flores, R M; Barton, S A; Marty-Gonzalez, L F; Rivas, F; Chakraborty, R

    1999-07-01

    A method for estimating the general rate of nonpaternity in a population was validated using phenotype data on seven blood groups (A1A2BO, MNSs, Rh, Duffy, Lutheran, Kidd, and P) on 396 mother, child, and legal father trios from Nuevo León, Mexico. In all, 32 legal fathers were excluded as the possible father based on genetic exclusions at one or more loci (combined average exclusion probability of 0.694 for specific mother-child phenotype pairs). The maximum likelihood estimate of the general nonpaternity rate in the population was 0.118 +/- 0.020. The nonpaternity rates in Nuevo León were also seen to be inversely related with the socioeconomic status of the families, i.e., the highest in the low and the lowest in the high socioeconomic class. We further argue that with the moderately low (69.4%) power of exclusion for these seven blood group systems, the traditional critical values of paternity index (PI > or = 19) were not good indicators of true paternity, since a considerable fraction (307/364) of nonexcluded legal fathers had a paternity index below 19 based on the seven markers. Implications of these results in the context of genetic-epidemiological studies as well as for detection of true fathers for child-support adjudications are discussed, implying the need to employ a battery of genetic markers (possibly DNA-based tests) that yield a higher power of exclusion. We conclude that even though DNA markers are more informative, the probabilistic approach developed here would still be needed to estimate the true rate of nonpaternity in a population or to evaluate the precision of detecting true fathers.
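
A simplified version of the estimator, using the average exclusion power rather than the mother-child-specific probabilities the authors used, roughly reproduces the reported figures:

```python
def nonpaternity_mle(excluded, trios, power):
    # ML estimate of the population nonpaternity rate when a nonpaternal
    # trio is detected (excluded) with probability `power`; a
    # simplification of the paper's estimator, which used trio-specific
    # exclusion probabilities
    q = excluded / trios                        # observed exclusion fraction
    p_hat = q / power
    se = (q * (1 - q) / trios) ** 0.5 / power   # delta-method standard error
    return p_hat, se

p_hat, se = nonpaternity_mle(32, 396, 0.694)    # values from the abstract
# p_hat comes out near 0.116, close to the reported 0.118 +/- 0.020
```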

  20. A model-based adaptive state of charge estimator for a lithium-ion battery using an improved adaptive particle filter

    International Nuclear Information System (INIS)

    Ye, Min; Guo, Hui; Cao, Binggang

    2017-01-01

    Highlights: • An improved adaptive particle filter method is proposed. • A SoC estimation method for the battery based on the improved adaptive particle filter is presented. • The algorithm is validated in a case study of batteries at different stages of aging. • The effectiveness and applicability of the algorithm are validated with LiPB batteries. - Abstract: Obtaining accurate parameters, state of charge (SoC) and capacity of a lithium-ion battery is crucial for a battery management system, and establishing a battery model online is complex. In addition, the errors and perturbations of the battery model dramatically increase throughout the battery lifetime, making it more challenging to model the battery online. To overcome these difficulties, this paper provides three contributions: (1) To improve the robustness of the adaptive particle filter algorithm, an error analysis method is added to the traditional adaptive particle filter. (2) An online adaptive SoC estimator based on the improved adaptive particle filter is presented; this estimator can eliminate the estimation error due to battery degradation and initial SoC errors. (3) The effectiveness of the proposed method is verified using various initial states of lithium nickel manganese cobalt oxide (NMC) cells and lithium-ion polymer (LiPB) batteries. The experimental analysis shows that the maximum errors are less than 1% for both the voltage and SoC estimations and that the convergence time of the SoC estimation decreased to 120 s.
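
A bootstrap particle filter on a toy battery model shows the mechanism by which such an estimator recovers from an initial SoC error. The linear OCV curve and all parameter values are assumptions, and the paper's adaptive and error-analysis refinements are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
Q, R0, dt, I = 2.0 * 3600, 0.05, 1.0, 2.0   # capacity [As], ohmic R [ohm], step [s], current [A]
ocv = lambda soc: 3.0 + 1.2 * soc           # toy linear OCV curve (assumption)

# Simulate the "true" battery discharging from 90% SoC
steps = 600
soc_true = 0.9 - I * dt / Q * np.arange(steps)
v_meas = ocv(soc_true) - I * R0 + rng.normal(0, 0.005, steps)

# Bootstrap particle filter, deliberately initialized far from the true SoC
n_p = 500
particles = rng.uniform(0.2, 0.6, n_p)
for k in range(steps):
    # Propagate with the Coulomb-counting model plus process noise
    particles = np.clip(particles - I * dt / Q + rng.normal(0, 1e-3, n_p), 0.0, 1.0)
    # Weight by the voltage likelihood (log-weights for numerical safety)
    logw = -0.5 * ((v_meas[k] - (ocv(particles) - I * R0)) / 0.005) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample proportionally to the weights
    particles = particles[rng.choice(n_p, size=n_p, p=w)]

soc_est = particles.mean()
```

Despite starting 30-70 percentage points off, the voltage measurements pull the particle cloud onto the true trajectory, the same qualitative behavior as the paper's reported recovery from initial SoC errors.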

  1. Assessing the Relative Performance of Microwave-Based Satellite Rain Rate Retrievals Using TRMM Ground Validation Data

    Science.gov (United States)

    Wolff, David B.; Fisher, Brad L.

    2011-01-01

    Space-borne microwave sensors provide critical rain information used in several global multi-satellite rain products, which in turn are used for a variety of important studies, including landslide forecasting, flash flood warning, data assimilation, climate studies, and validation of model forecasts of precipitation. This study employs four years (2003-2006) of satellite data to assess the relative performance and skill of SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), AMSR-E (Aqua) and the TRMM Microwave Imager (TMI) in estimating surface rainfall based on direct instantaneous comparisons with ground-based rain estimates from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The relative performance of each of these satellite estimates is examined via comparisons with space- and time-coincident GV radar-based rain rate estimates. Because underlying surface terrain is known to affect the relative performance of the satellite algorithms, the data for MELB were further stratified into ocean, land and coast categories using a 0.25° terrain mask. Of all the satellite estimates compared in this study, TMI and AMSR-E exhibited considerably higher correlations and skills in estimating/observing surface precipitation. While SSM/I and AMSU-B exhibited lower correlations and skills for each of the different terrain categories, the SSM/I absolute biases trended slightly lower than AMSR-E over ocean, where the observations from both emission and scattering channels were used in the retrievals. AMSU-B exhibited the least skill relative to GV in all of the relevant statistical categories, and an anomalous spike was observed in the probability distribution functions near 1.0 mm/hr. This statistical artifact appears to be related to attempts by algorithm developers to include some lighter rain rates not easily detectable by its scatter-only frequencies.

  2. Development and validation of risk prediction equations to estimate survival in patients with colorectal cancer: cohort study

    OpenAIRE

    Hippisley-Cox, Julia; Coupland, Carol

    2017-01-01

    Objective: To develop and externally validate risk prediction equations to estimate absolute and conditional survival in patients with colorectal cancer. Design: Cohort study. Setting: General practices in England providing data for the QResearch database linked to the national cancer registry. Participants: 44 145 patients aged 15-99 with colorectal cancer from 947 practices to derive the equations. The equations were validated in 15 214 patients with colorectal cancer ...

  3. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Its results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
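    The permutation approach recommended in this record can be reproduced directly: rerun the full cross-validation on label-permuted data and compare the observed accuracy against the resulting null distribution. A minimal sketch, assuming a generic `classify(X, y)` callback that returns cross-validated accuracy (the callback and its cross-validation scheme are illustrative, not the study's pipeline):

    ```python
    import numpy as np

    def permutation_p_value(X, y, classify, n_perm=1000, rng=None):
        """Significance of a cross-validated accuracy via permutation:
        re-run the *entire* cross-validation on label-permuted data so the
        null distribution reflects the cross-validation scheme itself."""
        rng = np.random.default_rng(rng)
        observed = classify(X, y)  # accuracy with the real labels
        null = np.array([classify(X, rng.permutation(y)) for _ in range(n_perm)])
        # p-value: fraction of permuted accuracies at least as large as observed
        p = (1 + np.sum(null >= observed)) / (1 + n_perm)
        return observed, p
    ```

    Unlike a binomial test, the null distribution here automatically absorbs any optimism or dependence introduced by the cross-validation folds.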

  4. Estimation of skull table thickness with clinical CT and validation with microCT.

    Science.gov (United States)

    Lillie, Elizabeth M; Urban, Jillian E; Weaver, Ashley A; Powers, Alexander K; Stitzel, Joel D

    2015-01-01

    Brain injuries resulting from motor vehicle crashes (MVC) are extremely common, yet the details of the injury mechanism remain poorly characterized. Skull deformation is believed to be a contributing factor to some types of traumatic brain injury (TBI). Understanding biomechanical contributors to skull deformation would provide further insight into the mechanism of head injury resulting from blunt trauma. In particular, skull thickness is thought to be a very important factor governing deformation of the skull and its propensity for fracture. Current computed tomography (CT) technology is limited in its ability to accurately measure cortical thickness using standard techniques. A method to evaluate cortical thickness using cortical density measured from CT data has been developed previously. This effort validates this technique for measurement of skull table thickness in clinical head CT scans using two postmortem human specimens. Bone samples were harvested from the skulls of two cadavers and scanned with microCT to evaluate the accuracy of the estimated cortical thickness measured from clinical CT. Clinical scans were collected at 0.488 and 0.625 mm in-plane resolution with 0.625 mm slice thickness. The overall cortical thickness error was determined to be 0.078 ± 0.58 mm for cortical samples thinner than 4 mm. It was determined that 91.3% of these differences fell within the scanner resolution. Color maps of clinical CT thickness estimations are comparable to color maps of microCT thickness measurements, indicating good quantitative agreement. These data confirm that the cortical density algorithm successfully estimates skull table thickness from clinical CT scans. The application of this technique to clinical CT scans enables evaluation of cortical thickness in population-based studies. © 2014 Anatomical Society.

  5. A method for state of energy estimation of lithium-ion batteries based on neural network model

    International Nuclear Information System (INIS)

    Dong, Guangzhong; Zhang, Xu; Zhang, Chenbin; Chen, Zonghai

    2015-01-01

    The state-of-energy is an important evaluation index for energy optimization and management of power battery systems in electric vehicles. Unlike the state-of-charge, which represents the residual charge of the battery in traditional applications, state-of-energy is the time integral of battery power, the product of current and terminal voltage. On the other hand, like state-of-charge, the state-of-energy has an effect on terminal voltage. The nonlinear relationship between state-of-energy and terminal voltage is therefore hard to solve, which complicates the estimation of a battery's state-of-energy. To address this issue, a method based on a wavelet-neural-network-based battery model and a particle filter estimator is presented for state-of-energy estimation. The wavelet-neural-network based battery model is used to simulate the entire dynamic electrical characteristics of batteries. The temperature and discharge rate are also taken into account to improve model accuracy. Besides, in order to suppress the measurement noises of current and voltage, a particle filter estimator is applied to estimate cell state-of-energy. Experimental results on LiFePO₄ batteries indicate that the wavelet-neural-network based battery model simulates battery dynamics robustly with high accuracy and the estimation value based on the particle filter estimator converges to the real state-of-energy within an error of ±4%. - Highlights: • State-of-charge is replaced by state-of-energy to determine a cell's residual energy. • The battery state-space model is established based on a neural network. • Temperature and current influence are considered to improve the model accuracy. • The particle filter is used for state-of-energy estimation to improve accuracy. • The robustness of the new method is validated under dynamic experimental conditions.
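    The filtering step described in this record can be sketched with a generic bootstrap particle filter. Here a simple affine `volt_model` stands in for the paper's wavelet-neural-network battery model, and all numeric parameters (particle count, noise levels) are illustrative assumptions:

    ```python
    import numpy as np

    def particle_filter_soe(power_seq, volt_meas, volt_model, e_total,
                            n_particles=500, proc_std=0.002, meas_std=0.05, rng=0):
        """Minimal bootstrap particle filter for state-of-energy (SoE).
        SoE propagates as the integral of power over total energy capacity;
        volt_model(soe) maps SoE to an expected terminal voltage (a stand-in
        for a learned battery model)."""
        rng = np.random.default_rng(rng)
        soe = rng.uniform(0.9, 1.0, n_particles)  # initial belief
        estimates = []
        for p, v in zip(power_seq, volt_meas):
            # propagate: energy balance plus process noise
            soe = soe - p / e_total + rng.normal(0, proc_std, n_particles)
            # weight by agreement with the measured terminal voltage
            w = np.exp(-0.5 * ((v - volt_model(soe)) / meas_std) ** 2)
            w /= w.sum()
            estimates.append(np.sum(w * soe))  # weighted-mean estimate
            soe = soe[rng.choice(n_particles, n_particles, p=w)]  # resample
        return np.array(estimates)
    ```

    The resampling step keeps the particle cloud concentrated where the voltage measurements, rather than the noisy power integral alone, indicate the true SoE lies.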

  6. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals.

    Science.gov (United States)

    Sartor, Francesco; Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A; Bonomi, Alberto G; Moore, Jonathan P; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated the test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase normalized for height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min⁻¹ and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized for height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min⁻¹ and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45-s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included.

  7. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals.

    Directory of Open Access Journals (Sweden)

    Francesco Sartor

    Full Text Available Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated the test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase normalized for height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min⁻¹ and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized for height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min⁻¹ and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45-s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included.

  8. Uncertainties in neural network model based on carbon dioxide concentration for occupancy estimation

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Azimil Gani; Rahman, Haolia; Kim, Jung-Kyung; Han, Hwataik [Kookmin University, Seoul (Korea, Republic of)

    2017-05-15

    Demand control ventilation is employed to save energy by adjusting the airflow rate according to the ventilation load of a building. This paper investigates a method for occupancy estimation using a dynamic neural network model based on carbon dioxide concentration in an occupied zone. The method can be applied to most commercial and residential buildings where human effluents are to be ventilated. The indoor simulation program CONTAMW is used to generate indoor CO₂ data corresponding to various occupancy schedules and airflow patterns to train neural network models. Coefficients of variation are obtained depending on the complexities of the physical parameters as well as the system parameters of the neural networks, such as the numbers of hidden neurons and tapped delay lines. We intend to identify the uncertainties caused by the model parameters themselves, by excluding uncertainties in input data inherent in measurement. Our results show that estimation accuracy is highly influenced by the frequency of occupancy variation but not significantly influenced by fluctuation in the airflow rate. Furthermore, we discuss the applicability and validity of the present method based on passive environmental conditions for estimating occupancy in a room from the viewpoint of demand control ventilation applications.
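    The tapped delay lines mentioned in this record can be illustrated by how a dynamic network's input vectors are assembled from the CO₂ series. A minimal sketch; the one-row-per-time-step layout is an assumption about a typical dynamic-network setup, not CONTAMW's or the paper's exact format:

    ```python
    import numpy as np

    def tapped_delay_features(co2, n_delays):
        """Build input rows [co2[t], co2[t-1], ..., co2[t-n_delays]] so a
        static network can see recent CO2 dynamics (a tapped delay line).
        Returns an array of shape (len(co2) - n_delays, n_delays + 1)."""
        rows = [co2[i - n_delays:i + 1][::-1] for i in range(n_delays, len(co2))]
        return np.array(rows)
    ```

    Increasing `n_delays` is the system-parameter knob the study varies: more taps give the model a longer view of the CO₂ transient at the cost of more inputs to fit.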

  9. Validity and reliability of central blood pressure estimated by upper arm oscillometric cuff pressure.

    Science.gov (United States)

    Climie, Rachel E D; Schultz, Martin G; Nikolic, Sonja B; Ahuja, Kiran D K; Fell, James W; Sharman, James E

    2012-04-01

    Noninvasive central blood pressure (BP) independently predicts mortality, but current methods are operator-dependent, requiring skill to obtain quality recordings. The aims of this study were, first, to determine the validity of an automatic, upper arm oscillometric cuff method for estimating central BP (O(CBP)) by comparison with the noninvasive reference standard of radial tonometry (T(CBP)), and second, to determine the intratest and intertest reliability of O(CBP). To assess validity, central BP was estimated by O(CBP) (Pulsecor R6.5B monitor) and compared with T(CBP) (SphygmoCor) in 47 participants free from cardiovascular disease (aged 57 ± 9 years) in supine, seated, and standing positions. Brachial mean arterial pressure (MAP) and diastolic BP (DBP) from the O(CBP) device were used to calibrate both devices. Duplicate measures were recorded in each position on the same day to assess intratest reliability, and participants returned within 10 ± 7 days for repeat measurements to assess intertest reliability. There was a strong intraclass correlation (ICC = 0.987, P < 0.001) and a small mean difference (1.2 ± 2.2 mm Hg) for central systolic BP (SBP) determined by O(CBP) compared with T(CBP). Ninety-six percent of all comparisons (n = 495 acceptable recordings) were within 5 mm Hg. With respect to reliability, there were strong correlations but higher limits of agreement for the intratest (ICC = 0.975, P < 0.001; mean difference 0.6 ± 4.5 mm Hg) and intertest (ICC = 0.895, P < 0.001; mean difference 4.3 ± 8.0 mm Hg) comparisons. Estimation of central SBP using cuff oscillometry is comparable to radial tonometry and has good reproducibility. As a noninvasive, relatively operator-independent method, O(CBP) may be as useful as T(CBP) for estimating central BP in clinical practice.

  10. Development and Statistical Validation of Spectrophotometric Methods for the Estimation of Nabumetone in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    A. R. Rote

    2010-01-01

    Full Text Available Three new simple, economic spectrophotometric methods were developed and validated for the estimation of nabumetone in bulk and tablet dosage form. The first method determined nabumetone at its absorption maximum of 330 nm; the second applied area-under-curve analysis in the wavelength range 326-334 nm; and the third used first-order derivative spectra with a scaling factor of 4. Beer's law was obeyed in the concentration range of 10-30 μg/mL for all three methods. The correlation coefficients were found to be 0.9997, 0.9998 and 0.9998 for absorption maximum, area under curve and first-order derivative spectra, respectively. Results of analysis were validated statistically and by performing recovery studies. The mean percent recoveries were found satisfactory for all three methods. The developed methods were also compared statistically using one-way ANOVA. The proposed methods have been successfully applied for the estimation of nabumetone in bulk and pharmaceutical tablet dosage form.
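    The calibration logic shared by all three methods is an ordinary least-squares line through the concentration-response points (Beer's law: absorbance proportional to concentration). A minimal sketch with made-up illustrative data, not the paper's measurements:

    ```python
    import numpy as np

    def beer_lambert_fit(conc, absorbance):
        """Least-squares calibration line A = m*C + b, plus the
        correlation coefficient r of the calibration points."""
        m, b = np.polyfit(conc, absorbance, 1)
        r = np.corrcoef(conc, absorbance)[0, 1]
        return m, b, r

    def estimate_concentration(absorbance, m, b):
        """Invert the calibration line for an unknown sample."""
        return (absorbance - b) / m
    ```

    The reported correlation coefficients (0.9997-0.9998) correspond to `r` computed this way over the 10-30 μg/mL calibration range.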

  11. Validation of Physical Activity Tracking via Android Smartphones Compared to ActiGraph Accelerometer: Laboratory-Based and Free-Living Validation Studies.

    Science.gov (United States)

    Hekler, Eric B; Buman, Matthew P; Grieco, Lauren; Rosenberger, Mary; Winter, Sandra J; Haskell, William; King, Abby C

    2015-04-15

    There is increasing interest in using smartphones as stand-alone physical activity monitors via their built-in accelerometers, but there is presently limited data on the validity of this approach. The purpose of this work was to determine the validity and reliability of 3 Android smartphones for measuring physical activity among midlife and older adults. A laboratory (study 1) and a free-living (study 2) protocol were conducted. In study 1, individuals engaged in prescribed activities including sedentary (eg, sitting), light (sweeping), moderate (eg, walking 3 mph on a treadmill), and vigorous (eg, jogging 5 mph on a treadmill) activity over a 2-hour period wearing both an ActiGraph and 3 Android smartphones (ie, HTC MyTouch, Google Nexus One, and Motorola Cliq). In the free-living study, individuals engaged in usual daily activities over 7 days while wearing an Android smartphone (Google Nexus One) and an ActiGraph. Study 1 included 15 participants (age: mean 55.5, SD 6.6 years; women: 56%, 8/15). Correlations between the ActiGraph and the 3 phones were strong to very strong (ρ=.77-.82). Further, after excluding bicycling and standing, cut-point derived classifications of activities yielded a high percentage of activities classified correctly according to intensity level (eg, 78%-91% by phone) that were similar to the ActiGraph's percent correctly classified (ie, 91%). Study 2 included 23 participants (age: mean 57.0, SD 6.4 years; women: 74%, 17/23). Within the free-living context, results suggested a moderate correlation between the phone and the ActiGraph (ie, ρ=.59, P<.001). Overall, these results suggest that an Android smartphone can provide comparable estimates of physical activity to an ActiGraph in both laboratory-based and free-living contexts for estimating sedentary and MVPA, and that different Android smartphones may reliably confer similar estimates.

  12. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    Jiang Xiaomo; Mahadevan, Sankaran

    2007-01-01

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment
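    The threshold comparison at the heart of the method above can be sketched as follows. The function name and the simple two-cost structure are illustrative assumptions, not the paper's full expected-risk formulation:

    ```python
    def bayes_validation_decision(lik_h0, lik_h1, prior_h0=0.5,
                                  cost_fa=1.0, cost_miss=1.0):
        """Bayes-risk-minimizing accept/reject for model validation:
        accept the model (H0) when the likelihood ratio of the data under
        H0 vs H1 exceeds the threshold implied by priors and decision costs.
        cost_fa: cost of falsely accepting an invalid model;
        cost_miss: cost of rejecting a valid model."""
        prior_h1 = 1.0 - prior_h0
        threshold = (cost_fa * prior_h1) / (cost_miss * prior_h0)
        ratio = lik_h0 / lik_h1
        return ratio > threshold, ratio, threshold
    ```

    Raising the cost of a false acceptance raises the threshold, which is how the method trades the risk of keeping a bad model against the cost of acquiring more validation data.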

  13. A validation methodology for fault-tolerant clock synchronization

    Science.gov (United States)

    Johnson, S. C.; Butler, R. W.

    1984-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating an experimental implementation of the Software Implemented Fault Tolerance (SIFT) clock synchronization algorithm. The design proof of the algorithm defines the maximum skew between any two nonfaulty clocks in the system in terms of theoretical upper bounds on certain system parameters. The quantile to which each parameter must be estimated is determined by a combinatorial analysis of the system reliability. The parameters are measured by direct and indirect means, and upper bounds are estimated. A nonparametric method based on an asymptotic property of the tail of a distribution is used to estimate the upper bound of a critical system parameter. Although the proof process is very costly, it is extremely valuable when validating the crucial synchronization subsystem.

  14. The validity of a web-based FFQ assessed by doubly labelled water and multiple 24-h recalls.

    Science.gov (United States)

    Medin, Anine C; Carlsen, Monica H; Hambly, Catherine; Speakman, John R; Strohmaier, Susanne; Andersen, Lene F

    2017-12-01

    The aim of this study was to validate the estimated habitual dietary intake from a newly developed web-based FFQ (WebFFQ), for use in an adult population in Norway. In total, ninety-two individuals were recruited. Total energy expenditure (TEE) measured by doubly labelled water was used as the reference method for energy intake (EI) in a subsample of twenty-nine women, and multiple 24-h recalls (24HR) were used as the reference method for the relative validation of macronutrients and food groups in the entire sample. Absolute differences, ratios, crude and deattenuated correlations, cross-classifications, Bland-Altman plots and plots between misreporting of EI (EI-TEE) and the relative misreporting of food groups (WebFFQ-24HR) were used to assess the validity. Results showed that EI on the group level was not significantly different from TEE measured by doubly labelled water (0·7 MJ/d), but ranking abilities were poor (r = -0·18). The relative validation showed an overestimation for the majority of the variables using absolute intakes, especially for the food groups 'vegetables' and 'fish and shellfish', but an improved agreement between the test and reference tool was observed for energy-adjusted intakes. Deattenuated correlation coefficients were between 0·22 and 0·89, and low levels of grossly misclassified individuals (0-3 %) were observed for the majority of the energy-adjusted variables for macronutrients and food groups. In conclusion, energy estimates from the WebFFQ should be used with caution, but the estimated absolute intakes on the group level and ranking abilities seem acceptable for macronutrients and most food groups.
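    The deattenuated correlations reported above correct the observed FFQ-recall correlation for day-to-day within-person variation in the 24-h recalls. A minimal sketch of the standard correction; the variance-ratio value in the test is made up for illustration:

    ```python
    import math

    def deattenuate(r_obs, n_replicates, var_ratio_within_between):
        """Deattenuate an observed test-reference correlation when the
        reference (e.g. 24-h recalls) is the mean of n replicate days:
        r_true = r_obs * sqrt(1 + (s_w^2 / s_b^2) / n),
        where s_w^2/s_b^2 is the within- to between-person variance ratio."""
        return r_obs * math.sqrt(1 + var_ratio_within_between / n_replicates)
    ```

    The correction always increases the correlation, which is why the deattenuated coefficients (0·22-0·89) exceed the crude ones.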

  15. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship.

    Science.gov (United States)

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate bench press 1 repetition maximum (1 RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a bench press lifting task in random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p < 0.001). It was concluded that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow estimation of 1 RM from the force-velocity relationship. These estimations are valid; however, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach in the monitoring of training-induced adaptations.
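    The extrapolation behind such estimates can be sketched as a linear fit of the individual load-velocity points. The `v_min` default and the example loads are illustrative assumptions; the Musclelab software's exact procedure is not described in the record:

    ```python
    import numpy as np

    def estimate_1rm(loads_kg, mean_velocities, v_min=0.0):
        """Fit the individual load-velocity line v = a + b*load and
        extrapolate to the velocity expected at 1 RM. v_min = 0 by
        default, though bench press studies often use ~0.15-0.17 m/s."""
        b, a = np.polyfit(loads_kg, mean_velocities, 1)  # slope, intercept
        return (v_min - a) / b
    ```

    Because the estimate rides on a two-parameter extrapolation, small errors in the measured velocities translate into the several-kilogram bias the study reports, consistent with its conclusion that the method suits monitoring better than intensity prescription.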

  16. Secretin-stimulated ultrasound estimation of pancreatic secretion in cystic fibrosis validated by magnetic resonance imaging

    International Nuclear Information System (INIS)

    Engjom, Trond; Dimcevski, Georg; Tjora, Erling; Wathle, Gaute; Erchinger, Friedemann; Laerum, Birger N.; Gilja, Odd H.; Haldorsen, Ingfrid Salvesen

    2018-01-01

    Secretin-stimulated magnetic resonance imaging (s-MRI) is the best validated radiological modality assessing pancreatic secretion. The purpose of this study was to compare volume output measures from secretin-stimulated transabdominal ultrasonography (s-US) to s-MRI for the diagnosis of exocrine pancreatic failure in cystic fibrosis (CF). We performed transabdominal ultrasonography and MRI before and at timed intervals during 15 minutes after secretin stimulation in 21 CF patients and 13 healthy controls. To clearly identify the subjects with reduced exocrine pancreatic function, we classified CF patients as pancreas-sufficient or -insufficient by secretin-stimulated endoscopic short test and faecal elastase. Pancreas-insufficient CF patients had reduced pancreatic secretions compared to pancreas-sufficient subjects based on both imaging modalities (p < 0.001). Volume output estimates assessed by s-US correlated to that of s-MRI (r = 0.56-0.62; p < 0.001). Both s-US (AUC: 0.88) and s-MRI (AUC: 0.99) demonstrated good diagnostic accuracy for exocrine pancreatic failure. Pancreatic volume-output estimated by s-US corresponds well to exocrine pancreatic function in CF patients and yields comparable results to that of s-MRI. s-US provides a simple and feasible tool in the assessment of pancreatic secretion. (orig.)

  17. Validity of fracture toughness determined with small bend specimens

    International Nuclear Information System (INIS)

    Wallin, K.; Rintamaa, R.; Valo, M.

    1994-02-01

    This report considers the validity of fracture toughness estimates obtained with small bend specimens in relation to fracture toughness estimates obtained with large specimens. The study is based upon the analysis and comparison of actual test results. The results prove the validity of the fracture toughness determined with small bend specimens, especially when the results are only used to determine the fracture toughness transition temperature T0. In this case the possible error is typically less than 5 °C and at most 10 °C. It can be concluded that small bend specimens are very suitable for the estimation of fracture toughness in the case of brittle fracture, provided the results are corrected for statistical size effects. (orig.). (20 refs., 17 figs.)

  18. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    International Nuclear Information System (INIS)

    Wang, Hesheng; Chen, Weidong; Xu, Lifei; He, Tao

    2015-01-01

    Highlights: • Vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Owing to its structural features and material properties, a flexible robot may vibrate during motion or under external disturbance; this vibration should be suppressed because it may affect positioning accuracy and image quality. In a Tokamak environment, real-time vibration information is needed to suppress vibration of the robotic arm; however, some sensors are not allowed in the extreme Tokamak environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which uses the environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera were carried out to validate the feasibility of the method.

  19. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Xu, Lifei; He, Tao [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-10-15

    Highlights: • Vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Owing to its structural features and material properties, a flexible robot may vibrate during motion or under external disturbance; this vibration should be suppressed because it may affect positioning accuracy and image quality. In a Tokamak environment, real-time vibration information is needed to suppress vibration of the robotic arm; however, some sensors are not allowed in the extreme Tokamak environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which uses the environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera were carried out to validate the feasibility of the method.
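    The short-time Fourier analysis described in both records can be sketched with a fixed-length Hann window; the paper's adaptive window length is simplified here to a fixed one, and the frame/hop sizes are illustrative:

    ```python
    import numpy as np

    def dominant_freq_track(signal, fs, win_len=256, hop=64):
        """Slide a Hann window over the signal, take the FFT of each frame,
        and pick the dominant frequency per frame - tracking how a
        non-stationary vibration's frequency evolves over time."""
        win = np.hanning(win_len)
        freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
        track = []
        for start in range(0, len(signal) - win_len + 1, hop):
            spec = np.abs(np.fft.rfft(signal[start:start + win_len] * win))
            spec[0] = 0.0  # ignore the DC component
            track.append(freqs[np.argmax(spec)])
        return np.array(track)
    ```

    A shorter window tracks fast frequency changes at the cost of frequency resolution, which is the trade-off the paper's adaptive window length is designed to manage.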

  20. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Science.gov (United States)

    Junkes, Monica C; Fraiz, Fabian C; Sardenberg, Fernanda; Lee, Jessica Y; Paiva, Saul M; Ferreira, Fernanda M

    2015-01-01

    The aim of the present study was to translate and cross-culturally adapt the Rapid Estimate of Adult Literacy in Dentistry into Brazilian Portuguese and to test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.

  1. ESTIMATING RELIABILITY OF DISTURBANCES IN SATELLITE TIME SERIES DATA BASED ON STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Z.-G. Zhou

    2016-06-01

    Full Text Available Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused for example by forest fires, floods, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods label the detection results only as "Change/No change", while few focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on confidence intervals (CI) and confidence levels (CL). The method was validated by estimating the reliability of disturbance regions caused by recent severe flooding around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
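The forecast-and-test step can be illustrated with a seasonal-naive stand-in for the BFAST model (a hedged Python sketch; the period, the synthetic series, and the 95% threshold are assumptions, not the authors' implementation):

```python
import math
import statistics

def disturbance_reliability(history, new_obs, period=12):
    """Flag a new observation as a disturbance and attach a confidence level.

    Forecast with a seasonal-naive model (same season, one period back),
    then convert the forecast error into a two-sided confidence level using
    the spread of the historical seasonal residuals.
    """
    n = len(history)
    forecast = history[n - period]
    residuals = [history[t] - history[t - period] for t in range(period, n)]
    sigma = statistics.stdev(residuals)
    z = abs(new_obs - forecast) / sigma
    confidence = math.erf(z / math.sqrt(2.0))   # P(|Z| <= z), Z ~ N(0, 1)
    return confidence > 0.95, confidence

# Four years of monthly data with a seasonal cycle plus mild variability
history = [math.sin(2 * math.pi * t / 12) + 0.05 * math.sin(0.7 * t)
           for t in range(48)]
flagged, level = disturbance_reliability(history, history[36] - 1.0)
```

A sudden drop of 1.0 relative to the seasonal forecast yields a confidence level near 1 and is flagged; an observation matching the forecast is not.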

  2. Polytomous diagnosis of ovarian tumors as benign, borderline, primary invasive or metastatic: development and validation of standard and kernel-based risk prediction models

    Directory of Open Access Journals (Sweden)

    Testa Antonia C

    2010-10-01

    Full Text Available Abstract Background Hitherto, risk prediction models for preoperative ultrasound-based diagnosis of ovarian tumors were dichotomous (benign versus malignant). We develop and validate polytomous models (models that predict more than two events) to diagnose ovarian tumors as benign, borderline, primary invasive or metastatic invasive. The main focus is on how different types of models perform and compare. Methods A multi-center dataset containing 1066 women was used for model development and internal validation, whilst another multi-center dataset of 1938 women was used for temporal and external validation. Models were based on standard logistic regression and on penalized kernel-based algorithms (least squares support vector machines and kernel logistic regression). We used true polytomous models as well as combinations of dichotomous models based on the 'pairwise coupling' technique to produce polytomous risk estimates. Careful variable selection was performed, based largely on cross-validated c-index estimates. Model performance was assessed with the dichotomous c-index (i.e. the area under the ROC curve), a polytomous extension of it, and calibration graphs. Results For all models, between 9 and 11 predictors were selected. Internal validation was successful, with polytomous c-indexes between 0.64 and 0.69. For the best model, dichotomous c-indexes were between 0.73 (primary invasive vs metastatic) and 0.96 (borderline vs metastatic). On temporal and external validation, overall discrimination performance was good, with polytomous c-indexes between 0.57 and 0.64. However, discrimination between primary and metastatic invasive tumors decreased to near random levels. Standard logistic regression performed well in comparison with advanced algorithms, and combining dichotomous models performed well in comparison with true polytomous models. The best model was a combination of dichotomous logistic regression models. This model is available online.
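The 'pairwise coupling' idea of combining dichotomous risk models into polytomous estimates can be sketched with the closed-form coupling rule of Price et al., which is exact when the pairwise probabilities are mutually consistent (illustrative Python; the class labels and probabilities are invented):

```python
def pairwise_coupling(r):
    """Combine pairwise probabilities r[(i, j)] = P(class i | class i or j)
    into K-class probabilities (closed-form rule of Price et al.; exact
    when the pairwise estimates are mutually consistent)."""
    classes = sorted({c for pair in r for c in pair})
    K = len(classes)
    raw = {}
    for i in classes:
        inv_sum = 0.0
        for j in classes:
            if j == i:
                continue
            # only (i, j) with i < j is stored; derive the reverse direction
            rij = r[(i, j)] if (i, j) in r else 1.0 - r[(j, i)]
            inv_sum += 1.0 / rij
        raw[i] = 1.0 / (inv_sum - (K - 2))
    total = sum(raw.values())
    return {i: v / total for i, v in raw.items()}
```

For the tumor problem, the classes would be benign, borderline, primary invasive and metastatic, with each r[(i, j)] produced by one dichotomous model; the paper's models may use a different coupling variant.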

  3. OWL-based reasoning methods for validating archetypes.

    Science.gov (United States)

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual model-based architecture in a single modeling framework that can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Validation of the transportation computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND

    International Nuclear Information System (INIS)

    Maheras, S.J.; Pippen, H.K.

    1995-05-01

    The computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND were used to estimate radiation doses from the transportation of radioactive material in the Department of Energy Programmatic Spent Nuclear Fuel Management and Idaho National Engineering Laboratory Environmental Restoration and Waste Management Programs Environmental Impact Statement. HIGHWAY and INTERLINE were used to estimate transportation routes for truck and rail shipments, respectively. RADTRAN 4 was used to estimate collective doses from incident-free transportation and the risk (probability × consequence) from transportation accidents. RISKIND was used to estimate incident-free radiation doses for maximally exposed individuals and the consequences from reasonably foreseeable transportation accidents. The purpose of this analysis is to validate the estimates made by these computer codes; critiques of the conceptual models used in RADTRAN 4 are also discussed. Validation is defined as ''the test and evaluation of the completed software to ensure compliance with software requirements.'' In this analysis, validation means that the differences between the estimates generated by these codes and independent observations are small (i.e., within the acceptance criterion established for the validation analysis). In some cases, the independent observations used in the validation were measurements; in other cases, they were generated using hand calculations. The results of the validation analyses performed for HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND show that the differences between the estimates generated using the computer codes and independent observations were small. Based on the acceptance criterion established for the validation analyses, the codes yielded acceptable results; in all cases the estimates met the requirements for successful validation.

  5. Reproducibility and relative validity of a food frequency questionnaire to estimate intake of dietary phylloquinone and menaquinones.

    NARCIS (Netherlands)

    Zwakenberg, S R; Engelen, A I P; Dalmeijer, G W; Booth, S L; Vermeer, C; Drijvers, J J M M; Ocke, M C; Feskens, E J M; van der Schouw, Y T; Beulens, J W J

    2017-01-01

    This study aims to investigate the reproducibility and relative validity of the Dutch food frequency questionnaire (FFQ) for estimating intake of dietary phylloquinone and menaquinones, compared with 24-h dietary recalls (24HDRs) and plasma markers of vitamin K status.

  6. Development and validation of a genotype 3 recombinant protein-based immunoassay for hepatitis E virus serology in swine

    Directory of Open Access Journals (Sweden)

    W.H.M. van der Poel

    2014-04-01

    Full Text Available Hepatitis E virus (HEV) is classified within the family Hepeviridae, genus Hepevirus. HEV genotype 3 (Gt3) infections are endemic in pigs in Western Europe and in North and South America and cause zoonotic infections in humans. Several serological assays to detect HEV antibodies in pigs have been developed, at first mainly based on HEV genotype 1 (Gt1) antigens. To develop a sensitive HEV Gt3 ELISA, a recombinant baculovirus expression product of HEV Gt3 open reading frame 2 was produced and coated onto polystyrene ELISA plates. After incubation of porcine sera, bound HEV antibodies were detected with anti-porcine anti-IgG and anti-IgM conjugates. For primary estimation of the sensitivity and specificity of the assay, sets of sera were used from pigs experimentally infected with HEV Gt3. For further validation of the assay and to set the cutoff value, a batch of 1100 pig sera was used. All pig sera were tested using the developed HEV Gt3 assay and two other serologic assays based on HEV Gt1 antigens. Since there is no gold standard available for HEV antibody testing, further validation and a definite setting of the cutoff of the developed HEV Gt3 assay were performed using a statistical approach based on Bayes' theorem. The developed and validated HEV antibody assay showed effective detection of HEV-specific antibodies. This assay can contribute to an improved detection of HEV antibodies and enable more reliable estimates of the prevalence of HEV Gt3 in swine in different regions.

  7. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    Energy Technology Data Exchange (ETDEWEB)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til [Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany) and Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany); Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany); Philips Healthcare, X-Ray Pre-Development, Veenpluis 4-6, 5684PC Best (Netherlands); Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany)

    2010-09-15

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application specific CFD results has not been thoroughly validated yet. Methods: In this work, by exploiting a phantom aneurysm model, the authors therefore aim to prove the reliability of the CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and the reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of time intensity curve comparisons from each selected characteristic position. The investigated input parameters show different influences on the simulation results, indicating the desired accuracy in the measurements. Conclusions: This study provides a comprehensive validation method of CFD simulation for reproducing the real flow field in the cerebral aneurysm phantom under well controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.
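The time-intensity-curve comparison can be sketched as a relative root-mean-square error between a measured and a simulated curve (a minimal Python sketch; normalising by the peak of the measured curve is an assumption, as the record does not state the normalisation used):

```python
import math

def relative_rmse(measured, simulated):
    """Relative RMSE between two equally sampled time-intensity curves,
    normalised by the peak of the measured curve."""
    n = len(measured)
    mse = sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n
    return math.sqrt(mse) / max(abs(m) for m in measured)
```

Under the paper's criterion, a simulated curve would agree with the angiographic measurement when this value stays below 0.15 at each characteristic position.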

  8. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    International Nuclear Information System (INIS)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til

    2010-01-01

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application specific CFD results has not been thoroughly validated yet. Methods: In this work, by exploiting a phantom aneurysm model, the authors therefore aim to prove the reliability of the CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and the reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of time intensity curve comparisons from each selected characteristic position. The investigated input parameters show different influences on the simulation results, indicating the desired accuracy in the measurements. Conclusions: This study provides a comprehensive validation method of CFD simulation for reproducing the real flow field in the cerebral aneurysm phantom under well controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.

  9. Are traditional body fat equations and anthropometry valid to estimate body fat in children and adolescents living with HIV?

    Science.gov (United States)

    Lima, Luiz Rodrigo Augustemak de; Martins, Priscila Custódio; Junior, Carlos Alencar Souza Alves; Castro, João Antônio Chula de; Silva, Diego Augusto Santos; Petroski, Edio Luiz

    The aim of this study was to assess the validity of traditional anthropometric equations and to develop predictive equations of total body and trunk fat for children and adolescents living with HIV based on anthropometric measurements. Forty-eight children and adolescents of both sexes (24 boys) aged 7-17 years, living in Santa Catarina, Brazil, participated in the study. Dual-energy X-ray absorptiometry was used as the reference method to evaluate total body and trunk fat. Height, body weight, circumferences and triceps, subscapular, abdominal and calf skinfolds were measured. The traditional equations of Lohman and Slaughter were used to estimate body fat. Multiple regression models were fitted to predict total body fat (Model 1) and trunk fat (Model 2) using a backward selection procedure. Model 1 had an R² = 0.85 and a standard error of the estimate of 1.43. Model 2 had an R² = 0.80 and a standard error of the estimate of 0.49. The traditional equations of Lohman and Slaughter showed poor performance in estimating body fat in children and adolescents living with HIV. The prediction models using anthropometry provided reliable estimates and can be used by clinicians and healthcare professionals to monitor total body and trunk fat in children and adolescents living with HIV. Copyright © 2017 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.

  10. A PC-based signal validation system for nuclear power plants

    International Nuclear Information System (INIS)

    Erbay, A.S.; Upadhyaya, B.R.; Seker, S.

    1998-01-01

    In order to achieve the desired operating configuration in any process, the system conditions must be measured accurately. Examples of measurements are temperature, pressure, flow, level, motor current, vibration, etc. However, in order to operate within desired limits, it is important to know the reliability of plant measurements. Signal validation (SV) deals with this issue, and is defined as the detection, isolation and characterization of faulty signals. Also referred to as fault detection, signal validation checks inconsistencies among redundant measurements and estimates their expected values using other measurements and system models. The benefits of SV are both economic and safety related. Catastrophic signal failure can result in plant shutdown and lost revenue. Pre-catastrophic failure detection would therefore minimize plant downtime and increase plant availability. The control action taken depends primarily upon the information provided by the plant instruments. Thus, increased plant productivity and increased reliability of operator actions would result from the implementation of such a system. The purpose of this study is to investigate some of the existing signal validation methods by incremental improvements and to develop new modules. Each of the SV modules performs a specific task. The architecture consists of four modules, an information base and a system executive integrated with a graphical user interface (GUI). All the modules are used for validation during both steady-state and transient operating conditions. The entire system was developed in a PC framework under Microsoft Windows™. Some improvements were made in the structure of static data-driven models by incorporating one- and two-step regression. Kalman filtering is based on the use of a physical model of plant components and was implemented for a steam generator system in a nuclear power plant. This is applicable to both steady-state and transient operations. The system executive
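The Kalman-filtering module described above can be illustrated with a scalar filter that validates a signal by gating on the innovation (a hedged Python sketch, not the plant model used in the study; the random-walk process model and the noise and gate values are invented):

```python
import math

def validate_signal(measurements, x0=0.0, p0=1.0, q=1e-4, r=0.01, gate=4.0):
    """Scalar Kalman filter (random-walk process model) with an innovation
    gate: a measurement whose innovation exceeds `gate` standard deviations
    is flagged faulty and rejected, so the estimate coasts on the model."""
    x, p = x0, p0
    estimates, faults = [], []
    for z in measurements:
        p = p + q                         # predict (state transition F = 1)
        innov = z - x                     # innovation
        s = p + r                         # innovation variance
        if abs(innov) > gate * math.sqrt(s):
            faults.append(True)           # faulty sample: skip the update
        else:
            faults.append(False)
            k = p / s                     # Kalman gain
            x = x + k * innov
            p = (1.0 - k) * p
        estimates.append(x)
    return estimates, faults

# A steady 1.0 signal with one spurious spike at sample 10
meas = [1.0] * 20
meas[10] = 50.0
est, faults = validate_signal(meas)
```

The spike is isolated as faulty while the validated estimate stays near the true value, which is the essence of innovation-based fault detection.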

  11. Intelligence in Bali--A Case Study on Estimating Mean IQ for a Population Using Various Corrections Based on Theory and Empirical Findings

    Science.gov (United States)

    Rindermann, Heiner; te Nijenhuis, Jan

    2012-01-01

    A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…

  12. Vehicle Position Estimation Based on Magnetic Markers: Enhanced Accuracy by Compensation of Time Delays

    Directory of Open Access Journals (Sweden)

    Yeun-Sub Byun

    2015-11-01

    Full Text Available The real-time recognition of absolute (or relative) position and orientation on a network of roads is a core technology for fully automated or driving-assisted vehicles. This paper presents an empirical investigation of the design, implementation, and evaluation of a self-positioning system based on a magnetic marker reference sensing method for an autonomous vehicle. Specifically, the estimation accuracy of the magnetic sensing ruler (MSR) in the up-to-date estimation of the actual position was successfully enhanced by compensating for time delays in signal processing when detecting the vertical magnetic field (VMF) in an array of signals. In this study, the signal processing scheme was developed to minimize the effects of the distortion of measured signals when estimating the relative positional information based on magnetic signals obtained using the MSR. In other words, the center point in a 2D magnetic field contour plot corresponding to the actual position of magnetic markers was estimated by tracking the errors between pre-defined reference models and measured magnetic signals. The algorithm proposed in this study was validated by experimental measurements using a test vehicle on a pilot network of roads. From the results, the positioning error was found to be less than 0.04 m on average in an operational test.

  13. Estimation of monthly-mean daily global solar radiation based on MODIS and TRMM products

    International Nuclear Information System (INIS)

    Qin, Jun; Chen, Zhuoqi; Yang, Kun; Liang, Shunlin; Tang, Wenjun

    2011-01-01

    Global solar radiation (GSR) is required in a large number of fields. Many parameterization schemes have been developed to estimate it from routinely measured meteorological variables, since GSR is directly measured at only a limited number of stations. Even so, meteorological stations are sparse, especially in remote areas. Satellite signals (radiance at the top of the atmosphere in most cases) can be used to estimate spatially continuous GSR. However, many existing remote sensing products have a relatively coarse spatial resolution, and their inversion algorithms are too complicated to be mastered by experts in other research fields. In this study, an artificial neural network (ANN) is utilized to build the mathematical relationship between measured monthly-mean daily GSR and several high-level, publicly available remote sensing products, including Moderate Resolution Imaging Spectroradiometer (MODIS) monthly averaged land surface temperature (LST), the number of days in each month on which the LST retrieval was performed, the MODIS enhanced vegetation index, and Tropical Rainfall Measuring Mission (TRMM) monthly precipitation. After training, GSR estimates from this ANN are verified against ground measurements at 12 radiation stations. Then, comparisons are performed among three GSR estimates: the one presented in this study, a surface-data-based estimate, and a remote sensing product by the Japan Aerospace Exploration Agency (JAXA). Validation results indicate that the ANN-based method presented in this study can estimate monthly-mean daily GSR at a spatial resolution of about 5 km with high accuracy.

  14. Estimation of the flow resistances exerted in coronary arteries using a vessel length-based method.

    Science.gov (United States)

    Lee, Kyung Eun; Kwon, Soon-Sung; Ji, Yoon Cheol; Shin, Eun-Seok; Choi, Jin-Ho; Kim, Sung Joon; Shim, Eun Bo

    2016-08-01

    Flow resistances exerted in the coronary arteries are the key parameters for the image-based computer simulation of coronary hemodynamics. The resistances depend on the anatomical characteristics of the coronary system. A simple and reliable estimation of the resistances is a compulsory procedure to compute the fractional flow reserve (FFR) of stenosed coronary arteries, an important clinical index of coronary artery disease. The cardiac muscle volume reconstructed from computed tomography (CT) images has been used to assess the resistance of the feeding coronary artery (muscle volume-based method). In this study, we estimate the flow resistances exerted in coronary arteries by using a novel method. Based on a physiological observation that longer coronary arteries have more daughter branches feeding a larger mass of cardiac muscle, the method measures the vessel lengths from coronary angiogram or CT images (vessel length-based method) and predicts the coronary flow resistances. The underlying equations are derived from the physiological relation among flow rate, resistance, and vessel length. To validate the present estimation method, we calculate the coronary flow division over coronary major arteries for 50 patients using the vessel length-based method as well as the muscle volume-based one. These results are compared with the direct measurements in a clinical study. Further proving the usefulness of the present method, we compute the coronary FFR from the images of optical coherence tomography.
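The underlying relation among flow rate, resistance, and vessel length can be sketched as a proportional flow split (illustrative Python; the linear length-to-flow proportionality is a simplification of the paper's relation, and the branch names and numbers are invented):

```python
def coronary_flow_split(vessel_lengths, total_flow, perfusion_pressure):
    """Divide total coronary flow over branches in proportion to vessel
    length (a longer artery has more daughter branches feeding a larger
    muscle mass), then back out each branch resistance from R = dP / Q."""
    total_len = sum(vessel_lengths.values())
    flows = {name: total_flow * length / total_len
             for name, length in vessel_lengths.items()}
    resistances = {name: perfusion_pressure / q for name, q in flows.items()}
    return flows, resistances

# Hypothetical branch lengths (cm), total flow (mL/s), pressure (mmHg)
lengths = {'LAD': 8.0, 'LCx': 6.0, 'RCA': 6.0}
flows, res = coronary_flow_split(lengths, 4.0, 100.0)
```

These outlet resistances are exactly the boundary conditions a CFD solver would need to compute FFR for a stenosed branch.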

  15. A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations

    Directory of Open Access Journals (Sweden)

    Aranzazu Berbey Alvarez

    2015-04-01

    Full Text Available Passenger flow modeling and station dwelling time estimation are significant elements for railway mass transit planning, but system operators usually have limited information to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied to the estimation of the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers' preferences to assign a level of congestion to each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.
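A triangular-membership fuzzy rule base with weighted-average defuzzification, one common way to realise such an estimator, can be sketched as follows (the membership breakpoints and dwell-time consequents are invented for illustration, not the Line 1 calibration):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dwell_time(passengers):
    """Mamdani-style rules mapping platform passenger load to a dwell time (s),
    defuzzified by the weighted average of the rule consequents."""
    rules = [
        (tri(passengers, -1, 0, 60), 20.0),      # low load    -> 20 s
        (tri(passengers, 30, 90, 150), 30.0),    # medium load -> 30 s
        (tri(passengers, 120, 200, 400), 45.0),  # high load   -> 45 s
    ]
    num = sum(w * t for w, t in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 20.0
```

Loads between the rule peaks blend smoothly between the consequent dwell times, which is what makes the fuzzy estimator robust to the operators' limited flow information.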

  16. Model-based verification and validation of the SMAP uplink processes

    Science.gov (United States)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increase geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between designing the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes allow, by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  17. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship

    Science.gov (United States)

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) for estimating bench press 1 repetition maximum (1 RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a bench press lifting task in random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab's software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93, p < 0.001) but largely different (bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1 RM. It was concluded that 1 RM estimated from the force-velocity relationship is a good measure for monitoring training-induced adaptations, but that it is not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow estimation of 1 RM from the force-velocity relationship. These estimates are valid. However, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach for monitoring training-induced adaptations. PMID:24149641
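Estimating 1 RM from the force-velocity relationship can be sketched as a least-squares line through (velocity, load) points extrapolated to a minimal velocity at 1 RM (illustrative Python; the minimal-velocity value of 0.17 m/s and the data are assumptions, and the Musclelab software may proceed differently):

```python
def estimate_1rm(loads, velocities, v_1rm=0.17):
    """Fit a least-squares line load = a * velocity + b through the
    force-velocity test points, then extrapolate to the minimal mean
    velocity v_1rm assumed achievable at 1 RM."""
    n = len(loads)
    mv = sum(velocities) / n
    ml = sum(loads) / n
    sxx = sum((v - mv) ** 2 for v in velocities)
    sxy = sum((v - mv) * (l - ml) for v, l in zip(velocities, loads))
    slope = sxy / sxx              # kg per (m/s); negative for bench press
    intercept = ml - slope * mv
    return slope * v_1rm + intercept

# Hypothetical submaximal test: loads (kg) and mean concentric velocities (m/s)
loads = [80.0, 70.0, 60.0, 50.0]
vels = [0.4, 0.6, 0.8, 1.0]
```

On these synthetic collinear points the fit recovers the generating line exactly; real force-velocity data scatter around it, which is one source of the ±11.2 kg limits of agreement reported above.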

  18. Estimating population cause-specific mortality fractions from in-hospital mortality: validation of a new method.

    Directory of Open Access Journals (Sweden)

    Christopher J L Murray

    2007-11-01

    Full Text Available Cause-of-death data for many developing countries are not available. Information on deaths in hospital by cause is available in many low- and middle-income countries but is not a representative sample of deaths in the population. We propose a method to estimate population cause-specific mortality fractions (CSMFs) using data already collected in many middle-income and some low-income developing nations, yet rarely used: in-hospital death records. For a given cause of death, a community's hospital deaths are equal to total community deaths multiplied by the proportion of deaths occurring in hospital. If we can estimate the proportion dying in hospital, we can estimate the proportion dying in the population using deaths in hospital. We propose to estimate the proportion of deaths for an age, sex, and cause group that die in hospital from the subset of the population where vital registration systems function, or from another population. We evaluated our method using nearly complete vital registration (VR) data from Mexico 1998-2005, which record whether a death occurred in a hospital. In this validation test, we used 45 disease categories. We validated our method in two ways: nationally and between communities. First, we investigated how the method's accuracy changes as we decrease the amount of Mexican VR data used to estimate the proportion of each age, sex, and cause group dying in hospital. Decreasing the VR data used for this first step from 100% to 9% produces only a 12% maximum relative error between estimated and true CSMFs. Even if Mexico collected full VR information only in its capital city, with 9% of its population, our estimation method would produce an average relative error in CSMFs across the 45 causes of just over 10%. Second, we used VR data for the capital zone (Distrito Federal and Estado de Mexico) and estimated CSMFs for the three lowest-development states. Our estimation method gave an average relative error of 20%, 23%, and 31% for
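The core identity, population deaths for a cause equal hospital deaths divided by the proportion of that cause's deaths occurring in hospital, can be sketched as follows (illustrative Python; the cause labels, counts and proportions are invented):

```python
def estimate_csmf(hospital_deaths, p_hospital):
    """Scale each cause's in-hospital death count by the (externally
    estimated) proportion of deaths from that cause occurring in hospital,
    then normalise to cause-specific mortality fractions (CSMFs)."""
    population_deaths = {cause: hospital_deaths[cause] / p_hospital[cause]
                         for cause in hospital_deaths}
    total = sum(population_deaths.values())
    return {cause: d / total for cause, d in population_deaths.items()}

# Hypothetical hospital death counts and in-hospital proportions (from VR)
hosp = {'ihd': 300, 'injury': 50, 'cancer': 150}
p_hosp = {'ihd': 0.6, 'injury': 0.25, 'cancer': 0.75}
csmf = estimate_csmf(hosp, p_hosp)
```

Note how injuries, which rarely reach hospital, are inflated far more than cancers; this is exactly the correction that makes raw hospital data usable.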

  19. Entropy Evaluation Based on Value Validity

    Directory of Open Access Journals (Sweden)

    Tarald O. Kvålseth

    2014-09-01

    Full Text Available Besides its importance in statistical physics and information theory, the Boltzmann-Shannon entropy S has become one of the most widely used and misused summary measures of various attributes (characteristics in diverse fields of study. It has also been the subject of extensive and perhaps excessive generalizations. This paper introduces the concept and criteria for value validity as a means of determining if an entropy takes on values that reasonably reflect the attribute being measured and that permit different types of comparisons to be made for different probability distributions. While neither S nor its relative entropy equivalent S* meet the value-validity conditions, certain power functions of S and S* do to a considerable extent. No parametric generalization offers any advantage over S in this regard. A measure based on Euclidean distances between probability distributions is introduced as a potential entropy that does comply fully with the value-validity requirements and its statistical inference procedure is discussed.
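The quantities discussed above are easy to compute. The sketch below shows the Boltzmann-Shannon entropy S, its normalized relative form S*, and, as an illustrative stand-in for the paper's distance-based measure (whose exact formula the abstract does not give), the Euclidean distance between a distribution and the uniform distribution:

```python
import math

def shannon_entropy(p):
    """Boltzmann-Shannon entropy S = -sum p_i ln p_i (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def relative_entropy(p):
    """S* = S / ln n, scaled to [0, 1]; equals 1 for the uniform distribution."""
    return shannon_entropy(p) / math.log(len(p))

def euclidean_distance_from_uniform(p):
    """Illustrative distance-based quantity (NOT the paper's exact measure):
    Euclidean distance between p and the uniform distribution on n outcomes."""
    n = len(p)
    return math.sqrt(sum((pi - 1.0 / n) ** 2 for pi in p))

uniform = [0.25] * 4
peaked = [0.85, 0.05, 0.05, 0.05]
# the uniform distribution maximises S and sits at zero distance from itself
```

The value-validity question is precisely whether intermediate values of such measures track the attribute proportionally, not just whether the extremes are right.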

  20. An Optimal-Estimation-Based Aerosol Retrieval Algorithm Using OMI Near-UV Observations

    Science.gov (United States)

    Jeong, U; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.

    2016-01-01

    An optimal-estimation (OE)-based aerosol retrieval algorithm using OMI (Ozone Monitoring Instrument) near-ultraviolet observations was developed in this study. The OE-based algorithm has the merit of providing useful estimates of errors simultaneously with the inversion products. Furthermore, instead of using the traditional lookup tables for inversion, it performs online radiative transfer calculations with the VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. Detailed advantages of using the OE method are described and discussed in this paper.
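The key merit named above, that optimal estimation delivers an error estimate alongside the retrieval, can be seen in the simplest possible case: a linear forward model with a single scalar state. The real algorithm iterates with VLIDORT-computed Jacobians; the sketch below is only the one-step linear MAP update, and every number in it is invented:

```python
def oe_update(y, k, x_a, s_a, s_e):
    """One linear optimal-estimation (MAP) step for a scalar state x
    with forward model y = k * x + noise.

    y   : measurement
    k   : forward-model Jacobian (dy/dx)
    x_a : a priori state
    s_a : a priori variance
    s_e : measurement-noise variance

    Returns the retrieved state and its posterior variance, which is
    the 'estimated error' that comes for free with OE retrievals."""
    s_post = 1.0 / (k * k / s_e + 1.0 / s_a)        # posterior variance
    x_hat = x_a + s_post * (k / s_e) * (y - k * x_a)  # MAP estimate
    return x_hat, s_post

# invented numbers: measurement pulls the state from the prior toward y/k
x_hat, s_post = oe_update(y=1.2, k=2.0, x_a=0.5, s_a=0.25, s_e=0.01)
```

Because the posterior variance combines prior and measurement information, it is always smaller than the prior variance; how much smaller quantifies the information content of the measurement.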

  1. Policy and Validity Prospects for Performance-Based Assessment.

    Science.gov (United States)

    Baker, Eva L.; And Others

    1994-01-01

    This article describes performance-based assessment as expounded by its proponents, comments on these conceptions, reviews evidence regarding the technical quality of performance-based assessment, and considers its validity under various policy options. (JDD)

  2. Development and Validation of a Calculator for Estimating the Probability of Urinary Tract Infection in Young Febrile Children.

    Science.gov (United States)

    Shaikh, Nader; Hoberman, Alejandro; Hum, Stephanie W; Alberty, Anastasia; Muniz, Gysella; Kurs-Lasky, Marcia; Landsittel, Douglas; Shope, Timothy

    2018-06-01

    Accurately estimating the probability of urinary tract infection (UTI) in febrile preverbal children is necessary to appropriately target testing and treatment. To develop and test a calculator (UTICalc) that can first estimate the probability of UTI based on clinical variables and then update that probability based on laboratory results. Review of electronic medical records of febrile children aged 2 to 23 months who were brought to the emergency department of Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania. An independent training database comprising 1686 patients brought to the emergency department between January 1, 2007, and April 30, 2013, and a validation database of 384 patients were created. Five multivariable logistic regression models for predicting risk of UTI were trained and tested. The clinical model included only clinical variables; the remaining models incorporated laboratory results. Data analysis was performed between June 18, 2013, and January 12, 2018. Documented temperature of 38°C or higher in children aged 2 months to less than 2 years. With the use of culture-confirmed UTI as the main outcome, cutoffs for high and low UTI risk were identified for each model. The resultant models were incorporated into a calculation tool, UTICalc, which was used to evaluate medical records. A total of 2070 children were included in the study. The training database comprised 1686 children, of whom 1216 (72.1%) were female and 1167 (69.2%) white. The validation database comprised 384 children, of whom 291 (75.8%) were female and 200 (52.1%) white. Compared with the American Academy of Pediatrics algorithm, the clinical model in UTICalc reduced testing by 8.1% (95% CI, 4.2%-12.0%) and decreased the number of UTIs that were missed from 3 cases to none. 
Compared with empirically treating all children with a leukocyte esterase test result of 1+ or higher, the dipstick model in UTICalc would have reduced the number of treatment delays by 10.6% (95% CI
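The two-stage design described above (a clinical pre-test model whose probability is then updated with laboratory results) is a standard logistic-regression pattern. The sketch below uses entirely hypothetical coefficients and predictors, NOT the published UTICalc models, purely to show the mechanics of updating a pre-test probability on the log-odds scale:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def clinical_risk(age_lt_12mo, female, temp_c, other_fever_source):
    """Hypothetical clinical (pre-test) model; coefficients are invented."""
    z = (-3.0 + 0.8 * age_lt_12mo + 1.1 * female
         + 0.5 * (temp_c - 38.0) - 1.2 * other_fever_source)
    return logistic(z)

def dipstick_risk(pretest, leuk_esterase_pos):
    """Update the pre-test probability with a lab result by adding
    invented evidence weight on the log-odds scale."""
    z = math.log(pretest / (1.0 - pretest)) + 2.2 * leuk_esterase_pos
    return logistic(z)

p0 = clinical_risk(age_lt_12mo=1, female=1, temp_c=39.0, other_fever_source=0)
p1 = dipstick_risk(p0, leuk_esterase_pos=1)  # positive dipstick raises the risk
```

A calculator like this supports the testing/treatment cutoffs described in the abstract: test when the clinical probability crosses one threshold, treat when the lab-updated probability crosses another.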

  3. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods lack completeness: they operate at a single scale and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. Multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the method's effectiveness is demonstrated by validating a reactor model.

  4. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Insitute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-02-15

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.
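The procedure described above, fit a generalized Pareto distribution (GPD) to threshold exceedances and use the Akaike information criterion (AIC) to judge the threshold, can be sketched as follows. The grid-search fit, the candidate thresholds, and the synthetic exponential sample are all illustrative assumptions, not the paper's procedure:

```python
import math
import random

def gpd_loglik(exc, xi, sigma):
    """Log-likelihood of threshold exceedances under GPD(xi, sigma), xi > 0."""
    ll = 0.0
    for z in exc:
        t = 1.0 + xi * z / sigma
        if t <= 0.0:
            return float("-inf")
        ll += -math.log(sigma) - (1.0 / xi + 1.0) * math.log(t)
    return ll

def fit_gpd(exc):
    """Crude grid-search MLE; a real implementation would use an optimizer."""
    best_ll, best_xi, best_sigma = float("-inf"), None, None
    for i in range(1, 20):          # xi in 0.05 .. 0.95
        for j in range(1, 40):      # sigma in 0.1 .. 3.9
            xi, sigma = 0.05 * i, 0.1 * j
            ll = gpd_loglik(exc, xi, sigma)
            if ll > best_ll:
                best_ll, best_xi, best_sigma = ll, xi, sigma
    return best_ll, best_xi, best_sigma

def aic_for_threshold(sample, u):
    """AIC of a GPD fitted to the exceedances over threshold u (k = 2 params)."""
    exc = [x - u for x in sample if x > u]
    if len(exc) < 10:
        return float("inf")
    ll, _, _ = fit_gpd(exc)
    return 2 * 2 - 2 * ll

random.seed(1)
sample = [random.expovariate(1.0) for _ in range(500)]
aics = {u: aic_for_threshold(sample, u) for u in (0.5, 1.0, 1.5)}
best_u = min(aics, key=aics.get)   # candidate threshold with the lowest AIC
```

Note that exceedance sets differ between thresholds, which is why the paper computes the AIC "with the overall samples" before estimating the GPD; the sketch does not reproduce that refinement.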

  5. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    International Nuclear Information System (INIS)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup

    2015-01-01

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF

  6. Validation of Persian rapid estimate of adult literacy in dentistry.

    Science.gov (United States)

    Pakpour, Amir H; Lawson, Douglas M; Tadakamadla, Santosh K; Fridlund, Bengt

    2016-05-01

    The aim of the present study was to establish the psychometric properties of the Rapid Estimate of Adult Literacy in Dentistry-99 (REALD-99) in the Persian language for use in an Iranian population (IREALD-99). A total of 421 participants with a mean age of 28 years (59% male) were included in the study. Participants included those who were 18 years or older and those residing in Quazvin (a city close to Tehran), Iran. A forward-backward translation process was used for the IREALD-99. The Test of Functional Health Literacy in Dentistry (TOFHLiD) was also administered. The validity of the IREALD-99 was investigated by comparing IREALD-99 scores across categories of education and income levels. The correlation of the IREALD-99 with the TOFHLiD was also computed. A principal component analysis (PCA) was performed on the data to assess unidimensionality and a strong first factor. The Rasch mathematical model was used to evaluate the contribution of each item to the overall measure, and whether the data were invariant to differences in sex. Reliability was estimated with Cronbach's α and test-retest correlation. Cronbach's alpha for the IREALD-99 was 0.98, indicating strong internal consistency. The test-retest correlation was 0.97. IREALD-99 scores differed by education levels. IREALD-99 scores were positively related to TOFHLiD scores (ρ = 0.72, P < 0.01). In addition, the IREALD-99 showed positive correlation with self-rated oral health status (ρ = 0.31, P < 0.01) as evidence of convergent validity. The PCA indicated a strong first component, five times the strength of the second component and nine times the third. The empirical data were a close fit with the Rasch mathematical model. There was not a significant difference in scores with respect to income level (P = 0.09), and only the very lowest income level was significantly different (P < 0.01). The IREALD-99 exhibited excellent reliability on repeated administrations, as well as internal
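Cronbach's α, the internal-consistency statistic reported in records like the one above, is straightforward to compute from item scores: α = k/(k−1) · (1 − Σ var(item_i) / var(total)). The data below are invented solely to exercise the formula:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of item-score lists, one inner list per item,
    with scores aligned across respondents (sample variance, n-1)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # each respondent's total score across all items
    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1.0 - item_var_sum / var(totals))

# three invented items answered by four respondents
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8], [1, 2, 3, 4]])
```

Values near 1, like the 0.98 reported for the IREALD-99, indicate that the items vary together almost perfectly across respondents.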

  7. A Priori Implementation Effort Estimation for HW Design Based on Independent-Path Analysis

    DEFF Research Database (Denmark)

    Abildgren, Rasmus; Diguet, Jean-Philippe; Bomel, Pierre

    2008-01-01

    This paper presents a metric-based approach for estimating the hardware implementation effort (in terms of time) for an application in relation to the number of linear-independent paths of its algorithms. We exploit the relation between the number of edges and linear-independent paths in an algorithm and the corresponding implementation effort. We propose an adaptation of the concept of cyclomatic complexity, complemented with a correction function to take designers' learning curve and experience into account. Our experimental results, composed of a training and a validation phase, show that with the proposed approach it is possible to estimate the hardware implementation effort. This approach, part of our light design space exploration concept, is implemented in our framework "Design-Trotter" and offers a new type of tool that can help designers and managers to reduce the time-to-market factor.
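The base metric the paper adapts is McCabe's cyclomatic complexity, which counts linearly independent paths through a control-flow graph. The abstract does not give the paper's correction function for learning curve and experience, so the sketch below shows only the standard metric:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity V(G) = E - N + 2P: the number of
    linearly independent paths through a control-flow graph with E edges,
    N nodes, and P connected components."""
    return edges - nodes + 2 * components

# a straight-line program: one edge, two nodes -> a single path
# a simple if/else: four edges, four nodes -> two independent paths
straight = cyclomatic_complexity(edges=1, nodes=2)
if_else = cyclomatic_complexity(edges=4, nodes=4)
```

Each decision point adds one independent path, which is the quantity the paper correlates with hardware implementation effort.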

  8. Validation of the Maslach Burnout Inventory-Human Services Survey for Estimating Burnout in Dental Students.

    Science.gov (United States)

    Montiel-Company, José María; Subirats-Roig, Cristian; Flores-Martí, Pau; Bellot-Arcís, Carlos; Almerich-Silla, José Manuel

    2016-11-01

    The aim of this study was to examine the validity and reliability of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) as a tool for assessing the prevalence and level of burnout in dental students in Spanish universities. The survey was adapted from English to Spanish. A sample of 533 dental students from 15 Spanish universities and a control group of 188 medical students self-administered the survey online, using the Google Drive service. The test-retest reliability or reproducibility showed an Intraclass Correlation Coefficient of 0.95. The internal consistency of the survey was 0.922. Testing the construct validity showed two components with an eigenvalue greater than 1.5, which explained 51.2% of the total variance. Factor I (36.6% of the variance) comprised the items that estimated emotional exhaustion and depersonalization. Factor II (14.6% of the variance) contained the items that estimated personal accomplishment. The cut-off point for the existence of burnout achieved a sensitivity of 92.2%, a specificity of 92.1%, and an area under the curve of 0.96. Comparison of the total dental students sample and the control group of medical students showed significantly higher burnout levels for the dental students (50.3% vs. 40.4%). In this study, the MBI-HSS was found to be viable, valid, and reliable for measuring burnout in dental students. Since the study also found that the dental students suffered from high levels of this syndrome, these results suggest the need for preventive burnout control programs.

  9. Validation of the Female Sexual Function Index (FSFI) for web-based administration.

    Science.gov (United States)

    Crisp, Catrina C; Fellner, Angela N; Pauls, Rachel N

    2015-02-01

    Web-based questionnaires are becoming increasingly valuable for clinical research. The Female Sexual Function Index (FSFI) is the gold standard for evaluating female sexual function; yet, it has not been validated in this format. We sought to validate the FSFI for web-based administration. Subjects enrolled in a web-based research survey of sexual function from the general population were invited to participate in this validation study. The first 151 respondents were included. Validation participants completed the web-based version of the FSFI followed by a mailed paper-based version. Demographic data were collected for all subjects. Scores were compared using the paired t test and the intraclass correlation coefficient. One hundred fifty-one subjects completed both web- and paper-based versions of the FSFI. Those subjects participating in the validation study did not differ in demographics or FSFI scores from the remaining subjects in the general population study. Total web-based and paper-based FSFI scores were not significantly different (mean 20.31 and 20.29, respectively; p = 0.931). The six domains or subscales of the FSFI were similar when comparing web and paper scores. Finally, intraclass correlation analysis revealed a high degree of correlation between total and subscale scores (r = 0.848-0.943). Web-based administration of the FSFI is a valid alternative to the paper-based version.

  10. Validation of ecological state space models using the Laplace approximation

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte

    2017-01-01

    Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to lack of available algorithms and software. Model validation is often based on a naive...... for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations...... useful directions in which the model could be improved....

  11. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time

    OpenAIRE

    Martin, C. K.; Correa, J. B.; Han, H.; Allen, H. R.; Rood, J.; Champagne, C. M.; Gunturk, B. K.; Bray, G. A.

    2011-01-01

    Two studies are reported; a pilot study to demonstrate feasibility followed by a larger validity study. Study 1’s objective was to test the effect of two ecological momentary assessment (EMA) approaches that varied in intensity on the validity/accuracy of estimating energy intake with the Remote Food Photography Method (RFPM) over six days in free-living conditions. When using the RFPM, Smartphones are used to capture images of food selection and plate waste and to send the images to a server...

  12. The Validity of Value-Added Estimates from Low-Stakes Testing Contexts: The Impact of Change in Test-Taking Motivation and Test Consequences

    Science.gov (United States)

    Finney, Sara J.; Sundre, Donna L.; Swain, Matthew S.; Williams, Laura M.

    2016-01-01

    Accountability mandates often prompt assessment of student learning gains (e.g., value-added estimates) via achievement tests. The validity of these estimates has been questioned when performance on tests is low stakes for students. To assess the effects of motivation on value-added estimates, we assigned students to one of three test consequence…

  13. Mathematical modeling for corrosion environment estimation based on concrete resistivity measurement directly above reinforcement

    International Nuclear Information System (INIS)

    Lim, Young-Chul; Lee, Han-Seung; Noguchi, Takafumi

    2009-01-01

    This study aims to formulate a resistivity model whereby the concrete resistivity expressing the environment of steel reinforcement can be directly estimated and evaluated based on measurement immediately above reinforcement as a method of evaluating corrosion deterioration in reinforced concrete structures. It also aims to provide a theoretical ground for the feasibility of durability evaluation by electric non-destructive techniques with no need for chipping of cover concrete. This Resistivity Estimation Model (REM), which is a mathematical model using the mirror method, combines conventional four-electrode measurement of resistivity with geometric parameters including cover depth, bar diameter, and electrode intervals. This model was verified by estimation using this model at areas directly above reinforcement and resistivity measurement at areas unaffected by reinforcement in regard to the assessment of the concrete resistivity. Both results strongly correlated, proving the validity of this model. It is expected to be applicable to laboratory study and field diagnosis regarding reinforcement corrosion. (author)
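The REM model above builds on conventional four-electrode resistivity measurement. The REM formula itself (mirror method with cover depth, bar diameter, and electrode-interval parameters) is not given in the abstract, so the sketch below shows only the classic Wenner four-electrode relation that underlies such measurements, ρ = 2πaV/I, with invented readings:

```python
import math

def wenner_resistivity(voltage, current, spacing):
    """Apparent resistivity (ohm-m) from a Wenner four-electrode array:
    rho = 2 * pi * a * V / I, with equal electrode spacing a in metres,
    injected current I in amperes, and measured voltage V in volts."""
    return 2.0 * math.pi * spacing * voltage / current

# invented readings: 50 mV across the inner electrodes at 1 mA,
# with 5 cm electrode spacing
rho = wenner_resistivity(voltage=0.05, current=0.001, spacing=0.05)
```

In the paper's setting, measurements taken directly above a rebar are biased by its conductivity; REM's geometric correction recovers the concrete's own resistivity from such readings.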

  14. Process-based Cost Estimation for Ramjet/Scramjet Engines

    Science.gov (United States)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.

  15. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry – BREALD-30

    Science.gov (United States)

    Junkes, Monica C.; Fraiz, Fabian C.; Sardenberg, Fernanda; Lee, Jessica Y.; Paiva, Saul M.; Ferreira, Fernanda M.

    2015-01-01

    Objective The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. Methods After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. Results The BREALD-30 demonstrated good internal reliability. Cronbach’s alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent’s perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent’s perception regarding his/her child's oral health remained significant in the multivariate analysis. Conclusion The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil. PMID:26158724

  16. Long-term monitoring of endangered Laysan ducks: Index validation and population estimates 1998–2012

    Science.gov (United States)

    Reynolds, Michelle H.; Courtot, Karen; Brinck, Kevin W.; Rehkemper, Cynthia; Hatfield, Jeffrey

    2015-01-01

    Monitoring endangered wildlife is essential to assessing management or recovery objectives and learning about population status. We tested assumptions of a population index for endangered Laysan duck (or teal; Anas laysanensis) monitored using mark–resight methods on Laysan Island, Hawai’i. We marked 723 Laysan ducks between 1998 and 2009 and identified seasonal surveys through 2012 that met accuracy and precision criteria for estimating population abundance. Our results provide a 15-y time series of seasonal population estimates at Laysan Island. We found differences in detection among seasons and how observed counts related to population estimates. The highest counts and the strongest relationship between count and population estimates occurred in autumn (September–November). The best autumn surveys yielded population abundance estimates that ranged from 674 (95% CI = 619–730) in 2003 to 339 (95% CI = 265–413) in 2012. A population decline of 42% was observed between 2010 and 2012 after consecutive storms and Japan’s Tōhoku earthquake-generated tsunami in 2011. Our results show positive correlations between the seasonal maximum counts and population estimates from the same date, and support the use of standardized bimonthly counts of unmarked birds as a valid index to monitor trends among years within a season at Laysan Island.
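The study above uses mark–resight estimation models that are more elaborate than can be shown here, but the underlying idea is the classic two-sample mark-recapture logic: the fraction of marked animals seen in a resight survey estimates the fraction of the population that is marked. The sketch below shows Chapman's bias-corrected Lincoln-Petersen estimator with invented counts, purely for intuition:

```python
def chapman_estimate(marked, resighted_total, resighted_marked):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator:
    N_hat = (M + 1)(C + 1) / (R + 1) - 1, where M animals are marked,
    C are observed in a later survey, and R of those carry marks."""
    return (marked + 1) * (resighted_total + 1) / (resighted_marked + 1) - 1

# invented survey: 100 marked birds, 80 birds seen later, 20 of them marked
n_hat = chapman_estimate(marked=100, resighted_total=80, resighted_marked=20)
```

The estimator assumes a closed population and equal sightability of marked and unmarked birds; the seasonal detection differences reported above are exactly the kind of violation that the study's validation was designed to check.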

  17. Convergent validity of ActiGraph and Actical accelerometers for estimating physical activity in adults

    DEFF Research Database (Denmark)

    Duncan, Scott; Stewart, Tom; Bo Schneller, Mikkel

    2018-01-01

    PURPOSE: The aim of the present study was to examine the convergent validity of two commonly-used accelerometers for estimating time spent in various physical activity intensities in adults. METHODS: The sample comprised 37 adults (26 males) with a mean (SD) age of 37.6 (12.2) years from San Diego, USA. Participants wore ActiGraph GT3X+ and Actical accelerometers for three consecutive days. Percent agreement was used to compare time spent within four physical activity intensity categories under three counts per minute (CPM) threshold protocols: (1) using thresholds developed specifically...... ActiGraph and Actical accelerometers provide significantly different estimates of time spent in various physical activity intensities. Regression and threshold adjustment were able to reduce these differences, although some level of non-agreement persisted. Researchers should be aware of the inherent limitations...
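The comparison above rests on classifying each epoch's counts per minute (CPM) into an intensity category and computing percent agreement between devices. The cut-points and CPM values below are hypothetical, not the study's calibrations:

```python
# hypothetical CPM cut-points (upper bounds), NOT the study's calibrations
CUTS = [(100, "sedentary"), (2020, "light"), (5999, "moderate")]

def intensity(cpm):
    """Classify a counts-per-minute value into an intensity category."""
    for upper, label in CUTS:
        if cpm <= upper:
            return label
    return "vigorous"

def percent_agreement(device_a, device_b):
    """Percent of paired epochs classified into the same category."""
    same = sum(1 for x, y in zip(device_a, device_b)
               if intensity(x) == intensity(y))
    return 100.0 * same / len(device_a)

# invented minute-by-minute CPM from two devices worn simultaneously
actigraph = [50, 300, 2500, 7000]
actical = [80, 90, 3000, 6500]
agree = percent_agreement(actigraph, actical)
```

Epochs near a cut-point (here the second minute) are where device-specific count scales drive disagreement, which is why the study explores regression and threshold adjustment.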

  18. Online Capacity Estimation of Lithium-Ion Batteries Based on Novel Feature Extraction and Adaptive Multi-Kernel Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2015-11-01

    Full Text Available Prognostics is necessary to ensure the reliability and safety of lithium-ion batteries for hybrid electric vehicles or satellites. This process can be achieved by capacity estimation, which is a direct fading indicator for assessing the state of health of a battery. However, the capacity of a lithium-ion battery is difficult to monitor onboard. This paper presents a data-driven approach for online capacity estimation. First, six novel features are extracted from cyclic charge/discharge cycles and used as indirect health indicators. An adaptive multi-kernel relevance vector machine (MKRVM) based on an accelerated particle swarm optimization algorithm is used to determine the optimal parameters of the MKRVM and characterize the relationship between the extracted features and battery capacity. The overall estimation process comprises offline and online stages. A supervised learning step in the offline stage is established for model verification to ensure the generalizability of the MKRVM for online application. Cross-validation is further conducted to validate the performance of the proposed model. Experiment and comparison results show the effectiveness, accuracy, efficiency, and robustness of the proposed approach for online capacity estimation of lithium-ion batteries.

  19. Validity of a food frequency questionnaire to estimate long-chain polyunsaturated fatty acid intake among Japanese women in early and late pregnancy.

    Science.gov (United States)

    Kobayashi, Minatsu; Jwa, Seung Chik; Ogawa, Kohei; Morisaki, Naho; Fujiwara, Takeo

    2017-01-01

    The relative validity of food frequency questionnaires for estimating long-chain polyunsaturated fatty acid (LC-PUFA) intake among pregnant Japanese women is currently unclear. The aim of this study was to verify the external validity of a food frequency questionnaire, originally developed for non-pregnant adults, to assess the dietary intake of LC-PUFA using dietary records and serum phospholipid levels among Japanese women in early and late pregnancy. A validation study involving 188 participants in early pregnancy and 169 participants in late pregnancy was conducted. LC-PUFA intake was estimated using a food frequency questionnaire and evaluated against a 3-day dietary record and serum phospholipid concentrations in both early and late pregnancy. The food frequency questionnaire provided estimates of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) intake with higher precision than dietary records in both early and late pregnancy. Significant correlations were observed for LC-PUFA intake estimated using dietary records in both early and late pregnancy, particularly for EPA and DHA (correlation coefficients ranged from 0.34 to 0.40). The food frequency questionnaire, which was originally designed for non-pregnant adults and was evaluated in this study against dietary records and biological markers, has good validity for assessing LC-PUFA intake, especially EPA and DHA intake, among Japanese women in early and late pregnancy. Copyright © 2016 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  20. Validation of an Innovative Satellite-Based UV Dosimeter

    Science.gov (United States)

    Morelli, Marco; Masini, Andrea; Simeone, Emilio; Khazova, Marina

    2016-08-01

    We present an innovative satellite-based UV (ultraviolet) radiation dosimeter with a mobile app interface that has been validated by exploiting both ground-based measurements and an in-vivo assessment of the erythemal effects on volunteers having a controlled exposure to solar radiation. Both validations showed that the satellite-based UV dosimeter has the good accuracy and reliability needed for health-related applications. The app with this satellite-based UV dosimeter also includes related functionalities such as the provision of safe sun exposure times updated in real time and a visual/sound alert at the end of exposure. This app will be launched on the global market by siHealth Ltd in May 2016 under the name "HappySun" and will be available for both Android and iOS devices (more info on http://www.happysun.co.uk). Extensive R&D activities are on-going to further improve the satellite-based UV dosimeter's accuracy.

  1. Development and validation of PediaTrac™: A web-based tool to track developing infants.

    Science.gov (United States)

    Lajiness-O'Neill, Renée; Brooks, Judith; Lukomski, Angela; Schilling, Stephen; Huth-Bocks, Alissa; Warschausky, Seth; Flores, Ana-Mercedes; Swick, Casey; Nyman, Tristin; Andersen, Tiffany; Morris, Natalie; Schmitt, Thomas A; Bell-Smith, Jennifer; Moir, Barbara; Hodges, Elise K; Lyddy, James E

    2018-02-01

    PediaTrac™, a 363-item web-based tool to track infant development, administered in modules of ∼40 items per sampling period (newborn (NB), 2, 4, 6, 9, and 12 months), was validated. Caregivers answered demographic, medical, and environmental questions, as well as questions covering the sensorimotor, feeding/eating, sleep, speech/language, cognition, social-emotional, and attachment domains. Expert Panel Reviews and Cognitive Interviews (CI) were conducted to validate the item bank. Classical Test Theory (CTT) and Item Response Theory (IRT) methods were employed to examine the dimensionality and psychometric properties of PediaTrac with pooled longitudinal and cross-sectional cohorts (N = 132). Intraclass correlation coefficients (ICC) for the Expert Panel Review revealed moderate agreement at 6 months and good reliability at the other sampling periods. ICC estimates for CI revealed moderate reliability regarding clarity of the items at NB and 4 months, good reliability at 2, 9, and 12 months, and excellent reliability at 6 months. CTT revealed good coefficient alpha estimates (α ≥ 0.77 for five of the six ages) for the Social-Emotional/Communication, Attachment (α ≥ 0.89 for all ages), and Sensorimotor (α ≥ 0.75 at 6 months) domains, revealing the need for better targeting of sensorimotor items. IRT modeling revealed good reliability (r = 0.85-0.95) for three distinct domains (Feeding/Eating, Social-Emotional/Communication, and Attachment) and four subdomains (Feeding Breast/Formula, Feeding Solid Food, Social-Emotional Information Processing, Communication/Cognition). Convergent and discriminant construct validity were demonstrated between our IRT-modeled domains and constructs derived from existing developmental, behavioral, and caregiver measures. Our Attachment domain was significantly correlated with existing measures at the NB and 2-month periods, while the Social-Emotional/Communication domain was highly correlated with

  2. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    Science.gov (United States)

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2018-02-01

    The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability-density-function-free change detection test, which is based on the least-squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured, by adopting a reservoir sampling mechanism. The thresholds required to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.
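The least-squares density-difference (LSDD) idea at the core of the test can be sketched as follows. This is a minimal, illustrative computation with Gaussian kernels and a fixed bandwidth and regularization constant, not the authors' implementation (reservoir sampling and automatic threshold derivation are omitted); a large score suggests the two windows come from different distributions.

```python
import numpy as np

def lsdd_change_score(x1, x2, sigma=1.0, lam=1e-3):
    """Least-squares density-difference score between two 1-D samples.

    Fits f(x) = sum_l theta_l * exp(-(x - c_l)^2 / (2 sigma^2)) to the
    density difference p1(x) - p2(x) by regularized least squares and
    returns a plug-in estimate of the squared L2 distance between the
    two densities."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    centers = np.concatenate([x1, x2])  # kernel centers at the data points
    d = 1  # dimensionality of the toy example

    # H[l, m] = integral of k(x, c_l) * k(x, c_m) dx (Gaussian closed form)
    diff = centers[:, None] - centers[None, :]
    H = (np.pi * sigma**2) ** (d / 2.0) * np.exp(-diff**2 / (4.0 * sigma**2))

    # h[l] = E_p1[k(x, c_l)] - E_p2[k(x, c_l)], estimated from the samples
    k1 = np.exp(-(x1[:, None] - centers[None, :]) ** 2 / (2.0 * sigma**2))
    k2 = np.exp(-(x2[:, None] - centers[None, :]) ** 2 / (2.0 * sigma**2))
    h = k1.mean(axis=0) - k2.mean(axis=0)

    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    # Plug-in estimate of the squared L2 norm of the density difference
    return float(2.0 * theta @ h - theta @ H @ theta)

rng = np.random.default_rng(0)
same = lsdd_change_score(rng.normal(0, 1, 100), rng.normal(0, 1, 100))
shifted = lsdd_change_score(rng.normal(0, 1, 100), rng.normal(3, 1, 100))
```

In the full test, this score would be compared against a threshold derived from the user-specified false positive rate.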

  3. Estimating the operator's performance time of emergency procedural tasks based on a task complexity measure

    International Nuclear Information System (INIS)

    Jung, Won Dae; Park, Jink Yun

    2012-01-01

    It is important to understand the amount of time required to execute an emergency procedural task in a high-stress situation for managing human performance under emergencies in a nuclear power plant. However, the time to execute an emergency procedural task is highly dependent upon expert judgment due to the lack of actual data. This paper proposes an analytical method to estimate the operator's performance time (OPT) of a procedural task, which is based on a measure of the task complexity (TACOM). The proposed method for estimating an OPT is an equation that uses the TACOM as a variable, and the OPT of a procedural task can be calculated if its relevant TACOM score is available. The validity of the proposed equation is demonstrated by comparing the estimated OPTs with the observed OPTs for emergency procedural tasks in a steam generator tube rupture scenario.

  4. Controlling for Frailty in Pharmacoepidemiologic Studies of Older Adults: Validation of an Existing Medicare Claims-based Algorithm.

    Science.gov (United States)

    Cuthbertson, Carmen C; Kucharska-Newton, Anna; Faurot, Keturah R; Stürmer, Til; Jonsson Funk, Michele; Palta, Priya; Windham, B Gwen; Thai, Sydney; Lund, Jennifer L

    2018-07-01

    Frailty is a geriatric syndrome characterized by weakness and weight loss and is associated with adverse health outcomes. It is often an unmeasured confounder in pharmacoepidemiologic and comparative effectiveness studies using administrative claims data. Among the Atherosclerosis Risk in Communities (ARIC) Study Visit 5 participants (2011-2013; n = 3,146), we conducted a validation study to compare a Medicare claims-based algorithm of dependency in activities of daily living (or dependency) developed as a proxy for frailty with a reference standard measure of phenotypic frailty. We applied the algorithm to the ARIC participants' claims data to generate a predicted probability of dependency. Using the claims-based algorithm, we estimated the C-statistic for predicting phenotypic frailty. We further categorized participants by their predicted probability of dependency (<5%, 5% to <20%, and ≥20%) and estimated associations with difficulties in physical abilities, falls, and mortality. The claims-based algorithm showed good discrimination of phenotypic frailty (C-statistic = 0.71; 95% confidence interval [CI] = 0.67, 0.74). Participants classified with a high predicted probability of dependency (≥20%) had higher prevalence of falls and difficulty in physical ability, and a greater risk of 1-year all-cause mortality (hazard ratio = 5.7 [95% CI = 2.5, 13]) than participants classified with a low predicted probability (<5%). Sensitivity and specificity varied across predicted probability of dependency thresholds. The Medicare claims-based algorithm showed good discrimination of phenotypic frailty and high predictive ability with adverse health outcomes. This algorithm can be used in future Medicare claims analyses to reduce confounding by frailty and improve study validity.
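The C-statistic reported above is the probability that a randomly chosen frail participant receives a higher predicted probability of dependency than a randomly chosen non-frail participant. A minimal sketch with entirely hypothetical data (not the ARIC sample):

```python
def c_statistic(scores, labels):
    """C-statistic (area under the ROC curve) for a continuous score
    against a binary outcome: the fraction of positive/negative pairs
    in which the positive case scores higher (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative cases")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities of dependency vs. observed frailty
probs = [0.02, 0.04, 0.10, 0.15, 0.25, 0.40, 0.08, 0.30]
frail = [0,    0,    0,    1,    1,    1,    0,    0]
auc = c_statistic(probs, frail)
```

A value near 0.71, as in the study, indicates good (though imperfect) discrimination.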

  5. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    Science.gov (United States)

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more widely disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxae of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxae of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. These results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimation corresponds to the traditional sectioning point used in osteological studies. DSP2 software replaces the former version, which should no longer be used. DSP2 is a robust, reliable, and user-friendly technique for sexing adult ossa coxae. © 2017 Wiley Periodicals, Inc.
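DSP2's decision rule, classify only when the posterior probability reaches 0.95 and otherwise report the sex as indeterminate, can be illustrated with a toy one-dimensional discriminant model. The real software uses up to ten measurements and a multivariate LDA fitted to the reference sample; every number below is hypothetical.

```python
import math

def lda_posterior(x, mean_f, mean_m, var, prior_f=0.5):
    """Posterior probability that a specimen is female under a
    one-dimensional Gaussian discriminant model with shared variance.
    A toy stand-in for DSP2's multivariate LDA."""
    def gauss(v, mu):
        return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    pf = prior_f * gauss(x, mean_f)
    pm = (1 - prior_f) * gauss(x, mean_m)
    return pf / (pf + pm)

def classify(x, mean_f, mean_m, var, threshold=0.95):
    """Sex estimate with a DSP2-style 0.95 posterior cut-off: when
    neither class reaches the threshold, return 'indeterminate'."""
    p = lda_posterior(x, mean_f, mean_m, var)
    if p >= threshold:
        return "F"
    if 1 - p >= threshold:
        return "M"
    return "indeterminate"
```

For example, with hypothetical class means of 100 (female) and 120 (male) and a shared variance of 25, a measurement of 110 falls exactly between the means and is left indeterminate rather than forced into a class.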

  6. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

    In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite-sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Building on these results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well even on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on international stock market indices, estimating the tail exponent over two separate periods, 2002-2005 and 2006-2009.
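The classical Hill estimator that the study takes as its starting point can be sketched as follows; the paper's Monte Carlo-based correction is not reproduced here, and the Pareto sample is a synthetic illustration.

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimator of the tail exponent alpha from the k largest
    order statistics of a positive-valued sample.  Small k gives high
    variance; large k gives bias -- the sensitivity the paper targets."""
    x = sorted((abs(v) for v in data), reverse=True)
    if not 0 < k < len(x):
        raise ValueError("k must satisfy 0 < k < n")
    return k / sum(math.log(x[i] / x[k]) for i in range(k))

random.seed(42)
# Pareto sample with true tail exponent alpha = 2 (inverse-CDF sampling)
sample = [random.random() ** (-1 / 2.0) for _ in range(10000)]
alpha_hat = hill_estimator(sample, k=500)
```

For an exact Pareto tail the estimate lands close to the true exponent; the paper's point is that for α-stable data the same formula is biased upward, motivating the Monte Carlo correction.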

  7. Size-based estimation of the status of fish stocks: simulation analysis and comparison with age-based estimations

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders

    …were investigated, and our estimations were compared to the ICES advice. Only size-specific catch data were used, in order to emulate data-limited situations. The simulation analysis reveals that the status of the stock, i.e. F/Fmsy, is estimated more accurately than the fishing mortality F itself. Specific knowledge of the natural mortality improves the estimation more than having information about all other life history parameters. Our approach gives, at least qualitatively, an estimated stock status which is similar to the results of an age-based assessment. Since our approach only uses size…

  8. Validation of a Web-Based Tool to Predict the Ipsilateral Breast Tumor Recurrence (IBTR! 2.0) after Breast-Conserving Therapy for Korean Patients.

    Science.gov (United States)

    Jung, Seung Pil; Hur, Sung Mo; Lee, Se Kyung; Kim, Sangmin; Choi, Min-Young; Bae, Soo Youn; Kim, Jiyoung; Kim, Min Kuk; Kil, Won Ho; Choe, Jun-Ho; Kim, Jung-Han; Kim, Jee Soo; Nam, Seok Jin; Bae, Jeoung Won; Lee, Jeong Eon

    2013-03-01

    IBTR! 2.0 is a web-based nomogram that predicts the 10-year ipsilateral breast tumor recurrence (IBTR) rate after breast-conserving therapy. We validated this nomogram in Korean patients. The nomogram was tested on 520 Korean patients who underwent breast-conserving surgery followed by radiation therapy. Predicted and observed 10-year outcomes were compared for the entire cohort and for each of four groups predefined by nomogram-predicted risk, ordered from lowest (group 1) to highest (group 4, predicted risk above 10%). In the overall cohort, the 10-year predicted and observed estimates of IBTR were 5.22% and 5.70% (p=0.68). In group 1 (n=124), the predicted and observed estimates were 2.25% and 1.80% (p=0.73); in group 2 (n=177), 3.95% and 3.90% (p=0.97); in group 3 (n=181), 7.14% and 8.80% (p=0.42); and in group 4 (n=38), 11.66% and 14.90% (p=0.73), respectively. In a previous validation of this nomogram based on American patients, nomogram-predicted IBTR rates were overestimated in the high-risk subgroup. However, our results based on Korean patients showed that the observed IBTR was higher than the predicted estimates in groups 3 and 4. This difference may arise from ethnic differences, as well as from the methods used to detect IBTR and the healthcare environment. IBTR! 2.0 may be considered an acceptable nomogram for Korean patients with a low to moderate risk of in-breast recurrence. Before widespread use, IBTR! 2.0 needs a larger validation study and continuous modification.

  9. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    Science.gov (United States)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.

  10. Republic of Georgia estimates for prevalence of drug use: Randomized response techniques suggest under-estimation.

    Science.gov (United States)

    Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C

    2018-06-01

    Validity of responses in surveys is an important research concern, especially in emerging market economies where surveys of the general population are a novelty and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate the population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. Our aim was to apply RRT and to study potential under-reporting of drug use in a nation-scale, population-based general population survey of alcohol and other drug use. For this first-ever household survey on addictive substances in the Country of Georgia, we used multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, RRT involved pairing of sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, an estimated 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs, such as heroin, also were larger than the standard self-report estimates. We remain unsure about the "true" prevalence of illegal psychotropic drug use in the Republic of Georgia study population. Our RRT results suggest that standard non-RRT approaches might produce under-estimates or, at best, highly conservative lower-end estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
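As an illustration of how an RRT design converts an observed "yes" rate into a prevalence estimate, the unrelated-question variant reduces to solving one linear equation. The design parameters below are hypothetical, not those of the Georgian survey, which paired sensitive and non-sensitive questions in its own way.

```python
def rrt_prevalence(yes_rate, p_sensitive, innocuous_rate):
    """Unrelated-question randomized response estimator.

    Each respondent answers the sensitive question with probability
    p_sensitive and otherwise an innocuous question whose true 'yes'
    rate (innocuous_rate) is known.  Solves
        yes_rate = p * pi + (1 - p) * innocuous_rate
    for the sensitive-behaviour prevalence pi."""
    if not 0 < p_sensitive <= 1:
        raise ValueError("p_sensitive must be in (0, 1]")
    pi = (yes_rate - (1 - p_sensitive) * innocuous_rate) / p_sensitive
    return min(max(pi, 0.0), 1.0)  # clamp to a valid proportion

# Hypothetical numbers: 40% overall 'yes' responses, sensitive question
# asked 70% of the time, innocuous question with a known 50% 'yes' rate
est = rrt_prevalence(0.40, 0.70, 0.50)
```

Because no interviewer can tell which question a given respondent answered, the design offers plausible deniability, which is why RRT estimates tend to exceed direct self-report estimates for stigmatized behaviors.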

  11. Age validation of canary rockfish (Sebastes pinniger) using two independent otolith techniques: lead-radium and bomb radiocarbon dating.

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, A H; Kerr, L A; Cailliet, G M; Brown, T A; Lundstrom, C C; Stanley, R D

    2007-11-04

    Canary rockfish (Sebastes pinniger) have long been an important part of recreational and commercial rockfish fishing from southeast Alaska to southern California, but localized stock abundances have declined considerably. Based on age estimates from otoliths and other structures, lifespan estimates vary from about 20 years to over 80 years. For the purpose of monitoring stocks, age composition is routinely estimated by counting growth zones in otoliths; however, age estimation procedures and lifespan estimates remain largely unvalidated. Typical age validation techniques have limited application for canary rockfish because they are deep dwelling and may be long lived. In this study, the unaged otolith of the pair from fish aged at the Department of Fisheries and Oceans Canada was used in one of two age validation techniques: (1) lead-radium dating and (2) bomb radiocarbon (¹⁴C) dating. Age estimate accuracy and the validity of age estimation procedures were evaluated based on the results from each technique. Lead-radium dating proved successful, establishing a minimum lifespan estimate of 53 years and providing support for age estimation procedures up to about 50-60 years. These findings were further supported by the Δ¹⁴C data, which indicated a minimum lifespan estimate of 44 ± 3 years. Both techniques validate, to differing degrees, the age estimation procedures and support the inference that canary rockfish can live more than 80 years.

  12. Are risk estimates biased in follow-up studies of psychosocial factors with low base-line participation?

    Directory of Open Access Journals (Sweden)

    Andersen Johan

    2011-07-01

    Abstract. Background: Low participation in population-based follow-up studies addressing psychosocial risk factors may cause biased estimation of health risk, but the issue has seldom been examined. We compared risk estimates for selected health outcomes among respondents and the entire source population. Methods: In a Danish cohort study of associations between psychosocial characteristics of the work environment and mental health, the source population of public service workers comprised 10,036 employees in 502 work units, of which 4,489 participated (participation rate 45%). Data on the psychosocial work environment were obtained for each work unit by calculating the average of the employee self-reports. The average values were assigned to all employees and non-respondents at the work unit. Outcome data on sick leave and prescription of antidepressant medication during the follow-up period (1.4.2007-31.12.2008) were obtained by linkage to national registries. Results: Respondents differed at baseline from non-respondents by gender, age, employment status, sick leave, and hospitalization for affective disorders. However, risk estimates for sick leave and prescription of antidepressant medication during follow-up, based on the subset of participants, differed only marginally from risk estimates based upon the entire population. Conclusions: We found no indications that low participation at baseline distorts the estimates of associations between the work-unit level of psychosocial work environment and mental health outcomes during follow-up. These results may not be valid for other exposures or outcomes.

  13. Validity and Practicality of Acid-Base Module Based on Guided Discovery Learning for Senior High School

    Science.gov (United States)

    Yerimadesi; Bayharti; Jannah, S. M.; Lufri; Festiyed; Kiram, Y.

    2018-04-01

    This Research and Development (R&D) study aims to produce a guided-discovery-learning-based module on the topic of acid-base chemistry and to determine its validity and practicality in learning. Module development used the Four-D (4-D) model (define, design, develop, and disseminate); this research was carried out through the development stage. The research instruments were validity and practicality questionnaires. The module was validated by five experts (three chemistry lecturers from Universitas Negeri Padang and two chemistry teachers from SMAN 9 Padang). The practicality test was conducted with two chemistry teachers and 30 students of SMAN 9 Padang. Cohen's kappa was used to analyze validity and practicality. The average kappa moment was 0.86 for validity, and those for practicality were 0.85 by teachers and 0.76 by students, all in the high category. It can be concluded that the module's validity and practicality were established for high school chemistry learning.

  14. Latency-Based and Psychophysiological Measures of Sexual Interest Show Convergent and Concurrent Validity.

    Science.gov (United States)

    Ó Ciardha, Caoilte; Attard-Johnson, Janice; Bindemann, Markus

    2018-04-01

    Latency-based measures of sexual interest require additional evidence of validity, as do newer pupil dilation approaches. A total of 102 community men completed six latency-based measures of sexual interest. Pupillary responses were recorded during three of these tasks and in an additional task where no participant response was required. For adult stimuli, there was a high degree of intercorrelation between measures, suggesting that tasks may be measuring the same underlying construct (convergent validity). In addition to being correlated with one another, measures also predicted participants' self-reported sexual interest, demonstrating concurrent validity (i.e., the ability of a task to predict a more validated, simultaneously recorded, measure). Latency-based and pupillometric approaches also showed preliminary evidence of concurrent validity in predicting both self-reported interest in child molestation and viewing pornographic material containing children. Taken together, the study findings build on the evidence base for the validity of latency-based and pupillometric measures of sexual interest.

  15. Estimating renal function in children: a new GFR-model based on serum cystatin C and body cell mass.

    Science.gov (United States)

    Andersen, Trine Borup

    2012-07-01

    This PhD thesis is based on four individual studies including 131 children aged 2-14 years with nephro-urologic disorders. The majority (72%) of children had normal renal function (GFR > 82 mL/min/1.73 square metres), and only 8% had reduced renal function. The thesis' main aims were: 1) to develop a more accurate GFR model based on a novel theory of body cell mass (BCM) and cystatin C (CysC); 2) to investigate its diagnostic performance in comparison to other models as well as serum CysC and creatinine alone; 3) to validate the new model's precision and validity. The model's diagnostic performance was investigated in study I as the ability to detect changes in renal function (total day-to-day variation), and in study IV as the ability to discriminate between normal and reduced function. The model's precision and validity were indirectly evaluated in studies II and III, and in study I accuracy was estimated by comparison to reference GFR. Several prediction models based on CysC or a combination of CysC and serum creatinine have been developed for predicting GFR in children. Despite these efforts to improve GFR estimates, no alternative to exogenous methods has been found, and Schwartz's formula, based on height, creatinine, and an empirically derived constant, is still recommended for GFR estimation in children. However, the inclusion of BCM as a possible variable in a CysC-based prediction model had not yet been explored. As CysC is produced at a constant rate by all nucleated cells, we hypothesize that including BCM in a new prediction model will increase the accuracy of the GFR estimate. Study I aimed at deriving the new GFR-prediction model based on the novel theory of CysC and BCM and comparing its performance to previously published models. The BCM-model took the form GFR (mL/min) = 10.2 × (BCM/CysC)^0.40 × (height × body surface area/Crea)^0.65. The model predicted 99% of estimates within ±30% of reference GFR, and 67% within ±10%. This was higher than any other model.
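Under the stated formula, the BCM-model is a one-line computation. The abstract does not restate the units of each input (they follow the original thesis), so the values below are purely illustrative, not clinical guidance.

```python
def bcm_gfr(bcm, cys_c, height, bsa, creatinine):
    """GFR (mL/min) from the thesis' BCM-model:
        GFR = 10.2 * (BCM / CysC)^0.40 * (height * BSA / Crea)^0.65
    Input units follow the original study and are not restated in the
    abstract; all example values here are arbitrary."""
    if min(bcm, cys_c, height, bsa, creatinine) <= 0:
        raise ValueError("all inputs must be positive")
    return 10.2 * (bcm / cys_c) ** 0.40 * (height * bsa / creatinine) ** 0.65

# Purely illustrative inputs
gfr = bcm_gfr(bcm=20.0, cys_c=0.9, height=140.0, bsa=1.2, creatinine=55.0)
```

Note the expected qualitative behavior: a higher cystatin C or creatinine (slower clearance) lowers the estimated GFR, while a larger body cell mass raises it.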

  16. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    Science.gov (United States)

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance, since it contributes to enhanced system reliability and availability. Moreover, knowledge about fault-mode behavior is extremely important for improving system protection and fault-tolerant control. Fault detection and diagnosis in squirrel-cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high-resolution spectral estimation techniques have been developed and used to detect abnormal operating conditions in induction machines. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure; specifically, it is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method is used to estimate the fundamental frequency and the fault-related frequencies. An amplitude estimator of the fault characteristic frequencies is then proposed, and a fault indicator is derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, generated by a coupled electromagnetic circuits approach with air-gap eccentricity emulating bearing faults. Experimental data are then used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Cross Validation of Rain Drop Size Distribution between GPM and Ground Based Polarmetric radar

    Science.gov (United States)

    Chandra, C. V.; Biswas, S.; Le, M.; Chen, H.

    2017-12-01

    Dual-frequency precipitation radar (DPR) on board the Global Precipitation Measurement (GPM) core satellite provides reflectivity measurements at two independent frequencies, Ku- and Ka-band. Dual-frequency retrieval algorithms have traditionally been developed through forward, backward, and recursive approaches. However, these algorithms suffer from the "dual-value" problem when retrieving the medium volume diameter from the dual-frequency ratio (DFR) in the rain region. To this end, a hybrid method has been proposed that performs raindrop size distribution (DSD) retrieval for GPM using a linear constraint on the DSD along the rain profile to avoid the "dual-value" problem (Le and Chandrasekar, 2015). In the current GPM level 2 algorithm (Iguchi et al., 2017, Algorithm Theoretical Basis Document), the Solver module retrieves a vertical profile of drop size distribution from dual-frequency observations and path-integrated attenuations; the algorithm details can be found in Seto et al. (2013). On the other hand, ground-based polarimetric radars have long been used to estimate drop size distributions (e.g., Gorgucci et al., 2002). In addition, coincident GPM and ground-based observations have been cross-validated using careful overpass analysis. In this paper, we perform cross validation of raindrop size distribution retrievals from three sources: the hybrid method, the standard products from the Solver module, and DSD retrievals from ground-based polarimetric radars. Results are presented from two NEXRAD radars located in Dallas-Fort Worth, Texas (the KFWS radar) and Melbourne, Florida (the KMLB radar). The results demonstrate the ability of DPR observations to produce DSD estimates, which can be used subsequently to generate global DSD maps. References: Seto, S., T. Iguchi, T. Oki, 2013: The basic performance of a precipitation retrieval algorithm for the Global Precipitation Measurement mission's single/dual-frequency radar measurements. IEEE Transactions on Geoscience and

  18. Risk Probability Estimating Based on Clustering

    DEFF Research Database (Denmark)

    Chen, Yong; Jensen, Christian D.; Gray, Elizabeth

    2003-01-01

    …of prior experiences, recommendations from a trusted entity, or the reputation of the other entity. In this paper we propose a dynamic mechanism for estimating the risk probability of a certain interaction in a given environment using hybrid neural networks. We argue that traditional risk assessment models from the insurance industry do not directly apply to ubiquitous computing environments. Instead, we propose a dynamic mechanism for risk assessment, which is based on pattern matching, classification, and prediction procedures. This mechanism uses an estimator of risk probability, which is based…

  19. Wechsler Adult Intelligence Scale-IV Dyads for Estimating Global Intelligence.

    Science.gov (United States)

    Girard, Todd A; Axelrod, Bradley N; Patel, Ronak; Crawford, John R

    2015-08-01

    All possible two-subtest combinations of the core Wechsler Adult Intelligence Scale-IV (WAIS-IV) subtests were evaluated as possible viable short forms for estimating full-scale IQ (FSIQ). Validity of the dyads was evaluated relative to FSIQ in a large clinical sample (N = 482) referred for neuropsychological assessment. Sample validity measures included correlations, mean discrepancies, and levels of agreement between dyad estimates and FSIQ scores. In addition, reliability and validity coefficients were derived from WAIS-IV standardization data. The Coding + Information dyad had the strongest combination of reliability and validity data. However, several other dyads yielded comparable psychometric performance, albeit with some variability in their particular strengths. We also observed heterogeneity between validity coefficients from the clinical and standardization-based estimates for several dyads. Thus, readers are encouraged to also consider the individual psychometric attributes, their clinical or research goals, and client or sample characteristics when selecting among the dyadic short forms. © The Author(s) 2014.

  20. Validity of selected cardiovascular field-based test among Malaysian ...

    African Journals Online (AJOL)

    Based on the emerging obesity problem among Malaysians, this research was formulated to validate published tests among healthy female adults. Selected tests, namely the 20-meter multi-stage shuttle run, the 2.4-km run test, the 1-mile walk test, and the Harvard step test, were correlated with a laboratory test (Bruce protocol) to find the criterion validity ...

  1. Implementing Estimation of Capacity for Freeway Sections

    Directory of Open Access Journals (Sweden)

    Chang-qiao Shao

    2011-01-01

    Based on the stochastic concept of freeway capacity, a procedure for capacity estimation is developed. Because it is impossible to observe the value of the capacity directly or to obtain its probability distribution, the product-limit method is used in this paper to estimate the capacity. In order to implement capacity estimation with this technique, a lifetime table based on statistical methods for lifetime data analysis is introduced and the corresponding procedure is developed. Simulated data based on freeway sections in Beijing, China, were analyzed, and the results indicate that the methodology and procedure are applicable and valid.
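The product-limit (Kaplan-Meier) idea behind the procedure treats flow rates at which traffic broke down as uncensored capacity observations and flow rates sustained without breakdown as censored ones. A simplified sketch, with hypothetical flow values, that ignores ties and the lifetime-table grouping used in the paper:

```python
def product_limit(observations):
    """Product-limit (Kaplan-Meier) estimate of the capacity
    distribution function.  `observations` is a list of
    (flow_rate, breakdown) pairs: breakdown=True marks an interval in
    which traffic broke down (an uncensored capacity observation),
    breakdown=False a censored one.  Returns a list of
    (flow, P(capacity <= flow)) points at each breakdown flow."""
    data = sorted(observations)
    n = len(data)
    survival = 1.0
    out = []
    for i, (flow, breakdown) in enumerate(data):
        at_risk = n - i  # observations with flow >= current (ties ignored)
        if breakdown:
            survival *= 1.0 - 1.0 / at_risk
            out.append((flow, 1.0 - survival))
    return out

# Hypothetical 5-minute intervals: flow in veh/h and whether breakdown followed
obs = [(1800, False), (1900, True), (2000, False), (2100, True), (2200, True)]
cdf = product_limit(obs)
```

The censored (non-breakdown) intervals still contribute by staying in the risk set, which is exactly why the product-limit method suits unobservable capacity.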

  2. High-global warming potential F-gas emissions in California: comparison of ambient-based versus inventory-based emission estimates, and implications of refined estimates.

    Science.gov (United States)

    Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy

    2014-01-21

    To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming-potential (GWP) fluorinated gases, the fastest growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emissions estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emissions estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based methodology resulting in a systematic 42% underestimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emissions estimates in closer agreement with ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.

  3. iBEST: a program for burnup history estimation of spent fuels based on ORIGEN-S

    International Nuclear Information System (INIS)

    Kim, Do Yeon; Hong, Ser Gi; Ahn, Gil Hoon

    2015-01-01

    In this paper, we describe a computer program, iBEST (inverse Burnup ESTimator), that we developed to accurately estimate the burnup histories of spent nuclear fuels based on sample measurement data. The burnup history parameters include initial uranium enrichment, burnup, cooling time after discharge from the reactor, and reactor type. The program uses algebraic equations derived from the simplified burnup chains of major actinides for initial estimations of burnup and uranium enrichment, and it uses the ORIGEN-S code to correct its initial estimations for improved accuracy. In addition, we developed a new, stable bisection method coupled with ORIGEN-S to correct burnup and enrichment values and implemented it in iBEST in order to fully take advantage of the new capabilities of ORIGEN-S for improving accuracy. The iBEST program was tested using several problems for verification and well-known realistic problems with measurement data from spent fuel samples from the Mihama-3 reactor for validation. The test results show that iBEST accurately estimates the burnup history parameters for the test problems and gives an acceptable level of accuracy for the realistic Mihama-3 problems.
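
    The stable bisection scheme coupled with ORIGEN-S is internal to iBEST, but the root-finding step it performs can be sketched generically: invert a forward model (here a hypothetical stand-in for an ORIGEN-S depletion run) that maps burnup to a measurable indicator, assuming the indicator is monotonic over the bracketing interval:

```python
def bisect(f, target, lo, hi, tol=1e-6, max_iter=100):
    """Find x in [lo, hi] with f(x) ~= target by bisection.

    Assumes f is monotonic on the bracket (as isotope-ratio indicators
    typically are over a plausible burnup range), so f(lo) - target and
    f(hi) - target have opposite signs.
    """
    f_lo = f(lo) - target
    if f_lo == 0.0:
        return lo
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = f(mid) - target
        if abs(f_mid) < tol or (hi - lo) < tol:
            return mid
        if (f_lo > 0) == (f_mid > 0):    # root lies in upper half
            lo, f_lo = mid, f_mid
        else:                            # root lies in lower half
            hi = mid
    return 0.5 * (lo + hi)
```

    Calling `bisect(model, measured_value, lo, hi)` with a depletion-code wrapper as `model` would return the burnup whose predicted indicator matches the measurement, which is the role the bisection plays inside iBEST.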

  4. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    Global stereo matching algorithms achieve high accuracy in disparity map estimation, but the optimization process remains time-consuming, especially for image pairs with high resolution and a large baseline setting. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed in this paper to estimate the disparity map of rectified stereo images. The projective geometry of a parallel binocular stereo vision system is investigated to reveal a relationship between the two disparities at each pixel in rectified stereo images with different baselines, which can be used to quickly obtain a predicted disparity map under a long baseline from the map estimated under a short one. The drastically reduced disparity range at each pixel under the long baseline setting can then be determined from the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into graph cuts with expansion moves to estimate the precise disparity map, which greatly reduces the computational cost without loss of accuracy in stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.

  5. Validation of a protocol for the estimation of three-dimensional body center of mass kinematics in sport.

    Science.gov (United States)

    Mapelli, Andrea; Zago, Matteo; Fusini, Laura; Galante, Domenico; Colombo, Andrea; Sforza, Chiarella

    2014-01-01

    Since it is strictly related to balance and stability control, body center of mass (CoM) kinematics is a relevant quantity in sport studies, and many methods have been proposed to estimate CoM displacement. Among them, the segmental method appears well suited to investigating CoM kinematics in sport: the human body is modeled as a system of rigid bodies, so the whole-body CoM is calculated as the weighted average of the CoMs of the individual segments. The number of landmarks is a crucial choice in the protocol design process: one has to find a proper compromise between accuracy and invasiveness. In this study, using a motion analysis system, a protocol based on the segmental method is validated, adopting an anatomical model comprising 14 landmarks. Two sets of experiments were conducted. First, our protocol was compared to the ground reaction force (GRF) method, regarded as a standard in CoM estimation. In the second experiment, we investigated the aerial phase typical of many disciplines, comparing our protocol with: (1) an absolute reference, the parabolic regression of the vertical CoM trajectory during the time of flight; and (2) two common approaches to estimating CoM kinematics in gait, known as the sacrum and reconstructed pelvis methods. Established accuracy indices showed that the results obtained were comparable to those of the GRF method; what is more, during the aerial phases our protocol proved to be significantly more accurate than the two other methods. The protocol assessed can therefore be adopted as a reliable tool for CoM kinematics estimation in further sport research. Copyright © 2013 Elsevier B.V. All rights reserved.
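
    Once each segment's CoM has been located from its landmarks, the segmental method reduces to a mass-weighted average. A minimal sketch; the segment names and mass fractions below are illustrative stand-ins roughly in the style of standard anthropometric tables, not the 14-landmark model validated in the study:

```python
# Hypothetical per-segment mass fractions (sum to 1 for a whole body);
# a real protocol would take these from anthropometric tables.
MASS_FRACTIONS = {"head": 0.081, "trunk": 0.497,
                  "arms": 0.100, "legs": 0.322}

def whole_body_com(segment_coms):
    """Whole-body CoM as the mass-weighted average of segment CoM
    positions, given as (x, y, z) tuples in metres."""
    m_total = sum(MASS_FRACTIONS[s] for s in segment_coms)
    return tuple(
        sum(MASS_FRACTIONS[s] * segment_coms[s][i] for s in segment_coms)
        / m_total
        for i in range(3)
    )
```

    Tracking this weighted average frame by frame gives the CoM trajectory that the protocol compares against the GRF-based reference.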

  6. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    Science.gov (United States)

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analysis, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of each body segment in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
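
    The prediction equations above are simple linear regressions of measured FFM on the BI index (length²/Z). A sketch of that fitting step with made-up example data; the study's own coefficients are not reproduced here:

```python
def bi_index(length_cm, impedance_ohm):
    """Bioelectrical impedance index, length^2 / Z."""
    return length_cm ** 2 / impedance_ohm

def fit_line(x, y):
    """Ordinary least-squares fit y ~= a*x + b, the form of the
    paper's prediction equations FFM = a * (length^2/Z) + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx
```

    In use, one would fit `a, b` on the model-development group and then apply `a * bi_index(L, Z) + b` to the cross-validation and overweight groups, as the study does.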

  7. Validation of a novel modified wall motion score for estimation of left ventricular ejection fraction in ischemic and non-ischemic cardiomyopathy

    Energy Technology Data Exchange (ETDEWEB)

    Scholl, David, E-mail: David.Scholl@utoronto.ca [Imaging Research Laboratories, Robarts Research Institute, London, Ontario (Canada); Kim, Han W., E-mail: hanwkim@gmail.com [Duke Cardiovascular Magnetic Resonance Center, Division of Cardiology, Duke University, NC (United States); Shah, Dipan, E-mail: djshah@tmhs.org [The Methodist DeBakey Heart Center, Houston, TX (United States); Fine, Nowell M., E-mail: nowellfine@gmail.com [Division of Cardiology, Department of Medicine, Schulich School of Medicine and Dentistry, University of Western Ontario (Canada); Tandon, Shruti, E-mail: standon4@uwo.ca [Division of Cardiology, Department of Medicine, Schulich School of Medicine and Dentistry, University of Western Ontario (Canada); Thompson, Terry, E-mail: thompson@lawsonimaging.ca [Lawson Health Research Institute, London, Ontario (Canada); Department of Medical Biophysics, University of Western Ontario, London, Ontario (Canada); Drangova, Maria, E-mail: mdrangov@imaging.robarts.ca [Imaging Research Laboratories, Robarts Research Institute, London, Ontario (Canada); Department of Medical Biophysics, University of Western Ontario, London, Ontario (Canada); White, James A., E-mail: jwhite@imaging.robarts.ca [Division of Cardiology, Department of Medicine, Schulich School of Medicine and Dentistry, University of Western Ontario (Canada); Lawson Health Research Institute, London, Ontario (Canada); Imaging Research Laboratories, Robarts Research Institute, London, Ontario (Canada)

    2012-08-15

    Background: Visual determination of left ventricular ejection fraction (LVEF) by segmental scoring may be a practical alternative to volumetric analysis of cine magnetic resonance imaging (MRI). The accuracy and reproducibility of this approach have not been described. The purpose of this study was to validate a novel segmental visual scoring method for LVEF estimation using cine MRI. Methods: 362 patients with known or suspected cardiomyopathy were studied. A modified wall motion score (mWMS) was used to blindly score the wall motion of all cardiac segments from cine MRI. The same datasets were subjected to blinded volumetric analysis using endocardial contour tracing. The population was then separated into a model cohort (N = 181) and a validation cohort (N = 181), with the former used to derive a regression equation of mWMS versus true volumetric LVEF. The validation cohort was then used to test the accuracy of this regression model in estimating the true LVEF from a visually determined mWMS. Reproducibility testing of mWMS scoring was performed on a randomly selected sample of 20 cases. Results: The regression equation relating mWMS to true LVEF in the model cohort was: LVEF = 54.23 − 0.5761 × mWMS. In the validation cohort this equation produced a strong correlation between mWMS-derived LVEF and true volumetric LVEF (r = 0.89). Bland-Altman analysis showed no systematic bias in the LVEF estimated using the mWMS (-0.3231%, 95% limits of agreement -12.22% to 11.58%). Inter-observer and intra-observer reproducibility was excellent (r = 0.93 and 0.97, respectively). Conclusion: The mWMS is a practical tool for reporting regional wall motion and provides reproducible estimates of LVEF from cine MRI.
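
    The reported regression makes the LVEF estimate a one-line computation from a visually determined mWMS, using the coefficients quoted in the abstract (whether they generalize beyond the study population is precisely what the validation cohort tests):

```python
def lvef_from_mwms(mwms):
    """Estimate LVEF (%) from the modified wall motion score using
    the study's model-cohort regression: LVEF = 54.23 - 0.5761 * mWMS."""
    return 54.23 - 0.5761 * mwms
```

    A score of 0 (all segments normal) corresponds to the regression intercept of 54.23%, and each additional point of mWMS lowers the estimate by 0.5761 percentage points.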

  8. Validation of SMAP Root Zone Soil Moisture Estimates with Improved Cosmic-Ray Neutron Probe Observations

    Science.gov (United States)

    Babaeian, E.; Tuller, M.; Sadeghi, M.; Franz, T.; Jones, S. B.

    2017-12-01

    Soil Moisture Active Passive (SMAP) soil moisture products are commonly validated based on point-scale reference measurements, despite the exorbitant spatial scale disparity. The difference between the measurement depth of point-scale sensors and the penetration depth of SMAP further complicates evaluation efforts. Cosmic-ray neutron probes (CRNP) with an approximately 500-m radius footprint provide an appealing alternative for SMAP validation. This study is focused on the validation of SMAP level-4 root zone soil moisture products with 9-km spatial resolution based on CRNP observations at twenty U.S. reference sites with climatic conditions ranging from semiarid to humid. The CRNP measurements are often biased by additional hydrogen sources such as surface water, atmospheric vapor, or mineral lattice water, which sometimes yield unrealistic moisture values in excess of the soil water storage capacity. These effects were removed during CRNP data analysis. Comparison of SMAP data with corrected CRNP observations revealed a very high correlation for most of the investigated sites, which opens new avenues for validation of current and future satellite soil moisture products.

  9. Remission, continuation, and incidence of eating disorders during early pregnancy: A validation study on a population-based birth cohort

    Science.gov (United States)

    Watson, Hunna J.; Von Holle, Ann; Hamer, Robert M.; Berg, Cecilie Knoph; Torgersen, Leila; Magnus, Per; Stoltenberg, Camilla; Sullivan, Patrick; Reichborn-Kjennerud, Ted; Bulik, Cynthia M.

    2014-01-01

    Background The objective of this study was to validate previously published rates of remission, continuation, and incidence of broadly defined eating disorders during pregnancy. The previous rate modeling by our group (Bulik et al. 2007) was based on participants enrolled halfway into recruitment of the planned 100,000 pregnancies in the Norwegian Mother and Child (MoBa) Cohort at the Norwegian Institute of Public Health. This study aimed to internally validate the findings with the completed cohort. Methods 77267 pregnant women enrolled at 17 weeks gestation between 2001 and 2009 were included. Participants were split into a “training” sample (n=41243) based on participants in the MoBa version 2 dataset of the original study and a “validation” sample (n=36024) comprising individuals in the MoBa version 5 dataset who were not in the original study (Bulik et al. 2007). Internal validation of all original rate models involved fitting a calibration model to compare model parameters between the “training” and “validation” samples, as well as bootstrap estimates of bias in the entire version 5 dataset. Results Remission, continuation, and incidence estimates from the “training” sample remained stable when evaluated via a split-sample validation procedure. Pre-pregnancy prevalence estimates in the “validation” sample were 0.1% for anorexia nervosa, 1.0% for bulimia nervosa (BN), 3.3% for binge eating disorder (BED), and 0.1% for purging disorder (EDNOS-P). In early pregnancy, estimates were 0.2% for BN and 4.8% for BED. Eating disorders during pregnancy were relatively common, occurring in nearly 1 in every 20 women, although almost all were cases of BED. Pregnancy was a window of remission from BN but a window of vulnerability for onset and continuation of BED. Training to detect the signs and symptoms of eating disorders by obstetricians/gynecologists and interventions to enhance pregnancy and neonatal outcomes

  10. Simulation Based Studies in Software Engineering: A Matter of Validity

    Directory of Open Access Journals (Sweden)

    Breno Bernard Nicolau de França

    2015-04-01

    Full Text Available Despite the possible lack of validity when compared with other science areas, Simulation-Based Studies (SBS) in Software Engineering (SE) have supported the achievement of some results in the field. However, as with any other sort of experimental study, it is important to identify and deal with threats to validity, aiming at increasing their strength and reinforcing confidence in the results. OBJECTIVE: To identify potential threats to SBS validity in SE and suggest ways to mitigate them. METHOD: To apply qualitative analysis to a dataset resulting from the aggregation of data from a quasi-systematic literature review combined with ad-hoc surveyed information regarding other science areas. RESULTS: The analysis of data extracted from 15 technical papers allowed the identification and classification of 28 different threats to validity concerning SBS in SE, according to Cook and Campbell’s categories. In addition, 12 verification and validation procedures applicable to SBS were also analyzed and organized according to their ability to detect these threats to validity. These results were used to make available an improved set of guidelines regarding the planning and reporting of SBS in SE. CONCLUSIONS: Simulation-based studies add different threats to validity when compared with traditional studies. These threats are not well observed, and therefore it is not easy to identify and mitigate all of them without explicit guidance, such as that depicted in this paper.

  11. Hybrid islanding detection method by using grid impedance estimation in parallel-inverters-based microgrid

    DEFF Research Database (Denmark)

    Ghzaiel, Walid; Jebali-Ben Ghorbal, Manel; Slama-Belkhodja, Ilhem

    2014-01-01

    This paper presents a hybrid islanding detection algorithm integrated on the distributed generation unit closest to the point of common coupling of a microgrid based on parallel inverters, one of which is responsible for controlling the system. The method is based on resonance excitation under...... parameters, both the resistive and inductive parts, from the injected resonance frequency determination. Finally, the inverter will disconnect the microgrid from the faulty grid and reconnect the parallel inverter system to the controllable distributed system in order to ensure high power quality. This paper...... shows that grid impedance variation detection estimation can be an efficient method for islanding detection in microgrid systems. Theoretical analysis and simulation results are presented to validate the proposed method....

  12. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    Directory of Open Access Journals (Sweden)

    Sekhar S Chandra

    2004-01-01

    Full Text Available We address the problem of estimating the instantaneous frequency (IF) of a real-valued constant-amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratios (SNR).
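
    The zero-crossing principle underlying the method can be illustrated simply: between two consecutive zero crossings of a sinusoid, half a period elapses, so the local frequency is the reciprocal of twice the crossing interval. The sketch below is only this elementary estimator, not the paper's polynomial-fit or adaptive-window algorithm:

```python
def zero_crossings(signal, fs):
    """Zero-crossing instants (seconds) of a sampled signal, located
    by linear interpolation between samples of opposite sign."""
    t = []
    for i in range(len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        if a == 0.0:
            t.append(i / fs)
        elif a * b < 0.0:
            # sub-sample crossing location by linear interpolation
            t.append((i + a / (a - b)) / fs)
    return t

def if_estimates(signal, fs):
    """Crude local IF estimates: half a period elapses between two
    consecutive zero crossings, so f ~= 1 / (2 * dt)."""
    z = zero_crossings(signal, fs)
    return [1.0 / (2.0 * (z[k + 1] - z[k])) for k in range(len(z) - 1)]
```

    For a pure 5 Hz sinusoid the crossing intervals are 0.1 s, so every local estimate lands near 5 Hz; the paper's contribution is choosing how many such crossings to pool (the window) when the frequency varies.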

  13. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Science.gov (United States)

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. 
(1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological

  14. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Directory of Open Access Journals (Sweden)

    Suzana Papile Maciel Carvalho

    2013-07-01

    Full Text Available Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. OBJECTIVE: This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. MATERIAL AND METHODS: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. RESULTS: The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. CONCLUSION: It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South

  15. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  16. Inertial Measurement Units-Based Probe Vehicles: Automatic Calibration, Trajectory Estimation, and Context Detection

    KAUST Repository

    Mousa, Mustafa

    2017-12-06

    Most probe vehicle data is generated using satellite navigation systems, such as the Global Positioning System (GPS), Globalnaya navigatsionnaya sputnikovaya Sistema (GLONASS), or Galileo systems. However, because of their high cost, relatively high position uncertainty in cities, and low sampling rate, a large quantity of satellite positioning data is required to estimate traffic conditions accurately. To address this issue, we introduce a new type of traffic monitoring system based on inexpensive inertial measurement units (IMUs) as probe sensors. IMUs as traffic probes pose unique challenges in that they need to be precisely calibrated, do not generate absolute position measurements, and their position estimates are subject to accumulating errors. In this paper, we address each of these challenges and demonstrate that the IMUs can reliably be used as traffic probes. After discussing the sensing technique, we present an implementation of this system using a custom-designed hardware platform, and validate the system with experimental data.

  17. Inertial Measurement Units-Based Probe Vehicles: Automatic Calibration, Trajectory Estimation, and Context Detection

    KAUST Repository

    Mousa, Mustafa; Sharma, Kapil; Claudel, Christian G.

    2017-01-01

    Most probe vehicle data is generated using satellite navigation systems, such as the Global Positioning System (GPS), Globalnaya navigatsionnaya sputnikovaya Sistema (GLONASS), or Galileo systems. However, because of their high cost, relatively high position uncertainty in cities, and low sampling rate, a large quantity of satellite positioning data is required to estimate traffic conditions accurately. To address this issue, we introduce a new type of traffic monitoring system based on inexpensive inertial measurement units (IMUs) as probe sensors. IMUs as traffic probes pose unique challenges in that they need to be precisely calibrated, do not generate absolute position measurements, and their position estimates are subject to accumulating errors. In this paper, we address each of these challenges and demonstrate that the IMUs can reliably be used as traffic probes. After discussing the sensing technique, we present an implementation of this system using a custom-designed hardware platform, and validate the system with experimental data.

  18. Regional evapotranspiration estimation based on a two-layer remote-sensing scheme in Shahe River basin

    International Nuclear Information System (INIS)

    Yin, Jian; Wang, Huixiao

    2014-01-01

    Land surface evapotranspiration (ET) derived from remote sensing data is of great significance for plant growth monitoring, crop yield assessment, disaster monitoring, and understanding the energy and water cycle in river basins and surrounding regions. In this study, we developed a land surface ET remote sensing retrieval system to estimate daily ET in the Shahe river basin using TM/ETM+ images. The system is based on a two-layer ET model and includes three parts: inversion of the evaporation fraction using the two-layer model, calculation of total daily net radiation, and estimation of daily ET based on the evaporation fraction method. The results show that the average daily ET is about 2.28 mm on typical days in spring, 2.97 mm in summer, 1.59 mm in autumn, and 0.5 mm in winter. The ET in upstream areas covered by forest is higher than that in the downstream areas covered by settlements and farmland. In summer the difference in ET between the upper and lower reaches is smaller than in the other three seasons. Measurements by a large aperture scintillometer and an eddy correlation instrument were used for validation. By comparing the observed data with the estimated data, we found that the estimation system had high precision, with relative errors between 0 and 16% (mean error of 11.1%) and a variance of 0.77 mm.
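
    The final step of such a system, extrapolating an instantaneous evaporation fraction to a daily total, amounts to scaling the daily net radiation by the fraction and converting the latent-heat flux to a water depth. A minimal sketch, assuming the fraction holds constant over the day and neglecting soil heat flux; the latent-heat constant is a standard approximation, not a value from the study:

```python
LAMBDA_V = 2.45e6  # latent heat of vaporization, J/kg (approximate)

def daily_et_mm(evap_fraction, rn_daily_mj_m2):
    """Daily ET (mm) from the evaporation fraction method: assume the
    retrieved fraction EF holds all day, so LE_daily = EF * Rn_daily,
    then convert energy to water depth (1 kg/m^2 of water == 1 mm)."""
    le_daily_j_m2 = evap_fraction * rn_daily_mj_m2 * 1e6
    return le_daily_j_m2 / LAMBDA_V
```

    For instance, an evaporation fraction of 0.5 with 12 MJ/m² of daily net radiation yields roughly 2.4 mm of ET, on the order of the springtime values reported above.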

  19. Online peak power prediction based on a parameter and state estimator for lithium-ion batteries in electric vehicles

    International Nuclear Information System (INIS)

    Pei, Lei; Zhu, Chunbo; Wang, Tiansi; Lu, Rengui; Chan, C.C.

    2014-01-01

    The goal of this study is to realize real-time predictions of the peak power/state of power (SOP) for lithium-ion batteries in electric vehicles (EVs). To allow the proposed method to be applicable to different temperature and aging conditions, a training-free battery parameter/state estimator is presented based on an equivalent circuit model using a dual extended Kalman filter (DEKF). In this estimator, the model parameters are no longer taken as functions of factors such as SOC (state of charge), temperature, and aging; instead, all parameters are directly estimated under the present conditions, and the impact of temperature and aging on the battery model is included in the parameter identification results. Then, the peak power/SOP is calculated from the estimated results under the given limits. As an improvement to the calculation method, a combined limit of current and voltage is proposed to obtain results that are more reasonable. Additionally, novel verification experiments are designed to provide the true values of the cells' peak power under various operating conditions. The proposed methods are implemented in experiments with LiFePO4/graphite cells. The validating results demonstrate that the proposed methods have good accuracy and high adaptability. - Highlights: • A real-time peak power/SOP prediction method for lithium-ion batteries is proposed. • A training-free method based on DEKF is presented for parameter identification. • The proposed method can be applied to different temperature and aging conditions. • The calculation of peak power under the current and voltage limits is improved. • Validation experiments are designed to verify the accuracy of prediction results.

  20. Response-Based Estimation of Sea State Parameters

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by the FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerically generated time series, and the study shows that filtering has an influence...... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements it is seen that the wave estimations based on closed-form expressions exhibit a reasonable energy content, but the distribution of energy

  1. A new approach on seismic mortality estimations based on average population density

    Science.gov (United States)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established an association between mortality estimation and seismic intensity without considering population density. In China, however, the data are not always available, especially in the very urgent relief situation of a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to provide a path to analyzing the death tolls of earthquakes. The present paper employs the average population density to predict final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case. A typical earthquake case that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door to conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  2. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
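
    The Ogata-Banks data model referenced above is the classical closed-form solution of the one-dimensional advection-dispersion equation for a continuous source under steady flow. A direct implementation (any parameter values in a call are illustrative, not site data):

```python
from math import erfc, exp, sqrt

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks solution for a continuous source at x = 0:
        C(x, t) = (c0 / 2) * [erfc((x - v*t) / (2*sqrt(D*t)))
                              + exp(v*x / D) * erfc((x + v*t) / (2*sqrt(D*t)))]
    x: distance, t: time, v: seepage velocity, D: dispersion coefficient,
    c0: source concentration."""
    if t <= 0:
        return 0.0
    s = 2.0 * sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / s)
                       + exp(v * x / D) * erfc((x + v * t) / s))
```

    At large times the solution approaches the source concentration c0, and at the advective front x = v*t it is slightly above c0/2; note that for very large v*x/D the exp term can grow faster than erfc underflows, so a numerically hardened version may be needed far downstream.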

  4. Particle-filtering-based estimation of maximum available power state in Lithium-Ion batteries

    International Nuclear Information System (INIS)

    Burgos-Mellado, Claudio; Orchard, Marcos E.; Kazerani, Mehrdad; Cárdenas, Roberto; Sáez, Doris

    2016-01-01

    Highlights: • Approach to estimate the state of maximum power available in Lithium-Ion battery. • Optimisation problem is formulated on the basis of a non-linear dynamic model. • Solutions of the optimisation problem are functions of state of charge estimates. • State of charge estimates computed using particle filter algorithms. - Abstract: Battery Energy Storage Systems (BESS) are important for applications related to both microgrids and electric vehicles. If BESS are used as the main energy source, then it is required to include adequate procedures for the estimation of critical variables such as the State of Charge (SoC) and the State of Health (SoH) in the design of Battery Management Systems (BMS). Furthermore, in applications where batteries are exposed to high charge and discharge rates it is also desirable to estimate the State of Maximum Power Available (SoMPA). In this regard, this paper presents a novel approach to the estimation of SoMPA in Lithium-Ion batteries. This method formulates an optimisation problem for the battery power based on a non-linear dynamic model, where the resulting solutions are functions of the SoC. In the battery model, the polarisation resistance is modelled using fuzzy rules that are function of both SoC and the discharge (charge) current. Particle filtering algorithms are used as an online estimation technique, mainly because these algorithms allow approximating the probability density functions of the SoC and SoMPA even in the case of non-Gaussian sources of uncertainty. The proposed method for SoMPA estimation is validated using the experimental data obtained from an experimental setup designed for charging and discharging the Lithium-Ion batteries.
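
    A bootstrap particle filter for SoC estimation can be sketched as follows. The linear OCV curve, cell capacity, and noise levels are hypothetical stand-ins for the paper's fuzzy battery model; only the filter structure (propagate, weight, resample) reflects the described approach:

```python
import math
import random

random.seed(0)

Q_CELL = 2.0 * 3600.0  # hypothetical 2 Ah cell capacity in coulombs

def ocv(soc):
    """Hypothetical linear open-circuit-voltage curve (illustration only)."""
    return 3.2 + 1.0 * soc

def particle_filter_soc(currents, voltages, dt=1.0, n=500):
    """Bootstrap particle filter sketch for SoC estimation.
    State model:        soc_k = soc_{k-1} - i_k * dt / Q_CELL + process noise
    Measurement model:  v_k   = ocv(soc_k) + measurement noise
    Returns the posterior-mean SoC after processing all samples."""
    particles = [random.uniform(0.0, 1.0) for _ in range(n)]
    sigma_v = 0.02
    for i_k, v_k in zip(currents, voltages):
        # Propagate each particle through the Coulomb-counting state model.
        particles = [min(1.0, max(0.0, p - i_k * dt / Q_CELL
                                  + random.gauss(0.0, 0.001)))
                     for p in particles]
        # Weight particles by the Gaussian measurement likelihood.
        weights = [math.exp(-0.5 * ((v_k - ocv(p)) / sigma_v) ** 2)
                   for p in particles]
        total = sum(weights) or 1.0
        # Multinomial resampling concentrates particles on likely states.
        particles = random.choices(particles,
                                   weights=[w / total for w in weights], k=n)
    return sum(particles) / n
```

    Because the particle cloud approximates the full posterior, the same machinery can carry non-Gaussian uncertainty forward into a SoMPA computation, as the abstract describes.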

  5. SEE rate estimation based on diffusion approximation of charge collection

    Science.gov (United States)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite a growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is the uncertainty of parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.

  6. Validation of OMI erythemal doses with multi-sensor ground-based measurements in Thessaloniki, Greece

    Science.gov (United States)

    Zempila, Melina Maria; Fountoulakis, Ilias; Taylor, Michael; Kazadzis, Stelios; Arola, Antti; Koukouli, Maria Elissavet; Bais, Alkiviadis; Meleti, Chariklia; Balis, Dimitrios

    2018-06-01

    The aim of this study is to validate the Ozone Monitoring Instrument (OMI) erythemal dose rates using ground-based measurements in Thessaloniki, Greece. In the Laboratory of Atmospheric Physics of the Aristotle University of Thessaloniki, a Yankee Environmental System UVB-1 radiometer measures the erythemal dose rates every minute, and a Norsk Institutt for Luftforskning (NILU) multi-filter radiometer provides multi-filter based irradiances that were used to derive erythemal dose rates for the period 2005-2014. Both these datasets were independently validated against collocated UV irradiance spectra from a Brewer MkIII spectrophotometer. Cloud detection was performed based on measurements of the global horizontal radiation from a Kipp & Zonen pyranometer and from NILU measurements in the visible range. The satellite versus ground observation validation was performed taking into account the effect of temporal averaging, limitations related to OMI quality control criteria, cloud conditions, the solar zenith angle and atmospheric aerosol loading. Aerosol optical depth was also retrieved using a collocated CIMEL sunphotometer in order to assess its impact on the comparisons. The effect of total ozone columns satellite versus ground-based differences on the erythemal dose comparisons was also investigated. Since most of the public awareness alerts are based on UV Index (UVI) classifications, an analysis and assessment of OMI capability for retrieving UVIs was also performed. An overestimation of the OMI erythemal product by 3-6% and 4-8% with respect to ground measurements is observed when examining overpass and noontime estimates respectively. The comparisons revealed a relatively small solar zenith angle dependence, with the OMI data showing a slight dependence on aerosol load, especially at high aerosol optical depth values. A mean underestimation of 2% in OMI total ozone columns under cloud-free conditions was found to lead to an overestimation in OMI erythemal
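
    The UV Index values discussed in the record follow the standard WMO/WHO convention, under which one UVI unit corresponds to 25 mW/m² of erythemally weighted irradiance; the category labels below are the WHO public-awareness bands:

```python
def uv_index(erythemal_irradiance_w_m2):
    """WMO/WHO convention: UVI = erythemally weighted irradiance [W/m^2] * 40
    (i.e., one UVI unit per 25 mW/m^2)."""
    return erythemal_irradiance_w_m2 * 40.0

def uvi_category(uvi):
    """WHO exposure categories used in public-awareness alerts."""
    if uvi < 3:
        return "low"
    if uvi < 6:
        return "moderate"
    if uvi < 8:
        return "high"
    if uvi < 11:
        return "very high"
    return "extreme"
```

    A 3-6% overestimation of the erythemal dose rate therefore translates almost directly into a proportional UVI overestimate, which matters mainly near category boundaries.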

  7. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    Energy Technology Data Exchange (ETDEWEB)

    Hines, J. Wesley [Univ. of Tennessee, Knoxville, TN (United States); Upadhyaya, Belle [Univ. of Tennessee, Knoxville, TN (United States); Sharp, Michael [Univ. of Tennessee, Knoxville, TN (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jeffries, Brien [Univ. of Tennessee, Knoxville, TN (United States); Nam, Alan [Univ. of Tennessee, Knoxville, TN (United States); Strong, Eric [Univ. of Tennessee, Knoxville, TN (United States); Tong, Matthew [Univ. of Tennessee, Knoxville, TN (United States); Welz, Zachary [Univ. of Tennessee, Knoxville, TN (United States); Barbieri, Federico [Univ. of Tennessee, Knoxville, TN (United States); Langford, Seth [Univ. of Tennessee, Knoxville, TN (United States); Meinweiser, Gregory [Univ. of Tennessee, Knoxville, TN (United States); Weeks, Matthew [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-06

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. 
The ultimate goal of prognostics is to provide an accurate assessment for

  8. How Valid are Estimates of Occupational Illness?

    Science.gov (United States)

    Hilaski, Harvey J.; Wang, Chao Ling

    1982-01-01

    Examines some of the methods of estimating occupational diseases and suggests that a consensus on the adequacy and reliability of estimates by the Bureau of Labor Statistics and others is not likely. (SK)

  9. Bootstrap-Based Inference for Cube Root Consistent Estimators

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi

    This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent. Our method restores consistency of the nonparametric bootstrap by altering the shape of the criterion function defining the estimator whose distribution we seek to approximate. This modification leads to a generic and easy-to-implement resampling method for inference that is conceptually distinct from other available distributional approximations based on some form of modified bootstrap. We offer simulation evidence showcasing the performance of our inference method in finite samples. An extension of our methodology to general M-estimation problems is also discussed.

  10. [Prognostic estimation in critical patients. Validation of a new and very simple system of prognostic estimation of survival in an intensive care unit].

    Science.gov (United States)

    Abizanda, R; Padron, A; Vidal, B; Mas, S; Belenguer, A; Madero, J; Heras, A

    2006-04-01

    To validate a new system for the prognostic estimation of survival in critical patients (EPEC) in a multidisciplinary intensive care unit (ICU). Prospective analysis of a cohort of patients seen in the 19-bed ICU of the multidisciplinary Intensive Medicine Service of a reference teaching hospital. Four hundred eighty-four patients admitted consecutively over 6 months in 2003. A basic minimum data set was collected, including patient identification data (gender, age), reason for admission and origin, and the prognostic estimation of survival by EPEC, MPM II 0 and SAPS II (the latter two considered the gold standard). Mortality was evaluated at hospital discharge. EPEC validation comprised analysis of its discriminating capacity (ROC curve), calibration of its prognostic capacity (Hosmer-Lemeshow C test), and resolution of 2 x 2 contingency tables around different probability values (20%, 50%, 70% and the mean value of the prognostic estimate). The standardized mortality ratio (SMR) was calculated for each method. Linear regression of the EPEC against MPM II 0 and SAPS II was established, and concordance analyses (Bland-Altman test) of the mortality predictions of the three systems were performed. In spite of an apparently good linear correlation, similar prediction accuracy and discrimination capacity, EPEC is not well calibrated (no likelihood of death greater than 50%), and the concordance analyses show that more than 10% of the pairs fell outside the 95% confidence interval. In spite of its ease of application and calculation, and of incorporating the delay of admission to the ICU as a variable, EPEC offers no predictive advantage over MPM II 0 or SAPS II, and its predictions fit reality less well.
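
    The standardized mortality ratio used to compare the scoring systems is observed deaths over expected deaths, where the expected count is the sum of the model's predicted death probabilities; a minimal sketch (the input values are illustrative):

```python
def standardized_mortality_ratio(predicted_probs, died):
    """SMR = observed deaths / expected deaths, where the expected count is
    the sum of the predicted death probabilities. SMR near 1 indicates a
    well-calibrated model overall; SMR > 1 means mortality is underestimated."""
    observed = sum(died)
    expected = sum(predicted_probs)
    if expected == 0:
        raise ValueError("expected deaths is zero")
    return observed / expected
```

    An SMR of 1 can still hide poor calibration within risk strata, which is why the study also applies the Hosmer-Lemeshow test across probability bands.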

  11. Estimation of In-Situ Groundwater Conditions Based on Geochemical Equilibrium Simulations

    Directory of Open Access Journals (Sweden)

    Toshiyuki Hokari

    2014-03-01

    Full Text Available This paper presents a means of estimating in-situ groundwater pH and oxidation-redox potential (ORP), two very important parameters for species migration analysis in safety assessments for radioactive waste disposal or carbon dioxide sequestration. The method was applied to a pumping test in a deep borehole drilled in a Tertiary formation in Japan for validation. The following application examples are presented: applied to several other pumping tests at the same site, the method could estimate distributions of the in-situ groundwater pH and ORP; applied to multiple points selected from the groundwater database of Japan, it could help estimate the in-situ redox reactions governing groundwater conditions in some areas.

  12. Validity of anthropometric procedures to estimate body density and body fat percent in military men

    Directory of Open Access Journals (Sweden)

    Ciro Romélio Rodriguez-Añez

    1999-12-01

    Full Text Available The objective of this study was to verify the validity of the Katch and McArdle (1973) equation, which uses the circumferences of the arm, forearm and abdomen to estimate body density, and of the procedure of Cohen (1986), which uses the circumferences of the neck and abdomen to estimate body fat percent (%F), in military men. Data were collected from 50 military men, with a mean age of 20.26 ± 2.04 years, serving in Santa Maria, RS. The circumferences were measured according to the Katch and McArdle (1973) and Cohen (1986) procedures. The body density measured (Dm) by underwater weighing was used as the criterion, and its mean value was 1.0706 ± 0.0100 g/ml. The residual lung volume was estimated using the Goldman and Becklake (1959) equation. The %F was obtained with the Siri (1961) equation, and its mean value was 12.70 ± 4.71%. The validation criteria suggested by Lohman (1992) were followed. The analysis of the results indicated that the procedure developed by Cohen (1986) has concurrent validity for estimating %F in military men, or in other samples with similar characteristics, with a standard error of estimate of 3.45%.
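
    The Siri (1961) conversion from body density to percent fat cited above is a standard two-compartment formula:

```python
def siri_percent_fat(body_density):
    """Siri (1961) two-compartment equation:
    %fat = (4.95 / Db - 4.50) * 100, with body density Db in g/ml."""
    return (4.95 / body_density - 4.50) * 100.0
```

    Applying it to the sample's mean density of 1.0706 g/ml gives about 12.4%, consistent with the reported mean %F of 12.70 ± 4.71% (the small gap reflects averaging %F over individuals rather than converting the mean density).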

  13. Statistical inference based on latent ability estimates

    NARCIS (Netherlands)

    Hoijtink, H.J.A.; Boomsma, A.

    The quality of approximations to first- and second-order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use

  14. A citizen science based survey method for estimating the density of urban carnivores

    Science.gov (United States)

    Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine the relative density of two urban carnivores in England, Great Britain. We determined the density of: red fox (Vulpes vulpes) social groups in 14, approximately 1 km2 suburban areas in 8 different towns and cities; and Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating the unreliability of the national data for determining actual densities or extrapolating a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton: 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national datasets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However this transferability is contingent on

  15. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs.

    Science.gov (United States)

    McCoy, A B; Wright, A; Krousel-Wood, M; Thomas, E J; McCoy, J A; Sittig, D F

    2015-01-01

    Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. We first retrieved medications and problems entered into the electronic health record by clinicians during routine care over a six-month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair, then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold-standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and to determine its recall and precision. The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate whether incorporating the knowledge into electronic health records improves patient outcomes.
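
    The aggregation step can be sketched as follows. The precise definitions of link frequency and link ratio below are plausible readings assumed for illustration, not quoted from the paper, and the threshold would in practice be set by clinician review:

```python
from collections import Counter
from itertools import product

def build_knowledge_base(patients, ratio_threshold=0.5):
    """Crowdsourcing-style aggregation sketch (definitions assumed here):
      link frequency(p, m) = number of patients whose records contain both
                             problem p and medication m
      link ratio(p, m)     = link frequency(p, m) / number of patients on m
    Pairs whose link ratio meets the threshold enter the knowledge base.
    `patients` is an iterable of (problem_set, medication_set) pairs."""
    pair_count, med_count = Counter(), Counter()
    for problems, meds in patients:
        med_count.update(meds)
        pair_count.update(product(problems, meds))
    kb = {}
    for (p, m), freq in pair_count.items():
        ratio = freq / med_count[m]
        if ratio >= ratio_threshold:
            kb[(p, m)] = (freq, round(ratio, 3))
    return kb
```

    The appeal of the approach is that routine clinician documentation acts as implicit "crowd" votes: pairs linked often relative to a medication's overall use surface as likely indications.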

  16. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Eric; Gonder, Jeff; Jehlik, Forrest

    2017-01-01

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  17. Validation of the CHIRPS Satellite Rainfall Estimates over Eastern of Africa

    Science.gov (United States)

    Dinku, T.; Funk, C. C.; Tadesse, T.; Ceccato, P.

    2017-12-01

    Long and temporally consistent rainfall time series are essential in climate analyses and applications. Rainfall data from station observations are inadequate over many parts of the world due to sparse or non-existent observation networks, or limited reporting of gauge observations. As a result, satellite rainfall estimates have been used as an alternative or as a supplement to station observations. However, many satellite-based rainfall products with long time series suffer from coarse spatial and temporal resolutions and inhomogeneities caused by variations in satellite inputs. There are some satellite rainfall products with reasonably consistent time series, but they are often limited to specific geographic areas. The Climate Hazards Group Infrared Precipitation (CHIRP) and CHIRP combined with station observations (CHIRPS) are recently produced satellite-based rainfall products with relatively high spatial and temporal resolutions and quasi-global coverage. In this study, CHIRP and CHIRPS were evaluated over East Africa at daily, dekadal (10-day) and monthly time scales. The evaluation was done by comparing the satellite products with rain gauge data from about 1200 stations, an unprecedented number of validation stations for this region. The results provide a unique region-wide understanding of how satellite products perform over different climatic/geographic regions (lowlands, mountainous regions, and coastal areas). The CHIRP and CHIRPS products were also compared with two similar satellite rainfall products: the African Rainfall Climatology version 2 (ARC2) and the latest release of the Tropical Applications of Meteorology using Satellite data (TAMSAT) product. The results show that both CHIRP and CHIRPS are significantly better than ARC2, with higher skill and low or no bias. These products were also found to be slightly better than the latest version of the TAMSAT product.
A comparison was also done between the latest release of the TAMSAT product
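
    Point-to-gauge validation of satellite rainfall products typically reports statistics such as multiplicative bias, mean absolute error, and correlation. A generic sketch (these exact metric choices are common practice, not necessarily the study's full set):

```python
from statistics import mean, stdev

def validation_stats(satellite, gauge):
    """Common point-to-pixel validation statistics for satellite rainfall
    estimates against collocated rain-gauge observations."""
    n = len(satellite)
    errors = [s - g for s, g in zip(satellite, gauge)]
    bias = sum(satellite) / sum(gauge)            # multiplicative bias (1 = unbiased)
    mae = mean(abs(e) for e in errors)            # mean absolute error
    ms, mg = mean(satellite), mean(gauge)
    cov = sum((s - ms) * (g - mg) for s, g in zip(satellite, gauge)) / (n - 1)
    corr = cov / (stdev(satellite) * stdev(gauge))
    return {"bias": bias, "mae": mae, "corr": corr}
```

    Computing these per station and per time scale (daily, dekadal, monthly) is what lets a study distinguish "low or no bias" from high skill, since a product can be unbiased on average yet poorly correlated day to day.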

  18. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications

    Science.gov (United States)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and severely degrade communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
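
    The least-squares baseline that the proposed algorithm is compared against can be sketched for pilot-based OFDM channel estimation: divide the received pilot by the known transmitted pilot per subcarrier, then interpolate between pilots. The linear interpolation is an illustrative choice:

```python
def ls_channel_estimate(rx_pilots, tx_pilots):
    """Least-squares channel estimate at pilot subcarriers: H[k] = Y[k] / X[k]."""
    return [y / x for y, x in zip(rx_pilots, tx_pilots)]

def interpolate_channel(h_pilots, pilot_idx, n_subcarriers):
    """Linear interpolation of the complex channel response between pilots
    (subcarriers outside the pilot span are left at zero in this sketch)."""
    h = [0j] * n_subcarriers
    for (i0, i1), (h0, h1) in zip(zip(pilot_idx, pilot_idx[1:]),
                                  zip(h_pilots, h_pilots[1:])):
        for k in range(i0, i1 + 1):
            t = (k - i0) / (i1 - i0)
            h[k] = (1 - t) * h0 + t * h1
    return h
```

    A parametric estimator instead fits a small set of path delays and gains, which is why it can outperform plain LS interpolation when the channel response varies quickly across subcarriers.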

  19. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace-based blind sparse channel estimation method using l1-l2 optimization, replacing the l2-norm minimization in the conventional subspace-based method with an l1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve...

  20. Fast LCMV-based Methods for Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll

    2013-01-01

    peaks and require matrix inversions for each point in the search grid. In this paper, we therefore consider fast implementations of LCMV-based fundamental frequency estimators, exploiting the estimators' inherently low displacement rank of the used Toeplitz-like data covariance matrices, using as such either the classic time-domain averaging covariance matrix estimator or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity by several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can...

  1. Estimating cardiovascular disease incidence from prevalence: a spreadsheet based model

    Directory of Open Access Journals (Sweden)

    Xue Feng Hu

    2017-01-01

    Full Text Available Abstract Background Disease incidence and prevalence are both core indicators of population health. Incidence is generally not as readily accessible as prevalence. Cohort studies and electronic health record systems are two major ways to estimate disease incidence. The former is time-consuming and expensive; the latter is not available in most developing countries. Alternatively, mathematical models could be used to estimate disease incidence from prevalence. Methods We proposed and validated a method to estimate the age-standardized incidence of cardiovascular disease (CVD), with prevalence data from successive surveys and mortality data from empirical studies. Hallett's method, designed for estimating HIV infections in Africa, was modified to estimate the incidence of myocardial infarction (MI) in the U.S. population and the incidence of heart disease in the Canadian population. Results Model-derived estimates were in close agreement with observed incidence from cohort studies and population surveillance systems. The method correctly captured the trend in incidence given sufficient waves of cross-sectional surveys. The estimated MI decline rate in the U.S. population was in accordance with the literature. The method was superior to a closed cohort in terms of estimating the trend of population cardiovascular disease incidence. Conclusion It is possible to estimate CVD incidence accurately at the population level from cross-sectional prevalence data. This method has the potential to be used for age- and sex-specific incidence estimates, or to be expanded to other chronic conditions.
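
    The idea of backing incidence out of successive prevalence surveys can be illustrated with a deliberately simplified one-interval balance. This is not Hallett's actual recursion; the balance equation below is an illustrative approximation that ignores competing mortality among the disease-free:

```python
def incidence_from_prevalence(p1, p2, case_mortality, years):
    """Simplified one-interval estimate of average annual incidence from two
    prevalence surveys (illustrative approximation):
        p2 ~= p1 * (1 - m) + i * years * (1 - p1)
    where p1, p2 are prevalences at the two surveys, m is the fraction of
    prevalent cases dying between surveys, and i is the annual disease risk
    among the initially disease-free. Solving for i:"""
    return (p2 - p1 * (1.0 - case_mortality)) / ((1.0 - p1) * years)
```

    The balance reads: prevalence at the second survey equals surviving old cases plus newly incident cases; with enough survey waves, the same logic applied interval by interval recovers the incidence trend.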

  2. A Kalman-based Fundamental Frequency Estimation Algorithm

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    Fundamental frequency estimation is an important task in speech and audio analysis. Harmonic model-based methods typically have superior estimation accuracy. However, such methods usually assume that the fundamental frequency and amplitudes are stationary over a short time frame. In this paper...
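
    The Kalman recursion underlying such a tracker can be illustrated with a scalar random-walk model; the paper's actual harmonic state-space model is richer than this sketch, which only shows the predict/update structure:

```python
def kalman_track(observations, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter sketch (illustrative, not the paper's
    harmonic model):
        x_k = x_{k-1} + w,  w ~ N(0, q)   (slowly varying fundamental frequency)
        y_k = x_k + v,      v ~ N(0, r)   (noisy per-frame frequency estimate)
    Returns the filtered track."""
    x, p = observations[0], 1.0
    track = []
    for y in observations:
        p = p + q                # predict: variance grows by process noise
        k = p / (p + r)          # Kalman gain
        x = x + k * (y - x)      # update with the innovation
        p = (1.0 - k) * p
        track.append(x)
    return track
```

    By letting q grow, the tracker follows non-stationary pitch trajectories more closely at the cost of smoothing less noise, which is exactly the trade-off motivating a Kalman treatment over a fixed short-frame stationarity assumption.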

  3. [Hyperspectral Estimation of Apple Tree Canopy LAI Based on SVM and RF Regression].

    Science.gov (United States)

    Han, Zhao-ying; Zhu, Xi-cun; Fang, Xian-yi; Wang, Zhuo-yuan; Wang, Ling; Zhao, Geng-Xing; Jiang, Yuan-mao

    2016-03-01

    Leaf area index (LAI) is a dynamic index of crop population size. Hyperspectral technology can be used to estimate apple canopy LAI rapidly and nondestructively, providing a reference for monitoring tree growth and estimating yield. Red Fuji apple trees in the full fruit-bearing period were the research objects. Canopy spectral reflectance and LAI values of ninety apple trees were measured with an ASD FieldSpec3 spectrometer and an LAI-2200 in thirty orchards over two consecutive years in the Qixia research area of Shandong Province. The optimal vegetation indices were selected by correlation analysis of the original spectral reflectance and vegetation indices. Models predicting LAI were built with the multivariate regression methods of support vector machine (SVM) and random forest (RF). The new vegetation indices GNDVI527, NDVI676, RVI682, FD-NVI656 and GRVI517, and the two main previously reported vegetation indices, NDVI670 and NDVI705, are in accordance with LAI. In the RF regression model, the calibration set determination coefficient C-R2 of 0.920 and validation set determination coefficient V-R2 of 0.889 are higher than those of the SVM regression model by 0.045 and 0.033, respectively. The calibration set root mean square error C-RMSE of 0.249 and validation set root mean square error V-RMSE of 0.236 are lower than those of the SVM regression model by 0.054 and 0.058, respectively. The calibration set residual predictive deviation C-RPD and validation set V-RPD reached 3.363 and 2.520, higher than those of the SVM regression model by 0.598 and 0.262, respectively. The slopes C-S and V-S of the measured-versus-predicted scatterplot trend lines for the calibration and validation sets are close to 1. The estimation results of the RF regression model are better than those of the SVM model, and the RF regression model can be used to estimate the LAI of Red Fuji apple trees in the full fruit period.
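
    The reported evaluation quantities (R², RMSE and RPD) can be computed as follows; RPD is taken here as the sample standard deviation of the observations divided by the RMSE, one common definition:

```python
from math import sqrt
from statistics import mean, stdev

def regression_metrics(observed, predicted):
    """Model validation metrics: coefficient of determination (R^2), root
    mean square error (RMSE), and residual predictive deviation
    RPD = SD(observed) / RMSE (sample standard deviation)."""
    resid = [o - p for o, p in zip(observed, predicted)]
    ss_res = sum(e * e for e in resid)
    m = mean(observed)
    ss_tot = sum((o - m) ** 2 for o in observed)
    rmse = sqrt(ss_res / len(observed))
    return {"R2": 1.0 - ss_res / ss_tot,
            "RMSE": rmse,
            "RPD": stdev(observed) / rmse}
```

    An RPD above roughly 2 is conventionally read as a model usable for quantitative prediction, which is why the study reports RPD alongside R² and RMSE.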

  4. Optical Tracking Data Validation and Orbit Estimation for Sparse Observations of Satellites by the OWL-Net.

    Science.gov (United States)

    Choi, Jin; Jo, Jung Hyun; Yim, Hong-Suh; Choi, Eun-Jung; Cho, Sungki; Park, Jang-Hyun

    2018-06-07

    An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining the orbital ephemerides of Korean low Earth orbit (LEO) satellites. The OWL-Net consists of five optical tracking stations. Brightness signals of sunlight reflected by the targets were detected with a charge-coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, at a maximum of 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated against precise orbital ephemerides such as Consolidated Prediction File (CPF) data and precise orbit determination results based on onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, the average observation span for a single arc of 11 LEO observation targets was about 5 min, while the average separation between optical observations was 5 h. We estimated the position and velocity, together with an atmospheric drag coefficient, of the LEO observation targets using a sequential-batch orbit estimation technique applied after multi-arc batch orbit estimation. Post-fit residuals of the multi-arc batch and sequential-batch orbit estimation were analyzed against the optical measurements and the reference orbits (CPF and GPS data). The post-fit residuals with respect to the reference show errors of a few tens of meters in the in-track direction for both the multi-arc batch and sequential-batch orbit estimation results.
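
    The multi-arc batch idea of fitting one epoch state to several sparse observation arcs can be illustrated, in heavily simplified one-dimensional form, with a linear least-squares sketch. The times, state and noise level are invented, and real orbit determination propagates a full dynamic model rather than a straight line.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse observation times (hours): three short arcs separated by ~5 h gaps
t = np.array([0.0, 0.05, 0.10, 5.0, 5.05, 10.0, 10.05])
x0_true, v_true = 7000.0, 3.5                  # invented epoch state
obs = x0_true + v_true * t + rng.normal(0.0, 0.02, t.size)

# Batch least squares: stack every arc into one design matrix and solve
# for the epoch state [x0, v] in a single inversion.
H = np.column_stack([np.ones_like(t), t])
state, *_ = np.linalg.lstsq(H, obs, rcond=None)
residuals = obs - H @ state                    # post-fit residuals
```

    The post-fit residuals play the same diagnostic role as in the abstract: if the model and estimate are consistent, they should look like the measurement noise.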

  5. Optical Tracking Data Validation and Orbit Estimation for Sparse Observations of Satellites by the OWL-Net

    Directory of Open Access Journals (Sweden)

    Jin Choi

    2018-06-01

    Full Text Available An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining the orbital ephemerides of Korean low Earth orbit (LEO) satellites. The OWL-Net consists of five optical tracking stations. Brightness signals of sunlight reflected by the targets were detected with a charge-coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, at a maximum of 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated against precise orbital ephemerides such as Consolidated Prediction File (CPF) data and precise orbit determination results based on onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, the average observation span for a single arc of 11 LEO observation targets was about 5 min, while the average separation between optical observations was 5 h. We estimated the position and velocity, together with an atmospheric drag coefficient, of the LEO observation targets using a sequential-batch orbit estimation technique applied after multi-arc batch orbit estimation. Post-fit residuals of the multi-arc batch and sequential-batch orbit estimation were analyzed against the optical measurements and the reference orbits (CPF and GPS data). The post-fit residuals with respect to the reference show errors of a few tens of meters in the in-track direction for both the multi-arc batch and sequential-batch orbit estimation results.

  6. Addressing Single and Multiple Bad Data in the Modern PMU-based Power System State Estimation

    DEFF Research Database (Denmark)

    Khazraj, Hesam; Silva, Filipe Miguel Faria da; Bak, Claus Leth

    2017-01-01

    utilization in state estimation can detect and identify single and multiple bad data in redundant and critical measurements. To validate the simulations, the IEEE 30-bus system is implemented in PowerFactory, and Matlab is used to solve the proposed state estimation using post-processing of PMUs and mixed methods. Bad...

  7. Prevalence Estimation and Validation of New Instruments in Psychiatric Research: An Application of Latent Class Analysis and Sensitivity Analysis

    Science.gov (United States)

    Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.

    2009-01-01

    Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…

  8. The influence of selection on the evolutionary distance estimated from the base changes observed between homologous nucleotide sequences.

    Science.gov (United States)

    Otsuka, J; Kawai, Y; Sugaya, N

    2001-11-21

    In most studies of molecular evolution, the nucleotide base at a site is assumed to change at an apparent rate under functional constraint, and the comparison of base changes between homologous genes is thought to yield an evolutionary distance corresponding to the site-averaged change rate multiplied by the divergence time. However, this view has not been very successful in estimating the divergence time of species, and mostly results in the construction of tree topologies without a time-scale. In the present paper, this problem is investigated theoretically by considering that observed base changes are the result of comparing the survivors, through selection, of mutated bases. In the case of weak selection, the time course of base changes due to mutation and selection can be obtained analytically, leading to a theoretical equation showing how selection influences the evolutionary distance estimated from the enumeration of base changes. This result provides a new method for estimating the divergence time more accurately from the observed base changes by evaluating both the strength of selection and the mutation rate. The validity of this method is verified by analysing the base changes observed at the third codon positions of amino acid residues with four-fold codon degeneracy in the protein genes of mammalian mitochondria; i.e. the ratios of estimated divergence times are fairly consistent with a series of fossil records of mammals. This analysis also suggests that the mutation rates in mitochondrial genomes are almost the same in different lineages of mammals, and that the lineage-specific base-change rates indicated previously are due to selection, probably arising from the preference of transfer RNAs for particular codons.
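
    For contrast with the selection-aware estimator proposed above, the classical distance that ignores selection can be computed directly from observed base differences. This sketch uses the standard Jukes-Cantor correction, which is not the paper's method, only the baseline it improves on.

```python
import math

def jukes_cantor_distance(seq1: str, seq2: str) -> float:
    """Classical Jukes-Cantor distance: d = -(3/4) ln(1 - 4p/3),
    where p is the fraction of sites whose bases differ."""
    assert len(seq1) == len(seq2)
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# One difference in ten sites: p = 0.1, and d comes out slightly above p
# because the correction accounts for multiple hits at the same site.
d = jukes_cantor_distance("ACGTACGTAC", "ACGTACGTAT")
```

    Dividing such a distance by an assumed substitution rate gives a divergence-time estimate; the paper's point is that uncorrected selection biases exactly this step.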

  9. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects

    Science.gov (United States)

    Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo

    2017-01-01

    Autonomous close proximity operations are an arduous and attractive problem in space mission design. In particular, the estimation of the pose, motion and inertia properties of an uncooperative object is a challenging task because of the lack of available a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity and attitude, and the ratios of the components of the inertia matrix, of an uncooperative space object using only stereo-vision measurements. The classical extended Kalman filter (EKF) and an iterated extended Kalman filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is included in the observation model. The relative simplicity of the proposed algorithm makes it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of the estimation errors for all the considered quantities and, in several simulations, improvements with respect to similar works in the literature that deal with the same problem. In addition, a video processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia ratio estimator to provide a complete tool for mass property recognition.
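
    The predict/update loop at the heart of the EKF/IEKF procedure can be illustrated with a minimal linear Kalman filter tracking relative position and velocity from noisy position-only (camera-like) measurements. The dynamics, noise levels and initial guess are invented; the paper's filter is nonlinear, with attitude and inertia-ratio states that this toy omits.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity propagation
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.05 ** 2]])             # measurement noise covariance

x_true = np.array([10.0, -0.5])         # relative position (m), velocity (m/s)
x_est = np.array([8.0, 0.0])            # deliberately poor initial guess
P = np.eye(2)

for _ in range(200):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, 0.05, 1)
    # predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

err = np.abs(x_est - x_true)
```

    An EKF replaces F and H with Jacobians of the nonlinear models at each step; the IEKF additionally re-linearizes the update until it converges.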

  10. V and V-based remaining fault estimation model for safety–critical software of a nuclear power plant

    International Nuclear Information System (INIS)

    Eom, Heung-seop; Park, Gee-yong; Jang, Seung-cheol; Son, Han Seong; Kang, Hyun Gook

    2013-01-01

    Highlights: ► A software fault estimation model based on Bayesian nets and V and V. ► Use of quantified data derived from qualitative V and V results. ► The fault insertion and elimination process was modeled in the context of probability. ► Systematically estimates the expected number of remaining faults. -- Abstract: Quantitative software reliability measurement approaches have some limitations in demonstrating the proper level of reliability for safety-critical software. One of the more promising alternatives is the use of software development quality information. Particularly in the nuclear industry, regulatory bodies in most countries use both probabilistic and deterministic measures for ensuring the reliability of safety-grade digital computers in NPPs. The point of the deterministic criteria is to assess the whole development process and its related activities during the software development life cycle for the acceptance of safety-critical software, and software Verification and Validation (V and V) plays an important role in this process. In this light, we propose a V and V-based fault estimation method using Bayesian nets to estimate the remaining faults in safety-critical software after the software development life cycle is completed. By modeling the fault insertion and elimination processes over the whole development phases, the proposed method systematically estimates the expected number of remaining faults.
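
    The fault insertion/elimination bookkeeping behind an expected-remaining-faults estimate can be sketched as follows. The per-phase insertion counts and V and V detection probabilities are invented illustrative numbers; the paper's Bayesian-net model would infer such quantities from qualitative V and V evidence rather than fix them by hand.

```python
# Per-phase fault insertion counts and V and V detection probabilities
# (invented illustrative numbers, not taken from the paper).
phases = ["requirements", "design", "implementation"]
inserted = {"requirements": 12.0, "design": 20.0, "implementation": 30.0}
detect_prob = {"requirements": 0.7, "design": 0.6, "implementation": 0.5}

remaining = 0.0
for i, phase in enumerate(phases):
    surviving = inserted[phase]
    # A fault inserted in phase i is exposed to the V and V of phase i
    # and of every later phase; each pass removes it with detect_prob.
    for later in phases[i:]:
        surviving *= 1.0 - detect_prob[later]
    remaining += surviving
# remaining = expected number of faults left after the whole life cycle
```

    With these numbers, faults inserted late dominate the remainder, since they face fewer V and V passes, which is the qualitative behavior the probabilistic model captures.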

  11. Validity of transcobalamin II-based radioassay for the determination of serum vitamin B12 concentrations

    International Nuclear Information System (INIS)

    Paltridge, G.; Rudzki, Z.; Ryall, R.G.

    1980-01-01

    A valid radioassay for the estimation of serum vitamin B12 in the presence of naturally occurring vitamin B12 (= cobalamin) analogues can be operated if serum transcobalamin II (TC II) is used as the binding protein. Serum samples that gave diagnostically discrepant results when their vitamin B12 content was analysed (i) by a commercial radioassay known to be susceptible to interference from cobalamin analogues, and (ii) by microbiological assay, were further analysed by an alternative radioassay which uses the transcobalamins (principally TC II) of diluted normal serum as the assay binding protein. Concordance between the results from the microbiological assay and the TC II-based radioassay was found in all cases. In an extended study over a three-year period, all routine serum samples sent for vitamin B12 analysis that had a vitamin B12 content of less than 320 ng/l by the TC II-based radioassay (reference range 200-850 ng/l) were reanalysed using an established microbiological method. Over 1000 samples were thus analysed. The data are presented to demonstrate the validity of the TC II-based radioassay results in this group of patients, whose serum samples are the most likely to produce diagnostically erroneous vitamin B12 results when analysed by a radioassay that is less specific for cobalamins. (author)

  12. Estimating Total Discharge in the Yangtze River Basin Using Satellite-Based Observations

    Directory of Open Access Journals (Sweden)

    Samuel A. Andam‑Akorful

    2013-07-01

    Full Text Available The measurement of total basin discharge along coastal regions is necessary for understanding the hydrological and oceanographic issues related to the water and energy cycles. However, usually only the observed streamflow (gauge-based observation) is used to estimate the total flux from the river basin to the ocean, neglecting the portion of discharge that infiltrates underground and discharges directly into the ocean. Hence, the aim of this study is to assess the total discharge of the Yangtze River (Chang Jiang) basin. In this study, we explore the potential response of total discharge to changes in precipitation (from the Tropical Rainfall Measuring Mission, TRMM), evaporation (from four versions of the Global Land Data Assimilation System, GLDAS: CLM, Mosaic, Noah and VIC) and water-storage changes (from the Gravity Recovery and Climate Experiment, GRACE) by using the terrestrial water budget method. This method has been validated by comparison with the observed streamflow, and shows agreement with a root mean square error (RMSE) of 14.30 mm/month for the GRACE-based discharge, versus 20.98 mm/month for that derived from precipitation minus evaporation (P − E). This improvement of approximately 32% indicates that monthly terrestrial water-storage changes, as estimated by GRACE, cannot be considered negligible over the Yangtze basin. The results of the proposed method are more accurate than those previously reported in the literature.
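
    Per month, the terrestrial water budget used above reduces to R = P − E − ΔS. A toy computation with invented monthly values (all in mm/month):

```python
# Monthly water budget: discharge R = P - E - dS (all in mm/month).
precip = [120.0, 95.0, 140.0]        # TRMM-like precipitation
evap = [60.0, 55.0, 70.0]            # GLDAS-like evaporation
storage = [10.0, 14.0, 9.0, 12.0]    # GRACE-like storage anomaly at month edges

discharge = [p - e - (storage[i + 1] - storage[i])
             for i, (p, e) in enumerate(zip(precip, evap))]
# Dropping the storage term gives the cruder P - E estimate the study
# compares against.
p_minus_e = [p - e for p, e in zip(precip, evap)]
```

    The difference between the two lists is exactly the monthly storage change, which is the term GRACE supplies and the abstract shows is not negligible.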

  13. Practitioner's knowledge representation a pathway to improve software effort estimation

    CERN Document Server

    Mendes, Emilia

    2014-01-01

    The main goal of this book is to help organizations improve their effort estimates and effort estimation processes by providing a step-by-step methodology that takes them through the creation and validation of models that are based on their own knowledge and experience. Such models, once validated, can then be used to obtain predictions, carry out risk analyses, enhance their estimation processes for new projects and generally advance them as learning organizations. Emilia Mendes presents the Expert-Based Knowledge Engineering of Bayesian Networks (EKEBNs) methodology, which she has used and adapted during the course of several industry collaborations with different companies world-wide over more than 6 years. The book itself consists of two major parts: first, the methodology's foundations in knowledge management, effort estimation (with special emphasis on the intricacies of software and Web development) and Bayesian networks are detailed; then six industry case studies are presented which illustrate the pra...

  14. Parameter estimation in X-ray astronomy

    International Nuclear Information System (INIS)

    Lampton, M.; Margon, B.; Bowyer, S.

    1976-01-01

    The problems of model classification and parameter estimation are examined, with the objective of establishing the statistical reliability of inferences drawn from X-ray observations. For testing the validity of classes of models, the procedure based on minimizing the χ² statistic is recommended; it provides a rejection criterion at any desired significance level. Once a class of models has been accepted, a related procedure based on the increase of χ² gives a confidence region for the values of the model's adjustable parameters. The procedure allows the confidence level to be chosen exactly, even for highly nonlinear models. Numerical experiments confirm the validity of the prescribed technique. The χ²min+1 error estimation method is evaluated and found unsuitable when several parameter ranges are to be derived, because it substantially underestimates their joint errors. The ratio-of-variances method, while formally correct, gives parameter confidence regions that are more variable than necessary.
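
    The two recommended procedures, minimizing χ² for model acceptance and using the increase of χ² above the minimum for a parameter confidence region, can be sketched for a one-parameter linear model. The data are synthetic and the 68.3% level is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 30)
sigma = 0.5
y = 2.0 * x + rng.normal(0.0, sigma, x.size)   # one-parameter model y = a*x

def chisq(a):
    return float(np.sum(((y - a * x) / sigma) ** 2))

fit = minimize_scalar(chisq, bounds=(0.0, 5.0), method="bounded")
a_hat, chi2_min = fit.x, fit.fun

# 68.3% confidence region for one interesting parameter: all a with
# chisq(a) <= chi2_min + delta, where delta = chi2.ppf(0.683, df=1) ~ 1
delta = chi2.ppf(0.683, df=1)
grid = np.linspace(a_hat - 0.1, a_hat + 0.1, 2001)
inside = grid[[chisq(a) <= chi2_min + delta for a in grid]]
ci = (float(inside.min()), float(inside.max()))
```

    For several jointly interesting parameters, delta must be taken from χ² with more degrees of freedom, which is exactly why the χ²min+1 recipe underestimates joint errors.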

  15. Palliative Sedation: Reliability and Validity of Sedation Scales

    NARCIS (Netherlands)

    Arevalo Romero, J.; Brinkkemper, T.; van der Heide, A.; Rietjens, J.A.; Ribbe, M.W.; Deliens, L.; Loer, S.A.; Zuurmond, W.W.A.; Perez, R.S.G.M.

    2012-01-01

    Context: Observer-based sedation scales have been used to provide a measurable estimate of the comfort of nonalert patients in palliative sedation. However, their usefulness and appropriateness in this setting have not been demonstrated. Objectives: To study the reliability and validity of sedation scales used in palliative sedation…

  16. Remaining useful life estimation based on discriminating shapelet extraction

    International Nuclear Information System (INIS)

    Malinowski, Simon; Chebel-Morello, Brigitte; Zerhouni, Noureddine

    2015-01-01

    In the Prognostics and Health Management domain, estimating the remaining useful life (RUL) of critical machinery is a challenging task. Various research topics, including data acquisition, fusion, diagnostics and prognostics, are involved in this domain. This paper presents an approach, based on shapelet extraction, to estimate the RUL of equipment. In an offline step, this approach extracts discriminative rul-shapelets from a history of run-to-failure data. These rul-shapelets are patterns that are selected for their correlation with the remaining useful life of the equipment. In other words, every selected rul-shapelet conveys its own information about the RUL of the equipment. In an online step, these rul-shapelets are compared to testing units, and the ones that match these units are used to estimate their RULs. RUL estimation is therefore based on patterns that have been selected for their high correlation with the RUL. This approach differs from classical similarity-based approaches, which attempt to match complete testing units (or only the late instants of testing units) with training ones to estimate the RUL. The performance of our approach is evaluated in a case study on the remaining useful life estimation of turbofan engines and compared with other similarity-based approaches. - Highlights: • A data-driven RUL estimation technique based on pattern extraction is proposed. • Patterns are extracted for their correlation with the RUL. • The proposed method shows good performance compared to other techniques
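
    The online matching step relies on the distance between a shapelet and the best-matching subsequence of a test series. A minimal version of that matching score, on an invented run-to-failure signal (the paper's selection of shapelets by RUL correlation is not shown here):

```python
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Smallest Euclidean distance between the shapelet and any
    same-length subsequence of the series (the usual matching score)."""
    m = len(shapelet)
    return min(np.linalg.norm(series[i:i + m] - shapelet)
               for i in range(len(series) - m + 1))

# Invented run-to-failure signal and a pattern taken from near failure
t = np.linspace(0.0, 1.0, 100)
unit = 1.0 - t ** 2
rul_shapelet = unit[80:90].copy()   # pattern associated with a known RUL

d_same = shapelet_distance(unit, rul_shapelet)       # exact match
d_other = shapelet_distance(1.0 - t, rul_shapelet)   # different degradation
```

    A test unit whose matching distance to a rul-shapelet falls below a threshold would inherit that shapelet's associated RUL information.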

  17. Fading Kalman filter-based real-time state of charge estimation in LiFePO4 battery-powered electric vehicles

    International Nuclear Information System (INIS)

    Lim, KaiChin; Bastawrous, Hany Ayad; Duong, Van-Huan; See, Khay Wai; Zhang, Peng; Dou, Shi Xue

    2016-01-01

    Highlights: • Real-time battery model parameter and SoC estimation with a novel method is proposed. • Cascaded filtering stages are used for parameter identification and SoC estimation. • An optimized fading Kalman filter is implemented for SoC estimation. • Accurate SoC estimation is validated in a UDDS load profile experiment. • This approach is suitable for BMS in EV applications due to its simplicity. - Abstract: A novel online estimation technique for estimating the state of charge (SoC) of a lithium iron phosphate (LiFePO4) battery has been developed. Based on a simplified model, the open circuit voltage (OCV) of the battery is estimated through two cascaded linear filtering stages. A recursive least squares filter is employed in the first stage to dynamically estimate the battery model parameters in real time, and then a fading Kalman filter (FKF) is used to estimate the OCV from these parameters. The FKF can avoid the large estimation errors that may occur with a conventional Kalman filter, owing to its capability to compensate for any modeling error through a fading factor. By optimizing the value of the fading factor in the set of FKF recursion equations with genetic algorithms, the errors in estimating the battery's SoC in urban dynamometer driving schedule (UDDS)-based experiments and real vehicle driving cycle experiments were below 3%, compared to more than 9% in the case of an ordinary Kalman filter. The proposed method, with its simplified model, provides the simplicity and feasibility required for real-time application with highly accurate SoC estimation.
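
    The fading-factor mechanism can be illustrated with a scalar fading Kalman filter: inflating the predicted covariance by a factor λ > 1 keeps the gain from collapsing when the model is wrong. The drift, noise levels and λ below are invented, not the paper's genetic-algorithm-optimized values, and the battery model is reduced to a single tracked quantity.

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar toy: the filter models the OCV as constant while the truth drifts,
# so a fading factor lam > 1 keeps the gain bounded away from zero.
lam, q, r = 1.05, 1e-6, 0.01 ** 2   # fading factor, process/measurement noise
x_est, P = 3.3, 1.0
truth = 3.3
errs = []
for _ in range(300):
    truth += 0.0005                  # slow un-modelled drift
    z = truth + rng.normal(0.0, 0.01)
    P = lam * P + q                  # fading-memory prediction step
    K = P / (P + r)
    x_est = x_est + K * (z - x_est)
    P = (1.0 - K) * P
    errs.append(abs(x_est - truth))

final_err = float(np.mean(errs[-50:]))
```

    With λ = 1 the gain would shrink toward zero and the estimate would lag the drifting truth by an ever-growing amount; the fading memory trades a little noise sensitivity for bounded lag.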

  18. HOTELLING'S T2 CONTROL CHARTS BASED ON ROBUST ESTIMATORS

    Directory of Open Access Journals (Sweden)

    SERGIO YÁÑEZ

    2010-01-01

    Full Text Available In the presence of multivariate outliers in a Phase I analysis of a historical data set, the T² control chart based on the usual sample mean vector and sample variance-covariance matrix performs poorly. Several alternative estimators have been proposed. Among them, estimators based on the minimum volume ellipsoid (MVE) and the minimum covariance determinant (MCD) are powerful in detecting a reasonable number of outliers. In this paper we propose a T² control chart using the biweight S estimators for the location and dispersion parameters when monitoring multivariate individual observations. Simulation studies show that this method outperforms the T² control chart based on MVE estimators for a small number of observations.
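
    The T² statistic itself is the squared Mahalanobis distance of each observation from the estimated location, scaled by the estimated scatter. A sketch with the classical (non-robust) estimators and one planted outlier; a robust chart would plug MVE/MCD or biweight S estimates into the same formula.

```python
import numpy as np

def hotelling_t2(X: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> np.ndarray:
    """T2 statistic (squared Mahalanobis distance) for each row of X."""
    inv = np.linalg.inv(cov)
    diff = X - mean
    return np.einsum("ij,jk,ik->i", diff, inv, diff)

rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, size=(50, 3))
X[0] += 10.0                         # plant one gross multivariate outlier

# Classical estimators; a robust Phase I chart would instead plug in
# MVE/MCD or biweight S estimates of location and scatter here.
t2 = hotelling_t2(X, X.mean(axis=0), np.cov(X, rowvar=False))
flagged = int(np.argmax(t2))
```

    A single outlier is still flagged here; the robust estimators matter when several outliers jointly inflate the classical covariance enough to mask each other.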

  19. PEANO, a toolbox for real-time process signal validation and estimation

    International Nuclear Information System (INIS)

    Fantoni, Paolo F.; Figedy, Stefan; Racz, Attila

    1998-02-01

    PEANO (Process Evaluation and Analysis by Neural Operators), a toolbox for real-time process signal validation and condition monitoring, has been developed. This system analyses signals such as the readings of process monitoring sensors, computes their expected values, and raises an alert if real values deviate from the expected ones by more than the limits allow. The reliability level of the current analysis is also produced. The system is based on neuro-fuzzy techniques: artificial neural networks and fuzzy logic models are combined to exploit the learning and generalisation capability of the first technique together with the approximate reasoning embedded in the second approach. Real-time process signal validation is an application field where the use of this technique can improve the diagnosis of faulty sensors and the identification of outliers in a robust and reliable way. This study implements a fuzzy and possibilistic clustering algorithm to classify the operating region in which the validation process has to be performed. The possibilistic approach (rather than a probabilistic one) allows a "don't know" classification that results in fast detection of unforeseen plant conditions or outliers. Specialised artificial neural networks are used for the validation process, one for each fuzzy cluster into which the operating map has been divided. There are two main advantages in using this technique: the accuracy and generalisation capability are increased compared to the case of a single network working over the entire operating region, and the ability to identify abnormal conditions, where the system is not capable of operating with satisfactory accuracy, is improved. This model has been tested in a simulated environment on a French PWR, to monitor safety-related reactor variables over the entire power-flow operating map. (author)
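
    The clustering step used to partition the operating map can be illustrated with a plain fuzzy c-means sketch. The possibilistic variant used by PEANO additionally relaxes the constraint that memberships sum to one, which is what permits the "don't know" outcome; the data and parameters below are invented.

```python
import numpy as np

def fuzzy_c_means(X, m=2.0, iters=50):
    """Plain two-cluster fuzzy c-means: returns centers and a membership
    matrix U (one row per sample, rows summing to one)."""
    centers = X[[0, -1]].astype(float)          # crude deterministic init
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        # Distance of every sample to every center (small floor avoids 0/0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = (d ** -p) / np.sum(d ** -p, axis=1, keepdims=True)
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U

# Two invented, well-separated "operating regions"
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X)
```

    In a PEANO-like architecture, each resulting cluster would own its own specialised validation network, and a sample with low membership everywhere would be treated as an unforeseen condition.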

  20. PEANO, a toolbox for real-time process signal validation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Fantoni, Paolo F.; Figedy, Stefan; Racz, Attila

    1998-02-01

    PEANO (Process Evaluation and Analysis by Neural Operators), a toolbox for real-time process signal validation and condition monitoring, has been developed. This system analyses signals such as the readings of process monitoring sensors, computes their expected values, and raises an alert if real values deviate from the expected ones by more than the limits allow. The reliability level of the current analysis is also produced. The system is based on neuro-fuzzy techniques: artificial neural networks and fuzzy logic models are combined to exploit the learning and generalisation capability of the first technique together with the approximate reasoning embedded in the second approach. Real-time process signal validation is an application field where the use of this technique can improve the diagnosis of faulty sensors and the identification of outliers in a robust and reliable way. This study implements a fuzzy and possibilistic clustering algorithm to classify the operating region in which the validation process has to be performed. The possibilistic approach (rather than a probabilistic one) allows a "don't know" classification that results in fast detection of unforeseen plant conditions or outliers. Specialised artificial neural networks are used for the validation process, one for each fuzzy cluster into which the operating map has been divided. There are two main advantages in using this technique: the accuracy and generalisation capability are increased compared to the case of a single network working over the entire operating region, and the ability to identify abnormal conditions, where the system is not capable of operating with satisfactory accuracy, is improved. This model has been tested in a simulated environment on a French PWR, to monitor safety-related reactor variables over the entire power-flow operating map. (author)

  1. Bias in tensor based morphometry Stat-ROI measures may result in unrealistic power estimates.

    Science.gov (United States)

    Thompson, Wesley K; Holland, Dominic

    2011-07-01

    A series of reports have recently appeared that use tensor-based morphometry with statistically defined regions of interest (Stat-ROIs) to quantify longitudinal atrophy in structural MRIs from the Alzheimer's Disease Neuroimaging Initiative (ADNI). This commentary focuses on one of these reports, Hua et al. (2010), but the issues raised here are relevant to the others as well. Specifically, we point out a temporal pattern of atrophy in subjects with Alzheimer's disease and mild cognitive impairment whereby the majority of atrophy over two years occurs within the first 6 months, resulting in overall elevated estimated rates of change. Using publicly available ADNI data, this temporal pattern is also found in a group of identically processed healthy controls, strongly suggesting that methodological bias is corrupting the measures. The resulting bias seriously impacts the validity of conclusions reached using these measures; for example, sample size estimates reported by Hua et al. (2010) may be underestimated by a factor of five to sixteen. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Statistical Validation of a Web-Based GIS Application and Its Applicability to Cardiovascular-Related Studies.

    Science.gov (United States)

    Lee, Jae Eun; Sung, Jung Hye; Malouhi, Mohamad

    2015-12-22

    There is abundant evidence that neighborhood characteristics are significantly linked to the health of the inhabitants of a given space within a given time frame. This study statistically validates a web-based GIS application designed to support cardiovascular-related research, developed by the NIH-funded Research Centers in Minority Institutions (RCMI) Translational Research Network (RTRN) Data Coordinating Center (DCC), and discusses its applicability to cardiovascular studies. Geo-referencing, geocoding and geospatial analyses were conducted for 500 randomly selected home addresses in a U.S. southeastern metropolitan area. The correlation coefficient, factor analysis and Cronbach's alpha (α) were estimated to quantify measures of the internal consistency, reliability and construct/criterion/discriminant validity of the cardiovascular-related geospatial variables (walk score, number of hospitals, fast food restaurants, parks and sidewalks). Cronbach's α for the cardiovascular geospatial variables was 95.5%, implying successful internal consistency. Walk scores were significantly correlated with the number of hospitals (r = 0.715) and fast food restaurants (r = 0.729). The geospatial variables of the application were internally consistent and demonstrated satisfactory validity. Therefore, the GIS application may be useful for cardiovascular-related studies that aim to investigate the potential impact of geospatial factors on diseases and/or the long-term effect of clinical trials.

  3. Validity in work-based assessment: expanding our horizons

    NARCIS (Netherlands)

    Govaerts, M.; Vleuten, C.P.M. van der

    2013-01-01

    CONTEXT: Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing…

  4. Validation analysis of probabilistic models of dietary exposure to food additives.

    Science.gov (United States)

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group, and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent of brands or the per cent of eating occasions within a food group that contained the additive. Since each of the three model components allowed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of the full conceptual models. While the distributions of intake estimates from the models fell below conservative intakes, which assume that the additive is present at the maximum permitted level (MPL) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
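
    The probabilistic model structure, food intake × probability of presence × concentration, can be sketched as a small Monte Carlo simulation. The single food group, distribution parameters and presence probability below are invented, not values from the reference database.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000                            # simulated consumers

# Single invented food group: lognormal food intake (g/day), a 30% chance
# that the additive is present, lognormal concentration (mg/kg).
food_intake = rng.lognormal(mean=4.0, sigma=0.5, size=n)
present = rng.random(n) < 0.3
concentration = rng.lognormal(mean=2.0, sigma=0.4, size=n)

additive_intake = food_intake / 1000.0 * present * concentration   # mg/day
mean_intake = float(additive_intake.mean())
p95 = float(np.percentile(additive_intake, 95))   # upper-tail exposure
```

    A conservative point estimate would instead multiply every consumer's intake by the maximum permitted level with presence probability 1, which is why the probabilistic distribution sits below it.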

  5. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    Science.gov (United States)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasting, and fisheries management. A catch curve-based method for estimating time-based Z and its trend of change from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not assume a constant Z over the whole period; instead, the Z values in n continuous years are assumed constant, and the Z values in different sets of n continuous years are estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimates of the Z value and the trend of Z. The most appropriate value of n can differ with the effects of different factors; therefore, the appropriate value of n for a given fishery should be determined through a simulation analysis, as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if there is error in either of them, but the estimated rates of change of Z remain close to the true rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
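
    The catch-curve backbone of the method, Z as minus the slope of log(CPUE) against age over fully selected ages, can be sketched on one simulated cohort; the paper extends this to multiple cohorts and sliding n-year windows. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

ages = np.arange(2, 10)               # fully selected ages of one cohort
z_true = 0.6
# Age-based CPUE proportional to exp(-Z * age), with lognormal noise
cpue = 1000.0 * np.exp(-z_true * ages) * np.exp(rng.normal(0.0, 0.05, ages.size))

# Catch-curve estimate: Z is minus the slope of log(CPUE) on age
slope, intercept = np.polyfit(ages, np.log(cpue), 1)
z_hat = -slope
```

    Repeating this fit over each window of n consecutive years, pooling the cohorts observed within the window, yields the time-based Z series whose trend the study evaluates.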

  6. A new lithium-ion battery internal temperature on-line estimate method based on electrochemical impedance spectroscopy measurement

    Science.gov (United States)

    Zhu, J. G.; Sun, Z. C.; Wei, X. Z.; Dai, H. F.

    2015-01-01

    The power battery thermal management problem in EVs (electric vehicles) and HEVs (hybrid electric vehicles) has been widely discussed, and EIS (electrochemical impedance spectroscopy) is an effective experimental method for testing and estimating battery status. Firstly, an electrochemistry-based impedance matrix analysis for lithium-ion batteries is developed to describe the impedance response measured by electrochemical impedance spectroscopy. A method based on EIS measurement is then proposed to estimate the internal temperature of a power lithium-ion battery by analyzing the phase shift and magnitude of the impedance at different ambient temperatures. In the EIS experimental study, SoC (state of charge) and temperature affect the impedance characteristics of the battery differently in different frequency ranges, and the effect of SoH (state of health) on the impedance spectrum is discussed preliminarily. The excitation frequency used to estimate the internal temperature is therefore selected from the frequency range that is strongly influenced by temperature but insensitive to SoC and SoH. The intrinsic relationship between phase shift and temperature is established at the chosen excitation frequency, and the temperature dependence of the impedance magnitude is also studied. In practical applications, the internal temperature can thus be estimated from the measured phase shift and impedance magnitude. Verification experiments are then conducted to validate the estimation method. Finally, an estimation strategy and an on-line estimation system implementation scheme utilizing the battery management system are presented to demonstrate the engineering value.

  7. Search-free license plate localization based on saliency and local variance estimation

    Science.gov (United States)

    Safaei, Amin; Tang, H. L.; Sanei, S.

    2015-02-01

    In recent years, the performance and accuracy of automatic license plate number recognition (ALPR) systems have greatly improved; however, the increasing number of applications for such systems has made ALPR research more challenging than ever. The inherent computational complexity of search-dependent algorithms remains a major problem for current ALPR systems. This paper proposes a novel search-free method of localization based on the estimation of saliency and local variance. Gabor functions are then used to validate the choice of candidate license plate. The algorithm was applied to three image datasets with different levels of complexity and the results compared with a number of benchmark methods, particularly in terms of speed. The proposed method outperforms the state-of-the-art methods and can be used for real-time applications.

  8. Validation test case generation based on safety analysis ontology

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Wang, Wen-Shing

    2012-01-01

    Highlights: ► Current practice in validation test case generation for nuclear system is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.

  9. Estimation of hand hygiene opportunities on an adult medical ward using 24-hour camera surveillance: validation of the HOW2 Benchmark Study.

    Science.gov (United States)

    Diller, Thomas; Kelly, J William; Blackhurst, Dawn; Steed, Connie; Boeker, Sue; McElveen, Danielle C

    2014-06-01

    We previously published a formula to estimate the number of hand hygiene opportunities (HHOs) per patient-day using the World Health Organization's "Five Moments for Hand Hygiene" methodology (HOW2 Benchmark Study). HHOs can be used as a denominator for calculating hand hygiene compliance rates when product utilization data are available. This study validates the previously derived HHO estimate using 24-hour video surveillance of health care worker hand hygiene activity. The validation study utilized 24-hour video surveillance recordings of 26 patients' hospital stays to measure the actual number of HHOs per patient-day on a medicine ward in a large teaching hospital. Statistical methods were used to compare these results to those obtained by episodic observation of patient activity in the original derivation study. Total hours of data collection were 81.3 and 1,510.8, resulting in 1,740 and 4,522 HHOs in the derivation and validation studies, respectively. The mean and median HHOs per 24-hour period did not differ significantly between the two studies; mean HHOs were 71.6 (95% confidence interval: 64.9-78.3) and 73.9 (95% confidence interval: 69.1-84.1), respectively. This study validates the HOW2 Benchmark Study and confirms that expected numbers of HHOs can be estimated from the unit's patient census and patient-to-nurse ratio. These data can be used as denominators in calculations of hand hygiene compliance rates from electronic monitoring using the "Five Moments for Hand Hygiene" methodology. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  10. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how it can be used to estimate whether an approximation over- or under-fits the original model, to invalidate an approximation, and to rank possible approximations by quality. The application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. The Effectiveness of Using Limited Gauge Measurements for Bias Adjustment of Satellite-Based Precipitation Estimation over Saudi Arabia

    Science.gov (United States)

    Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2018-01-01

    Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at point scale. However, the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimates are an alternative source of precipitation measurements, but they are affected by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region with a sparse rain gauge network is investigated. The method consists of monthly empirical quantile mapping, climate classification, and inverse distance weighting. Daily PERSIANN-CCS data are used to test the capability of the method to remove the bias over Saudi Arabia during the period 2010 to 2016. The first six years (2010-2015) are used for calibration and 2016 for validation. The results show that in the validation year the yearly correlation coefficient was enhanced by 12%, the yearly mean bias was reduced by 93%, and the root mean square error was reduced by 73%. The correlation coefficient, mean bias, and root mean square error show that the proposed method removes the bias in PERSIANN-CCS effectively and can be applied to other regions where the rain gauge network is sparse.
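    The empirical quantile mapping step can be illustrated in a few lines: a satellite value is replaced by the gauge value at the same empirical quantile. This is a minimal sketch on synthetic data, not the full monthly, climate-classified, inverse-distance-weighted scheme of the study:

```python
import numpy as np

def quantile_map(sat_sample, gauge_sample, x):
    """Map value(s) x from the satellite distribution onto the gauge
    distribution by matching empirical quantiles."""
    qs = np.linspace(0.01, 0.99, 99)
    sat_q = np.quantile(sat_sample, qs)      # satellite quantile curve
    gauge_q = np.quantile(gauge_sample, qs)  # gauge quantile curve
    return np.interp(x, sat_q, gauge_q)

rng = np.random.default_rng(0)
gauge = rng.gamma(2.0, 5.0, 5000)   # stand-in "true" daily rain amounts
sat = 1.4 * gauge + 2.0             # satellite estimate with systematic bias
corrected = quantile_map(sat, gauge, sat)
print(abs(corrected.mean() - gauge.mean()) < 0.5)  # bias largely removed
```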

  13. Validating CDIAC's population-based approach to the disaggregation of within-country CO2 emissions

    International Nuclear Information System (INIS)

    Cushman, R.M.; Beauchamp, J.J.; Brenkert, A.L.

    1998-01-01

    The Carbon Dioxide Information Analysis Center (CDIAC) produces and distributes a database of CO2 emissions from fossil-fuel combustion and cement production, expressed as global, regional, and national estimates. CDIAC also produces a companion database expressed on a one-degree latitude-longitude grid. To do this gridding, emissions within each country are spatially disaggregated according to the distribution of population within that country. Previously, the lack of within-country emissions data prevented a validation of this approach, but emissions inventories are now becoming available for most US states. An analysis of these inventories confirms that population distribution explains most, but not all, of the variance in the distribution of CO2 emissions within the US. Additional sources of variance (coal production, non-carbon energy sources, and interstate electricity transfers) are explored, with the hope that the spatial disaggregation of emissions can be improved
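    The population-based disaggregation itself is a simple proportional allocation of a national total across grid cells; a sketch with invented numbers:

```python
import numpy as np

def disaggregate(national_total, cell_population):
    """Allocate a national emissions total to grid cells in proportion
    to each cell's share of the national population."""
    weights = cell_population / cell_population.sum()
    return national_total * weights

pop = np.array([1.2e6, 3.4e6, 0.4e6, 5.0e6])  # hypothetical cell populations
cells = disaggregate(100.0, pop)              # 100 units of CO2 emissions
print(round(cells.sum(), 6))  # → 100.0 (mass is conserved)
```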

  14. Sparse estimation of model-based diffuse thermal dust emission

    Science.gov (United States)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  15. Generalizability of GMAT[R] Validity to Programs outside the U.S.

    Science.gov (United States)

    Talento-Miller, Eileen

    2008-01-01

    This study explores the predictive validity of GMAT[R] scores for predicting performance in graduate management programs outside the United States. Results suggest that the validity estimates based on the combination of GMAT[R] scores were about a third of a standard deviation higher for non-U.S. programs compared with existing data on U.S.…

  16. Comparative studies of parameters based on the most probable versus an approximate linear extrapolation distance estimates for circular cylindrical absorbing rod

    International Nuclear Information System (INIS)

    Wassef, W.A.

    1982-01-01

    Estimates and techniques valid for calculating the linear extrapolation distance for an infinitely long circular cylindrical absorbing region are reviewed. Two estimates in particular are considered: the most probable value, and the value resulting from an approximate technique based on matching the integral transport equation inside the absorber with the diffusion approximation in the surrounding infinite scattering medium. The effective diffusion parameters and the blackness of the cylinder are then derived and compared. A computer code was set up to calculate and compare the different parameters; it is useful in reactor analysis and serves to establish estimates that are amenable to direct application in reactor design codes

  17. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs

    Science.gov (United States)

    Wright, A.; Krousel-Wood, M.; Thomas, E. J.; McCoy, J. A.; Sittig, D. F.

    2015-01-01

    Background: Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. Objective: We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. Methods: We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six-month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair, then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. Results: The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. Conclusions: We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate whether incorporating the knowledge into electronic health records improves patient outcomes. PMID:26171079
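    The link-frequency/link-ratio filtering can be sketched roughly as follows. The exact definitions and the clinician-derived threshold come from the cited work; here link frequency is taken as the pair co-occurrence count, link ratio as the share of a medication's entries co-occurring with the problem, and the cutoffs are purely illustrative:

```python
from collections import Counter

# Each record: a (problem, medication) pair entered during routine care.
records = [
    ("hypertension", "lisinopril"),
    ("hypertension", "lisinopril"),
    ("hypertension", "metformin"),
    ("diabetes", "metformin"),
    ("diabetes", "metformin"),
]

pair_counts = Counter(records)                    # "link frequency"
med_counts = Counter(med for _, med in records)

def link_ratio(problem, med):
    """Share of a medication's entries that co-occur with the problem."""
    return pair_counts[(problem, med)] / med_counts[med]

knowledge_base = {pair for pair in pair_counts
                  if pair_counts[pair] >= 2 and link_ratio(*pair) >= 0.5}
print(sorted(knowledge_base))
# → [('diabetes', 'metformin'), ('hypertension', 'lisinopril')]
```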

  18. A statistic to estimate the variance of the histogram-based mutual information estimator based on dependent pairs of observations

    NARCIS (Netherlands)

    Moddemeijer, R

    For two signals with independent pairs of observations (x(n), y(n)), a statistic to estimate the variance of the histogram-based mutual information estimator has been derived earlier. We present such a statistic for dependent pairs. To derive this statistic it is necessary to avail of a

  19. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  20. Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation

    NARCIS (Netherlands)

    Segers, J.; van den Akker, R.; Werker, B.J.M.

    2014-01-01

    We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of

  1. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    Directory of Open Access Journals (Sweden)

    Zhanshan Wang

    2014-01-01

    The control of a high-performance alternating current (AC) motor drive under sensorless operation requires accurate estimation of the rotor position. In this paper, a method of accurately estimating the rotor position of a surface permanent magnet synchronous motor (SPMSM) is proposed, combining position estimation based on the motor's complex-number model with a proportional-integral (PI) controller that suppresses the position estimation error. To guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme for identifying the permanent magnet flux of the SPMSM by extended Kalman filter (EKF) is also proposed; together these form an effective combined method for high-accuracy sensorless control of the SPMSM. Simulation results demonstrate the validity and feasibility of the proposed position/speed estimation system.

  2. A novel SURE-based criterion for parametric PSF estimation.

    Science.gov (United States)

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained underline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  3. Validity of a Semi-Quantitative Food Frequency Questionnaire for Collegiate Athletes

    Directory of Open Access Journals (Sweden)

    Ayaka Sunam

    2016-06-01

    Background: Food frequency questionnaires (FFQs) have been developed and validated for various populations. To our knowledge, however, no FFQ has been validated for young athletes. Here, we investigated whether an FFQ that was developed and validated to estimate dietary intake in middle-aged persons was also valid for estimating that in young athletes. Methods: We applied an FFQ that had been developed for the Japan Public Health Center-based Prospective Cohort Study, with modification to the duration of recollection. A total of 156 participants (92 males) completed the FFQ and a 3-day non-consecutive 24-hour dietary recall (24hDR). Validity of the mean estimates was evaluated by calculating the percentage differences between the 24hDR and FFQ. Ranking estimation was validated using Spearman's correlation coefficient (CC), and the degree of miscategorization was determined by joint classification. Results: The FFQ underestimated energy intake by approximately 10% for both males and females. For 35 nutrients, the median (range) deattenuated CC was 0.30 (0.10 to 0.57) for males and 0.32 (−0.08 to 0.62) for females. For 19 food groups, the median (range) deattenuated CC was 0.32 (0.17 to 0.72) for males and 0.34 (−0.11 to 0.58) for females. For both nutrient and food group intakes, cross-classification analysis indicated extreme miscategorization rates of 3% to 5%. Conclusions: An FFQ developed and validated for middle-aged persons had comparable validity among young athletes. This FFQ might be useful for assessing habitual dietary intake in collegiate athletes, especially for calcium, vitamin C, vegetables, fruits, and milk and dairy products.
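    The joint-classification (cross-classification) check reported above can be sketched as follows: both intake estimates are cut into quartiles, and the fraction of participants assigned to opposite extreme quartiles is the extreme miscategorization rate. The data and the 10% bound below are illustrative:

```python
import numpy as np

def extreme_miscategorization(x, y, n_cat=4):
    """Cross-classify two intake estimates into quartiles and return the
    share of subjects landing in opposite extreme categories."""
    qx = np.searchsorted(np.quantile(x, [0.25, 0.5, 0.75]), x)
    qy = np.searchsorted(np.quantile(y, [0.25, 0.5, 0.75]), y)
    return float(np.mean(np.abs(qx - qy) == n_cat - 1))

rng = np.random.default_rng(1)
ffq = rng.normal(size=500)                       # FFQ-style estimates
recall = 0.6 * ffq + 0.8 * rng.normal(size=500)  # moderately correlated 24hDR
print(extreme_miscategorization(ffq, recall) < 0.10)  # few opposite-quartile pairs
```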

  4. Comparisons of Modeling and State of Charge Estimation for Lithium-Ion Battery Based on Fractional Order and Integral Order Methods

    Directory of Open Access Journals (Sweden)

    Renxin Xiao

    2016-03-01

    In order to properly manage the lithium-ion batteries of electric vehicles (EVs), it is essential to build a battery model and estimate the state of charge (SOC). In this paper, fractional order forms of the Thevenin and partnership for a new generation of vehicles (PNGV) models are built, whose model parameters, including the fractional orders and the corresponding resistance and capacitance values, are simultaneously identified based on a genetic algorithm (GA). The relationships between the different model parameters and SOC are established and analyzed. The calculation precision of the fractional order model (FOM) and the integral order model (IOM) is validated and compared under hybrid test cycles. Finally, an extended Kalman filter (EKF) is employed to estimate the SOC based on the different models. The results prove that the FOMs can simulate the output voltage more accurately and that the fractional order EKF (FOEKF) can estimate the SOC more precisely under dynamic conditions.
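    For orientation, an EKF-based SOC estimator can be sketched with a deliberately simplified one-state, integer-order battery model (linear OCV, ohmic resistance only, no RC branch and no fractional dynamics; every parameter value below is invented):

```python
import numpy as np

# Toy model: soc[k+1] = soc[k] - i*dt/Q ;  v[k] = a*soc[k] + b - R*i[k]
a, b, R, Q, dt = 0.8, 3.2, 0.05, 3600.0, 1.0

def ekf_soc(currents, voltages, soc0=0.5, P=0.1, q=1e-7, r=1e-4):
    """Scalar EKF: coulomb-counting prediction + voltage-based correction."""
    soc = soc0
    for i, v in zip(currents, voltages):
        soc -= i * dt / Q                       # predict
        P += q
        H = a                                   # d(voltage)/d(soc)
        K = P * H / (H * P * H + r)             # Kalman gain
        soc += K * (v - (a * soc + b - R * i))  # correct with measured voltage
        P = (1 - K * H) * P
    return soc

# Simulate a constant-current discharge with noisy voltage readings
rng = np.random.default_rng(2)
true_soc, i_load, cs, vs = 0.9, 1.0, [], []
for _ in range(600):
    true_soc -= i_load * dt / Q
    cs.append(i_load)
    vs.append(a * true_soc + b - R * i_load + rng.normal(0.0, 0.005))

est = ekf_soc(cs, vs, soc0=0.5)    # deliberately wrong initial guess
print(abs(est - true_soc) < 0.02)  # converges despite the bad initialization
```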

  5. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    Science.gov (United States)

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to cover the areas without ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, with a specified mean surface and covariance function. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross-validation, CV). The results show that our model achieves a CV R² of 0.81, reflecting higher accuracy than GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
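    At its core, a Gaussian-process regression prediction is the posterior mean K(x*, X)[K(X, X) + σ²I]⁻¹y; a minimal 1-D numpy sketch (not the authors' Bayesian hierarchical sampler, with an invented smooth function standing in for the PM2.5 surface):

```python
import numpy as np

def rbf(a, b, ell=1.0, sf2=1.0):
    """Squared-exponential (RBF) covariance function for 1-D inputs."""
    return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_predict(x, y, xs, noise=1e-4):
    """GP posterior mean at test points xs, given noisy training data (x, y)."""
    K = rbf(x, x) + noise * np.eye(len(x))
    return rbf(xs, x) @ np.linalg.solve(K, y)

x = np.linspace(0.0, 3.0, 20)
y = np.sin(x)                       # toy stand-in for a spatial effect
pred = gp_predict(x, y, np.array([1.5]))[0]
print(abs(pred - np.sin(1.5)) < 0.01)  # smooth interpolation between points
```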

  6. Preclinical validation of automated dual-energy X-ray absorptiometry and computed tomography-based body composition measurements

    International Nuclear Information System (INIS)

    DEVRIESE, Joke; Pottel, Hans; BEELS, Laurence; VAN DE WIELE, Christophe; MAES, Alex; GHEYSENS, Olivier

    2016-01-01

    The aim of this study was to determine and validate a set of Hounsfield unit (HU) ranges to segment computed tomography (CT) images into tissue types, and to test the validity of dual-energy X-ray absorptiometry (DXA) tissue segmentation on pure, unmixed porcine tissues. This preclinical prospective study was approved by the local ethical committee. Different quantities of porcine bone tissue (BT), lean tissue (LT), and adipose tissue (AT) were scanned using DXA and CT. Tissue type segmentation in DXA was performed via the standard clinical protocol, and in CT through different sets of HU ranges. Percent coefficients of variation (%CV) were used to assess precision, while percentage differences of observed masses were tested against zero using the Wilcoxon signed-rank test. Total mass DXA measurements differ little but significantly (P=0.016) from true mass, while total mass CT measurements based on literature values show non-significant (P=0.69) differences of 1.7% and 2.0%. BT mass estimates with DXA differed more from true mass (median -78.2 to -75.8%) than those of other tissue types (median -11.3 to -8.1%). Tissue mass estimates with CT and literature HU ranges showed small differences from true mass for every tissue type (median -10.4 to 8.8%). The most suitable method for automated tissue segmentation is CT, which can become a valuable tool in quantitative nuclear medicine.
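    The CT side of the comparison reduces to thresholding voxels by HU range; a sketch with illustrative ranges (the study derives and validates its own set, which may differ from these commonly quoted values):

```python
import numpy as np

# Assumed, illustrative HU ranges: adipose [-190, -30), lean [-30, 150), bone >= 150
def segment(hu):
    """Label each voxel by tissue type from its Hounsfield unit value."""
    tissue = np.full(hu.shape, "background", dtype=object)
    tissue[(hu >= -190) & (hu < -30)] = "adipose"
    tissue[(hu >= -30) & (hu < 150)] = "lean"
    tissue[hu >= 150] = "bone"
    return tissue

hu = np.array([-1000, -100, 40, 300])  # air, fat, muscle, cortical bone
print(list(segment(hu)))  # → ['background', 'adipose', 'lean', 'bone']
```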

  7. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

    The work presented in this paper concerns the estimation of finite population total under model – based framework. Nonparametric regression approach as a method of estimating finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...

  8. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  9. Model validation and error estimation of tsunami runup using high resolution data in Sadeng Port, Gunungkidul, Yogyakarta

    Science.gov (United States)

    Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo

    2017-07-01

    A tsunami model using high-resolution geometric data is indispensable for tsunami mitigation efforts, especially in tsunami-prone areas; the quality of such data is one of the factors that affect the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami originating from the seismic gap. This paper discusses validation and error estimation of a tsunami model created using high-resolution geometric data of Sadeng Port. The model is validated against the wave heights of the 2006 Pangandaran tsunami recorded by the Sadeng tide gauge, and is intended to support numerical tsunami modeling with earthquake-tsunami parameters derived from the seismic gap. The validation using a Student's t-test shows that the modelled and observed tsunami heights at the Sadeng tide gauge are statistically equal at the 95% confidence level, with RMSE and NRMSE values of 0.428 m and 22.12%, while the difference in tsunami travel time is 12 minutes.
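    The reported error measures can be reproduced in a few lines. Note that several NRMSE normalizations are in use (by observed range, mean, or standard deviation) and the abstract does not state which one is adopted, so the range convention below, like the sample heights, is an assumption:

```python
import numpy as np

def rmse(model, obs):
    model, obs = np.asarray(model), np.asarray(obs)
    return float(np.sqrt(np.mean((model - obs) ** 2)))

def nrmse(model, obs):
    """RMSE normalized by the observed range (one common convention)."""
    obs = np.asarray(obs)
    return rmse(model, obs) / float(obs.max() - obs.min())

obs = [0.2, 0.8, 1.5, 2.1, 0.9]   # hypothetical observed wave heights (m)
mod = [0.3, 0.7, 1.9, 1.8, 1.0]   # modelled wave heights (m)
print(round(rmse(mod, obs), 3), round(100 * nrmse(mod, obs), 1))  # → 0.237 12.5
```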

  10. Novel Equations for Estimating Lean Body Mass in Peritoneal Dialysis Patients.

    Science.gov (United States)

    Dong, Jie; Li, Yan-Jun; Xu, Rong; Yang, Zhi-Kai; Zheng, Ying-Dong

    2015-12-01

    ♦ To develop and validate equations for estimating lean body mass (LBM) in peritoneal dialysis (PD) patients. ♦ Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and hand grip strength (HGS), i.e., LBM-M-H, and the other based on HGS, i.e., LBM-H, were developed and validated with LBM obtained by dual-energy X-ray absorptiometry (DEXA). The developed equations were compared to LBM estimated from creatinine kinetics (LBM-CK) and anthropometry (LBM-A) in terms of bias, precision, and accuracy. The prognostic values of LBM estimated from the equations in all-cause mortality risk were assessed. ♦ The developed equations incorporated gender, height, weight, and dialysis duration. Compared to LBM-DEXA, the bias of the developed equations was lower than that of LBM-CK and LBM-A. Additionally, LBM-M-H and LBM-H had better accuracy and precision. The prognostic values of LBM in all-cause mortality risk based on LBM-M-H, LBM-H, LBM-CK, and LBM-A were similar. ♦ Lean body mass estimated by the new equations based on MAMC and HGS was correlated with LBM obtained by DEXA and may serve as practical surrogate markers of LBM in PD patients. Copyright © 2015 International Society for Peritoneal Dialysis.

  11. Teletactile System Based on Mechanical Properties Estimation

    Directory of Open Access Journals (Sweden)

    Mauro M. Sette

    2011-01-01

    Full Text Available Tactile feedback is a major missing feature in minimally invasive procedures; it is an essential means of diagnosis and orientation during surgical procedures. Previous works have presented a remote palpation feedback system based on the coupling between a pressure sensor and a general haptic interface. Here a new approach is presented based on the direct estimation of the tissue mechanical properties and finally their presentation to the operator by means of a haptic interface. The approach presents different technical difficulties and some solutions are proposed: the implementation of a fast Young’s modulus estimation algorithm, the implementation of a real time finite element model, and finally the implementation of a stiffness estimation approach in order to guarantee the system’s stability. The work is concluded with an experimental evaluation of the whole system.

  12. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    Science.gov (United States)

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.
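The internal-consistency figures quoted above are coefficient (Cronbach's) alpha values; for a set of Likert items the statistic takes only a few lines. The ratings matrix below is invented for illustration, not data from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha; rows are respondents, columns are Likert items."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Invented ratings: 6 respondents x 4 Likert items
ratings = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
alpha = cronbach_alpha(ratings)
```

Values around .70 or higher, like the .70-.89 range reported above, are conventionally read as acceptable internal consistency.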

  13. An automated multi-model based evapotranspiration estimation framework for understanding crop-climate interactions in India

    Science.gov (United States)

    Bhattarai, N.; Jain, M.; Mallick, K.

    2017-12-01

    A remote sensing based multi-model evapotranspiration (ET) estimation framework is developed using MODIS and NASA MERRA-2 reanalysis data for data-poor regions, and we apply this framework to the Indian subcontinent. The framework eliminates the need for in-situ calibration data, estimates ET entirely from space, and is therefore replicable across all regions of the world. Currently, six surface energy balance models, ranging from the widely used SEBAL, METRIC, and SEBS to the moderately used S-SEBI and SSEBop and the relatively new STIC1.2, are being integrated and validated. Preliminary analysis suggests good predictability of the models for estimating near-real-time ET under clear-sky conditions for various crop types in India, with coefficients of determination of 0.32-0.55 and percent bias of -15% to 28% when compared against Bowen ratio based ET estimates. The results are particularly encouraging given that no direct ground input data were used in the analysis. The framework is currently being extended to estimate seasonal ET across the Indian subcontinent using a model-ensemble approach that draws on all available MODIS 8-day datasets since 2000. These ET products are being used to monitor inter-seasonal and inter-annual dynamics of ET and crop water use across different crop and irrigation practices in India. In particular, the potential impacts of changes in precipitation patterns and extreme heat (e.g., extreme degree days) on seasonal crop water consumption are being studied. Our ET products are able to locate the water stress hotspots that need to be targeted with water-saving interventions to maintain agricultural production in the face of climate variability and change.

  14. Novel Equations for Estimating Lean Body Mass in Patients With Chronic Kidney Disease.

    Science.gov (United States)

    Tian, Xue; Chen, Yuan; Yang, Zhi-Kai; Qu, Zhen; Dong, Jie

    2018-05-01

    Simplified methods to estimate lean body mass (LBM), an important nutritional measure representing muscle mass and somatic protein, are lacking in nondialyzed patients with chronic kidney disease (CKD). We developed and tested 2 equations for estimating LBM in daily clinical practice. The development and validation groups each included 150 nondialyzed patients with CKD Stages 3 to 5. Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and the other on handgrip strength (HGS), each also incorporating sex, height, and weight, were developed and validated in CKD patients with dual-energy x-ray absorptiometry as the gold-standard reference method. The new equations were found to exhibit only small biases when compared with dual-energy x-ray absorptiometry, with median differences of 0.94 and 0.46 kg observed for the HGS and MAMC equations, respectively. Good precision and accuracy were achieved for both equations, as reflected by small interquartile ranges in the differences and by the percentages of estimates that fell within 20% of measured LBM. The bias, precision, and accuracy of each equation were found to be similar when applied to groups of patients divided by the median measured LBM, by the median ratio of extracellular to total body water, and by the stages of CKD. LBM estimated from MAMC or HGS was found to provide accurate estimates of LBM in nondialyzed patients with CKD. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
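Bias, precision and accuracy against the DEXA reference, as used above, correspond to the median difference, the interquartile range of the differences, and the share of estimates falling within 20% of measured LBM. A sketch with invented numbers:

```python
import numpy as np

def agreement_stats(estimated, reference):
    """Bias (median difference), precision (IQR of differences) and
    accuracy (fraction of estimates within 20% of the reference)."""
    est, ref = np.asarray(estimated, float), np.asarray(reference, float)
    diff = est - ref
    bias = float(np.median(diff))
    q75, q25 = np.percentile(diff, [75, 25])
    precision = float(q75 - q25)
    accuracy = float(np.mean(np.abs(diff) <= 0.2 * ref))
    return bias, precision, accuracy

# Invented DEXA reference LBM (kg) and equation-based estimates
dexa      = [42.0, 55.3, 48.1, 60.2, 39.5, 51.0]
estimated = [43.1, 54.0, 49.0, 61.5, 40.2, 50.1]

bias, precision, p20 = agreement_stats(estimated, dexa)
```

Computing the same three statistics within subgroups (e.g. split at the median measured LBM) is how the abstract checks that the equations behave consistently across patient strata.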

  15. Manifold absolute pressure estimation using neural network with hybrid training algorithm.

    Directory of Open Access Journals (Sweden)

    Mohd Taufiq Muslim

    Full Text Available In a modern small gasoline engine fuel injection system, the load of the engine is estimated from the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a closer MAP estimation to the actual value.

  16. Manifold absolute pressure estimation using neural network with hybrid training algorithm.

    Science.gov (United States)

    Muslim, Mohd Taufiq; Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli

    2017-01-01

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated from the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a closer MAP estimation to the actual value.
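The Levenberg-Marquardt component of the hybrid training can be shown in isolation: each step solves (JᵀJ + λI)Δθ = Jᵀr, interpolating between Gauss-Newton (small λ) and gradient descent (large λ). The toy exponential model below merely stands in for the network; the paper's two-stage architecture and the BR/PSO components are not reproduced.

```python
import numpy as np

def lm_fit(f, jac, theta0, x, y, lam=1e-2, iters=50):
    """Minimal Levenberg-Marquardt loop for nonlinear least squares."""
    theta = np.asarray(theta0, float)
    for _ in range(iters):
        r = y - f(x, theta)                     # residual vector
        J = jac(x, theta)                       # Jacobian wrt parameters
        A = J.T @ J + lam * np.eye(len(theta))
        step = np.linalg.solve(A, J.T @ r)
        candidate = theta + step
        if np.sum((y - f(x, candidate)) ** 2) < np.sum(r ** 2):
            theta, lam = candidate, lam * 0.5   # accept; act like Gauss-Newton
        else:
            lam *= 2.0                          # reject; act like gradient descent
    return theta

# Toy model y = a*exp(b*x) standing in for the MAP estimator
f = lambda x, th: th[0] * np.exp(th[1] * x)
jac = lambda x, th: np.column_stack(
    [np.exp(th[1] * x), th[0] * x * np.exp(th[1] * x)])

x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(0.7 * x)                       # noise-free synthetic data
theta = lm_fit(f, jac, [1.0, 0.0], x, y)
```

The damping schedule (halve λ on an accepted step, double it on a rejected one) is the classic heuristic; the hybrid algorithms in the paper wrap this inner loop with BR and PSO.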

  17. Inter-regional metric disadvantages when comparing countries’ happiness on a global scale. A Rasch based consequential validity analysis

    Directory of Open Access Journals (Sweden)

    Diego Fernando Rojas-Gualdrón

    2017-07-01

    Full Text Available Measurement confounding due to socioeconomic differences between world regions may bias the estimations of countries’ happiness and global inequality. Potential implications of this bias have not been researched. In this study, the consequential validity of the Happy Planet Index, 2012 as an indicator of global inequality is evaluated from the Rasch measurement perspective. Differential Item Functioning by world region and bias in the estimated magnitude of inequalities were analyzed. The recalculated measure showed a good fit to Rasch model assumptions. The original index underestimated relative inequalities between world regions by 20%. DIF had no effect on relative measures but affected absolute measures by overestimating world average happiness and underestimating its variance. These findings suggest measurement confounding by unmeasured characteristics. Metric disadvantages must be adjusted to make fair comparisons. Public policy decisions based on biased estimations could have relevant negative consequences on people’s health and well-being by not focusing efforts on real vulnerable populations.

  18. An examination of healthy aging across a conceptual continuum: prevalence estimates, demographic patterns, and validity.

    Science.gov (United States)

    McLaughlin, Sara J; Jette, Alan M; Connell, Cathleen M

    2012-06-01

    Although the notion of healthy aging has gained wide acceptance in gerontology, measuring the phenomenon is challenging. Guided by a prominent conceptualization of healthy aging, we examined how shifting from a more to less stringent definition of healthy aging influences prevalence estimates, demographic patterns, and validity. Data are from adults aged 65 years and older who participated in the Health and Retirement Study. We examined four operational definitions of healthy aging. For each, we calculated prevalence estimates and examined the odds of healthy aging by age, education, gender, and race-ethnicity in 2006. We also examined the association between healthy aging and both self-rated health and death. Across definitions, the prevalence of healthy aging ranged from 3.3% to 35.5%. For all definitions, those classified as experiencing healthy aging had lower odds of fair or poor self-rated health and death over an 8-year period. The odds of being classified as "healthy" were lower among those of advanced age, those with less education, and women than for their corresponding counterparts across all definitions. Moving across the conceptual continuum--from a more to less rigid definition of healthy aging--markedly increases the measured prevalence of healthy aging. Importantly, results suggest that all examined definitions identified a subgroup of older adults who had substantially lower odds of reporting fair or poor health and dying over an 8-year period, providing evidence of the validity of our definitions. Conceptualizations that emphasize symptomatic disease and functional health may be particularly useful for public health purposes.

  19. Cooperative Learning: Improving University Instruction by Basing Practice on Validated Theory

    Science.gov (United States)

    Johnson, David W.; Johnson, Roger T.; Smith, Karl A.

    2014-01-01

    Cooperative learning is an example of how theory validated by research may be applied to instructional practice. The major theoretical base for cooperative learning is social interdependence theory. It provides clear definitions of cooperative, competitive, and individualistic learning. Hundreds of research studies have validated its basic…

  20. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point based interpolation to estimate the temperature at unallocated meteorological stations in Peninsular Malaysia using 2010 data collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature in the remaining months.
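Both interpolators can be sketched compactly. The sketch below implements IDW and an RBF interpolator with the multiquadric basis sqrt(r² + ε²); the thin-plate-spline variant replaces the basis with r²·log r and, in its usual form, also augments the system with a low-order polynomial term. Station coordinates and temperatures are invented.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighted interpolation at query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)               # avoid division by zero at a station
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def rbf_multiquadric(xy_known, values, xy_query, eps=1.0):
    """RBF interpolation with the multiquadric basis sqrt(r^2 + eps^2)."""
    phi = lambda r: np.sqrt(r ** 2 + eps ** 2)
    d_kk = np.linalg.norm(xy_known[:, None] - xy_known[None, :], axis=2)
    weights = np.linalg.solve(phi(d_kk), values)   # fit weights at the stations
    d_qk = np.linalg.norm(xy_query[:, None] - xy_known[None, :], axis=2)
    return phi(d_qk) @ weights

# Invented station coordinates and monthly mean temperatures (deg C)
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temps = np.array([26.5, 27.1, 25.9, 26.8])
centre = np.array([[0.5, 0.5]])

t_idw = idw(stations, temps, centre)[0]
t_rbf = rbf_multiquadric(stations, temps, centre)[0]
```

By construction the RBF surface passes exactly through the station values, while IDW at the symmetric centre point reduces to the plain average of the four stations.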

  1. Ontology-based validation and identification of regulatory phenotypes

    KAUST Repository

    Kulmanov, Maxat

    2018-01-31

    Motivation: Function annotations of gene products, and phenotype annotations of genotypes, provide valuable information about molecular mechanisms that can be utilized by computational methods to identify functional and phenotypic relatedness, improve our understanding of disease and pathobiology, and lead to discovery of drug targets. Identifying functions and phenotypes commonly requires experiments which are time-consuming and expensive to carry out; creating the annotations additionally requires a curator to make an assertion based on reported evidence. Support to validate the mutual consistency of functional and phenotype annotations, as well as a computational method to predict phenotypes from function annotations, would greatly improve the utility of function annotations. Results: We developed a novel ontology-based method to validate the mutual consistency of function and phenotype annotations. We apply our method to mouse and human annotations, and identify several inconsistencies that can be resolved to improve overall annotation quality. Our method can also be applied to the rule-based prediction of phenotypes from functions. We show that the predicted phenotypes can be utilized for identification of protein-protein interactions and gene-disease associations. Based on experimental functional annotations, we predict phenotypes for 1,986 genes in mouse and 7,301 genes in human for which no experimental phenotypes have yet been determined.

  2. Ontology-based validation and identification of regulatory phenotypes

    KAUST Repository

    Kulmanov, Maxat; Schofield, Paul N; Gkoutos, Georgios V; Hoehndorf, Robert

    2018-01-01

    Motivation: Function annotations of gene products, and phenotype annotations of genotypes, provide valuable information about molecular mechanisms that can be utilized by computational methods to identify functional and phenotypic relatedness, improve our understanding of disease and pathobiology, and lead to discovery of drug targets. Identifying functions and phenotypes commonly requires experiments which are time-consuming and expensive to carry out; creating the annotations additionally requires a curator to make an assertion based on reported evidence. Support to validate the mutual consistency of functional and phenotype annotations, as well as a computational method to predict phenotypes from function annotations, would greatly improve the utility of function annotations. Results: We developed a novel ontology-based method to validate the mutual consistency of function and phenotype annotations. We apply our method to mouse and human annotations, and identify several inconsistencies that can be resolved to improve overall annotation quality. Our method can also be applied to the rule-based prediction of phenotypes from functions. We show that the predicted phenotypes can be utilized for identification of protein-protein interactions and gene-disease associations. Based on experimental functional annotations, we predict phenotypes for 1,986 genes in mouse and 7,301 genes in human for which no experimental phenotypes have yet been determined.

  3. Design of Artificial Neural Network-Based pH Estimator

    Directory of Open Access Journals (Sweden)

    Shebel A. Alsabbah

    2010-10-01

    Full Text Available Considering the cost, size, and drawbacks of a real hardware instrument for measuring pH values, such as the complications of wiring, installing, calibrating, and troubleshooting the system, a cheaper and accurate alternative way to perform the measurement is desirable. Hereby, a feedforward artificial neural network-based pH estimator is proposed. The proposed estimator has been designed with multi-layer perceptrons, with one input, the measured base stream, and two outputs representing the pH values for titration of a strong base with a strong and a weak acid, respectively. The training data base has been created with temperature variation taken into account. The final numerical results confirm the effectiveness and robustness of the designed neural network-based pH estimator.

  4. Validating automated kidney stone volumetry in computed tomography and mathematical correlation with estimated stone volume based on diameter.

    Science.gov (United States)

    Wilhelm, Konrad; Miernik, Arkadiusz; Hein, Simon; Schlager, Daniel; Adams, Fabian; Benndorf, Matthias; Fritz, Benjamin; Langer, Mathias; Hesse, Albrecht; Schoenthaler, Martin; Neubauer, Jakob

    2018-06-02

    To validate the AutoMated UroLithiasis Evaluation Tool (AMULET) software for kidney stone volumetry and compare its performance to standard clinical practice. The maximum diameter and volume of 96 urinary stones were measured as the reference standard by three independent urologists. The same stones were positioned in an anthropomorphic phantom and CT scans were acquired with standard settings. Three independent radiologists blinded to the reference values took manual measurements of the maximum diameter and automatic measurements of maximum diameter and volume. An "expected volume" was calculated from the manual diameter measurements using the sphere formula V = (4/3)πr³. Of the 100 stones we had initially aimed to assess, 9 were replaced during data acquisition because of crumbling and 4 had to be excluded because the automated measurement did not work, leaving 96 stones in the analysis. Mean reference maximum diameter was 13.3 mm (5.2-32.1 mm). Correlation coefficients among all measured outcomes were compared: the correlations of the manual and automatic diameter measurements with the reference were 0.98 and 0.91, respectively. Automated stone volumetry is possible and significantly more accurate than diameter-based volumetric calculations. To avoid bias in clinical trials, size should be measured as volume. However, automated diameter measurements are not as accurate as manual measurements.
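The "expected volume" above treats the stone as a sphere whose diameter is the manually measured maximum diameter, V = (4/3)πr³:

```python
import math

def expected_volume_mm3(max_diameter_mm):
    """Sphere-approximation stone volume from a maximum diameter."""
    r = max_diameter_mm / 2.0
    return 4.0 / 3.0 * math.pi * r ** 3

# Mean reference maximum diameter reported in the study: 13.3 mm
v = expected_volume_mm3(13.3)
```

Irregular stones deviate from this sphere assumption, which is why the diameter-derived estimate is less accurate than direct voxel-based volumetry.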

  5. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    Science.gov (United States)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. Many research articles have reported that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Following this idea, an identification procedure is framed as an optimization problem in which the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, in which a coupled system must be solved in each iteration, we generate these incompatible fields via two linear solves. A simple yet effective penalty based approach is followed to incorporate measured data. The penalization parameter not only helps to incorporate corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  6. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.

  7. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  8. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    1997-01-01

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...
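The idea the two records above describe, controlled (Random Decrement) averaging of triggered time segments of the load and response, followed by a Fourier-transform ratio, can be sketched as follows. The trivial constant-gain "system" is an assumption chosen only so the expected FRF is known in advance; it is not the 3-span bridge data.

```python
import numpy as np

def random_decrement(signal, trigger, level, seg_len):
    """Average segments of `signal` starting at up-crossings of `trigger`."""
    idx = np.where((trigger[:-1] < level) & (trigger[1:] >= level))[0]
    idx = idx[idx + seg_len < len(signal)]
    return np.stack([signal[i:i + seg_len] for i in idx]).mean(axis=0)

def rd_frf(load, response, level, seg_len):
    """FRF estimate: FFT of the response RD function over that of the load."""
    rd_load = random_decrement(load, load, level, seg_len)
    rd_resp = random_decrement(response, load, level, seg_len)
    return np.fft.rfft(rd_resp) / np.fft.rfft(rd_load)

rng = np.random.default_rng(0)
load = rng.standard_normal(20000)   # broadband load process
response = 2.0 * load               # trivial constant-gain system
H = rd_frf(load, response, level=1.0, seg_len=64)
```

Because both RD functions come from the same controlled averaging of time segments before any Fourier transformation, leakage is reduced relative to direct segment-wise FFTs, which is the advantage the records emphasize; for the constant-gain system the estimate is flat at 2 across all rfft bins.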

  9. Parametric estimation of covariance function in Gaussian-process based Kriging models. Application to uncertainty quantification for computer experiments

    International Nuclear Information System (INIS)

    Bachoc, F.

    2013-01-01

    The parametric estimation of the covariance function of a Gaussian process is studied in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process belongs to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied, and it is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error considerably improves its predictions. Finally, for a metamodeling problem of the GERMINAL thermal-mechanical code, the advantage of the Kriging model with Gaussian processes over neural network methods is shown. (author) [fr]

  10. Development and validation of the mindfulness-based interventions - teaching assessment criteria (MBI:TAC).

    Science.gov (United States)

    Crane, Rebecca S; Eames, Catrin; Kuyken, Willem; Hastings, Richard P; Williams, J Mark G; Bartley, Trish; Evans, Alison; Silverton, Sara; Soulsby, Judith G; Surawy, Christina

    2013-12-01

    The assessment of intervention integrity is essential in psychotherapeutic intervention outcome research and psychotherapist training. There has been little attention given to it in mindfulness-based interventions research, training programs, and practice. To address this, the Mindfulness-Based Interventions: Teaching Assessment Criteria (MBI:TAC) was developed. This article describes the MBI:TAC and its development and presents initial data on reliability and validity. Sixteen assessors from three centers evaluated teaching integrity of 43 teachers using the MBI:TAC. Internal consistency (α = .94) and interrater reliability (overall intraclass correlation coefficient = .81; range = .60-.81) were high. Face and content validity were established through the MBI:TAC development process. Data on construct validity were acceptable. Initial data indicate that the MBI:TAC is a reliable and valid tool. It can be used in Mindfulness-Based Stress Reduction/Mindfulness-Based Cognitive Therapy outcome evaluation research, training and pragmatic practice settings, and in research to assess the impact of teaching integrity on participant outcome.

  11. Estimating chronic hepatitis C prognosis using transient elastography-based liver stiffness: A systematic review and meta-analysis.

    Science.gov (United States)

    Erman, A; Sathya, A; Nam, A; Bielecki, J M; Feld, J J; Thein, H-H; Wong, W W L; Grootendorst, P; Krahn, M D

    2018-05-01

    Chronic hepatitis C (CHC) is a leading cause of hepatic fibrosis and cirrhosis. The level of fibrosis is traditionally established by histology, and prognosis is estimated using fibrosis progression rates (FPRs; annual probability of progressing across histological stages). However, newer noninvasive alternatives are quickly replacing biopsy. One alternative, transient elastography (TE), quantifies fibrosis by measuring liver stiffness (LSM). Given these developments, the purpose of this study was (i) to estimate prognosis in treatment-naïve CHC patients using TE-based liver stiffness progression rates (LSPR) as an alternative to FPRs and (ii) to compare consistency between LSPRs and FPRs. A systematic literature search was performed using multiple databases (January 1990 to February 2016). LSPRs were calculated using either a direct method (given the difference in serial LSMs and time elapsed) or an indirect method given a single LSM and the estimated duration of infection and pooled using random-effects meta-analyses. For validation purposes, FPRs were also estimated. Heterogeneity was explored by random-effects meta-regression. Twenty-seven studies reporting on 39 groups of patients (N = 5874) were identified with 35 groups allowing for indirect and 8 for direct estimation of LSPR. The majority (~58%) of patients were HIV/HCV-coinfected. The estimated time-to-cirrhosis based on TE vs biopsy was 39 and 38 years, respectively. In univariate meta-regressions, male sex and HIV were positively and age at assessment, negatively associated with LSPRs. Noninvasive prognosis of HCV is consistent with FPRs in predicting time-to-cirrhosis, but more longitudinal studies of liver stiffness are needed to obtain refined estimates. © 2017 John Wiley & Sons Ltd.
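The indirect LSPR computation and a time-to-cirrhosis projection reduce to simple arithmetic. The numbers below, including the 12.5 kPa cirrhosis threshold commonly used with transient elastography, are illustrative assumptions, not estimates from the meta-analysis.

```python
def lspr_indirect(lsm_kpa, baseline_kpa, years_infected):
    """Indirect liver-stiffness progression rate from a single LSM
    and an estimated infection duration."""
    return (lsm_kpa - baseline_kpa) / years_infected

def years_to_cirrhosis(lsm_now_kpa, rate_kpa_per_year, threshold_kpa=12.5):
    """Project years until stiffness reaches a cirrhosis threshold."""
    return max(0.0, (threshold_kpa - lsm_now_kpa) / rate_kpa_per_year)

# Hypothetical patient: LSM 7.1 kPa, assumed healthy baseline 5.0 kPa,
# estimated infection duration 12 years
rate = lspr_indirect(7.1, 5.0, 12.0)
t_remaining = years_to_cirrhosis(7.1, rate)
```

The direct method in the abstract replaces the single-LSM assumption with the difference between two serial LSMs divided by the time elapsed between them.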

  12. Estimating salinity stress in sugarcane fields with spaceborne hyperspectral vegetation indices

    Science.gov (United States)

    Hamzeh, S.; Naseri, A. A.; AlaviPanah, S. K.; Mojaradi, B.; Bartholomeus, H. M.; Clevers, J. G. P. W.; Behzad, M.

    2013-04-01

    The presence of salt in the soil profile negatively affects the growth and development of vegetation. As a result, the spectral reflectance of vegetation canopies varies for different salinity levels. This research was conducted to (1) investigate the capability of satellite-based hyperspectral vegetation indices (VIs) for estimating soil salinity in agricultural fields, (2) evaluate the performance of 21 existing VIs and (3) develop new VIs based on a combination of wavelengths sensitive for multiple stresses and find the best one for estimating soil salinity. For this purpose a Hyperion image of September 2, 2010, and data on soil salinity at 108 locations in sugarcane (Saccharum officinarum L.) fields were used. Results show that soil salinity could well be estimated by some of these VIs. Indices related to chlorophyll absorption bands or based on a combination of chlorophyll and water absorption bands had the highest correlation with soil salinity. In contrast, indices that are only based on water absorption bands had low to medium correlations, while indices that use only visible bands did not perform well. From the investigated indices the optimized soil-adjusted vegetation index (OSAVI) had the strongest relationship (R2 = 0.69) with soil salinity for the training data, but it did not perform well in the validation phase. The validation procedure showed that the new salinity and water stress indices (SWSI) implemented in this study (SWSI-1, SWSI-2, SWSI-3) and the Vogelmann red edge index yielded the best results for estimating soil salinity for independent fields with root mean square errors of 1.14, 1.15, 1.17 and 1.15 dS/m, respectively. Our results show that soil salinity could be estimated by satellite-based hyperspectral VIs, but validation of obtained models for independent data is essential for selecting the best model.
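The index that performed best in the training phase, OSAVI, is commonly written as OSAVI = (1 + 0.16)(NIR - RED)/(NIR + RED + 0.16); a quick sketch with invented Hyperion-like reflectances:

```python
import numpy as np

def osavi(nir, red):
    """Optimized Soil-Adjusted Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (1.0 + 0.16) * (nir - red) / (nir + red + 0.16)

# Invented surface reflectances for two sugarcane pixels
nir = np.array([0.45, 0.30])   # near-infrared band
red = np.array([0.08, 0.12])   # red (chlorophyll absorption) band
vi = osavi(nir, red)
```

The fixed 0.16 term damps soil-background effects without requiring a per-scene soil line; the denser, less stressed canopy (first pixel) yields the larger index value.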

  13. Estimation and Validation of Land Surface Temperatures from Chinese Second-Generation Polar-Orbit FY-3A VIRR Data

    Directory of Open Access Journals (Sweden)

    Bo-Hui Tang

    2015-03-01

    This work estimated and validated the land surface temperature (LST) from thermal-infrared Channels 4 (10.8 µm) and 5 (12.0 µm) of the Visible and Infrared Radiometer (VIRR) onboard the second-generation Chinese polar-orbiting FengYun-3A (FY-3A) meteorological satellite. The LST, mean emissivity and atmospheric water vapor content (WVC) were divided into several tractable sub-ranges with little overlap to improve the fitting accuracy. The experimental results showed that the root mean square errors (RMSEs) were proportional to the viewing zenith angles (VZAs) and WVC. The RMSEs were below 1.0 K for VZA sub-ranges less than 30° or for VZA sub-ranges less than 60° and WVC less than 3.5 g/cm2, provided that the land surface emissivities were known. A preliminary validation using independently simulated data showed that the estimated LSTs were quite consistent with the actual inputs, with a maximum RMSE below 1 K for all VZAs. An inter-comparison using the Moderate Resolution Imaging Spectroradiometer (MODIS)-derived LST product MOD11_L2 showed that the minimum RMSE was 1.68 K for grass, and the maximum RMSE was 3.59 K for barren or sparsely vegetated surfaces. In situ measurements at the Hailar field site in northeastern China from October 2013 to September 2014 were used to validate the proposed method. The result showed that the RMSE between the LSTs calculated from the ground measurements and derived from the VIRR data was 1.82 K.
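    Split-window LST algorithms of this family typically express LST as a linear combination of the two thermal brightness temperatures. A hedged sketch of the generic form only, with hypothetical coefficients; the paper fits separate coefficients for each VZA/WVC/emissivity sub-range:

    ```python
    def split_window_lst(t4, t5, a0, a1, a2):
        """Generic split-window form: LST = a0 + a1*T4 + a2*(T4 - T5).

        t4, t5: brightness temperatures (K) of the 10.8 and 12.0 um
        channels; a0..a2 are coefficients fitted per sub-range of viewing
        zenith angle, water vapor content and surface emissivity.
        """
        return a0 + a1 * t4 + a2 * (t4 - t5)

    # Hypothetical coefficients, for illustration only.
    lst = split_window_lst(295.0, 293.5, 1.2, 1.0, 2.1)
    ```

    The T4 - T5 difference carries the atmospheric water-vapor correction, which is why the RMSE grows with WVC and VZA.
    
    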

  14. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    Science.gov (United States)

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows summarising the information obtained by independent validation statistics into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements. Conclusions were drawn about the analytical results. The fuzzy-logic based rules were shown to be applicable to improve interpretation of results and facilitate overall evaluation of the multiplex method.
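    The aggregation idea can be sketched as mapping each validation statistic to a [0, 1] membership value and combining the memberships into one synthetic indicator. This is an illustrative simplification with hypothetical thresholds and weights, not the paper's actual expert rules:

    ```python
    def membership(value, good, bad):
        # Linear membership: 1 when the statistic is at or better than
        # `good`, 0 at or worse than `bad`, linear in between
        # (assumes lower values are better, e.g. an error rate).
        if value <= good:
            return 1.0
        if value >= bad:
            return 0.0
        return (bad - value) / (bad - good)

    def overall_indicator(stats, thresholds, weights):
        # Weighted mean of the per-statistic memberships: one synthetic
        # indicator of overall method performance in [0, 1].
        scores = [membership(v, g, b) for v, (g, b) in zip(stats, thresholds)]
        return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

    # E.g. a false-positive rate of 0.05 and a false-negative rate of 0.20,
    # each judged against hypothetical (good, bad) thresholds.
    score = overall_indicator([0.05, 0.20], [(0.01, 0.10), (0.05, 0.25)], [1.0, 1.0])
    ```

    Real fuzzy aggregation schemes typically use richer membership shapes and rule bases, but the collapse of many statistics into one bounded score is the core idea.
    
    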

  15. Using groundwater levels to estimate recharge

    Science.gov (United States)

    Healy, R.W.; Cook, P.G.

    2002-01-01

    Accurate estimation of groundwater recharge is extremely important for proper management of groundwater systems. Many different approaches exist for estimating recharge. This paper presents a review of methods that are based on groundwater-level data. The water-table fluctuation method may be the most widely used technique for estimating recharge; it requires knowledge of specific yield and changes in water levels over time. Advantages of this approach include its simplicity and an insensitivity to the mechanism by which water moves through the unsaturated zone. Uncertainty in estimates generated by this method relates to the limited accuracy with which specific yield can be determined and to the extent to which assumptions inherent in the method are valid. Other methods that use water levels (mostly based on the Darcy equation) are also described. The theory underlying the methods is explained. Examples from the literature are used to illustrate applications of the different methods.
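    The water-table fluctuation method estimates recharge as specific yield times the water-level rise attributable to recharge. A minimal sketch (variable names are illustrative):

    ```python
    def recharge_wtf(specific_yield, head_rises_m):
        """Water-table fluctuation method: R = Sy * sum(dH).

        `head_rises_m` lists the water-level rise (m) attributed to each
        recharge event, measured as the peak above the extrapolated
        antecedent recession curve.
        """
        return specific_yield * sum(head_rises_m)

    # Sy = 0.15 and three events with rises of 0.40, 0.25 and 0.10 m give
    # roughly 0.11 m of recharge over the period.
    total = recharge_wtf(0.15, [0.40, 0.25, 0.10])
    ```

    As the abstract notes, the dominant uncertainty is in Sy itself, which multiplies directly into the recharge estimate.
    
    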

  16. Global Validation of MODIS Atmospheric Profile-Derived Near-Surface Air Temperature and Dew Point Estimates

    Science.gov (United States)

    Famiglietti, C.; Fisher, J.; Halverson, G. H.

    2017-12-01

    This study validates a method of remote sensing near-surface meteorology that vertically interpolates MODIS atmospheric profiles to surface pressure level. The extraction of air temperature and dew point observations at a two-meter reference height from 2001 to 2014 yields global moderate- to fine-resolution near-surface temperature distributions that are compared to geographically and temporally corresponding measurements from 114 ground meteorological stations distributed worldwide. This analysis is the first robust, large-scale validation of the MODIS-derived near-surface air temperature and dew point estimates, both of which serve as key inputs in models of energy, water, and carbon exchange between the land surface and the atmosphere. Results show strong linear correlations between remotely sensed and in-situ near-surface air temperature measurements (R2 = 0.89), as well as between dew point observations (R2 = 0.77). Performance is relatively uniform across climate zones. The extension of mean climate-wise percent errors to the entire remote sensing dataset allows for the determination of MODIS air temperature and dew point uncertainties on a global scale.
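    The headline statistics here are coefficients of determination between the remotely sensed and station series; for a simple linear fit, R2 reduces to the squared Pearson correlation, sketched in plain Python:

    ```python
    def r_squared(x, y):
        # Squared Pearson correlation between paired series: the R2 of the
        # least-squares line of y on x.
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy * sxy / (sxx * syy)

    # A perfectly linear relationship yields R2 = 1.0; the study reports
    # R2 = 0.89 (air temperature) and 0.77 (dew point) against stations.
    ```

    
    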

  17. Age estimation in the living

    DEFF Research Database (Denmark)

    Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels

    2015-01-01

    A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimations. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores...... are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate including a 95% prediction interval. The aim of this study was to evaluate the performance of TA...... in the living on a full set of third molar scores. A cross-sectional sample of 854 panoramic radiographs, homogeneously distributed by sex and age (15.0-24.0 years), was randomly split in two; a reference sample for obtaining age estimates including a 95% prediction interval according to TA; and a validation...

  18. Clinical Validity of the ADI-R in a US-Based Latino Population

    Science.gov (United States)

    Vanegas, Sandra B.; Magaña, Sandra; Morales, Miguel; McNamara, Ellyn

    2016-01-01

    The Autism Diagnostic Interview-Revised (ADI-R) has been validated as a tool to aid in the diagnosis of Autism; however, given the growing diversity in the United States, the ADI-R must be validated for different languages and cultures. This study evaluates the validity of the ADI-R in a US-based Latino, Spanish-speaking population of 50 children…

  19. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    Science.gov (United States)

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  20. Comparison of regression coefficient and GIS-based methodologies for regional estimates of forest soil carbon stocks

    International Nuclear Information System (INIS)

    Elliott Campbell, J.; Moen, Jeremie C.; Ney, Richard A.; Schnoor, Jerald L.

    2008-01-01

    Estimates of forest soil organic carbon (SOC) have applications in carbon science, soil quality studies, carbon sequestration technologies, and carbon trading. Forest SOC has been modeled using a regression coefficient methodology that applies mean SOC densities (mass/area) to broad forest regions. A higher resolution model is based on an approach that employs a geographic information system (GIS) with soil databases and satellite-derived landcover images. Despite this advancement, the regression approach remains the basis of current state and federal level greenhouse gas inventories. Both approaches are analyzed in detail for Wisconsin forest soils from 1983 to 2001, applying rigorous error-fixing algorithms to soil databases. Resulting SOC stock estimates are 20% larger when determined using the GIS method rather than the regression approach. Average annual rates of increase in SOC stocks are 3.6 and 1.0 million metric tons of carbon per year for the GIS and regression approaches, respectively. - Large differences in estimates of soil organic carbon stocks and annual changes in stocks for Wisconsin forestlands indicate a need for validation from forthcoming forest surveys.

  1. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, the ultrasound videos scanned by untrained persons often do not contain the information that a physician requires. Compared with standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content, guiding the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow computationally demanding medical image processing to be performed on the phone itself. In this paper we have developed an application (app) for a smartphone which can automatically detect the valid frames (with clear organ visibility) in an ultrasound video, ignore the invalid frames (with no organ visibility), and produce a compressed video. This is done by extracting GIST features from the Region of Interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application classified valid and invalid images with an accuracy of 94.93%.

  2. Validity, responsiveness, and minimal clinically important difference of EQ-5D-5L in stroke patients undergoing rehabilitation.

    Science.gov (United States)

    Chen, Poyu; Lin, Keh-Chung; Liing, Rong-Jiuan; Wu, Ching-Yi; Chen, Chia-Ling; Chang, Ku-Chou

    2016-06-01

    To examine the criterion validity, responsiveness, and minimal clinically important difference (MCID) of the EuroQoL 5-Dimensions Questionnaire (EQ-5D-5L) and visual analog scale (EQ-VAS) in people receiving rehabilitation after stroke. The EQ-5D-5L, along with four criterion measures (the Medical Research Council scales for muscle strength, the Fugl-Meyer assessment, the functional independence measure, and the Stroke Impact Scale), was administered to 65 patients with stroke before and after 3 to 4 weeks of therapy. Criterion validity was estimated using the Spearman correlation coefficient. Responsiveness was analyzed by the effect size, standardized response mean (SRM), and criterion responsiveness. The MCID was determined by anchor-based and distribution-based approaches. The percentage of patients exceeding the MCID was also reported. Concurrent validity of the EQ-Index was better compared with the EQ-VAS. The EQ-Index had better power for predicting the rehabilitation outcome in activities of daily living than other motor-related outcome measures. The EQ-Index was moderately responsive to change (SRM = 0.63), whereas the EQ-VAS was only mildly responsive to change. The MCID estimates of the EQ-Index (and the percentage of patients exceeding the MCID) were 0.10 (33.8%) and 0.10 (33.8%) based on the anchor-based and distribution-based approaches, respectively, and the estimates for the EQ-VAS were 8.61 (41.5%) and 10.82 (32.3%). The EQ-Index showed reasonable concurrent validity, limited predictive validity, and acceptable responsiveness for detecting health-related quality of life in stroke patients undergoing rehabilitation, whereas the EQ-VAS did not. Future research considering different recovery stages after stroke is warranted to validate these estimates.
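    Of the responsiveness statistics used above, the standardized response mean is simply the mean change score divided by the SD of the change scores. A minimal sketch; the half-SD rule shown alongside is one common distribution-based MCID proxy, not necessarily the exact one used in the study:

    ```python
    def standardized_response_mean(changes):
        # SRM = mean(change) / SD(change), using the sample SD (n - 1).
        n = len(changes)
        mean = sum(changes) / n
        sd = (sum((c - mean) ** 2 for c in changes) / (n - 1)) ** 0.5
        return mean / sd

    def mcid_half_sd(baseline_scores):
        # A common distribution-based MCID proxy: half the baseline SD.
        n = len(baseline_scores)
        mean = sum(baseline_scores) / n
        sd = (sum((s - mean) ** 2 for s in baseline_scores) / (n - 1)) ** 0.5
        return 0.5 * sd
    ```

    An SRM around 0.5 is conventionally read as moderate responsiveness, which matches the EQ-Index result (SRM = 0.63) reported above.
    
    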

  3. Metric Indices for Performance Evaluation of a Mixed Measurement based State Estimator

    Directory of Open Access Journals (Sweden)

    Paula Sofia Vide

    2013-01-01

    With the development of synchronized phasor measurement technology in recent years, the use of PMU measurements to improve state estimation performance has gained great interest due to their synchronized characteristics and high data transmission speed. The ability of Phasor Measurement Units (PMUs) to directly measure the system state is a key advantage over the SCADA measurement system. PMU measurements are superior to conventional SCADA measurements in terms of resolution and accuracy. Since the majority of measurements in existing estimators come from the conventional SCADA measurement system, which is unlikely to be fully replaced by PMUs in the near future, state estimators including both phasor and conventional SCADA measurements are being considered. In this paper, a mixed measurement (SCADA and PMU) state estimator is proposed. Several useful measures for evaluating various aspects of the performance of the mixed measurement state estimator are proposed and explained. State estimator validity, performance and characteristics of the results on the IEEE 14-bus and IEEE 30-bus test systems are presented.

  4. Validation of Reported Whole-Grain Intake from a Web-Based Dietary Record against Plasma Alkylresorcinol Concentrations in 8- to 11-Year-Olds Participating in a Randomized Controlled Trial

    DEFF Research Database (Denmark)

    Biltoft-Jensen, Anja Pia; Damsgaard, Camilla T.; W. Andersen, Elisabeth

    2016-01-01

    BACKGROUND: Whole-grain (WG) intake is important for human health, but accurate intake estimation is challenging. Use of a biomarker for WG intake provides a possible way to validate dietary assessment methods. OBJECTIVE: Our aim was to validate WG intake from 2 diets reported by children, using plasma alkylresorcinol (AR) concentrations, and to investigate the 3-mo reproducibility of AR concentrations and reported WG intake. METHODS: AR concentrations were analyzed in fasting blood plasma samples, and WG intake was estimated in a 7-d web-based diary by 750 participants aged 8-11 y in a 2 school meal × 3 mo crossover trial. Reported WG intake and plasma AR concentrations were compared when children ate their usual bread-based lunch (UBL) and when served a hot lunch meal (HLM). Correlations and cross-classification were used to rank subjects according to intake. The intraclass correlation...

  5. Development and Validation of Spectrophotometric Methods for Simultaneous Estimation of Valsartan and Hydrochlorothiazide in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    Monika L. Jadhav

    2014-01-01

    Two UV-spectrophotometric methods have been developed and validated for simultaneous estimation of valsartan and hydrochlorothiazide in a tablet dosage form. The first method employed solving of simultaneous equations based on the measurement of absorbance at two wavelengths, 249.4 nm and 272.6 nm, the λmax of valsartan and hydrochlorothiazide, respectively. The second method was the absorbance ratio method, which involves formation of a Q-absorbance equation at 258.4 nm (the isoabsorptive point) and also at 272.6 nm (the λmax of hydrochlorothiazide). The methods were found to be linear over the range of 5–30 µg/mL for valsartan and 4–24 μg/mL for hydrochlorothiazide using 0.1 N NaOH as solvent. The mean percentage recovery was found to be 100.20% and 100.19% for the simultaneous equation method and 98.56% and 97.96% for the absorbance ratio method, for valsartan and hydrochlorothiazide, respectively, at three different levels of standard additions. The precision (intraday, interday) of the methods was found to be within limits (RSD < 2%). It could be concluded from the results obtained in the present investigation that the two methods for simultaneous estimation of valsartan and hydrochlorothiazide in tablet dosage form are simple, rapid, accurate, precise and economical and can be used successfully in the quality control of pharmaceutical formulations and other routine laboratory analysis.
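    The simultaneous-equation approach (often called Vierordt's method) solves a 2×2 linear system: the mixture absorbance at each wavelength is the sum of each drug's absorptivity times its concentration. A sketch with hypothetical absorptivity values; the real values come from the calibration standards:

    ```python
    def vierordt(a1, a2, ax1, ay1, ax2, ay2):
        """Solve A1 = ax1*Cx + ay1*Cy and A2 = ax2*Cx + ay2*Cy by Cramer's rule.

        a1, a2: mixture absorbances at the two wavelengths;
        ax*, ay*: absorptivities of drugs X and Y at those wavelengths.
        """
        det = ax1 * ay2 - ay1 * ax2
        cx = (a1 * ay2 - ay1 * a2) / det
        cy = (ax1 * a2 - a1 * ax2) / det
        return cx, cy

    # Hypothetical absorptivities: a mixture of 10 and 5 ug/mL of drugs X
    # and Y would give the two absorbances below.
    cx, cy = vierordt(0.60, 0.30, 0.05, 0.02, 0.01, 0.04)
    ```

    The method is well conditioned only when the determinant is not close to zero, i.e. when the two drugs' spectra differ sufficiently at the chosen wavelengths.
    
    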

  6. Statistics of Parameter Estimates: A Concrete Example

    KAUST Repository

    Aguilar, Oscar

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.

  7. Development and Validation of Web-Based Courseware for Junior Secondary School Basic Technology Students in Nigeria

    Directory of Open Access Journals (Sweden)

    Amosa Isiaka Gambari

    2018-02-01

    This research aimed to develop and validate a web-based courseware for junior secondary school basic technology students in Nigeria. In this study, a mixed-method quantitative pilot study design with qualitative components was used to test and ascertain the ease of development and validation of the web-based courseware. The Dick and Carey instructional system design model was adopted for developing the courseware. A convenience sampling technique was used in selecting the three content, computer and educational technology experts to validate the web-based courseware. Non-randomized and non-equivalent junior secondary school students from two schools were used for field trial validation. Four validating instruments were employed in conducting this study: (i) Content Validation Assessment Report (CVAR); (ii) Computer Expert Validation Assessment Report (CEAR); (iii) Educational Technology Experts Validation Assessment Report (ETEVAR); and (iv) Students Validation Questionnaire (SVQ). All the instruments were face and content validated. The SVQ was pilot tested and a reliability coefficient of 0.85 was obtained using Cronbach's alpha. CVAR, CEAR, and ETEVAR were administered to content specialists, computer experts, and educational technology experts, while the SVQ was administered to 83 JSS students from two selected secondary schools in Minna. The findings revealed that the process of developing web-based courseware using the Dick and Carey instructional system design was successful. In addition, the report from the validating team revealed that the web-based courseware is valuable for learning basic technology. It is therefore recommended that web-based courseware should be produced to teach basic technology concepts on a large scale.

  8. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate the DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR demonstrate the performance. The results verify the effectiveness of the proposed method.

  9. State Estimation-based Transmission line parameter identification

    Directory of Open Access Journals (Sweden)

    Fredy Andrés Olarte Dussán

    2010-01-01

    This article presents two state-estimation-based algorithms for identifying transmission line parameters. The identification technique used simultaneous state-parameter estimation on an artificial power system composed of several copies of the same transmission line, using measurements at different points in time. The first algorithm used active and reactive power measurements at both ends of the line. The second method used synchronised phasor voltage and current measurements at both ends. The algorithms were tested in simulated conditions on the 30-node IEEE test system. All line parameters for this system were estimated with errors below 1%.

  10. Fine-tuning satellite-based rainfall estimates

    Science.gov (United States)

    Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.

    2018-05-01

    Rainfall datasets are available from various sources, including satellite estimates and ground observation. Ground observation locations are sparsely scattered. Therefore, the use of satellite estimates is advantageous, because they can provide data in places where ground observations are not present. In general, however, satellite estimates contain bias, since they are products of algorithms that transform the sensor response into rainfall values. Another source of bias is the limited number of ground observations used by the algorithms as the reference in determining rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations that have not been used before by the algorithm. The bias correction was performed by applying a Quantile Mapping procedure between ground observation data and satellite estimates. Since Quantile Mapping requires the mean and standard deviation of both the reference and the being-corrected data, an Inverse Distance Weighting scheme was first applied to the mean and standard deviation of the observation data to provide a spatial composition of these originally scattered statistics. This made it possible to provide a reference data point at the same location as each satellite estimate. The results show that the new dataset represents the rainfall values recorded by the ground observations statistically better than the previous dataset.
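    Since the procedure needs only the mean and standard deviation of both datasets, it amounts to a Gaussian (variance-scaling) form of quantile mapping. A minimal sketch, with the IDW interpolation of the station statistics omitted:

    ```python
    def gaussian_quantile_map(x, sat_mean, sat_std, obs_mean, obs_std):
        # Map a satellite rainfall value onto the observed distribution by
        # matching standardized anomalies:
        #   corrected = obs_mean + (x - sat_mean) * obs_std / sat_std
        return obs_mean + (x - sat_mean) * obs_std / sat_std

    # A satellite estimate one satellite-SD above the satellite mean is
    # mapped to one observation-SD above the observed mean.
    corrected = gaussian_quantile_map(12.0, 10.0, 2.0, 8.0, 1.5)
    ```

    Full empirical quantile mapping matches the whole CDF rather than just the first two moments; the two-parameter form shown here is what the mean/SD requirement in the abstract implies.
    
    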

  11. A Study on Parametric Wave Estimation Based on Measured Ship Motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Iseki, Toshio

    2011-01-01

    The paper studies parametric wave estimation based on the 'wave buoy analogy', and data and results obtained from the training ship Shioji-maru are compared with estimates of the sea states obtained from other measurements and observations. Furthermore, the estimating characteristics of the parametric model are discussed by considering the results of a similar estimation concept based on Bayesian modelling. The purpose of the latter comparison is not to favour one estimation approach over the other but rather to highlight some of the advantages and disadvantages of the two approaches.

  12. Validation of KENO-based criticality calculations at Rocky Flats

    International Nuclear Information System (INIS)

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P.

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG and G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum k-eff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation.

  13. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, at high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones.
The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  14. Improving cluster-based missing value estimation of DNA microarray data.

    Science.gov (United States)

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
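    The iterative scheme can be sketched in a few lines: initialize missing cells (here with column means), then repeatedly re-estimate each missing cell from its K nearest rows, reusing the latest estimates. This is an illustrative simplification (unweighted neighbour average, Euclidean distance), not the authors' exact implementation:

    ```python
    def iknn_impute(matrix, k=2, n_iter=5):
        """Iterative KNN imputation sketch.

        `matrix` is a list of rows with missing cells marked as None.
        Missing cells start at their column mean and are then repeatedly
        re-estimated from the k nearest rows (Euclidean distance over the
        remaining columns), reusing the latest estimates as in IKNNimpute.
        """
        rows, cols = len(matrix), len(matrix[0])
        missing = [(i, j) for i in range(rows) for j in range(cols)
                   if matrix[i][j] is None]
        filled = [row[:] for row in matrix]
        for j in range(cols):
            observed = [matrix[i][j] for i in range(rows) if matrix[i][j] is not None]
            col_mean = sum(observed) / len(observed)
            for i in range(rows):
                if filled[i][j] is None:
                    filled[i][j] = col_mean
        for _ in range(n_iter):
            for i, j in missing:
                # Distance to every other row, ignoring the target column;
                # previously imputed cells take part in the distance, which
                # is the "reuse of estimated data" idea.
                candidates = []
                for r in range(rows):
                    if r == i:
                        continue
                    d = sum((filled[r][c] - filled[i][c]) ** 2
                            for c in range(cols) if c != j)
                    candidates.append((d, filled[r][j]))
                candidates.sort(key=lambda t: t[0])
                neighbours = candidates[:k]
                filled[i][j] = sum(v for _, v in neighbours) / len(neighbours)
        return filled
    ```

    In practice the weighted KNN variant (neighbours weighted by inverse distance) and a convergence test on the NRMSE replace the fixed iteration count used here.
    
    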

  15. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Two-dimensional (2D) direction-of-arrival (DOA) estimation of elevation and azimuth angles assuming noncoherent, mixed coherent and noncoherent, and coherent sources using extended three parallel uniform linear arrays (ULAs) is proposed. Most of the existing schemes have drawbacks in estimating 2D DOA for multiple narrowband incident sources, as follows: use of a large number of snapshots, estimation failure for elevation and azimuth angles in the range typical of mobile communication, and estimation of coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.

  16. Exploiting Growing Stock Volume Maps for Large Scale Forest Resource Assessment: Cross-Comparisons of ASAR- and PALSAR-Based GSV Estimates with Forest Inventory in Central Siberia

    Directory of Open Access Journals (Sweden)

    Christian Hüttich

    2014-07-01

    Full Text Available Growing stock volume is an important biophysical parameter describing the state and dynamics of the Boreal zone. Validation of growing stock volume (GSV maps based on satellite remote sensing is challenging due to the lack of consistent ground reference data. The monitoring and assessment of the remote Russian forest resources of Siberia can only be done by integrating remote sensing techniques and interdisciplinary collaboration. In this paper, we assess the information content of GSV estimates in Central Siberian forests obtained at 25 m from ALOS-PALSAR and 1 km from ENVISAT-ASAR backscatter data. The estimates have been cross-compared with respect to forest inventory data showing 34% relative RMSE for the ASAR-based GSV retrievals and 39.4% for the PALSAR-based estimates of GSV. Fragmentation analyses using a MODIS-based land cover dataset revealed an increase of retrieval error with increasing fragmentation of the landscape. Cross-comparisons of multiple SAR-based GSV estimates helped to detect inconsistencies in the forest inventory data and can support an update of outdated forest inventory stands.
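
The relative RMSE figures quoted above (34% and 39.4%) express retrieval error as a percentage of the mean reference value; a minimal helper, with the normalization by the inventory mean assumed as the convention:

```python
import numpy as np

def relative_rmse(estimate, reference):
    """RMSE as a percentage of the mean reference value, as used when
    comparing GSV retrievals against forest inventory data."""
    e = np.asarray(estimate, float)
    r = np.asarray(reference, float)
    return 100.0 * np.sqrt(np.mean((e - r) ** 2)) / r.mean()
```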

  17. Estimation of Thermal Sensation Based on Wrist Skin Temperatures

    Science.gov (United States)

    Sim, Soo Young; Koh, Myung Jun; Joo, Kwang Min; Noh, Seungwoo; Park, Sangyun; Kim, Youn Ho; Park, Kwang Suk

    2016-01-01

    Thermal comfort is an essential environmental factor related to quality of life and work effectiveness. We assessed the feasibility of wrist skin temperature monitoring for estimating subjective thermal sensation. We invented a wrist band that simultaneously monitors skin temperatures from the wrist (i.e., the radial artery and ulnar artery regions, and upper wrist) and the fingertip. Skin temperatures from eight healthy subjects were acquired while thermal sensation varied. To develop a thermal sensation estimation model, the mean skin temperature, temperature gradient, time differential of the temperatures, and average power of frequency band were calculated. A thermal sensation estimation model using temperatures of the fingertip and wrist showed the highest accuracy (mean root mean square error [RMSE]: 1.26 ± 0.31). An estimation model based on the three wrist skin temperatures showed a slightly better result than the model that used a single fingertip skin temperature (mean RMSE: 1.39 ± 0.18). When a personalized thermal sensation estimation model based on three wrist skin temperatures was used, the mean RMSE was 1.06 ± 0.29 and the correlation coefficient was 0.89. Thermal sensation estimation technology based on wrist skin temperatures, combined with wearable devices, may facilitate intelligent control of one's thermal environment. PMID:27023538

  18. Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.

    Science.gov (United States)

    Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo

    2012-01-01

    The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
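
Under simple random sampling, the empirical AUC reduces to the Mann-Whitney statistic: the fraction of (diseased, healthy) pairs the biomarker ranks correctly, counting ties as half. A minimal sketch of that SRS baseline follows; the TDS-adjusted estimators in the paper additionally weight by the estimated test-result distribution, which is not reproduced here.

```python
import numpy as np

def empirical_auc(scores_diseased, scores_healthy):
    """Mann-Whitney form of the empirical AUC under simple random
    sampling: P(biomarker ranks a diseased subject above a healthy
    one), with ties counted as 1/2."""
    d = np.asarray(scores_diseased, float)
    h = np.asarray(scores_healthy, float)
    gt = (d[:, None] > h[None, :]).sum()   # correctly ordered pairs
    eq = (d[:, None] == h[None, :]).sum()  # tied pairs
    return (gt + 0.5 * eq) / (d.size * h.size)
```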

  19. Validation by theoretical approach to the experimental estimation of efficiency for gamma spectrometry of gas in 100 ml standard flask

    International Nuclear Information System (INIS)

    Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.

    1996-01-01

    Estimation of gaseous activity forms an important component of air monitoring at Madras Atomic Power Station (MAPS). The gases of importance are argon-41, an air activation product, and the fission product noble gas xenon-133. For estimating the concentration, the experimental method is used, in which a grab sample is collected in a 100 ml volumetric standard flask. The activity of the gas is then computed by gamma spectrometry using a predetermined efficiency estimated experimentally. An attempt is made, using a theoretical approach, to validate the experimental method of efficiency estimation. Two analytical models, named the relative flux model and the absolute activity model, were developed independently of each other. Attention is focused on the efficiencies for 41Ar and 133Xe. Results show that the present method of sampling and analysis using a 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs

  20. Estimating Driving Performance Based on EEG Spectrum Analysis

    Directory of Open Access Journals (Sweden)

    Jung Tzyy-Ping

    2005-01-01

    Full Text Available The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by the driver's drowsiness behind the steering wheel have a high fatality rate because of the marked decline in the driver's perception, recognition, and vehicle control abilities while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log subband power spectra, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrate that it is feasible to accurately and quantitatively estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.
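
The PCA-plus-linear-regression pipeline can be sketched on synthetic data. The feature construction and dimensions below are assumptions; in the paper the features are log subband power spectra from EEG, and the target is lane deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 12                                  # epochs x (channels x bands), illustrative sizes
W = rng.normal(size=(p, 2))
latent = rng.normal(size=(n, 2))                # hidden alertness-related factors
X = latent @ W.T + 0.1 * rng.normal(size=(n, p))  # stand-in for log band-power features
y = 2.0 * latent[:, 0] + 0.05 * rng.normal(size=n)  # driving-error proxy

# PCA: project centered features onto the leading principal components
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                               # scores on the first 2 PCs

# Linear regression from PC scores (plus intercept) to performance
A = np.column_stack([Z, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r = np.corrcoef(pred, y)[0, 1]                  # correlation of prediction with target
```

The dimensionality reduction step is what makes the regression stable: the 12 correlated band-power features collapse to 2 components before fitting.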

  1. Validity of a hospital-based obstetric register using medical records as reference

    DEFF Research Database (Denmark)

    Brixval, Carina Sjöberg; Thygesen, Lau Caspar; Johansen, Nanna Roed

    2015-01-01

    BACKGROUND: Data from hospital-based registers and medical records offer valuable sources of information for clinical and epidemiological research purposes. However, conducting high-quality epidemiological research requires valid and complete data sources. OBJECTIVE: To assess completeness...... and validity of a hospital-based clinical register - the Obstetric Database - using a national register and medical records as references. METHODS: We assessed completeness of a hospital-based clinical register - the Obstetric Database - by linking data from all women registered in the Obstetric Database...... Database therefore offers a valuable source for examining clinical, administrative, and research questions....

  2. Triple collocation-based estimation of spatially correlated observation error covariance in remote sensing soil moisture data assimilation

    Science.gov (United States)

    Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang

    2018-01-01

    Spatially correlated errors are typically ignored in data assimilation, degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, is proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated once with a diagonal R matrix computed using TC and once with a nondiagonal R matrix estimated by the proposed TC_Cov. The ensemble Kalman filter was used as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference (ubRMSD) metrics. These experiments confirmed that diagonal R assimilation results deteriorate when the model simulation is more accurate than the observation data. Furthermore, nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than diagonal R in the experiments, demonstrating the effectiveness of TC_Cov for estimating a richly structured R in data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
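
The triple collocation idea underlying TC_Cov rests on a simple identity: for three collocated measurements of the same truth with mutually independent zero-mean errors, E[(x - y)(x - z)] equals the error variance of x (and cyclically for y and z), because the truth cancels in each difference. A minimal sketch of this classical diagonal TC step follows; the extension to spatially correlated covariances in TC_Cov is not reproduced here.

```python
import numpy as np

def tc_error_variances(x, y, z):
    """Classical triple collocation: cross-products of pairwise
    differences isolate each product's error variance."""
    x, y, z = (np.asarray(a, float) for a in (x, y, z))
    x, y, z = x - x.mean(), y - y.mean(), z - z.mean()
    return (np.mean((x - y) * (x - z)),
            np.mean((y - x) * (y - z)),
            np.mean((z - x) * (z - y)))

# Synthetic demonstration with known error standard deviations
rng = np.random.default_rng(0)
truth = rng.normal(0.25, 0.05, size=100_000)         # synthetic soil moisture
x = truth + rng.normal(0, 0.10, size=truth.size)
y = truth + rng.normal(0, 0.20, size=truth.size)
z = truth + rng.normal(0, 0.30, size=truth.size)
vx, vy, vz = tc_error_variances(x, y, z)             # ~0.01, ~0.04, ~0.09
```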

  3. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.

  4. Collocation mismatch uncertainties in satellite aerosol retrieval validation

    Science.gov (United States)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Rodríguez, Edith; Saponaro, Giulia; de Leeuw, Gerrit

    2018-02-01

    Satellite-based aerosol products are routinely validated against ground-based reference data, usually obtained from sun photometer networks such as AERONET (AEROsol RObotic NETwork). In a typical validation exercise a spatial sample of the instantaneous satellite data is compared against a temporal sample of the point-like ground-based data. The observations do not correspond to exactly the same column of the atmosphere at the same time, and the representativeness of the reference data depends on the spatiotemporal variability of the aerosol properties in the samples. The associated uncertainty is known as the collocation mismatch uncertainty (CMU). The validation results depend on the sampling parameters. While small samples involve less variability, they are more sensitive to the inevitable noise in the measurement data. In this paper we study systematically the effect of the sampling parameters in the validation of AATSR (Advanced Along-Track Scanning Radiometer) aerosol optical depth (AOD) product against AERONET data and the associated collocation mismatch uncertainty. To this end, we study the spatial AOD variability in the satellite data, compare it against the corresponding values obtained from densely located AERONET sites, and assess the possible reasons for observed differences. We find that the spatial AOD variability in the satellite data is approximately 2 times larger than in the ground-based data, and the spatial variability correlates only weakly with that of AERONET for short distances. We interpreted that only half of the variability in the satellite data is due to the natural variability in the AOD, and the rest is noise due to retrieval errors. However, for larger distances (˜ 0.5°) the correlation is improved as the noise is averaged out, and the day-to-day changes in regional AOD variability are well captured. Furthermore, we assess the usefulness of the spatial variability of the satellite AOD data as an estimate of CMU by comparing the

  5. Time-Domain Voltage Sag State Estimation Based on the Unscented Kalman Filter for Power Systems with Nonlinear Components

    Directory of Open Access Journals (Sweden)

    Rafael Cisneros-Magaña

    2018-06-01

    Full Text Available This paper proposes a time-domain methodology based on the unscented Kalman filter to estimate voltage sags and their characteristics, such as magnitude and duration, in power systems represented by nonlinear models. Partial and noisy measurements from the electrical network with nonlinear loads are assumed as data. The characteristics of voltage sags can be calculated in discrete form with the unscented Kalman filter by estimating all the busbar voltages, making it possible to determine the rms voltage magnitude and the voltage sag starting and ending times, respectively. Voltage sag state estimation results can be used to obtain power quality indices for monitored and unmonitored busbars in the power grid and to design adequate mitigation techniques. The proposed methodology is successfully validated against the results obtained with a time-domain simulation of the power system with nonlinear components, with a normalized root mean square error of less than 3%.
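
Once a busbar voltage waveform has been estimated, the sag magnitude and duration follow from a sliding one-cycle rms. A minimal sketch on a synthetic waveform: the 60 Hz system, sampling rate, and 0.9 pu sag threshold are assumptions, and the paper obtains the waveform from the unscented Kalman filter rather than synthetically.

```python
import numpy as np

f0, fs = 60.0, 3840.0                  # fundamental frequency and sampling rate (assumed)
n_cycle = int(fs / f0)                 # samples per fundamental cycle
t = np.arange(0, 0.5, 1 / fs)
v = np.sin(2 * np.pi * f0 * t)         # 1.0 pu pre-fault voltage
in_fault = (t >= 0.1) & (t < 0.25)
v[in_fault] *= 0.6                     # 60% retained voltage during the sag

# One-cycle sliding rms, normalized so nominal voltage reads 1.0 pu
rms = np.sqrt(np.convolve(v ** 2, np.ones(n_cycle) / n_cycle, mode="valid")) * np.sqrt(2)

sag_mask = rms < 0.9                   # conventional sag threshold in pu
magnitude = rms.min()                  # retained rms voltage during the sag
duration = sag_mask.sum() / fs         # seconds below threshold
```

The one-cycle window smears the sag edges by up to a cycle, which is why the detected duration slightly exceeds the true 0.15 s event.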

  6. Design and validation of new genotypic tools for easy and reliable estimation of HIV tropism before using CCR5 antagonists.

    Science.gov (United States)

    Poveda, Eva; Seclén, Eduardo; González, María del Mar; García, Federico; Chueca, Natalia; Aguilera, Antonio; Rodríguez, Jose Javier; González-Lahoz, Juan; Soriano, Vincent

    2009-05-01

    Genotypic tools may allow easier and less expensive estimation of HIV tropism before prescription of CCR5 antagonists compared with the Trofile assay (Monogram Biosciences, South San Francisco, CA, USA). Paired genotypic and Trofile results were compared in plasma samples derived from the maraviroc expanded access programme (EAP) in Europe. A new genotypic approach was built to improve the sensitivity to detect X4 variants based on an optimization of the webPSSM algorithm. Then, the new tool was validated in specimens from patients included in the ALLEGRO trial, a multicentre study conducted in Spain to assess the prevalence of R5 variants in treatment-experienced HIV patients. A total of 266 specimens from the maraviroc EAP were tested. Overall geno/pheno concordance was above 72%. A high specificity was generally seen for the detection of X4 variants using genotypic tools (ranging from 58% to 95%), while sensitivity was low (ranging from 31% to 76%). The PSSM score was then optimized to enhance the sensitivity to detect X4 variants changing the original threshold for R5 categorization. The new PSSM algorithms, PSSM(X4R5-8) and PSSM(SINSI-6.4), considered as X4 all V3 scoring values above -8 or -6.4, respectively, increasing the sensitivity to detect X4 variants up to 80%. The new algorithms were then validated in 148 specimens derived from patients included in the ALLEGRO trial. The sensitivity/specificity to detect X4 variants was 93%/69% for PSSM(X4R5-8) and 93%/70% for PSSM(SINSI-6.4). PSSM(X4R5-8) and PSSM(SINSI-6.4) may confidently assist therapeutic decisions for using CCR5 antagonists in HIV patients, providing an easier and rapid estimation of tropism in clinical samples.

  7. Particle filter based MAP state estimation: A comparison

    NARCIS (Netherlands)

    Saha, S.; Boers, Y.; Driessen, J.N.; Mandal, Pranab K.; Bagchi, Arunabha

    2009-01-01

    MAP estimation is a good alternative to MMSE for certain applications involving nonlinear non Gaussian systems. Recently a new particle filter based MAP estimator has been derived. This new method extracts the MAP directly from the output of a running particle filter. In the recent past, a Viterbi

  8. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Large-scale simulations involve such scale variety and physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, analysis of the uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to establish a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a computing process with deductive and inductive approaches modeled on human reasoning processes. Our idea is to execute deductive and inductive simulations corresponding to deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and establishing a degree of certainty. (author)

  9. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Directory of Open Access Journals (Sweden)

    Vesna Režić Dereani

    2010-09-01

    Full Text Available The aim of this research is to describe quality control procedures and procedures for validation and measurement uncertainty (MU) determination as important elements of quality assurance in a food microbiology laboratory, for both qualitative and quantitative types of analysis. Accreditation is conducted according to the standard ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests, and has recently been widely introduced in food microbiology laboratories in Croatia. In addition to the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are measurement uncertainty procedures and the design of validation experiments. These procedures are not yet standardized, even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analysis are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions of both procedures are shown.

  10. GPS Water Vapor Tomography Based on Accurate Estimations of the GPS Tropospheric Parameters

    Science.gov (United States)

    Champollion, C.; Masson, F.; Bock, O.; Bouin, M.; Walpersdorf, A.; Doerflinger, E.; van Baelen, J.; Brenot, H.

    2003-12-01

    The Global Positioning System (GPS) is now a common technique for the retrieval of zenithal integrated water vapor (IWV). Further applications in meteorology also need slant integrated water vapor (SIWV), which allows the high variability of tropospheric water vapor at different temporal and spatial scales to be precisely defined. Only precise estimations of IWV and horizontal gradients allow the estimation of accurate SIWV. We present studies developed to improve the estimation of tropospheric water vapor from GPS data. Results are obtained from several field experiments (MAP, ESCOMPTE, OHM-CV, IHOP, …). First, IWV are estimated using different GPS processing strategies and the results are compared to radiosondes. The role of the reference frame and the a priori constraints on the coordinates of the fiducial and local stations is generally underestimated; it seems to be of first order in the estimation of the IWV. Second, we validate the estimated horizontal gradients by comparing zenith delay gradients and single site gradients. IWV, gradients, and post-fit residuals are used to construct slant integrated water delays. Validation of the SIWV is in progress, comparing GPS SIWV, lidar measurements, and high resolution meteorological models (Meso-NH). A careful analysis of the post-fit residuals is needed to separate the tropospheric signal from multipath. The slant tropospheric delays are used to study the 3D heterogeneity of the troposphere. We have developed tomographic software to model the three-dimensional distribution of tropospheric water vapor from GPS data. The software was applied to the ESCOMPTE field experiment, a dense network of 17 dual frequency GPS receivers operated in southern France. Three inversions have been successfully compared to three successive radiosonde launches. Good resolution is obtained up to heights of 3000 m.

  11. Estimation of Compaction Parameters Based on Soil Classification

    Science.gov (United States)

    Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.

    2018-02-01

    Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and the availability of funds. These problems raised the idea of estimating the density of the soil with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and optimum water content (wopt), based on soil classification. Each of the 30 samples was tested for its index properties and compaction behavior. All of the data from the laboratory test results were used to estimate the compaction parameter values using linear regression and the Goswami model. The soil types were A-4, A-6, and A-7 according to AASHTO and SC, SC-SM, and CL based on USCS. By linear regression, the estimation equations are γdmax* = 1.862 - 0.005*FINES - 0.003*LL for the maximum dry unit weight and wopt* = -0.607 + 0.362*FINES + 0.161*LL for the optimum water content. By the Goswami model (with equation Y = m*logG + k), the maximum dry unit weight estimate γdmax* uses m = -0.376 and k = 2.482, and the optimum water content estimate wopt* uses m = 21.265 and k = -32.421. For both equations a 95% confidence interval was obtained.
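
With the regression equations in decimal-point form, estimating both parameters for a soil sample is direct. The FINES and LL values below are hypothetical inputs for illustration, not values from the study's samples.

```python
# Hypothetical index-test results: percent fines and liquid limit
FINES, LL = 45.0, 32.0

# Linear-regression estimates from the study
gamma_dmax = 1.862 - 0.005 * FINES - 0.003 * LL   # maximum dry unit weight
w_opt = -0.607 + 0.362 * FINES + 0.161 * LL       # optimum water content, %
```

For this sample the equations give γdmax* ≈ 1.541 and wopt* ≈ 20.8%, i.e. a higher fines content and liquid limit push the optimum water content up and the achievable dry unit weight down.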

  12. Deblending of simultaneous-source data using iterative seislet frame thresholding based on a robust slope estimation

    Science.gov (United States)

    Zhou, Yatong; Han, Chunying; Chi, Yue

    2018-06-01

    In a simultaneous-source survey, no limitation is placed on the shot scheduling of nearby sources, so a huge gain in acquisition efficiency can be obtained, but at the cost of recorded seismic data contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contains multiple dips, e.g., data containing multiple reflections. The multi-dip seislet frame strategy solves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose a robust dip estimation algorithm based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders; a fairly accurate slope estimation can then be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis of both numerical synthetic and field data examples.

  13. The Validation of a Beta-Binomial Model for Overdispersed Binomial Data.

    Science.gov (United States)

    Kim, Jongphil; Lee, Ji-Hyun

    2017-01-01

    The beta-binomial model has been widely used as an analytically tractable alternative that captures the overdispersion of an intra-correlated binomial random variable X. However, model validation for X has rarely been investigated. As a beta-binomial mass function takes on a few different shapes, the model validation is examined for each of the classified shapes in this paper. Further, the mean square error (MSE) is illustrated for each shape for the maximum likelihood estimator (MLE) based on the beta-binomial model and the method of moments estimator (MME), in order to gauge when and how much the MLE is biased.
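
The MME half of that comparison is easy to reproduce on simulated overdispersed counts. The moment formulas below are the standard beta-binomial ones; the MLE would require numerical maximization of the beta-binomial likelihood and is omitted from this sketch.

```python
import numpy as np

def beta_binomial_mme(x, n):
    """Method-of-moments estimates (alpha, beta) for beta-binomial
    counts x, each out of n trials, from the first two sample moments."""
    x = np.asarray(x, float)
    m1, m2 = x.mean(), (x ** 2).mean()
    denom = n * (m2 / m1 - m1 - 1) + m1
    alpha = (n * m1 - m2) / denom
    beta = (n - m1) * (n - m2 / m1) / denom
    return alpha, beta

# Simulate: per-subject success probabilities from a Beta, then binomial counts
rng = np.random.default_rng(1)
n, a_true, b_true = 20, 2.0, 5.0
p = rng.beta(a_true, b_true, size=20_000)   # intra-subject correlation source
x = rng.binomial(n, p)                      # overdispersed binomial counts
a_hat, b_hat = beta_binomial_mme(x, n)      # should land near (2, 5)
```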

  14. Validation of IT-based Data Communication Protocol for Nuclear Power Plant

    International Nuclear Information System (INIS)

    Jeong, K. I.; Kim, D. H.; Lee, J. C.

    2009-12-01

    The communication network designed to transmit control and processing signals in digital Instrumentation and Control (I and C) systems in a Nuclear Power Plant (NPP) should provide a high level of safety and reliability. The communication networks of NPPs differ from commercial communication networks: safety and reliability are the most important factors in the communication networks of an NPP, rather than the efficiency that drives commercial communication network design. To develop a data communication protocol for nuclear power plants, we analyzed the design criteria and performance requirements of existing commercial communication protocols based on information technology (IT) and examined their adaptability to the communication protocol of an NPP. Based on these results, we developed our own protocol (Nuclear power plant Safety Communication Protocol: NSCP) for NPP I and C, which meets the required specifications through the design of the overall protocol architecture and data frame format and the definition of functional requirements and specifications. NSCP is the communication protocol designed for a safety-grade control network in the nuclear power plant. In this report, we specified the NSCP protocol by FDT (Formal Description Technique) and established validation procedures based on the validation methodology. Specification errors, the validity of major functions, and the reachability of NSCP were confirmed by simulation and the validation process using the Telelogic Tau tool

  15. Estimation for sparse vegetation information in desertification region based on Tiangong-1 hyperspectral image.

    Science.gov (United States)

    Wu, Jun-Jun; Gao, Zhi-Hai; Li, Zeng-Yuan; Wang, Hong-Yan; Pang, Yong; Sun, Bin; Li, Chang-Long; Li, Xu-Zhi; Zhang, Jiu-Xing

    2014-03-01

    In order to estimate sparse vegetation information accurately in a desertification region, taking the southeast of Sunite Right Banner, Inner Mongolia, as the test site and a Tiangong-1 hyperspectral image as the main data, sparse vegetation coverage and biomass were retrieved based on the normalized difference vegetation index (NDVI) and soil adjusted vegetation index (SAVI), combined with field investigation data, and the advantages and disadvantages of the two indices were compared. Firstly, the correlation between the vegetation indices and vegetation coverage under different band combinations was analyzed, as well as that with biomass. Secondly, the best band combination was determined as the one at which the maximum correlation coefficient occurred between the vegetation index (VI) and the vegetation parameters. The maximum correlation coefficient between the vegetation parameters and NDVI reached as high as 0.7, while that for SAVI nearly reached 0.8. The center wavelength of the red band in the best band combination for NDVI was 630 nm, and that of the near infrared (NIR) band was 910 nm, whereas the best combination for SAVI had center wavelengths of 620 and 920 nm, respectively. Finally, linear regression models were established to retrieve vegetation coverage and biomass based on the Tiangong-1 VIs. The R2 of all models was more than 0.5, and that of the models based on SAVI was higher than that based on NDVI; in particular, the R2 of the vegetation coverage retrieval model based on SAVI was as high as 0.59. By cross-validation, the RMSE values of the SAVI-based models were lower than those of the NDVI-based models. The results showed that the abundant spectral information of the Tiangong-1 hyperspectral image can reflect the actual vegetation condition effectively, and SAVI can estimate sparse vegetation information more accurately than NDVI in a desertification region.
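
The two indices differ only in the soil-adjustment term, which is why SAVI behaves better over sparsely vegetated, soil-dominated pixels. Minimal forms, with L = 0.5 as the customary soil-adjustment factor and the band reflectances purely illustrative:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index: the L term in the denominator
    (with the (1 + L) rescaling) dampens the soil-background signal
    that dominates sparsely vegetated pixels."""
    return (1 + L) * (nir - red) / (nir + red + L)
```

Both accept scalars or NumPy arrays, so a whole reflectance image at the chosen red/NIR center wavelengths can be converted to an index map in one call before fitting the linear retrieval models.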

  16. Validity of Students Worksheet Based Problem-Based Learning for 9th Grade Junior High School in living organism Inheritance and Food Biotechnology.

    Science.gov (United States)

    Jefriadi, J.; Ahda, Y.; Sumarmin, R.

    2018-04-01

    Based on preliminary research, the student worksheets used by teachers have several disadvantages: they direct learners to conduct an investigation without first orienting them to a problem or providing stimulation, they do not provide concrete images, and their presentation activities do not refer to any learning model recommended by the curriculum. To address these problems, a student worksheet based on problem-based learning was developed. This is development research using the Plomp model, whose phases are preliminary research, development, and assessment. The instruments used in data collection include observation/interview sheets, a self-evaluation instrument, and validity instruments. Expert validation of the student worksheets gave a valid result, with an average value of 80.1%. The problem-based learning student worksheet for 9th grade junior high school on living organism inheritance and food biotechnology is thus in the valid category.

  17. Characteristics and Validation Techniques for PCA-Based Gene-Expression Signatures

    Directory of Open Access Journals (Sweden)

    Anders E. Berglund

    2017-01-01

    Background. Many gene-expression signatures exist for describing the biological state of profiled tumors. Principal Component Analysis (PCA) can be used to summarize a gene signature into a single score. Our hypothesis is that gene signatures can be validated when applied to new datasets, using inherent properties of PCA. Results. This validation is based on four key concepts. Coherence: elements of a gene signature should be correlated beyond chance. Uniqueness: the general direction of the data being examined can drive most of the observed signal. Robustness: if a gene signature is designed to measure a single biological effect, then this signal should be sufficiently strong and distinct compared to other signals within the signature. Transferability: the derived PCA gene-signature score should describe the same biology in the target dataset as it does in the training dataset. Conclusions. The proposed validation procedure ensures that PCA-based gene signatures perform as expected when applied to datasets other than those on which the signatures were trained. Complex signatures, describing multiple independent biological components, are also easily identified.
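
    Summarizing a signature into a single PC1 score, with "coherence" read off as PC1's share of variance, can be sketched as follows. This is an assumed minimal implementation, not the authors' pipeline.

```python
import numpy as np

def pca_signature_score(expr):
    """Project samples onto the first principal component of a
    gene-signature expression matrix (samples x genes).

    Returns the per-sample PC1 score and the fraction of variance
    explained by PC1 (a simple coherence measure)."""
    X = expr - expr.mean(axis=0)            # center each gene
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    score = X @ Vt[0]                       # PC1 score per sample
    var_explained = s[0] ** 2 / np.sum(s ** 2)
    return score, var_explained
```

    A coherent signature (genes correlated beyond chance) yields a high `var_explained`; a near-uniform spectrum suggests the signature lacks a dominant biological axis.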

  18. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency-domain pilot-aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS-based techniques are computationally less complex and, unlike MMSE ones, do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of channel estimators incorporating MMSE-based techniques is better than that obtained with LS-based techniques. To enhance the MSE performance of LS-based techniques, a variety of denoising strategies have been developed in the literature, which are applied to the LS-estimated channel impulse response (CIR). The advantage of denoising-threshold-based LS techniques is that they do not require KCS but still render near-optimal MSE performance similar to MMSE-based techniques. In this paper, a detailed survey of various existing denoising strategies, with a comparative discussion, is presented.
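
    A minimal sketch of LS pilot estimation followed by threshold denoising of the CIR, assuming all-ones pilots and a fixed power threshold (an illustration of the general idea, not any specific scheme from the survey):

```python
import numpy as np

def ls_estimate(rx_pilots, tx_pilots):
    # Least-squares estimate of the channel frequency response at pilot tones
    return rx_pilots / tx_pilots

def denoise_cir(H_ls, threshold):
    """Transform the LS channel estimate to the time domain, zero the taps
    whose power falls below the threshold (assumed noise-only), and
    transform back to the frequency domain."""
    cir = np.fft.ifft(H_ls)
    cir[np.abs(cir) ** 2 < threshold] = 0.0
    return np.fft.fft(cir)
```

    Because a multipath CIR is sparse, most of the noise energy lives in taps that carry no channel energy; zeroing them removes that noise without requiring the channel statistics MMSE needs.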

  19. Accuracy and feasibility of estimated tumour volumetry in primary gastric gastrointestinal stromal tumours: validation using semiautomated technique in 127 patients.

    Science.gov (United States)

    Tirumani, Sree Harsha; Shinagare, Atul B; O'Neill, Ailbhe C; Nishino, Mizuki; Rosenthal, Michael H; Ramaiya, Nikhil H

    2016-01-01

    To validate estimated tumour volumetry in primary gastric gastrointestinal stromal tumours (GISTs) using semiautomated volumetry. In this IRB-approved retrospective study, we measured the three longest diameters along the x, y and z axes on CTs of primary gastric GISTs in 127 consecutive patients (52 women, 75 men, mean age 61 years) at our institute between 2000 and 2013. Segmented volumes (Vsegmented) were obtained using commercial software by two radiologists. Estimated volumes (V1-V6) were obtained using formulae for spheres and ellipsoids. Intra- and interobserver agreement of Vsegmented and agreement of V1-V6 with Vsegmented were analysed with concordance correlation coefficients (CCC) and Bland-Altman plots. Median Vsegmented and V1-V6 were 75.9, 124.9, 111.6, 94.0, 94.4, 61.7 and 80.3 cm³, respectively. There was strong intra- and interobserver agreement for Vsegmented. Agreement with Vsegmented was highest for V6 (scalene ellipsoid, x ≠ y ≠ z), with a CCC of 0.96 [95 % CI 0.95-0.97]. The mean relative difference was smallest for V6 (0.6 %), while it was -19.1 % for V5, +14.5 % for V4, +17.9 % for V3, +32.6 % for V2 and +47 % for V1. Ellipsoidal approximations of volume using three measured axes may be used to closely estimate Vsegmented when semiautomated techniques are unavailable. Estimation of tumour volume in primary GIST using mathematical formulae is feasible. Gastric GISTs are rarely spherical. Segmented volumes are highly concordant with three-axis-based scalene ellipsoid volumes. Ellipsoid volume can be used as an alternative to automated tumour volumetry.
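
    The scalene-ellipsoid and sphere approximations reduce to one-line formulae; the function names below are illustrative, and correspond in spirit to the V6 (three unequal axes) and single-diameter sphere estimates.

```python
import math

def ellipsoid_volume(x, y, z):
    """Scalene-ellipsoid volume from three orthogonal diameters:
    V = (pi / 6) * x * y * z."""
    return math.pi / 6.0 * x * y * z

def sphere_volume(d):
    # Sphere volume from a single diameter: V = (pi / 6) * d^3
    return math.pi / 6.0 * d ** 3
```

    When all three diameters are equal the ellipsoid formula collapses to the sphere formula, which is why sphere-based estimates (V1) overshoot badly for the rarely spherical gastric GISTs.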

  20. COST ESTIMATING RELATIONSHIPS IN ONSHORE DRILLING PROJECTS

    Directory of Open Access Journals (Sweden)

    Ricardo de Melo e Silva Accioly

    2017-03-01

    Cost estimating relationships (CERs) are very important tools in the planning phases of an upstream project. CERs are, in general, multiple regression models developed to estimate the cost of a particular item or scope of a project. They are based on historical data that should pass through a normalization process before a model is fitted. In the early phases they are the primary cost estimating tool, while in later phases they are usually used for estimate validation and sometimes for benchmarking. As in any other modeling methodology, there are a number of important steps in building a model. In this paper, the process of building a CER to estimate the drilling cost of onshore wells is addressed.
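
    A CER of the log-linear form commonly used for cost data can be fitted by ordinary least squares. The single `depth` driver below is a hypothetical example, not the paper's actual cost drivers or data.

```python
import numpy as np

def fit_cer(X, cost):
    """Fit a log-linear cost estimating relationship
    log(cost) = b0 + b1*x1 + ... by ordinary least squares.
    X is an (n, k) matrix of normalized cost drivers."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, np.log(cost), rcond=None)
    return coef

def predict_cer(coef, X):
    # Apply the fitted relationship and map back from log-cost to cost
    A = np.column_stack([np.ones(len(X)), X])
    return np.exp(A @ coef)
```

    The log transform keeps predicted costs positive and turns multiplicative cost effects into additive regression terms.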

  1. Development and Validation of a Data-Based Food Frequency Questionnaire for Adults in Eastern Rural Area of Rwanda

    Directory of Open Access Journals (Sweden)

    Ayumi Yanagisawa

    2016-01-01

    Full Text Available This study aimed to develop and evaluate the validity of a food frequency questionnaire (FFQ for rural Rwandans. Since our FFQ was developed to assess malnutrition, it measured energy, protein, vitamin A, and iron intakes only. We collected 260 weighed food records (WFRs from a total of 162 Rwandans. Based on the WFR data, we developed a tentative FFQ and examined the food list by percent contribution to energy and nutrient intakes. To assess the validity, nutrient intakes estimated from the FFQ were compared with those calculated from three-day WFRs by correlation coefficient and cross-classification for 17 adults. Cumulative contributions of the 18-item FFQ to the total intakes of energy and nutrients reached nearly 100%. Crude and energy-adjusted correlation coefficients ranged from -0.09 (vitamin A to 0.58 (protein and from -0.19 (vitamin A to 0.68 (iron, respectively. About 50%-60% of the participants were classified into the same tertile. Our FFQ provided acceptable validity for energy and iron intakes and could rank Rwandan adults in eastern rural area correctly according to their energy and iron intakes.

  2. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    Science.gov (United States)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled using copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data illustrates the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probabilities of exceedance calculated from copula functions and from marginal distributions. This study provides, for the first time, a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could benefit cost-benefit based non-stationary bivariate design flood estimation across the world.
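
    As a simplified illustration of copula-based joint risk (the study's actual model and its time-varying parameters are not reproduced here), a Gumbel copula gives the joint exceedance probability of two dependent flood variables, such as peak and volume, from their marginal non-exceedance probabilities:

```python
import math

def gumbel_copula(u, v, theta):
    # Gumbel copula CDF C(u, v); theta >= 1 controls upper-tail dependence
    return math.exp(-(((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) for two dependent flood variables, via the
    survival form 1 - u - v + C(u, v)."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)
```

    With theta = 1 the Gumbel copula reduces to independence, C(u, v) = u*v; in a non-stationary setting theta (and the margins) would be functions of time.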

  3. Simultaneous Validation of Seven Physical Activity Questionnaires Used in Japanese Cohorts for Estimating Energy Expenditure: A Doubly Labeled Water Study.

    Science.gov (United States)

    Sasai, Hiroyuki; Nakata, Yoshio; Murakami, Haruka; Kawakami, Ryoko; Nakae, Satoshi; Tanaka, Shigeho; Ishikawa-Takata, Kazuko; Yamada, Yosuke; Miyachi, Motohiko

    2018-04-28

    Physical activity questionnaires (PAQs) used in large-scale Japanese cohorts have rarely been simultaneously validated against the gold standard doubly labeled water (DLW) method. This study examined the validity of seven PAQs used in Japan for estimating energy expenditure against the DLW method. Twenty healthy Japanese adults (9 men; mean age, 32.4 [standard deviation {SD}, 9.4] years, mainly researchers and students) participated in this study. Fifteen-day daily total energy expenditure (TEE) and basal metabolic rate (BMR) were measured using the DLW method and a metabolic chamber, respectively. Activity energy expenditure (AEE) was calculated as TEE - BMR - 0.1 × TEE. Seven PAQs were self-administered to estimate TEE and AEE. The mean measured values of TEE and AEE were 2,294 (SD, 318) kcal/day and 721 (SD, 161) kcal/day, respectively. All of the PAQs indicated moderate-to-strong correlations with the DLW method in TEE (rho = 0.57-0.84). Two PAQs (Japan Public Health Center Study [JPHC]-PAQ Short and JPHC-PAQ Long) showed significant equivalence in TEE and moderate intra-class correlation coefficients (ICC). None of the PAQs showed significantly equivalent AEE estimates, with differences ranging from -547 to 77 kcal/day. Correlations and ICCs in AEE were mostly weak or fair (rho = 0.02-0.54, and ICC = 0.00-0.44). Only JPHC-PAQ Short provided significant and fair agreement with the DLW method. TEE estimated by the PAQs showed moderate or strong correlations with the results of DLW. Two PAQs showed equivalent TEE and moderate agreement. None of the PAQs showed equivalent AEE estimation to the gold standard, with weak-to-fair correlations and agreements. Further studies with larger sample sizes are needed to confirm these findings.
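
    The AEE definition used in the study is a one-liner, where the 0.1 x TEE term approximates diet-induced thermogenesis:

```python
def activity_energy_expenditure(tee, bmr):
    """AEE = TEE - BMR - 0.1 * TEE, with TEE from doubly labeled water
    and BMR from a metabolic chamber (both in kcal/day)."""
    return tee - bmr - 0.1 * tee
```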

  4. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

    Fiedler et al. (2009) reviewed evidence for the utilization of a contingency-inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of the subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.
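
    A stripped-down version of such a simulation (assumed parameters, not Fiedler et al.'s exact setup) contrasts the PC strategy, which looks only at marginal base rates, with genuine inference from cell frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_table(p_joint, n):
    # Draw n observations from a 2x2 joint distribution [p11, p10, p01, p00]
    return rng.multinomial(n, p_joint)  # [n11, n10, n01, n00]

def pc_sign(counts):
    """Pseudocontingency inference: associate the more frequent levels.
    Sign is positive if the base rates of X and Y lean the same way."""
    n11, n10, n01, n00 = counts
    x_lean = (n11 + n10) - (n01 + n00)   # base rate of X=1 vs X=0
    y_lean = (n11 + n01) - (n10 + n00)   # base rate of Y=1 vs Y=0
    return np.sign(x_lean * y_lean)

def cell_sign(counts):
    # Genuine contingency inference from the cells: sign of n11*n00 - n10*n01
    n11, n10, n01, n00 = counts
    return np.sign(n11 * n00 - n10 * n01)
```

    When base rates are skewed in the direction of a real contingency, the marginal-only PC rule recovers the correct sign well above chance, which is the "valid by sampling" point.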

  5. Arctic Sea Ice Thickness Estimation from CryoSat-2 Satellite Data Using Machine Learning-Based Lead Detection

    Directory of Open Access Journals (Sweden)

    Sanggyun Lee

    2016-08-01

    Satellite altimeters have been used to monitor Arctic sea ice thickness since the early 2000s. In order to estimate sea ice thickness from satellite altimeter data, leads (i.e., cracks between ice floes) should first be identified for the calculation of sea ice freeboard. In this study, we propose novel approaches for lead detection using two machine learning algorithms: decision trees and random forests. CryoSat-2 satellite data collected in March and April of 2011–2014 over the Arctic region were used to extract waveform parameters that show the characteristics of leads, ice floes and ocean, including stack standard deviation, stack skewness, stack kurtosis, pulse peakiness and backscatter sigma-0. The parameters were used to identify leads in the machine learning models. Results show that the proposed approaches, with overall accuracy >90%, performed much better than existing lead detection methods based on simple thresholding. Sea ice thickness estimated from the machine-learning-detected leads was compared to averaged Airborne Electromagnetic (AEM) bird data collected over two days during the CryoSat Validation Experiment (CryoVex) field campaign in April 2011. This comparison showed that the proposed machine learning methods had better performance (up to r = 0.83 and Root Mean Square Error (RMSE) = 0.29 m) than thickness estimation based on existing lead detection methods (RMSE = 0.86–0.93 m). Sea ice thickness based on the machine learning approaches showed a consistent decline from 2011 to 2013 and a rebound in 2014.

  6. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those at large scales with dynamic and complex behaviours. Studying such phenomena in the laboratory is costly and in most cases impossible, so miniaturizing world phenomena within the framework of a model in order to simulate them is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates growing user interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is validation and verification: because of frequently emerging patterns, strong system dynamics and the complex nature of ABMS, it is hard to validate and verify agent-based models by conventional validation methods, so finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  7. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those at large scales with dynamic and complex behaviours. Studying such phenomena in the laboratory is costly and in most cases impossible, so miniaturizing world phenomena within the framework of a model in order to simulate them is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates growing user interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is validation and verification: because of frequently emerging patterns, strong system dynamics and the complex nature of ABMS, it is hard to validate and verify agent-based models by conventional validation methods, so finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  8. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship

    OpenAIRE

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) for estimating bench press 1 repetition maximum (1RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with heterogeneous strength training histories participated in this study. They performed a 1RM test and a force-velocity test using a bench press lifting task in random order. Mean 1RM was 61.8 ± 15...

  9. Estimating evaporative vapor generation from automobiles based on parking activities

    International Nuclear Information System (INIS)

    Dong, Xinyi; Tschantz, Michael; Fu, Joshua S.

    2015-01-01

    A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this new approach to reduce uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet, whereas in this study the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then combined with Wade–Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variation of vapor generation, and that the weighted vapor generation rate is 5–8% less than that calculated without considering parking activity. - Highlights: • We applied real parking distribution data to estimate evaporative vapor generation. • We applied real hourly temperature data to estimate hourly incremental vapor generation rates. • Evaporative emissions for Florence are estimated based on the parking distribution and hourly rates. - A new approach is proposed to quantify the weighted evaporative vapor generation based on the parking distribution with an hourly incremental vapor generation rate
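
    The weighting idea can be illustrated with a deliberately simplified sketch (Wade–Reddy's equation itself is not reproduced, and the fractions and rates below are hypothetical): the fleet-wide generation is the hourly rates averaged with the fraction of vehicles parked in each hour as weights.

```python
def weighted_vapor_generation(parking_fractions, hourly_rates):
    """Weight hourly incremental vapor generation rates (e.g. g/h per
    vehicle) by the distribution of parked vehicles across hours."""
    assert abs(sum(parking_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * r for f, r in zip(parking_fractions, hourly_rates))
```

    Using the parking distribution rather than a single fleet-average duration is what lets the hour-by-hour temperature differences enter the estimate.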

  10. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target of this work is to validate the component functions of model output between physical observation and a computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of the model output. Therefore, the model validation of conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that reducing the discrepancy in the conditional expectations helps decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in the cases of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship between conditional expectations and model output. • An improved approach to parameter calibration updates the computational models. • The validation and calibration process is applied at a single site and at multiple sites. • The validation and calibration process shows superiority over existing methods

  11. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of the accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and its processing times are 0.139 and 0.124 sec/frame, respectively.
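
    PSNR, the quality metric reported above, is computed from the mean squared error between the reference and reconstructed frames (8-bit peak value 255 assumed):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    reconstructed frame of the same shape."""
    mse = np.mean((np.asarray(ref, np.float64) - np.asarray(test, np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```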

  12. Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes

    OpenAIRE

    Corradi, Valentina; Swanson, Norman R.

    2005-01-01

    Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting amongst multiple alternative forecasting models, all of which are possibl...

  13. Applying Kane's Validity Framework to a Simulation Based Assessment of Clinical Competence

    Science.gov (United States)

    Tavares, Walter; Brydges, Ryan; Myre, Paul; Prpic, Jason; Turner, Linda; Yelle, Richard; Huiskamp, Maud

    2018-01-01

    Assessment of clinical competence is complex and inference based. Trustworthy and defensible assessment processes must have favourable evidence of validity, particularly where decisions are considered high stakes. We aimed to organize, collect and interpret validity evidence for a high stakes simulation based assessment strategy for certifying…

  14. Validity of a multipass, web-based, 24-hour self-administered recall for assessment of total energy intake in blacks and whites.

    Science.gov (United States)

    Arab, Lenore; Tseng, Chi-Hong; Ang, Alfonso; Jardack, Patricia

    2011-12-01

    To date, Web-based 24-hour recalls have not been validated using objective biomarkers. From 2006 to 2009, the validity of 6 Web-based DietDay 24-hour recalls was tested among 115 black and 118 white healthy adults from Los Angeles, California, by using the doubly labeled water method, and the results were compared with the results of the Diet History Questionnaire, a food frequency questionnaire developed by the National Cancer Institute. The authors performed repeated measurements in a subset of 53 subjects approximately 6 months later to estimate the stability of the doubly labeled water measurement. The attenuation factors for the DietDay recall were 0.30 for blacks and 0.26 for whites. For the Diet History Questionnaire, the attenuation factors were 0.15 and 0.17 for blacks and whites, respectively. Adjusted correlations between true energy intake and the recalls were 0.50 and 0.47 for blacks and whites, respectively, for the DietDay recall. For the Diet History Questionnaire, they were 0.34 and 0.36 for blacks and whites, respectively. The rate of underreporting of more than 30% of calories was lower with the recalls than with the questionnaire (25% and 41% vs. 34% and 52% for blacks and whites, respectively). These findings suggest that Web-based DietDay dietary recalls offer an inexpensive and widely accessible dietary assessment alternative, the validity of which is equally strong among black and white adults. The validity of the Web-administered recall was superior to that of the paper food frequency questionnaire.
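
    The attenuation factor reported above is, in the standard regression-calibration sense, the slope obtained when the unbiased reference measure (here, doubly labeled water energy expenditure) is regressed on the self-reported intake; a sketch with simulated classical measurement error:

```python
import numpy as np

def attenuation_factor(reported, reference):
    """Regression-calibration attenuation factor: the slope of the
    reference (biomarker) measure regressed on the reported intake."""
    reported = np.asarray(reported, float)
    reference = np.asarray(reference, float)
    cov = np.cov(reported, reference)
    return cov[0, 1] / cov[0, 0]
```

    Under classical error the factor equals var(true) / (var(true) + var(error)), so a value near 1 means little attenuation of diet-disease associations, while the 0.15-0.30 range reported above implies substantial attenuation.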

  15. Are Validity and Reliability "Relevant" in Qualitative Evaluation Research?

    Science.gov (United States)

    Goodwin, Laura D.; Goodwin, William L.

    1984-01-01

    The views of prominent qualitative methodologists on the appropriateness of validity and reliability estimation for the measurement strategies employed in qualitative evaluations are summarized. A case is made for the relevance of validity and reliability estimation. Definitions of validity and reliability for qualitative measurement are presented…

  16. Statistical Validation of a Web-Based GIS Application and Its Applicability to Cardiovascular-Related Studies

    Directory of Open Access Journals (Sweden)

    Jae Eun Lee

    2015-12-01

    Purpose: There is abundant evidence that neighborhood characteristics are significantly linked to the health of the inhabitants of a given space within a given time frame. This study statistically validates a web-based GIS application designed to support cardiovascular-related research, developed by the NIH-funded Research Centers in Minority Institutions (RCMI) Translational Research Network (RTRN) Data Coordinating Center (DCC), and discusses its applicability to cardiovascular studies. Methods: Geo-referencing, geocoding and geospatial analyses were conducted for 500 randomly selected home addresses in a U.S. southeastern metropolitan area. Correlation coefficients, factor analysis and Cronbach’s alpha (α) were estimated to quantify the internal consistency, reliability and construct/criterion/discriminant validity of the cardiovascular-related geospatial variables (walk score and numbers of hospitals, fast food restaurants, parks and sidewalks). Results: Cronbach’s α for the cardiovascular geospatial variables was 95.5%, implying successful internal consistency. Walk scores were significantly correlated with the number of hospitals (r = 0.715; p < 0.0001), fast food restaurants (r = 0.729; p < 0.0001), parks (r = 0.773; p < 0.0001) and sidewalks (r = 0.648; p < 0.0001) within a mile from homes. They were also significantly associated with the diversity index (r = 0.138; p = 0.0023), median household income (r = −0.181; p < 0.0001) and owner-occupied rates (r = −0.440; p < 0.0001). However, no significant correlation was found with median age, vulnerability, unemployment rate, labor force or population growth rate. Conclusion: Our data demonstrate that the geospatial data generated by the web-based application were internally consistent and showed satisfactory validity. Therefore, the GIS application may be useful in cardiovascular-related studies aimed at investigating the potential impact of geospatial factors on diseases and/or the long
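
    Cronbach's α for a set of items can be computed directly from the item and total-score variances; this is a generic implementation, not the study's statistical package:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

    Highly correlated items inflate the total-score variance relative to the item variances, pushing α toward 1, which is what the reported 95.5% reflects.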

  17. Primary Sclerosing Cholangitis Risk Estimate Tool (PREsTo) Predicts Outcomes in PSC: A Derivation & Validation Study Using Machine Learning.

    Science.gov (United States)

    Eaton, John E; Vesterhus, Mette; McCauley, Bryan M; Atkinson, Elizabeth J; Schlicht, Erik M; Juran, Brian D; Gossard, Andrea A; LaRusso, Nicholas F; Gores, Gregory J; Karlsen, Tom H; Lazaridis, Konstantinos N

    2018-05-09

    Improved methods are needed to risk stratify and predict outcomes in patients with primary sclerosing cholangitis (PSC). Therefore, we sought to derive and validate a new prediction model and compare its performance to existing surrogate markers. The model was derived using 509 subjects from a multicenter North American cohort and validated in an international multicenter cohort (n=278). Gradient boosting, a machine-learning technique, was used to create the model. The endpoint was hepatic decompensation (ascites, variceal hemorrhage or encephalopathy). Subjects with advanced PSC or cholangiocarcinoma at baseline were excluded. The PSC risk estimate tool (PREsTo) consists of 9 variables: bilirubin, albumin, serum alkaline phosphatase (SAP) times the upper limit of normal (ULN), platelets, AST, hemoglobin, sodium, patient age and the number of years since PSC was diagnosed. Validation in an independent cohort confirms that PREsTo accurately predicts decompensation (C statistic 0.90, 95% confidence interval (CI) 0.84-0.95) and performs well compared to the MELD score (C statistic 0.72, 95% CI 0.57-0.84), the Mayo PSC risk score (C statistic 0.85, 95% CI 0.77-0.92) and SAP (C statistic 0.65, 95% CI 0.55-0.73). PREsTo continued to be accurate among individuals with a bilirubin below threshold (C statistic 0.90, 95% CI 0.82-0.96) and when the score was re-applied later in the disease course (C statistic 0.82, 95% CI 0.64-0.95). PREsTo accurately predicts hepatic decompensation in PSC and exceeds the performance of other widely available, noninvasive prognostic scoring systems. This article is protected by copyright. All rights reserved. © 2018 by the American Association for the Study of Liver Diseases.
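
    For a binary endpoint, the C statistic reduces to the probability that a randomly chosen subject with the event scores higher than one without it; a sketch follows (the study's survival-analysis C statistic additionally handles censoring, which this simplification ignores):

```python
import numpy as np

def c_statistic(scores, events):
    """Concordance for a binary endpoint: the fraction of
    (event, non-event) pairs in which the event case scored higher.
    Ties count one half."""
    scores = np.asarray(scores, float)
    events = np.asarray(events, bool)
    pos, neg = scores[events], scores[~events]
    wins = sum((p > neg).sum() + 0.5 * (p == neg).sum() for p in pos)
    return wins / (len(pos) * len(neg))
```

    A value of 0.5 means the score is no better than chance; 1.0 means perfect discrimination, so the reported 0.90 indicates strong separation of decompensating from stable patients.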

  18. Cumulant-Based Coherent Signal Subspace Method for Bearing and Range Estimation

    Directory of Open Access Journals (Sweden)

    Bourennane Salah

    2007-01-01

    A new method for simultaneous range and bearing estimation of buried objects in the presence of unknown Gaussian noise is proposed. This method uses the MUSIC algorithm, with the noise subspace estimated from the slice fourth-order cumulant matrix of the received data. The higher-order statistics aim at removing the additive unknown Gaussian noise. The bilinear focusing operator is used to decorrelate the received signals and to estimate the coherent signal subspace. A new source steering vector is proposed, including the acoustic scattering model at each sensor. The range and bearing of the objects at each sensor are expressed as functions of those at the first sensor. This improves object localization anywhere in the near-field or far-field zone of the sensor array. Finally, the performance of the proposed method is validated on data recorded during experiments in a water tank.

  19. Estimating external causes of death in Thailand 1996-2009 based on the 2005 Verbal Autopsy study

    Directory of Open Access Journals (Sweden)

    Nuntaporn Klinjun

    2014-12-01

    This study aimed to develop models based on Verbal Autopsy (VA) data and to estimate the correct number of deaths from external causes in Thailand from 1996 to 2009. Logistic regression was used to create models of the three external causes of death classified by province, gender-age group and vital registration (VR) cause-location group. Receiver operating characteristic (ROC) curves were used to validate the models by matching the number of reported deaths to the number of deaths predicted by the models. The models provided accurate predictions, with false positive error rates of 1.6%, 2.0% and 0.6% and sensitivities of 73.8%, 46.3% and 62.0%, respectively. The results reveal that under-reporting of external causes of death increased over the 14-year period. Our statistical method confirms that the Thai 2005 VA data can be used to estimate external causes of death from the VR report in Thailand, allowing for the under-reporting rate.
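
    The sensitivity and false-positive-rate figures above come from thresholding a logistic model's predicted probabilities against the true classifications; a minimal sketch:

```python
import numpy as np

def sensitivity_fpr(prob, truth, threshold):
    """Sensitivity and false positive rate of predicted probabilities
    dichotomized at a given cut-off."""
    pred = np.asarray(prob) >= threshold
    truth = np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    tn = np.sum(~pred & ~truth)
    return tp / (tp + fn), fp / (fp + tn)
```

    Sweeping the threshold from 0 to 1 and plotting sensitivity against the false positive rate traces the ROC curve used to validate the models.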

  20. BWR level estimation using Kalman Filtering approach

    International Nuclear Information System (INIS)

    Garner, G.; Divakaruni, S.M.; Meyer, J.E.

    1986-01-01

    Work is in progress on the development of a system for Boiling Water Reactor (BWR) vessel level validation and failure detection. The levels validated include the liquid level both inside and outside the core shroud. This work is a major part of a larger effort to develop a complete system for BWR signal validation; the demonstration plant is the Oyster Creek BWR. Liquid level inside the core shroud is not directly measured during full power operation, so this level must be validated using measurements of other quantities together with analytic models. Given the available sensors, analytic models for level that are based on mass and energy balances can contain open integrators. When such a model is driven by noisy measurements, the model-predicted level will deviate from the true level over time. To validate the level properly and to avoid false alarms, the open integrator must be stabilized. In addition, plant parameters change slowly with time; the model must either account for these plant changes or be insensitive to them, to avoid false alarms while maintaining sensitivity to true failures of the level instrumentation. These problems are addressed here by combining an extended Kalman filter with a parity-space decision/estimator. The open integrator is stabilized by integrating from the validated estimate at the beginning of each sampling interval, rather than from the model-predicted value. The model is adapted to slow plant/sensor changes by updating model parameters on-line.
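The stabilization idea described here (integrate the mass balance from the validated estimate rather than the raw model state, then correct with a measurement) can be sketched with a scalar Kalman filter. The plant values, flow bias, and noise levels below are invented for illustration and are not Oyster Creek data.

```python
import numpy as np

def validated_level(level0, inflow, outflow, meas, q=1e-4, r=0.04, dt=1.0):
    """Scalar Kalman filter for a level governed by an open integrator.

    Each step integrates the mass balance from the previous *validated*
    estimate, then blends in the (noisy) level measurement, so a bias in
    the modelled flows cannot accumulate without bound.
    """
    x, p = level0, 1.0
    out = []
    for u_in, u_out, z in zip(inflow, outflow, meas):
        x, p = x + dt * (u_in - u_out), p + q   # predict from validated state
        g = p / (p + r)                         # Kalman gain
        x, p = x + g * (z - x), (1 - g) * p     # measurement update
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
n = 500
true = 5.0 + 0.002 * np.arange(n)             # true level rises slowly
inflow = np.full(n, 0.013)                    # modelled flows carry a bias:
outflow = np.full(n, 0.010)                   # net 0.003 vs. true 0.002/step
meas = true + 0.2 * rng.standard_normal(n)    # noisy redundant level sensor
est = validated_level(5.0, inflow, outflow, meas)
open_loop = 5.0 + np.cumsum(inflow - outflow)   # unstabilized open integrator
```

The unstabilized integrator drifts by roughly half a unit over the run, while the validated estimate stays close to the true level despite the flow bias.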

  1. Validation of a Process-Based Agro-Ecosystem Model (Agro-IBIS) for Maize in Xinjiang, Northwest China

    Directory of Open Access Journals (Sweden)

    Tureniguli Amuti

    2018-03-01

    Full Text Available Agricultural oasis expansion and intensive management practices have occurred in arid and semiarid regions of China during the last few decades. Accordingly, regional carbon and water budgets have been profoundly impacted by agroecosystems in these regions. Therefore, methods to accurately estimate energy, water, and carbon exchanges are becoming increasingly important. Process-based models can represent the complex processes between land and atmosphere in agricultural ecosystems. However, before such models can be applied, they must be validated under different environmental and climatic conditions. In this study, a process-based agricultural ecosystem model (Agro-IBIS) was validated for maize crops using 3 years of soil and biometric measurements at the Wulanwusu agrometeorological site (WAS) located in the Shihezi oasis in Xinjiang, northwest China. The model satisfactorily represented the leaf area index (LAI) during the growing season, simulating its peak values to within 0–10%. Total biomass carbon was overestimated by 15%, 8%, and 16% in 2004, 2005, and 2006, respectively. The model satisfactorily simulated the soil temperature (0–10 cm) and volumetric water content (VWC; 0–25 cm) of farmland during the growing season. However, it overestimated soil temperature by approximately 4 °C and VWC by 15–30% during the winter, coinciding with the period of no vegetation cover in Xinjiang. Overall, the results indicate that the model can represent crop growth and appears applicable at multiple sites in the arid oasis agroecosystems of Xinjiang. Future application of the model will involve more comprehensive validation using eddy covariance flux data, and will consider including the dynamics of crop residue and improving the characterization of the final stage of leaf development.

  2. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    Science.gov (United States)

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval and hypothesis testing methods for the analysis of a contingency table with incomplete observations in both margins depend entirely on the assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can lead to unreliable conclusions because it under-estimates the uncertainty. The first objective of this paper is therefore to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution, by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of the parameters of interest, bootstrap confidence interval methods, and bootstrap hypothesis testing methods. We compare the valid sampling distribution with the sampling distribution under the independence assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistically optimistic precision. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results confirm the conclusions obtained from the simulation studies.
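The bootstrap confidence-interval machinery the paper builds on can be sketched generically. This is the plain percentile bootstrap on raw observations; the paper's method instead resamples from the derived joint sampling distribution of the incomplete-table counts, which is not reproduced here, and the Bernoulli data below are invented for the demonstration.

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Resamples the data with replacement n_boot times, evaluates the
    statistic on each resample, and returns the alpha/2 and 1-alpha/2
    quantiles of the resulting bootstrap distribution.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([stat(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# toy example: 95% CI for a Bernoulli success probability
rng = np.random.default_rng(42)
sample = rng.binomial(1, 0.3, size=500)
lo, hi = bootstrap_ci(sample, np.mean)
```

Because the bootstrap distribution is centered on the observed statistic, the interval always brackets the sample mean; its width reflects the sampling uncertainty the independence assumption would understate.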

  3. A METHOD TO ESTIMATE TEMPORAL INTERACTION IN A CONDITIONAL RANDOM FIELD BASED APPROACH FOR CROP RECOGNITION

    Directory of Open Access Journals (Sweden)

    P. M. A. Diaz

    2016-06-01

    Full Text Available This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. The approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of the temporal interaction parameters is cast as an optimization problem whose goal is to find the transition matrix that maximizes CRF performance on a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil, with Pattern Search as the optimization algorithm. The experimental results demonstrate that the proposed method substantially outperforms estimates based on joint or conditional class transition probabilities, which rely on training samples.

  4. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution, and an a posteriori noise variance factor is derived from this quadratic form. To detect blunders in the observations, the estimated standardized CVE is proposed as a test statistic that can be applied whether the noise variances are known or unknown. We use LSC together with the proposed methods for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at the data points and the RMS of the estimated observation noise decrease by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, a consequence of the sparse distribution of data points in this case study. The influence of gross errors on LSC prediction results is also investigated via lower cutoff CVEs; after elimination of outliers, the RMS of this type of error is reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classifying the dataset into three groups with presumed different noise variances. The noise variance components for each group are estimated by restricted maximum likelihood via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. 
The advantage of the proposed method is the
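A concrete analogue of the direct-CVE idea: for ordinary least squares, the full vector of leave-one-out cross-validation errors follows from a single fit via the hat matrix, e_cv,i = e_i / (1 − h_ii), instead of n separate refits. This is the OLS counterpart of the collocation formula in the record, shown only to illustrate the speed-up; the data are synthetic.

```python
import numpy as np

def loo_cv_errors_direct(X, y):
    """Leave-one-out CV errors from one fit, via the hat matrix:
    e_cv = residual / (1 - leverage)."""
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    resid = y - H @ y
    return resid / (1.0 - np.diag(H))

def loo_cv_errors_naive(X, y):
    """Element-wise LOO errors by refitting n times (slow reference)."""
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        m = np.ones(n, bool)
        m[i] = False
        beta = np.linalg.lstsq(X[m], y[m], rcond=None)[0]
        out[i] = y[i] - X[i] @ beta
    return out

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(60), rng.standard_normal((60, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + 0.3 * rng.standard_normal(60)
e_direct = loo_cv_errors_direct(X, y)
e_naive = loo_cv_errors_naive(X, y)
```

The two vectors agree to machine precision, while the direct formula costs one factorization instead of n.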

  5. Experimental validation of pulsed column inventory estimators

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.; Eiben, K.; Dander, T.; Hakkila, E.A.

    1991-01-01

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs

  6. Statistical inference for remote sensing-based estimates of net deforestation

    Science.gov (United States)

    Ronald E. McRoberts; Brian F. Walters

    2012-01-01

    Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...

  7. Validation of a Semi-Quantitative Food Frequency Questionnaire for Argentinean Adults

    OpenAIRE

    Dehghan, Mahshid; del Cerro, Silvia; Zhang, Xiaohe; Cuneo, Jose Maini; Linetzky, Bruno; Diaz, Rafael; Merchant, Anwar T.

    2012-01-01

    BACKGROUND: The Food Frequency Questionnaire (FFQ) is the most commonly used method for ranking individuals based on long term food intake in large epidemiological studies. The validation of an FFQ for specific populations is essential as food consumption is culture dependent. The aim of this study was to develop a Semi-quantitative Food Frequency Questionnaire (SFFQ) and evaluate its validity and reproducibility in estimating nutrient intake in urban and rural areas of Argentina. METHODS/PRI...

  8. Experimental validation of Monte Carlo calculations for organ dose

    International Nuclear Information System (INIS)

    Yalcintas, M.G.; Eckerman, K.F.; Warner, G.G.

    1980-01-01

    The problem of validating estimates of absorbed dose due to photon energy deposition is examined. The computational approaches used to estimate photon energy deposition are reviewed. The limited data available for validating these approaches are discussed, and suggestions are made as to how better validation information might be obtained

  9. Fuzzy logic based ELF magnetic field estimation in substations

    International Nuclear Information System (INIS)

    Kosalay, I.

    2008-01-01

    This paper examines estimation of extremely low frequency (ELF) magnetic fields (MF) in power substations. First, the results of previous relevant research studies and MF measurements in a sample power substation are presented. Then, a fuzzy logic model based on geometric definitions for estimating the MF distribution is explained. Visual software with a three-dimensional screening unit, based on the fuzzy logic technique, has been developed. (authors)
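The geometry-to-field mapping a fuzzy model of this kind encodes can be sketched with a toy Mamdani-style rule base: triangular memberships over distance from the equipment, rule outputs combined by weighted-average defuzzification. The membership breakpoints and field levels below are invented for illustration and are not values from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_mf(distance_m):
    """Toy rule base: the nearer the busbar, the stronger the field.

    Each rule pairs a membership weight with an output level in
    microtesla (illustrative assumptions only).
    """
    rules = [
        (tri(distance_m, -1.0, 0.0, 2.0), 40.0),   # near -> high field
        (tri(distance_m, 1.0, 3.0, 5.0), 15.0),    # mid  -> medium field
        (tri(distance_m, 4.0, 8.0, 12.0), 3.0),    # far  -> low field
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Between rule centers the output interpolates smoothly, which is what makes such models useful for mapping a continuous field distribution from a handful of measured anchor points.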

  10. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    Estimates of the line impedance can be used in the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off-grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...

  11. Actinide solubility in deep groundwaters - estimates for upper limits based on chemical equilibrium calculations

    International Nuclear Information System (INIS)

    Schweingruber, M.

    1983-12-01

    A chemical equilibrium model is used to estimate maximum upper concentration limits for some actinides (Th, U, Np, Pu, Am) in groundwaters. Eh/pH diagrams for solubility isopleths, dominant dissolved species and limiting solids are constructed for fixed parameter sets including temperature, thermodynamic database, ionic strength and total concentrations of most important inorganic ligands (carbonate, fluoride, phosphate, sulphate, chloride). In order to assess conservative conditions, a reference water is defined with high ligand content and ionic strength, but without competing cations. In addition, actinide oxides and hydroxides are the only solid phases considered. Recommendations for 'safe' upper actinide solubility limits for deep groundwaters are derived from such diagrams, based on the predicted Eh/pH domain. The model results are validated as far as the scarce experimental data permit. (Auth.)

  12. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pb(ex) measurements.

    Science.gov (United States)

    Porto, Paolo; Walling, Des E

    2012-10-01

    Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides and more particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)) has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from (137)Cs and (210)Pb(ex) measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Designing and validation of a yoga-based intervention for obsessive compulsive disorder.

    Science.gov (United States)

    Bhat, Shubha; Varambally, Shivarama; Karmani, Sneha; Govindaraj, Ramajayam; Gangadhar, B N

    2016-06-01

    Some yoga-based practices have been found to be useful for patients with obsessive compulsive disorder (OCD). The authors could not find a validated yoga therapy module for OCD. This study attempted to formulate a generic yoga-based intervention module for OCD. A yoga module was designed based on traditional and contemporary yoga literature and sent to 10 yoga experts for content validation. The experts rated the usefulness of the practices on a scale of 1-5 (5 = extremely useful). The final version of the module was pilot-tested on patients with OCD (n = 17) for both feasibility and effect on symptoms. Eighty-eight per cent (22 out of 25) of the items in the initial module were retained, with modifications suggested by the experts along with patients' inputs and the authors' experience. The module was found to be feasible and showed an improvement in OCD symptoms on the total Yale-Brown Obsessive-Compulsive Scale (YBOCS) score (p = 0.001). A generic yoga therapy module for OCD was thus validated by experts in the field and found feasible to practice in patients, and a decrease in symptom scores was found following 2 weeks of yoga practice. Further clinical validation is warranted to confirm efficacy.

  14. Validation of computer code TRAFIC used for estimation of charcoal heatup in containment ventilation systems

    International Nuclear Information System (INIS)

    Yadav, D.H.; Datta, D.; Malhotra, P.K.; Ghadge, S.G.; Bajaj, S.S.

    2005-01-01

    Full text of publication follows: Standard Indian PHWRs are provided with a Primary Containment Filtration and Pump-Back System (PCFPB) incorporating charcoal filters in the ventilation circuit to remove radioactive iodine that may be released from the reactor core into the containment during LOCA+ECCS failure, which is a Design Basis Accident for containment of radioactive release. This system is provided with two identical air circulation loops, each having two full-capacity fans (one operating and one standby) for a bank of four combined charcoal and High Efficiency Particulate Air (HEPA) filters, in addition to other filters. While the filtration circuit is designed to operate under forced flow conditions, it is of interest to understand the performance of the charcoal filters in the event of failure of the fans after operating for some time, i.e., when the radio-iodine inventory is at its peak value. It is of interest to check whether the buoyancy-driven natural circulation occurring in the filtration circuit is sufficient to keep the temperature in the charcoal under safe limits. A computer code TRAFIC (Transient Analysis of Filters in Containment) was developed using a conservative one-dimensional model to analyze the system. Suitable parametric studies were carried out to understand the problem and to assess the safety of the existing system. The TRAFIC code has two important components: the first estimates the heat generation in the charcoal filter based on the 'Source Term', while the other performs thermal-hydraulic computations. In an attempt to validate the code, experimental studies have been carried out. For this purpose, an experimental set-up comprising a scaled-down model of the filtration circuit, with heating coils embedded in the charcoal to simulate the heating effect of radio-iodine, has been constructed. The present work of validation consists of utilizing the results obtained from experiments conducted for different heat loads, elevations and adsorbent

  15. Validity of Cognitive Load Measures in Simulation-Based Training: A Systematic Review.

    Science.gov (United States)

    Naismith, Laura M; Cavalcanti, Rodrigo B

    2015-11-01

    Cognitive load theory (CLT) provides a rich framework to inform instructional design. Despite the applicability of CLT to simulation-based medical training, findings from multimedia learning have not been consistently replicated in this context. This lack of transferability may be related to issues in measuring cognitive load (CL) during simulation. The authors conducted a review of CLT studies across simulation training contexts to assess the validity evidence for different CL measures. PRISMA standards were followed. For 48 studies selected from a search of MEDLINE, EMBASE, PsycInfo, CINAHL, and ERIC databases, information was extracted about study aims, methods, validity evidence of measures, and findings. Studies were categorized on the basis of findings and prevalence of validity evidence collected, and statistical comparisons between measurement types and research domains were pursued. CL during simulation training has been measured in diverse populations including medical trainees, pilots, and university students. Most studies (71%; 34 of 48) used self-report measures; others included secondary task performance, physiological indices, and observer ratings. Correlations between CL and learning varied from positive to negative. Overall validity evidence for CL measures was low (mean score 1.55/5). Studies reporting greater validity evidence were more likely to report that high CL impaired learning. The authors found evidence that inconsistent correlations between CL and learning may be related to issues of validity in CL measures. Further research would benefit from rigorous documentation of validity and from triangulating measures of CL. This can better inform CLT instructional design for simulation-based medical training.

  16. A Neural Networks Based Operation Guidance System for Procedure Presentation and Validation

    International Nuclear Information System (INIS)

    Seung, Kun Mo; Lee, Seung Jun; Seong, Poong Hyun

    2006-01-01

    In this paper, a neural network based operator support system is proposed to reduce operators' errors in abnormal situations in nuclear power plants (NPPs). There are many complicated situations in which regular and suitable operations must be carried out by operators. In order to regulate and validate operators' actions, it is necessary to develop an operator support system that includes computer-based procedures with functions for operation validation. Many computerized procedure systems (CPSs) have recently been developed. Focusing on human machine interface (HMI) design and the computerization of procedures, most CPSs have used various methodologies to enhance the system's convenience, reliability and accessibility. Rather than only showing procedures, the proposed system integrates a simple CPS and an operation validation system (OVS), using an artificial neural network (ANN) for operational permission and quantitative evaluation

  17. Estimation of soil erosion for a sustainable land use planning: RUSLE model validation by remote sensing data utilization in the Kalikonto watershed

    Directory of Open Access Journals (Sweden)

    C. Andriyanto

    2015-10-01

    Full Text Available Technologies of Geographic Information Systems (GIS) and Remote Sensing (RS) are increasingly used for planning and natural resources management. Pixel-based GIS and RS serve as spatial modelling tools for predicting erosion. One of the methods developed for predicting erosion is the Revised Universal Soil Loss Equation (RUSLE). RUSLE predicts the erosion associated with runoff from five parameters: rain erosivity (R), soil erodibility (K), slope length (L), slope steepness (S), and land management (CP). The main constraint encountered in operating the GIS is the calculation of the slope length factor (L). This study was designed to create a plan of sustainable, low-erosion land use through RUSLE erosion modelling utilizing remote sensing data. With this approach, the study was divided into three activities: (1) the preparation and analysis of spatial data for determining the parameters and estimating erosion using the RUSLE model, (2) the validation and calibration of the RUSLE model by measuring soil erosion at the plot scale in the field, and (3) creating a plan of sustainable, low-erosion land use with RUSLE. Validation of the erosion estimates gave R² = 0.56 and r = 0.74. The results showed that the RUSLE model can be used in the Kalikonto watershed. Estimated erosion under actual conditions, under the spatial plan (RTRW), and by land capability class in the Kalikonto watershed was 72 t/ha/year, 62 t/ha/year and 58 t/ha/year, respectively.
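The RUSLE estimate itself is just the product of the factors listed above (L and S are often combined into a single LS topographic factor). A minimal sketch with illustrative factor values, not values taken from the Kalikonto study:

```python
def rusle_soil_loss(R, K, L, S, C, P):
    """Annual soil loss A = R * K * L * S * C * P (RUSLE).

    With R in MJ*mm/(ha*h*yr) and K in t*ha*h/(ha*MJ*mm), and the
    dimensionless L, S, C, P factors, A comes out in t/ha/yr.
    """
    return R * K * L * S * C * P

# illustrative factors for a moderately steep cultivated plot (assumed values)
A = rusle_soil_loss(R=2000.0, K=0.03, L=1.4, S=1.2, C=0.5, P=0.8)
```

In a GIS workflow like the one described, each factor is a raster layer and the multiplication is carried out pixel by pixel, which is why the slope length factor L is the awkward one: it depends on flow paths rather than on the pixel alone.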

  18. Estimation of the daily global solar radiation based on the Gaussian process regression methodology in the Saharan climate

    Science.gov (United States)

    Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said

    2018-06-01

    Accurate estimation of solar radiation is a major concern in renewable energy applications. Over the past few years, many machine learning paradigms have been proposed to improve estimation performance, mostly based on artificial neural networks, fuzzy logic, support vector machines and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation received on a horizontal surface through the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) was used to validate the methodology. Several input combinations were tested; it was found that the GPR model based on sunshine duration, minimum air temperature and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m², 1.15 MJ/m², 5.2%, and 98.42%, respectively.
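The GPR predictor used in studies like this one can be sketched in plain numpy with an RBF kernel. The hyperparameters are fixed here for simplicity (a real application would tune them, e.g. by maximizing the marginal likelihood), and the 1-D toy input stands in for the paper's meteorological predictors (sunshine duration, minimum temperature, relative humidity).

```python
import numpy as np

def gpr_predict(X_train, y_train, X_test, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """Gaussian process regression with an RBF kernel (numpy-only sketch).

    Returns the posterior mean and pointwise standard deviation at X_test.
    """
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)

    K = rbf(X_train, X_train) + sigma_n ** 2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)          # K^-1 y
    mean = Ks @ alpha
    cov = rbf(X_test, X_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# toy 1-D example: recover a smooth curve from noisy samples
rng = np.random.default_rng(0)
X = np.linspace(0, 5, 25)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(25)
mean, std = gpr_predict(X, y, X)
```

Besides the point prediction, the posterior standard deviation gives a per-sample uncertainty, which is one reason GPR is attractive against plain neural-network regressors.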

  19. 48 CFR 2452.216-70 - Estimated cost, base fee and award fee.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Estimated cost, base fee... Provisions and Clauses 2452.216-70 Estimated cost, base fee and award fee. As prescribed in 2416.406(e)(1), insert the following clause in all cost-plus-award-fee contracts: Estimated Cost, Base Fee and Award Fee...

  20. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method is given, and the linear property of the marginal spectrum is demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum are further analyzed. The Hilbert spectrum estimation algorithm is then discussed in detail, and simulation results are given. Theory and simulation show that for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)

  1. Estimation of vegetation photosynthetic capacity from space-based measurements of chlorophyll fluorescence for terrestrial biosphere models.

    Science.gov (United States)

    Zhang, Yongguang; Guanter, Luis; Berry, Joseph A; Joiner, Joanna; van der Tol, Christiaan; Huete, Alfredo; Gitelson, Anatoly; Voigt, Maximilian; Köhler, Philipp

    2014-12-01

    Photosynthesis simulations by terrestrial biosphere models are usually based on Farquhar's model, in which the maximum rate of carboxylation (Vcmax) is a key control parameter of photosynthetic capacity. Even though Vcmax is known to vary substantially in space and time in response to environmental controls, it is typically parameterized in models with tabulated values associated with plant functional types. Remote sensing can be used to produce a spatially continuous and temporally resolved view of photosynthetic efficiency, but traditional vegetation observations based on spectral reflectance lack a direct link to plant photochemical processes. Alternatively, recent space-borne measurements of sun-induced chlorophyll fluorescence (SIF) can offer an observational constraint on photosynthesis simulations. Here, we show that top-of-canopy SIF measurements from space are sensitive to Vcmax at the ecosystem level, and present an approach to invert Vcmax from SIF data. We use the Soil-Canopy Observation of Photosynthesis and Energy balance (SCOPE) model to derive empirical relationships between seasonal Vcmax and SIF, which are used to solve the inverse problem. We evaluate our Vcmax estimation method at six agricultural flux tower sites in the midwestern US using space-based SIF retrievals. Our Vcmax estimates agree well with literature values for corn and soybean plants (average values of 37 and 101 μmol m⁻² s⁻¹, respectively) and show plausible seasonal patterns. The effect of the updated seasonally varying Vcmax parameterization on simulated gross primary productivity (GPP) is tested by comparison with simulations using fixed Vcmax values. Validation against flux tower observations demonstrates that simulations of GPP and light use efficiency improve significantly when our time-resolved Vcmax estimates from SIF are used, with R² for GPP comparisons increasing from 0.85 to 0.93, and for light use efficiency from 0.44 to 0.83. Our results support the use of

  2. Development of Deep Learning Based Data Fusion Approach for Accurate Rainfall Estimation Using Ground Radar and Satellite Precipitation Products

    Science.gov (United States)

    Chen, H.; Chandra, C. V.; Tan, H.; Cifelli, R.; Xie, P.

    2016-12-01

    Rainfall estimation based on onboard satellite measurements has been an important topic in satellite meteorology for decades. A number of precipitation products at multiple time and space scales have been developed from satellite observations. For example, the NOAA Climate Prediction Center has developed a morphing technique (CMORPH) to produce global precipitation products by combining existing space-based rainfall estimates. The CMORPH products are essentially derived from geostationary satellite IR brightness temperature information and retrievals from passive microwave measurements (Joyce et al. 2004). Although space-based precipitation products provide an excellent tool for regional and global hydrologic and climate studies as well as improved situational awareness for operational forecasts, their accuracy is limited by sampling, particularly for extreme events such as very light and/or heavy rain. On the other hand, ground-based radar is a more mature technology for quantitative precipitation estimation (QPE), especially after the implementation of the dual-polarization technique and further enhanced by urban-scale radar networks. Therefore, ground radars are often critical for providing local-scale rainfall estimation and a "heads-up" for operational forecasters to issue watches and warnings, as well as for validation of various space measurements and products. The CASA DFW QPE system, which is based on dual-polarization X-band CASA radars and a local S-band WSR-88DP radar, has demonstrated excellent performance during several years of operation in a variety of precipitation regimes. The real-time CASA DFW QPE products are used extensively for localized hydrometeorological applications such as urban flash flood forecasting. In this paper, a neural network based data fusion mechanism is introduced to improve the satellite-based CMORPH precipitation product by taking into account the ground radar measurements. 
A deep learning system is
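
    The fusion idea in this record can be sketched with a toy neural network. Everything below is illustrative: the synthetic satellite/radar/gauge numbers, the network size, and the learning rate are assumptions, not values from the CMORPH or CASA systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data (assumption: real inputs would be collocated CMORPH
# satellite rain rates and CASA/WSR-88DP radar rain rates, with rain-gauge
# readings as the training target).
n = 2000
sat = rng.uniform(0, 20, n)                      # satellite estimate (mm/h)
radar = rng.uniform(0, 20, n)                    # radar estimate (mm/h)
gauge = 0.3 * sat + 0.7 * radar + rng.normal(0, 0.5, n)

# Standardize inputs and target so the tanh units stay unsaturated.
X = np.column_stack([sat, radar])
Xn = (X - X.mean(0)) / X.std(0)
yn = ((gauge - gauge.mean()) / gauge.std()).reshape(-1, 1)

# One-hidden-layer network trained by full-batch gradient descent.
h, lr = 8, 0.1
W1 = rng.normal(0, 0.1, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, 1)); b2 = np.zeros(1)
for _ in range(500):
    a1 = np.tanh(Xn @ W1 + b1)        # hidden layer
    pred = a1 @ W2 + b2               # fused (standardized) rain rate
    err = pred - yn
    gW2 = a1.T @ err / n; gb2 = err.mean(0)       # backprop of MSE loss
    da1 = (err @ W2.T) * (1 - a1 ** 2)
    gW1 = Xn.T @ da1 / n; gb1 = da1.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

rmse = float(np.sqrt(np.mean((np.tanh(Xn @ W1 + b1) @ W2 + b2 - yn) ** 2)))
# Predicting the mean would give RMSE 1.0 in standardized units.
print(f"standardized RMSE of fused estimate: {rmse:.3f}")
```

    A real system would of course use richer inputs (brightness temperatures, dual-polarization variables) and a deeper model, but the fusion mechanism is the same regression idea.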

  3. Single-snapshot DOA estimation by using Compressed Sensing

    Science.gov (United States)

    Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin

    2014-12-01

    This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (the Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ0 minimization, the Sparse Iterative Covariance-Based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that, unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of the adaptive algorithms (e.g., Capon and MUSIC) even in the single-snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
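
    Of the four algorithms, the ℓ1/LASSO beamformer is the easiest to sketch. The toy example below, with an assumed 16-element half-wavelength ULA and two on-grid sources, solves the single-snapshot ℓ1 problem with plain ISTA iterations; this is a generic solver under assumed parameters, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setup: 16-element half-wavelength ULA, 1-degree DOA grid,
# two on-grid sources at -20 and 23 degrees, a single noisy snapshot.
m, d = 16, 0.5
grid = np.deg2rad(np.arange(-90, 91))
A = np.exp(-2j * np.pi * d * np.outer(np.arange(m), np.sin(grid)))

x_true = np.zeros(len(grid), complex)
x_true[np.argmin(np.abs(grid - np.deg2rad(-20)))] = 1.0
x_true[np.argmin(np.abs(grid - np.deg2rad(23)))] = 0.8
y = A @ x_true + 0.05 * (rng.normal(size=m) + 1j * rng.normal(size=m))

# ISTA iterations for min ||Ax - y||^2 + lam * ||x||_1
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
lam = 0.05 * np.max(np.abs(A.conj().T @ y))      # regularization relative to data
x = np.zeros(len(grid), complex)
for _ in range(300):
    g = x - A.conj().T @ (A @ x - y) / L             # gradient step
    mag = np.maximum(np.abs(g), 1e-12)
    x = g * np.maximum(1 - (lam / L) / mag, 0.0)     # complex soft-threshold

# Pick the two strongest local maxima of the sparse spectrum.
p = np.abs(x)
peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] >= p[i + 1]]
top2 = sorted(peaks, key=lambda i: p[i])[-2:]
est_deg = sorted(int(round(np.rad2deg(grid[i]))) for i in top2)
print("estimated DOAs (deg):", est_deg)
```

    Note that a single gradient step from zero reproduces a scaled Fourier beamformer spectrum; the shrinkage iterations are what sparsify it toward super-resolution.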

  4. Small UAS-Based Wind Feature Identification System Part 1: Integration and Validation

    Directory of Open Access Journals (Sweden)

    Leopoldo Rodriguez Salazar

    2016-12-01

    Full Text Available This paper presents a system for identification of wind features, such as gusts and wind shear, which are of particular interest in the context of energy-efficient navigation of Small Unmanned Aerial Systems (UAS). The proposed system generates real-time wind vector estimates and uses a novel algorithm to generate wind field predictions. Estimates are based on the integration of an off-the-shelf navigation system and airspeed readings in a so-called direct approach. Wind predictions use atmospheric models to characterize the wind field with different statistical analyses. During the prediction stage, the system can incorporate, in a big-data approach, wind measurements from previous flights in order to enhance the approximations. Wind estimates are classified and fitted into a Weibull probability density function, and a Genetic Algorithm (GA) is utilized to determine the shape and scale parameters of the distribution, which are employed to determine the most probable wind speed at a given position. The system uses this information to characterize a wind shear or a discrete gust, and also applies Gaussian Process regression to characterize continuous gusts. Knowledge of the wind features is crucial for computing energy-efficient trajectories with low cost and payload; the system therefore provides a solution that does not require any additional sensors. The system architecture follows a modular, decentralized approach, in which the main parts of the system are separated into modules and the exchange of information is managed by a communication handler to enhance upgradeability and maintainability. Validation is done by providing preliminary results of both simulations and Software-In-The-Loop testing. Telemetry data collected from real flights, performed in the Seville Metropolitan Area in Andalusia (Spain), was used for testing. Results show that wind estimates and predictions can be calculated at 1 Hz and a wind map can be updated at 0.4 Hz.
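
    The Weibull-plus-GA step described above can be sketched as follows. The wind-speed sample below is drawn from a known Weibull distribution rather than real UAS estimates, and the GA settings (population size, truncation selection, blend crossover, mutation scales) are generic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy wind-speed sample: drawn from a known Weibull (shape 2, scale 8 m/s)
# instead of real UAS wind estimates.
v = 8.0 * rng.weibull(2.0, 500)

def neg_loglik(k, lam):
    """Negative Weibull log-likelihood of the sample v."""
    if k <= 0 or lam <= 0:
        return np.inf
    z = v / lam
    return -(len(v) * np.log(k / lam) + (k - 1) * np.sum(np.log(z)) - np.sum(z ** k))

# A deliberately small GA: keep the fittest third as elites, breed the rest
# by blend crossover plus Gaussian mutation.
pop = np.column_stack([rng.uniform(0.5, 5.0, 90), rng.uniform(1.0, 20.0, 90)])
for _ in range(80):
    fit = np.array([neg_loglik(k, lam) for k, lam in pop])
    elites = pop[np.argsort(fit)[:30]]
    kids = []
    while len(kids) < 60:
        pa, pb = elites[rng.integers(30, size=2)]
        w = rng.uniform()
        kids.append(w * pa + (1 - w) * pb + rng.normal(0.0, [0.05, 0.2]))
    pop = np.vstack([elites, kids])

k_hat, lam_hat = min(pop, key=lambda th: neg_loglik(*th))
# Mode of the fitted Weibull = the most probable wind speed (valid for k > 1).
v_mode = lam_hat * ((k_hat - 1) / k_hat) ** (1 / k_hat)
print(f"shape {k_hat:.2f}, scale {lam_hat:.2f}, most probable speed {v_mode:.2f} m/s")
```

    A gradient-based maximum-likelihood fit would work equally well for this smooth two-parameter problem; the GA is shown because it is what the record describes.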

  5. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...
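
    As a point of reference for the estimators named above, here is a minimal sketch of the baseline realized variance and the two-scales correction of Zhang, Mykland & Aït-Sahalia (2005) on simulated noisy ticks. It is not the OLS-based estimator proposed in the record, and all simulation parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated high-frequency day (assumed parameters): a latent random-walk
# log-price observed with additive microstructure noise.
n = 23400                          # one observation per second, 6.5 h
sigma = 1e-4                       # per-tick volatility of the latent price
eta = 5e-4                         # std of the microstructure noise
latent = np.cumsum(rng.normal(0, sigma, n))
price = latent + rng.normal(0, eta, n)
iv_true = n * sigma ** 2           # integrated variance of the latent price

def rv(p, k=1, offset=0):
    """Realized variance from returns sampled every k ticks."""
    r = np.diff(p[offset::k])
    return float(np.sum(r ** 2))

rv_all = rv(price)                 # tick-by-tick RV: badly noise-inflated
K = 300
rv_sub = np.mean([rv(price, K, o) for o in range(K)])   # averaged sparse RV
nbar = (n - K + 1) / K
tsrv = rv_sub - (nbar / n) * rv_all                     # two-scales correction
print(iv_true, rv_all, tsrv)
```

    The tick-by-tick estimator is dominated by the noise term (roughly 2n times the noise variance), while the two-scales combination recovers the integrated variance; the record's OLS framework nests such corrections in a single regression.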

  6. Head pose estimation algorithm based on deep learning

    Science.gov (United States)

    Cao, Yuanming; Liu, Yijun

    2017-05-01

    Head pose estimation has been widely used in the fields of artificial intelligence, pattern recognition, and intelligent human-computer interaction. A good head pose estimation algorithm should deal robustly with light, noise, identity, occlusion, and other factors, but so far improving the accuracy and robustness of pose estimation remains a major challenge in computer vision. A method based on deep learning for pose estimation is presented. Deep learning has a strong learning ability: it can extract high-level features from the input image through a series of non-linear operations and then classify the image using the extracted features. Such features vary strongly with pose while remaining robust to light, identity, occlusion, and other factors. The proposed head pose estimation is evaluated on the CAS-PEAL data set. Experimental results show that the method effectively improves the accuracy of pose estimation.

  7. Thermodynamic properties calculation of the flue gas based on its composition estimation for coal-fired power plants

    International Nuclear Information System (INIS)

    Xu, Liang; Yuan, Jingqi

    2015-01-01

    Thermodynamic properties of the working fluid and the flue gas play an important role in the thermodynamic calculations for boiler design and operational optimization in power plants. In this study, a generic approach to online calculation of the thermodynamic properties of the flue gas is proposed based on composition estimation. It covers the full operation scope of the flue gas, including the two-phase state when the temperature falls below the dew point. The composition of the flue gas is estimated online from the routine offline assays of the coal samples and the online measured oxygen mole fraction in the flue gas. The relative error of the proposed approach is found to be less than 1% when the standard data set of dry and humid air and typical flue gas is used for validation. Also presented are a sensitivity analysis of the individual components and the influence of the measurement error of the oxygen mole fraction on the thermodynamic properties of the flue gas. - Highlights: • Flue gas thermodynamic properties in coal-fired power plants are calculated online. • Flue gas composition is estimated online using the measured oxygen mole fraction. • The proposed approach covers the full operation scope, including two-phase flue gas. • Component sensitivity of the flue gas thermodynamic properties is presented.
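
    The core of any such property calculation is mole-fraction weighting of pure-component properties. A minimal sketch, using rounded textbook-level heat capacities and an assumed composition; the paper's own property correlations and assay-derived compositions are not reproduced here.

```python
# Approximate molar heat capacities near 400 K (J/(mol*K)); rounded
# textbook-level values, not the paper's property correlations.
CP = {"N2": 29.2, "O2": 30.1, "CO2": 41.3, "H2O": 34.3}
M = {"N2": 28.013, "O2": 31.999, "CO2": 44.010, "H2O": 18.015}  # g/mol

# Example flue-gas composition (mole fractions; assumed values, which in
# the real system would come from the coal assay plus the measured O2).
x = {"N2": 0.76, "O2": 0.04, "CO2": 0.14, "H2O": 0.06}

cp_molar = sum(x[s] * CP[s] for s in x)            # mixture cp, J/(mol*K)
m_molar = sum(x[s] * M[s] for s in x)              # mixture molar mass, g/mol
cp_mass = cp_molar / m_molar * 1000.0              # mixture cp, J/(kg*K)
print(round(cp_molar, 2), round(cp_mass, 1))
```

    Other mixture properties (enthalpy, entropy) follow the same weighting; the two-phase extension additionally requires splitting the H2O fraction at the dew point.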

  8. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    Science.gov (United States)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs; the unused multiple invariances (MIs) should be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of both Newton's method and the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, few snapshots, closely spaced sources, and high signal and noise correlation. It is also observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.

  9. On validation of the rain climatic zone designations for Nigeria

    Science.gov (United States)

    Obiyemi, O. O.; Ibiyemi, T. S.; Ojo, J. S.

    2017-07-01

    In this paper, validation of the rain climatic zone classifications for Nigeria is presented, based on the global radio-climatic models of the International Telecommunication Union-Radiocommunication (ITU-R) sector and of Crane. Rain rate estimates deduced from several ground-based measurements, and those earlier estimated from the precipitation index of the Tropical Rainfall Measuring Mission (TRMM), were employed for the validation exercise. Although earlier classifications indicated that Nigeria falls into zones P, Q, N, and K in the ITU-R designations, and zones E and H in Crane's designations, the results confirm that the rain climatic zones across Nigeria comprise only four ITU-R zones, namely P, Q, M, and N, while Crane's designations exhibit only three zones, namely E, G, and H. The ITU-R classification was found to be more suitable for planning microwave and millimeter-wave links across Nigeria. These outcomes are vital in boosting the confidence of system designers in using the ITU-R designations, as presented in the developed rain zone map, for estimating rain-induced attenuation along satellite and terrestrial microwave links over Nigeria.

  10. Numerosity estimation in visual stimuli in the absence of luminance-based cues.

    Directory of Open Access Journals (Sweden)

    Peter Kramer

    2011-02-01

    Full Text Available Numerosity estimation is a basic preverbal ability that humans share with many animal species and that is believed to be foundational to numeracy skills. It is notoriously difficult, however, to establish whether numerosity estimation is based on numerosity itself, or on one or more non-numerical cues such as, in visual stimuli, spatial extent and density. Frequently, different non-numerical cues are held constant on different trials. This strategy, however, still allows numerosity estimation to be based on a combination of non-numerical cues rather than on any particular one by itself. Here we introduce a novel method, based on second-order (contrast-based) visual motion, to create stimuli that exclude all first-order (luminance-based) cues to numerosity. We show that numerosities can be estimated almost as well in second-order motion as in first-order motion. The results show that numerosity estimation need not be based on first-order spatial filtering, first-order density perception, or any other processing of luminance-based cues to numerosity. Our method can be used as an effective tool to control non-numerical variables in studies of numerosity estimation.

  11. Estimation of High-Frequency Earth-Space Radio Wave Signals via Ground-Based Polarimetric Radar Observations

    Science.gov (United States)

    Bolen, Steve; Chandrasekar, V.

    2002-01-01

    Expanding human presence in space, and enabling the commercialization of this frontier, is part of the strategic goals for NASA's Human Exploration and Development of Space (HEDS) enterprise. Future near-Earth and planetary missions will support the use of high-frequency Earth-space communication systems. Additionally, increased commercial demand on low-frequency Earth-space links in the S- and C-band spectra has led to increased interest in the use of higher frequencies in regions like Ku- and Ka-band. Attenuation of high-frequency signals, due to a precipitating medium, can be quite severe and can cause considerable disruptions in a communications link that traverses such a medium. Previously, ground radar measurements were made along the Earth-space path and compared to satellite beacon data that was transmitted to a ground station. In this paper, quantitative estimation of the attenuation along the propagation path is made via inter-comparisons of radar data taken from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) and ground-based polarimetric radar observations. Theoretical relationships between the expected specific attenuation (k) of spaceborne measurements and ground-based measurements of reflectivity (Zh) and differential propagation phase shift (Kdp) are developed for the various hydrometeors that could be present along the propagation path, and these are used to estimate the two-way path-integrated attenuation (PIA) on the PR return echo. Resolution volume matching and alignment of the radar systems is performed, and a direct comparison of the PR return echo with ground radar attenuation estimates is made on a beam-by-beam basis. The technique is validated using data collected from the TExas and Florida UNderflights (TEFLUN-B) experiment and the TRMM Large-scale Biosphere-Atmosphere experiment in Amazonia (LBA) campaign. Attenuation estimation derived from this method can be used for strategic planning of communication systems for
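
    The specific-attenuation step can be sketched with a linear k-Kdp relation. The coefficient, gate spacing, and Kdp profile below are placeholders, not values from the TRMM PR or the ground radars used in this work; real coefficients depend on frequency band and drop-shape assumptions.

```python
# Path-integrated attenuation from differential phase, assuming k = a * Kdp.
a = 0.25                       # dB/deg, assumed linear coefficient
dr = 0.25                      # range-gate spacing (km)
kdp = [0.1, 0.4, 1.2, 2.0, 1.5, 0.6, 0.2]   # deg/km along the path (toy profile)

k_profile = [a * kdp_i for kdp_i in kdp]     # specific attenuation, dB/km
pia_two_way = 2.0 * sum(k_i * dr for k_i in k_profile)  # dB, out and back
print(round(pia_two_way, 3))
```

    The factor of two accounts for the signal traversing the precipitation on both the outbound and return legs of the radar echo.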

  12. Impact of entrainment and impingement on fish populations in the Hudson River estuary. Volume III. An analysis of the validity of the utilities' stock-recruitment curve-fitting exercise and prior estimation of beta technique. Environmental Sciences Division publication No. 1792

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.; Kirk, B.L.

    1982-03-01

    This report addresses the validity of the utilities' use of the Ricker stock-recruitment model to extrapolate the combined entrainment-impingement losses of young fish to reductions in the equilibrium population size of adult fish. In our testimony, a methodology was developed and applied to address a single fundamental question: if the Ricker model really did apply to the Hudson River striped bass population, could the utilities' curve-fitting-based estimates of the parameter alpha (which controls the impact) be considered reliable? In addition, an analysis is included of the efficacy of an alternative means of estimating alpha, termed the technique of prior estimation of beta (used by the utilities in a report prepared for regulatory hearings on the Cornwall Pumped Storage Project). This validation methodology should also be useful in evaluating inferences drawn in the literature from fits of stock-recruitment models to data obtained from other fish stocks.
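
    Why alpha controls the impact is easy to see in a worked example. In the Ricker model R = alpha * S * exp(-beta * S), the equilibrium stock (where R = S) is S* = ln(alpha)/beta, so a fractional loss m of recruits shifts the equilibrium to ln(alpha * (1 - m))/beta. The parameter values below are purely illustrative, not estimates for the Hudson River striped bass stock.

```python
import math

# Ricker stock-recruitment sketch with illustrative parameters.
alpha, beta = 4.0, 1e-6
m = 0.2                                    # fractional entrainment+impingement loss

s_eq = math.log(alpha) / beta              # unexploited equilibrium stock
s_eq_m = math.log(alpha * (1 - m)) / beta  # equilibrium with reduced recruit survival
reduction = 1 - s_eq_m / s_eq              # fractional drop in equilibrium stock
print(f"equilibrium stock falls by {reduction:.1%}")
```

    Because the reduction depends on ln(alpha), an unreliable curve-fit estimate of alpha translates directly into an unreliable impact projection, which is the point at issue in the report.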

  13. State of Charge Estimation Using the Extended Kalman Filter for Battery Management Systems Based on the ARX Battery Model

    Directory of Open Access Journals (Sweden)

    Hongjie Wu

    2013-01-01

    Full Text Available State of charge (SOC) is a critical factor in guaranteeing that a battery system operates safely and reliably. Many uncertainties and noises, such as fluctuating current, sensor measurement accuracy and bias, temperature effects, calibration errors, or even sensor failure, pose a challenge to the accurate estimation of SOC in real applications. This paper adds two contributions to the existing literature. First, the autoregressive exogenous (ARX) model is proposed here to simulate the battery's nonlinear dynamics. Due to its discrete form and ease of implementation, this straightforward approach may be more suitable for real applications. Second, its order selection principle and parameter identification method are illustrated in detail in this paper. Hybrid pulse power characterization (HPPC) cycles are implemented on a 60 Ah LiFePO4 battery module for model identification and validation. Based on the proposed ARX model, SOC estimation is pursued using the extended Kalman filter. The adaptability of the battery models and the robustness of the SOC estimation algorithm are also verified. The results indicate that the Kalman-filter-based SOC estimation method built on the ARX model performs well: it increases the model output voltage accuracy, and thereby has the potential to be used in real applications such as EVs and HEVs.
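
    The SOC estimation loop can be sketched with a deliberately simplified model: a linearized OCV curve plus an ohmic resistance stand in for the paper's ARX model, and all cell parameters are assumed round numbers, not the 60 Ah module's identified values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy battery (assumed values): 60 Ah cell, linear OCV, ohmic resistance only.
Q = 60 * 3600.0               # capacity in ampere-seconds
R0 = 2e-3                     # ohmic resistance, ohm
dt = 1.0                      # sample time, s

def ocv(s):                   # assumed open-circuit-voltage curve
    return 3.0 + 0.7 * s

def docv(s):                  # Jacobian of the measurement model (constant here)
    return 0.7

# Constant-current discharge with noisy voltage readings; the filter is
# started with a badly wrong initial SOC on purpose.
steps, i_load = 3600, 30.0
soc_true = 0.9
soc_hat, P = 0.5, 0.1
q_proc, r_meas = 1e-10, 1e-4

for _ in range(steps):
    v_meas = ocv(soc_true) - R0 * i_load + rng.normal(0, 0.01)
    # EKF predict: coulomb counting plus process noise
    soc_hat -= i_load * dt / Q
    P += q_proc
    # EKF update: correct with the voltage measurement
    H = docv(soc_hat)
    K = P * H / (H * P * H + r_meas)
    soc_hat += K * (v_meas - (ocv(soc_hat) - R0 * i_load))
    P *= (1 - K * H)
    soc_true -= i_load * dt / Q

print(f"final SOC error: {abs(soc_hat - soc_true):.4f}")
```

    With the full ARX model the predict step uses the identified difference equation instead of pure coulomb counting, but the predict/update structure is identical.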

  14. TEMIS UV product validation using NILU-UV ground-based measurements in Thessaloniki, Greece

    Science.gov (United States)

    Zempila, Melina-Maria; van Geffen, Jos H. G. M.; Taylor, Michael; Fountoulakis, Ilias; Koukouli, Maria-Elissavet; van Weele, Michiel; van der A, Ronald J.; Bais, Alkiviadis; Meleti, Charikleia; Balis, Dimitrios

    2017-06-01

    This study aims to cross-validate ground-based and satellite-based models of three photobiological UV effective dose products: the Commission Internationale de l'Éclairage (CIE) erythemal UV, the production of vitamin D in the skin, and DNA damage, using high-temporal-resolution surface-based measurements of solar UV spectral irradiances from a synergy of instruments and models. The satellite-based Tropospheric Emission Monitoring Internet Service (TEMIS; version 1.4) UV daily dose data products were evaluated over the period 2009 to 2014 with ground-based data from a Norsk Institutt for Luftforskning (NILU)-UV multifilter radiometer located at the northern midlatitude super-site of the Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki (LAP/AUTh), in Greece. For the NILU-UV effective dose rates retrieval algorithm, a neural network (NN) was trained to learn the nonlinear functional relation between NILU-UV irradiances and collocated Brewer-based photobiological effective dose products. Then the algorithm was subjected to sensitivity analysis and validation. The correlation of the NN estimates with target outputs was high (r = 0.988 to 0.990) and with a very low bias (0.000 to 0.011 in absolute units) proving the robustness of the NN algorithm. For further evaluation of the NILU NN-derived products, retrievals of the vitamin D and DNA-damage effective doses from a collocated Yankee Environmental Systems (YES) UVB-1 pyranometer were used. For cloud-free days, differences in the derived UV doses are better than 2 % for all UV dose products, revealing the reference quality of the ground-based UV doses at Thessaloniki from the NILU-UV NN retrievals. The TEMIS UV doses used in this study are derived from ozone measurements by the SCIAMACHY/Envisat and GOME2/MetOp-A satellite instruments, over the European domain in combination with SEVIRI/Meteosat-based diurnal cycle of the cloud cover fraction per 0.5° × 0.5° (lat × long) grid cells. TEMIS

  16. Decay in blood loss estimation skills after web-based didactic training.

    Science.gov (United States)

    Toledo, Paloma; Eosakul, Stanley T; Goetz, Kristopher; Wong, Cynthia A; Grobman, William A

    2012-02-01

    Accuracy in blood loss estimation has been shown to improve immediately after didactic training. The objective of this study was to evaluate retention of blood loss estimation skills 9 months after a didactic web-based training. Forty-four participants were recruited from a cohort that had undergone web-based training and testing in blood loss estimation. The web-based posttraining test, consisting of pictures of simulated blood loss, was repeated 9 months after the initial training and testing. The primary outcome was the difference in accuracy of estimated blood loss (percent error) at 9 months compared with immediately posttraining. At the 9-month follow-up, the median error in estimation worsened to -34.6%. Although better than the pretraining error of -47.8% (P = 0.003), the 9-month error was significantly less accurate than the immediate posttraining error of -13.5% (P = 0.01). Decay in blood loss estimation skills occurs by 9 months after didactic training.

  17. Relative Validity and Reproducibility of an Interviewer Administered 14-Item FFQ to Estimate Flavonoid Intake Among Older Adults with Mild-Moderate Dementia.

    Science.gov (United States)

    Kent, Katherine; Charlton, Karen

    2017-01-01

    There is a large burden on researchers and participants when attempting to accurately measure dietary flavonoid intake using dietary assessment. Minimizing participant and researcher burden when collecting dietary data may improve the validity of the results, especially in older adults with cognitive impairment. A short 14-item food frequency questionnaire (FFQ) to measure flavonoid intake and the flavonoid subclasses (anthocyanins, flavan-3-ols, flavones, flavonols, and flavanones) was developed and assessed for validity and reproducibility against a 24-hour recall. Older adults with mild-moderate dementia (n = 49) attended two interviews 12 weeks apart. With the assistance of a family carer, a 24-h recall was collected at the first interview, and the flavonoid FFQ was interviewer-administered at both time-points. Validity and reproducibility were assessed using the Wilcoxon signed-rank sum test, Spearman's correlation coefficient, Bland-Altman plots, and Cohen's kappa. Mean flavonoid intake was determined (FFQ1 = 795 ± 492.7 mg/day, 24-h recall = 515.6 ± 384.3 mg/day). Tests of validity indicated the FFQ was better at estimating total flavonoid intake than individual flavonoid subclasses compared with the 24-h recall. There was a significant difference in total flavonoid intake estimates between the FFQ and the 24-h recall (Wilcoxon signed-rank sum p Wilcoxon signed-rank sum test showed no significant difference, Spearman's correlation coefficient indicated excellent reliability (r = 0.75, p < 0.001), Bland-Altman plots visually showed small, nonsignificant bias and wide limits of agreement, and Cohen's kappa indicated fair agreement (κ = 0.429, p < 0.001). A 14-item FFQ developed to easily measure flavonoid intake in older adults with dementia demonstrates fair validity against a 24-h recall and good reproducibility.
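
    Of the validation statistics used here, the Bland-Altman limits of agreement are the quickest to sketch. The paired intakes below are simulated stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy paired intakes (mg/day) for 49 participants; illustrative numbers only.
ffq = rng.normal(800, 300, 49)
recall = ffq - rng.normal(250, 150, 49)    # second method reads systematically lower

diff = ffq - recall
bias = float(diff.mean())                  # mean difference between the two methods
sd = float(diff.std(ddof=1))
loa = (bias - 1.96 * sd, bias + 1.96 * sd) # 95% limits of agreement
print(f"bias {bias:.0f} mg/day, limits of agreement {loa[0]:.0f} to {loa[1]:.0f}")
```

    A Bland-Altman plot is simply `diff` plotted against the pairwise means, with horizontal lines at the bias and the two limits; wide limits, as reported in the abstract, indicate large per-individual disagreement even when the average bias is modest.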

  18. Development and Validation of a Mobile Device-based External Ventricular Drain Simulator.

    Science.gov (United States)

    Morone, Peter J; Bekelis, Kimon; Root, Brandon K; Singer, Robert J

    2017-10-01

    Multiple external ventricular drain (EVD) simulators have been created, yet their cost, bulky size, and nonreusable components limit their accessibility to residency programs. To create and validate an animated EVD simulator that is accessible on a mobile device. We developed a mobile-based EVD simulator that is compatible with iOS (Apple Inc., Cupertino, California) and Android-based devices (Google, Mountain View, California) and can be downloaded from the Apple App and Google Play Store. Our simulator consists of a learn mode, which teaches users the procedure, and a test mode, which assesses users' procedural knowledge. Twenty-eight participants, who were divided into expert and novice categories, completed the simulator in test mode and answered a postmodule survey. This was graded using a 5-point Likert scale, with 5 representing the highest score. Using the survey results, we assessed the module's face and content validity, whereas construct validity was evaluated by comparing the expert and novice test scores. Participants rated individual survey questions pertaining to face and content validity a median score of 4 out of 5. When comparing test scores, generated by the participants completing the test mode, the experts scored higher than the novices (mean, 71.5; 95% confidence interval, 69.2 to 73.8 vs mean, 48; 95% confidence interval, 44.2 to 51.6; P mobile-based EVD simulator that is inexpensive, reusable, and accessible. Our results demonstrate that this simulator is face, content, and construct valid. Copyright © 2017 by the Congress of Neurological Surgeons

  19. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Zhao, Jiyun; Ji, Dongxu; Tseng, King Jet

    2017-01-01

    Highlights: •SOC and capacity are dually estimated with an online adapted battery model. •Model identification and the dual state estimate are fully decoupled. •Multiple timescales are used to improve estimation accuracy and stability. •The proposed method is verified with lab-scale experiments. •The proposed method is applicable to different battery chemistries. -- Abstract: Reliable online estimation of state of charge (SOC) and capacity is critically important for the battery management system (BMS). This paper presents a multi-timescale method for dual estimation of SOC and capacity with an online identified battery model. The model parameter estimator and the dual estimator are fully decoupled and executed on different timescales to improve model accuracy and stability. Specifically, the model parameters are adapted online with vector-type recursive least squares (VRLS) to address their different variation rates. Based on the online adapted battery model, a Kalman filter (KF)-based SOC estimator and an RLS-based capacity estimator are formulated and integrated in the form of dual estimation. Experimental results suggest that the proposed method estimates the model parameters, SOC, and capacity in real time with fast convergence and high accuracy. Experiments on both a lithium-ion battery and a vanadium redox flow battery (VRB) verify the generality of the proposed method across multiple battery chemistries. The proposed method is also compared with other existing methods on computational cost to reveal its superiority for practical application.
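
    The recursive-least-squares building block used for the online model adaptation can be sketched generically. The toy first-order system and the forgetting factor below are assumptions, not the battery model or tuning of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# RLS with a forgetting factor identifying y_k = a*y_{k-1} + b*u_k + noise.
a_true, b_true, lam = 0.95, 0.05, 0.995

theta = np.zeros(2)                  # parameter estimate [a, b]
P = np.eye(2) * 1000.0               # covariance; large = uninformative prior
y_prev = 0.0
for _ in range(2000):
    u = rng.normal()                 # excitation input
    y = a_true * y_prev + b_true * u + rng.normal(0, 0.01)
    phi = np.array([y_prev, u])      # regressor vector
    K = P @ phi / (lam + phi @ P @ phi)          # RLS gain
    theta = theta + K * (y - phi @ theta)        # parameter update
    P = (P - np.outer(K, phi) @ P) / lam         # covariance update
    y_prev = y

print("identified [a, b]:", theta)
```

    The forgetting factor lam < 1 discounts old data so slowly drifting parameters can be tracked; running separate RLS loops at different rates for fast and slow parameters is the multi-timescale idea the record describes.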

  20. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.