WorldWideScience

Sample records for percent valid correct

  1. Development and Cross-Validation of Equation for Estimating Percent Body Fat of Korean Adults According to Body Mass Index

    Directory of Open Access Journals (Sweden)

    Hoyong Sung

    2017-06-01

    Background: Using BMI as an independent variable is the easiest way to estimate percent body fat. Thus far, few studies have investigated the development and cross-validation of an equation for estimating the percent body fat of Korean adults according to the BMI. The goals of this study were the development and cross-validation of an equation for estimating the percent fat of representative Korean adults using the BMI. Methods: Samples were obtained from the Korea National Health and Nutrition Examination Survey between 2008 and 2011. The samples from 2008-2009 and 2010-2011 were labeled as the validation group (n=10,624) and the cross-validation group (n=8,291), respectively. The percent fat was measured using dual-energy X-ray absorptiometry, and the body mass index, gender, and age were included as independent variables to estimate the measured percent fat. The coefficient of determination (R²), standard error of estimation (SEE), and total error (TE) were calculated to examine the accuracy of the developed equation. Results: The cross-validated R² was 0.731 for Model 1 and 0.735 for Model 2. The SEE was 3.978 for Model 1 and 3.951 for Model 2. The equations developed in this study are more accurate for estimating the percent fat of the cross-validation group than those previously published by other researchers. Conclusion: The newly developed equations are comparatively accurate for the estimation of the percent fat of Korean adults.
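
    The sketch below is an illustrative example, not the authors' code: it fits a linear percent-fat equation on a validation sample and reports the three accuracy statistics named in the abstract (R², SEE, TE) on a cross-validation sample. Function and variable names are assumptions.

    ```python
    # Hypothetical sketch: percent fat regressed on BMI, sex and age, with
    # cross-validation statistics as described in the abstract.
    import numpy as np

    def fit_percent_fat_model(bmi, sex, age, percent_fat):
        """Ordinary least squares for %fat = b0 + b1*BMI + b2*sex + b3*age."""
        X = np.column_stack([np.ones_like(bmi), bmi, sex, age])
        coef, *_ = np.linalg.lstsq(X, percent_fat, rcond=None)
        return coef

    def evaluate(coef, bmi, sex, age, percent_fat):
        """Return R², SEE and TE of the fitted equation on a new sample."""
        X = np.column_stack([np.ones_like(bmi), bmi, sex, age])
        pred = X @ coef
        resid = percent_fat - pred
        ss_res = np.sum(resid ** 2)
        ss_tot = np.sum((percent_fat - percent_fat.mean()) ** 2)
        r2 = 1 - ss_res / ss_tot                              # coefficient of determination
        see = np.sqrt(ss_res / (len(pred) - X.shape[1]))      # standard error of estimation
        te = np.sqrt(np.mean(resid ** 2))                     # total error (RMS deviation)
        return r2, see, te
    ```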

  2. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.

  3. Validation of experimental molecular crystal structures with dispersion-corrected density functional theory calculations

    International Nuclear Information System (INIS)

    Streek, Jacco van de; Neumann, Marcus A.

    2010-01-01

    The accuracy of a dispersion-corrected density functional theory method is validated against 241 experimental organic crystal structures from Acta Cryst. Section E. This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.

  4. Validating strengths use and deficit correction behaviour scales for South African first-year students

    Directory of Open Access Journals (Sweden)

    Karina Mostert

    2017-01-01

    Research purpose: To examine the validity, measurement invariance and reliability of the proactive strengths use and deficit correction scales for South African first-year university students. Motivation for the study: In order to cope in the demanding university environment, first-year university students need to develop and apply proactive strategies, including using their strengths and developing in their areas of weakness. Several studies have indicated that proactive behaviour, specifically strengths use and deficit correction behaviour, leads to favourable outcomes such as higher engagement, lower burnout and more life satisfaction. Therefore, it is important to validate scales that measure these constructs for first-year students. Research design, approach and method: A cross-sectional research approach was used. A sample of South African first-year university students aged between 18 and 23 years (N = 776) was collected. The two scales were tested for their factor structure, measurement invariance, reliability, and convergent and criterion validity. Main findings: A two-factor structure was found for the strengths use and deficit correction behaviour scales. Measurement invariance testing showed that the two scales were interpreted similarly by participants from different campuses and language groups. Cronbach’s alpha coefficients (α ≥ 0.70) indicated that both scales were reliable. In addition, the scales demonstrated convergent validity (comparing them with a general strengths use and proactive behaviour scale). Strengths use and deficit correction behaviour both predicted student burnout, student engagement and life satisfaction, with varying strengths of the relationships for strengths use and deficit correction behaviour. Practical implications: Strengths use and deficit correction behaviour could enable students to manage study demands and enhance well-being. Students will experience favourable outcomes from proactively using strengths and
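
    As a small illustration of the reliability statistic cited in the abstract (Cronbach's α ≥ 0.70), the following sketch computes coefficient alpha from an item-response matrix. It is a generic textbook formula, not code from the study.

    ```python
    # Illustrative sketch: Cronbach's alpha for a multi-item scale.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x scale-items matrix of numeric responses."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    ```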

  5. Validation and empirical correction of MODIS AOT and AE over ocean

    Directory of Open Access Journals (Sweden)

    N. A. J. Schutgens

    2013-09-01

    We present a validation study of Collection 5 MODIS level 2 Aqua and Terra AOT (aerosol optical thickness) and AE (Ångström exponent) over ocean by comparison to coastal and island AERONET (AErosol RObotic NETwork) sites for the years 2003–2009. We show that MODIS (MODerate-resolution Imaging Spectroradiometer) AOT exhibits significant biases due to wind speed and cloudiness of the observed scene, while MODIS AE, although overall unbiased, exhibits less spatial contrast on global scales than the AERONET observations. The same behaviour can be seen when MODIS AOT is compared against Maritime Aerosol Network (MAN) data, suggesting that the spatial coverage of our datasets does not preclude global conclusions. Thus, we develop empirical correction formulae for MODIS AOT and AE that significantly improve agreement of MODIS and AERONET observations. We show these correction formulae to be robust. Finally, we study random errors in the corrected MODIS AOT and AE and show that they mainly depend on AOT itself, although small contributions are present due to wind speed and cloud fraction in AOT random errors and due to AE and cloud fraction in AE random errors. Our analysis yields significantly higher random AOT errors than the official MODIS error estimate (0.03 + 0.05 τ), while random AE errors are smaller than might be expected. This new dataset of bias-corrected MODIS AOT and AE over ocean is intended for aerosol model validation and assimilation studies, but also has consequences as a stand-alone observational product. For instance, the corrected dataset suggests that much less fine mode aerosol is transported across the Pacific and Atlantic oceans.

  6. Validation of experimental molecular crystal structures with dispersion-corrected density functional theory calculations.

    Science.gov (United States)

    van de Streek, Jacco; Neumann, Marcus A

    2010-10-01

    This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.

  7. CRED Cumulative Map of Percent Scleractinian Coral Cover at Sarigan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  8. CRED Cumulative Map of Percent Scleractinian Coral Cover at Saipan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  9. CRED Cumulative Map of Percent Scleractinian Coral Cover at Tutuila

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  10. CRED Cumulative Map of Percent Scleractinian Coral Cover at Anatahan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  11. CRED Cumulative Map of Percent Scleractinian Coral Cover at Alamagan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  12. CRED Cumulative Map of Percent Scleractinian Coral Cover at Agrihan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  13. CRED Cumulative Map of Percent Scleractinian Coral Cover at Pagan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  14. CRED Cumulative Map of Percent Scleractinian Coral Cover at Asuncion

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  15. CRED Cumulative Map of Percent Scleractinian Coral Cover at Aguijan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  16. Establishing the Validity of the Personality Assessment Inventory Drug and Alcohol Scales in a Corrections Sample

    Science.gov (United States)

    Patry, Marc W.; Magaletta, Philip R.; Diamond, Pamela M.; Weinman, Beth A.

    2011-01-01

    Although not originally designed for implementation in correctional settings, researchers and clinicians have begun to use the Personality Assessment Inventory (PAI) to assess offenders. A relatively small number of studies have made attempts to validate the alcohol and drug abuse scales of the PAI, and only a very few studies have validated those…

  17. CRED Cumulative Map of Percent Scleractinian Coral Cover at Kauai, 2005

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  18. CRED Cumulative Map of Percent Scleractinian Coral Cover at Niihau, 2005

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  19. CRED Cumulative Map of Percent Scleractinian Coral Cover at Stingray Shoals

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  20. CRED Cumulative Map of Percent Scleractinian Coral Cover at Molokai, 2005

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  1. CRED Cumulative Map of Percent Scleractinian Coral Cover at Ofu & Olosega

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  2. CRED Cumulative Map of Percent Scleractinian Coral Cover at Ta'u

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  3. CRED Cumulative Map of Percent Scleractinian Coral Cover at Guam, 2003

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  4. Validating the standard for the National Board Dental Examination Part II.

    Science.gov (United States)

    Tsai, Tsung-Hsun; Neumann, Laura M; Littlefield, John H

    2012-05-01

    As part of the overall exam validation process, the Joint Commission on National Dental Examinations periodically reviews and validates the pass/fail standard for the National Board Dental Examination (NBDE), Parts I and II. The most recent standard-setting activities for NBDE Part II used the Objective Standard Setting method. This report describes the process used to set the pass/fail standard for the 2009 exam. The failure rate on the NBDE Part II increased from 5.3 percent in 2008 to 13.7 percent in 2009 and then decreased to 10 percent in 2010. This article describes the Objective Standard Setting method and presents the estimated probabilities of classification errors based on the beta binomial mathematical model. The results show that the probability of correct classification of candidate performance is very high (0.97) and that the probabilities of false negative and false positive errors are very small (0.03 and <0.001, respectively). The low probability of classification errors supports the conclusion that the pass/fail score on the NBDE Part II is a valid guide for making decisions about candidates for dental licensure.
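
    The classification-error probabilities mentioned in the abstract come from a beta-binomial model of true ability and observed scores. The sketch below shows the general idea with illustrative parameters; the values and function names are assumptions, not those used for the NBDE Part II.

    ```python
    # Hedged sketch of beta-binomial false-negative / false-positive probabilities
    # around a fixed pass mark.
    from scipy import stats, integrate

    def classification_errors(a, b, n_items, cut_score):
        """True ability theta ~ Beta(a, b); observed score X | theta ~ Binomial(n_items, theta).
        Returns (false_negative, false_positive) probabilities for a pass mark of cut_score items."""
        theta_cut = cut_score / n_items

        def fn_integrand(theta):  # truly passing candidates observed below the cut
            return stats.binom.cdf(cut_score - 1, n_items, theta) * stats.beta.pdf(theta, a, b)

        def fp_integrand(theta):  # truly failing candidates observed at/above the cut
            return stats.binom.sf(cut_score - 1, n_items, theta) * stats.beta.pdf(theta, a, b)

        fn, _ = integrate.quad(fn_integrand, theta_cut, 1)
        fp, _ = integrate.quad(fp_integrand, 0, theta_cut)
        return fn, fp
    ```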

  5. Gamma ray self-attenuation correction: a simple numerical approach and its validation

    International Nuclear Information System (INIS)

    Agarwal, Chhavi; Poi, Sanhita; Mhatre, Amol; Goswami, A.

    2009-03-01

    A hybrid Monte Carlo method for gamma ray attenuation correction has been developed. The method has been applied to some common counting geometries like cylinder, box, sphere and disc. The method has been validated theoretically and experimentally over a wide range of transmittance and sample-to-detector distances. The advantage of the approach is that it is common to all sample geometries and can be used at all sample-to-detector distances. (author)

  6. CRED Cumulative Map of Percent Scleractinian Coral Cover at Raita Bank, 2001

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  7. CRED Cumulative Map of Percent Scleractinian Coral Cover at Eleven-Mile Bank

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  8. CRED Cumulative Map of Percent Scleractinian Coral Cover at Gardner Pinnacles, 2003

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  9. CRED Cumulative Map of Percent Scleractinian Coral Cover at French Frigate Shoals

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  10. Validating the Use of Deep Learning Neural Networks for Correction of Large Hydrometric Datasets

    Science.gov (United States)

    Frazier, N.; Ogden, F. L.; Regina, J. A.; Cheng, Y.

    2017-12-01

    Collection and validation of Earth systems data can be time consuming and labor intensive. In particular, high resolution hydrometric data, including rainfall and streamflow measurements, are difficult to obtain due to a multitude of complicating factors. Measurement equipment is subject to clogs, environmental disturbances, and sensor drift. Manual intervention is typically required to identify, correct, and validate these data. Weirs can become clogged and the pressure transducer may float or drift over time. We typically employ a graphical tool called Time Series Editor to manually remove clogs and sensor drift from the data. However, this process is highly subjective and requires hydrological expertise. Two different people may produce two different data sets. To use these data for scientific discovery and model validation, a more consistent method is needed to process this field data. Deep learning neural networks have proved to be excellent mechanisms for recognizing patterns in data. We explore the use of Recurrent Neural Networks (RNN) to capture the patterns in the data over time using various gating mechanisms (LSTM and GRU), network architectures, and hyper-parameters to build an automated data correction model. We also explore the amount of manually corrected training data required to train the network to reasonable accuracy. The benefits of this approach are that the time to process a data set is significantly reduced, and the results are 100% reproducible after training is complete. Additionally, we train the RNN and calibrate a physically-based hydrological model against the same portion of data. Both the RNN and the model are applied to the remaining data using a split-sample methodology. Performance of the machine learning model is evaluated for plausibility by comparing with the output of the hydrological model, and this analysis identifies potential periods where additional investigation is warranted.
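
    The sketch below illustrates the kind of gated recurrent model the abstract describes: a GRU sequence model trained to map raw sensor series to manually corrected series. Layer sizes, variable names and the placeholder data are assumptions, not the authors' configuration.

    ```python
    # Illustrative sketch only: GRU-based correction of hydrometric time series.
    import torch
    import torch.nn as nn

    class HydroCorrector(nn.Module):
        def __init__(self, n_features=1, hidden=64, layers=2):
            super().__init__()
            self.rnn = nn.GRU(n_features, hidden, num_layers=layers, batch_first=True)
            self.head = nn.Linear(hidden, 1)   # corrected value at each time step

        def forward(self, x):                  # x: (batch, time, n_features)
            out, _ = self.rnn(x)
            return self.head(out)              # (batch, time, 1)

    # Training outline: minimize MSE against manually corrected records.
    model = HydroCorrector()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    raw = torch.randn(8, 500, 1)               # placeholder raw sensor sequences
    corrected = raw.clone()                    # placeholder "manually corrected" targets
    for _ in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(raw), corrected)
        loss.backward()
        optimizer.step()
    ```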

  11. Multi-jet merged top-pair production including electroweak corrections

    Science.gov (United States)

    Gütschow, Christian; Lindert, Jonas M.; Schönherr, Marek

    2018-04-01

    We present theoretical predictions for the production of top-quark pairs in association with jets at the LHC including electroweak (EW) corrections. First, we present and compare differential predictions at the fixed-order level for tt̄ and tt̄+jet production at the LHC considering the dominant NLO EW corrections of order O(α_s² α) and O(α_s³ α), respectively, together with all additional subleading Born and one-loop contributions. The NLO EW corrections are enhanced at large energies and in particular alter the shape of the top transverse momentum distribution, whose reliable modelling is crucial for many searches for new physics at the energy frontier. Based on the fixed-order results we motivate an approximation of the EW corrections valid at the percent level, which allows us to readily incorporate the EW corrections in the MEPS@NLO framework of Sherpa combined with OpenLoops. Subsequently, we present multi-jet merged parton-level predictions for inclusive top-pair production incorporating NLO QCD + EW corrections to tt̄ and tt̄+jet. Finally, we compare at the particle level against a recent 8 TeV measurement of the top transverse momentum distribution performed by ATLAS in the lepton + jet channel. We find very good agreement between the Monte Carlo prediction and the data when the EW corrections are included.

  12. Customized versus population-based growth curves: prediction of low body fat percent at term corrected gestational age following preterm birth.

    Science.gov (United States)

    Law, Tameeka L; Katikaneni, Lakshmi D; Taylor, Sarah N; Korte, Jeffrey E; Ebeling, Myla D; Wagner, Carol L; Newman, Roger B

    2012-07-01

    Compare customized versus population-based growth curves for identification of small-for-gestational-age (SGA) and body fat percent (BF%) among preterm infants. Prospective cohort study of 204 preterm infants classified as SGA or appropriate-for-gestational-age (AGA) by population-based and customized growth curves. BF% was determined by air-displacement plethysmography. Differences between groups were compared using bivariable and multivariable linear and logistic regression analyses. Customized curves reclassified 30% of the preterm infants as SGA. SGA infants identified only by the customized method had significantly lower BF% (13.8 ± 6.0) than the AGA infants (16.2 ± 6.3, p = 0.02) and BF% similar to the SGA infants classified by both methods (14.6 ± 6.7, p = 0.51). Customized growth curves were a significant predictor of BF% (p = 0.02), whereas population-based growth curves were not a significant independent predictor of BF% (p = 0.50) at term corrected gestational age. Customized growth potential improves the differentiation of SGA infants and low BF% compared with a standard population-based growth curve among a cohort of preterm infants.

  13. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    Science.gov (United States)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM simulated 3D radiation fields to validate the two-layer model for reflectance enhancement at 0.47 micrometer. We find: (a) the simple model captures the viewing angle dependence of the reflectance enhancement near cloud, suggesting the physics of this model is correct; and (b) the magnitude of the 2-layer modeled enhancement agrees reasonably well with the "truth" with some expected underestimation. We further extend our model to include cloud-surface interaction using the Poisson model for broken clouds. We found that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  14. Region of validity of the Thomas–Fermi model with quantum, exchange and shell corrections

    International Nuclear Information System (INIS)

    Dyachkov, S A; Levashov, P R; Minakov, D V

    2016-01-01

    A novel approach to calculate thermodynamically consistent shell corrections in a wide range of parameters is used to predict the region of validity of the Thomas-Fermi approach. The calculated thermodynamic functions of electrons at high density are consistent with the more precise density functional theory. This makes it possible to work out a semi-classical model applicable at both low and high density. (paper)

  15. CRED Cumulative Map of Percent Scleractinian Coral Cover at Kure Atoll, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  16. CRED Cumulative Map of Percent Scleractinian Coral Cover at Laysan Island, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  17. CRED Cumulative Map of Percent Scleractinian Coral Cover at Palmyra Atoll, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  18. CRED Cumulative Map of Percent Scleractinian Coral Cover at Lisianski Island, 2001-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  19. CRED Cumulative Map of Percent Scleractinian Coral Cover at Maro Reef, 2001-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  20. CRED Cumulative Map of Percent Scleractinian Coral Cover at Baker Island, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  1. Experimental validation of gallium production and isotope-dependent positron range correction in PET

    Energy Technology Data Exchange (ETDEWEB)

    Fraile, L.M., E-mail: lmfraile@ucm.es [Grupo de Física Nuclear, Dpto. Física Atómica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L.; Udías, J.M.; Cal-González, J.; Corzo, P.M.G.; España, S.; Herranz, E.; Pérez-Liva, M.; Picado, E.; Vicente, E. [Grupo de Física Nuclear, Dpto. Física Atómica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Muñoz-Martín, A. [Centro de Microanálisis de Materiales, Universidad Autónoma de Madrid, E-28049 Madrid (Spain); Vaquero, J.J. [Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid (Spain)

    2016-04-01

    Positron range (PR) is one of the important factors that limit the spatial resolution of positron emission tomography (PET) preclinical images. Its blurring effect can be corrected to a large extent if the appropriate method is used during the image reconstruction. Nevertheless, this correction requires an accurate modelling of the PR for the particular radionuclide and materials in the sample under study. In this work we investigate PET imaging with ⁶⁸Ga and ⁶⁶Ga radioisotopes, which have a large PR and are being used in many preclinical and clinical PET studies. We produced a ⁶⁸Ga and ⁶⁶Ga phantom on a natural zinc target through (p,n) reactions using the 9-MeV proton beam delivered by the 5-MV CMAM tandetron accelerator. The phantom was imaged in an ARGUS small animal PET/CT scanner and reconstructed with a fully 3D iterative algorithm, with and without PR corrections. The reconstructed images at different time frames show significant improvement in spatial resolution when the appropriate PR is applied for each frame, by taking into account the relative amount of each isotope in the sample. With these results we validate our previously proposed PR correction method for isotopes with large PR. Additionally, we explore the feasibility of PET imaging with ⁶⁸Ga and ⁶⁶Ga radioisotopes in proton therapy.

  2. Complete O(α) QED corrections to the process ep→eX in mixed variables

    International Nuclear Information System (INIS)

    Bardin, D.; Joint Inst. of Nuclear Research, Moscow; Christova, P.; Kalinovskaya, L.; Riemann, T.

    1995-04-01

    The complete set of O(α) QED corrections with soft photon exponentiation to the process ep→eX in mixed variables (y = y_h, Q² = Q_l²) is calculated in the quark parton model. Compared to earlier attempts, we additionally determine the lepton-quark interference and the quarkonic corrections. The net results are compared to the approximation with only leptonic corrections, which amount to several percent (at large x or y: several tens of percent). We find that the newly calculated corrections modify this by a few percent or less and become negligible at small y. (orig.)

  3. An evaluation of 10 percent and 20 percent benzocaine gels in patients with acute toothaches

    Science.gov (United States)

    Hersh, Elliot V.; Ciancio, Sebastian G.; Kuperstein, Arthur S.; Stoopler, Eric T.; Moore, Paul A.; Boynes, Sean G.; Levine, Steven C.; Casamassimo, Paul; Leyva, Rina; Mathew, Tanya; Shibly, Othman; Creighton, Paul; Jeffers, Gary E.; Corby, Patricia M.A.; Turetzky, Stanley N.; Papas, Athena; Wallen, Jillian; Idzik-Starr, Cynthia; Gordon, Sharon M.

    2013-01-01

    Background The authors evaluated the efficacy and tolerability of 10 percent and 20 percent benzocaine gels compared with those of a vehicle (placebo) gel for the temporary relief of toothache pain. They also assessed the compliance with the label dose administration directions on the part of participants with toothache pain. Methods Under double-masked conditions, 576 participants self-applied study gel to an open tooth cavity and surrounding oral tissues. Participants evaluated their pain intensity and pain relief for 120 minutes. The authors determined the amount of gel the participants applied. Results The responders’ rates (the primary efficacy parameter), defined as the percentage of participants who had an improvement in pain intensity as exhibited by a pain score reduction of at least one unit on the dental pain scale from baseline for two consecutive assessments any time between the five- and 20-minute points, were 87.3 percent, 80.7 percent and 70.4 percent, respectively, for 20 percent benzocaine gel, 10 percent benzocaine gel and vehicle gel. Both benzocaine gels were significantly (P ≤ .05) better than vehicle gel; the 20 percent benzocaine gel also was significantly (P ≤ .05) better than the 10 percent benzocaine gel. The mean amount of gel applied was 235.6 milligrams, with 88.2 percent of participants applying 400 mg or less. Conclusions Both 10 percent and 20 percent benzocaine gels were more efficacious than the vehicle gel, and the 20 percent benzocaine gel was more efficacious than the 10 percent benzocaine gel. All treatments were well tolerated by participants. Practical Implications Patients can use 10 percent and 20 percent benzocaine gels to temporarily treat toothache pain safely. PMID:23633700

  4. Corrections for criterion reliability in validity generalization: The consistency of Hermes, the utility of Midas

    Directory of Open Access Journals (Sweden)

    Jesús F. Salgado

    2016-04-01

    There is criticism in the literature about the use of interrater coefficients to correct for criterion reliability in validity generalization (VG) studies, and dispute over whether .52 is an accurate and non-dubious estimate of the interrater reliability of overall job performance (OJP) ratings. We present a second-order meta-analysis of three independent meta-analytic studies of the interrater reliability of job performance ratings and make a number of comments and reflections on LeBreton et al.'s paper. The results of our meta-analysis indicate that the interrater reliability for a single rater is .52 (k = 66, N = 18,582, SD = .105). Our main conclusions are: (a) the value of .52 is an accurate estimate of the interrater reliability of overall job performance for a single rater; (b) it is not reasonable to conclude that past VG studies that used .52 as the criterion reliability value have a less than secure statistical foundation; (c) based on interrater reliability, test-retest reliability, and coefficient alpha, supervisor ratings are a useful and appropriate measure of job performance and can be confidently used as a criterion; (d) validity correction for criterion unreliability has been unanimously recommended by "classical" psychometricians and I/O psychologists as the proper way to estimate predictor validity, and is still recommended at present; (e) the substantive contribution of VG procedures to inform HRM practices in organizations should not be lost in these technical points of debate.
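
    For readers unfamiliar with the correction discussed here, the classical formula for disattenuating an observed validity coefficient for criterion unreliability is shown below as a small worked example (the .52 value is from the abstract; the observed correlation of .25 is illustrative only).

    ```python
    # Standard correction for criterion unreliability: rho_true = r_observed / sqrt(r_yy).
    import math

    def correct_for_criterion_unreliability(observed_validity, criterion_reliability):
        return observed_validity / math.sqrt(criterion_reliability)

    # Example: observed r = .25 with single-rater interrater reliability .52 -> ~.35.
    print(round(correct_for_criterion_unreliability(0.25, 0.52), 2))  # 0.35
    ```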

  5. Correction for FDG PET dose extravasations: Monte Carlo validation and quantitative evaluation of patient studies

    Energy Technology Data Exchange (ETDEWEB)

    Silva-Rodríguez, Jesús, E-mail: jesus.silva.rodriguez@sergas.es; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Santiago de Compostela, Galicia (Spain); Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Cortés, Julia; Garrido, Miguel [Servicio de Medicina Nuclear, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia, Spain and Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Pombar, Miguel [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia (Spain); Ruibal, Álvaro [Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Fundación Tejerina, 28003, Madrid (Spain)

    2014-05-15

    Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standardized uptake value (SUV) quantification for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for the paravenous injection effect. Conclusions: The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.
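
    The correction idea in the abstract amounts to rescaling SUV by the dose that actually reached circulation. The sketch below is a hedged illustration of that arithmetic; variable names and the example numbers are assumptions, not the authors' workflow.

    ```python
    # Illustrative SUV correction for an extravasated fraction of the injected dose.
    def corrected_suv(uptake_bq_per_ml, injected_dose_bq, extravasated_dose_bq, body_weight_g):
        """SUV = tissue concentration / (effective injected dose / body weight)."""
        effective_dose = injected_dose_bq - extravasated_dose_bq
        return uptake_bq_per_ml / (effective_dose / body_weight_g)

    # Example: a 5% extravasation raises the corrected SUV ~5% above the naive value.
    naive = corrected_suv(12000.0, 300e6, 0.0, 75000.0)
    fixed = corrected_suv(12000.0, 300e6, 0.05 * 300e6, 75000.0)
    ```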

  6. CRED Cumulative Map of Percent Scleractinian Coral Cover at Pearl and Hermes Atoll, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  7. Self-Correcting Electronically-Scanned Pressure Sensor

    Science.gov (United States)

    Gross, C.; Basta, T.

    1982-01-01

    High-data-rate sensor automatically corrects for temperature variations. Multichannel, self-correcting pressure sensor can be used in wind tunnels, aircraft, process controllers and automobiles. Offers data rates approaching 100,000 measurements per second with inaccuracies due to temperature shifts held below 0.25 percent (nominal) of full scale over a temperature span of 55 degrees C.

  8. Construct Validity of the MMPI-2-RF Triarchic Psychopathy Scales in Correctional and Collegiate Samples.

    Science.gov (United States)

    Kutchen, Taylor J; Wygant, Dustin B; Tylicki, Jessica L; Dieter, Amy M; Veltri, Carlo O C; Sellbom, Martin

    2017-01-01

    This study examined the MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) Triarchic Psychopathy scales recently developed by Sellbom et al. (2016) in 3 separate groups of male correctional inmates and 2 college samples. Participants were administered a diverse battery of psychopathy-specific measures (e.g., Psychopathy Checklist-Revised [Hare, 2003], Psychopathic Personality Inventory-Revised [Lilienfeld & Widows, 2005], Triarchic Psychopathy Measure [Patrick, 2010]), omnibus personality and psychopathology measures such as the Personality Assessment Inventory (Morey, 2007) and Personality Inventory for DSM-5 (Krueger, Derringer, Markon, Watson, & Skodol, 2012), and narrow-band measures that capture conceptually relevant constructs. Our results generally evidenced strong support for the convergent and discriminant validity of the MMPI-2-RF Triarchic scales. Boldness was largely associated with measures of fearless dominance, social potency, and stress immunity. Meanness showed strong relationships with measures of callousness, aggression, externalizing tendencies, and poor interpersonal functioning. Disinhibition exhibited strong associations with poor impulse control, stimulus seeking, and general externalizing proclivities. Our results provide additional construct validation to both the triarchic model and MMPI-2-RF Triarchic scales. Given the widespread use of the MMPI-2-RF in correctional and forensic settings, our results have important implications for clinical assessment in these 2 areas, where psychopathy is a highly relevant construct.

  9. Validation of corrections for errors in collimation during measurement of gastric emptying of nuclide-labeled meals

    Energy Technology Data Exchange (ETDEWEB)

    Van Deventer, G.; Thomson, J.; Graham, L.S.; Thomasson, D.; Meyer, J.H.

    1983-03-01

    The study was undertaken to validate phantom-derived corrections for errors in collimation due to septal penetration or scatter, which vary with the size of the gastric region of interest (ROI). Six volunteers received 495 ml of 20% glucose labeled with both In-113m DTPA and Tc-99m DTPA. Gastric emptying of each nuclide was monitored by gamma camera as well as by periodic removal and reinstillation of the meal through a gastric tube. Serial aspirates from the gastric tube confirmed parallel emptying of In-113m and Tc-99m, but analyses of gamma-camera data yielded parallel emptying only when adequate corrections were made for errors in collimation. Analyses of ratios of gastric counts from anterior to posterior, as well as analyses of peak-to-scatter ratios, revealed only small, insignificant anteroposterior movement of the tracers within the stomach during emptying. Accordingly, there was no significant improvement in the camera data when corrections were made for attenuation with intragastric depth.

  10. Map of percent scleractinian coral cover and sand along camera tow tracks in west Hawaii

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral and sand overlaid on bathymetry and landsat imagery northwest...

  11. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    Science.gov (United States)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to the sub-percent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  12. Attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Hosoba, Minoru

    1986-01-01

    Attenuation correction is required for the reconstruction of a quantitative SPECT image. A new method for detecting body contours, which are important for the correction of tissue attenuation, is presented. The effect of body contours, detected by the newly developed method, on the reconstructed images was evaluated using various techniques for attenuation correction. The count rates in the specified region of interest in the phantom image obtained with the Radial Post Correction (RPC) method, the Weighted Back Projection (WBP) method, and Chang's method were strongly affected by the accuracy of the contours, as compared to those obtained with Sorenson's method. To evaluate the effect of non-uniform attenuators on cardiac SPECT, computer simulation experiments were performed using two types of models, the uniform attenuator model (UAM) and the non-uniform attenuator model (NUAM). The RPC method showed the lowest relative percent error (%ERROR) in UAM (11%). However, a 20 to 30 percent increase in %ERROR was observed for NUAM reconstructed with the RPC, WBP, and Chang's methods. Introducing an average attenuation coefficient (0.12/cm for Tc-99m and 0.14/cm for Tl-201) in the RPC method decreased %ERROR to the levels for UAM. Finally, a comparison between images obtained by 180 deg and 360 deg scans and reconstructed with the RPC method showed that the degree of distortion of the contour of the simulated ventricles in the 180 deg scan was 15% higher than that in the 360 deg scan. (Namekawa, K.)
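
    As a hedged illustration of one of the techniques compared above, the sketch below computes a Chang-style first-order attenuation correction factor from an assumed uniform attenuation coefficient and precomputed pixel-to-contour path lengths, plus the %ERROR metric used in the phantom comparison. It is a generic sketch, not the paper's implementation.

    ```python
    # Illustrative first-order (Chang-type) attenuation correction factor and %ERROR metric.
    import numpy as np

    def chang_correction_factor(path_lengths_cm, mu_per_cm=0.12):
        """path_lengths_cm: distances from a pixel to the body contour over the projection angles.
        Returns the multiplicative correction applied to that pixel of the reconstructed image."""
        attenuation = np.exp(-mu_per_cm * np.asarray(path_lengths_cm))
        return 1.0 / attenuation.mean()

    def relative_percent_error(measured, true):
        """Relative percent error (%ERROR) between measured and true count rates."""
        return 100.0 * (measured - true) / true
    ```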

  13. Adaptation and Validation of a Burnout Inventory in a Survey of the Staff of a Correctional Institution in Bulgaria

    Directory of Open Access Journals (Sweden)

    Harizanova Stanislava N.

    2016-12-01

    Background: Burnout syndrome is a phenomenon that seems to be studied globally in relation to all types of populations. The staff of the correctional institutions in Bulgaria, however, has largely been left out of this trend. There is no standardized model in Bulgaria that can be used to detect possible susceptibility to professional burnout. The methods available at present only register the irreversible changes that have already set in within the functioning of the individual. V. Boyko's method for burnout assessment allows clinicians to use an individual approach to patients and affords easy comparability of results with data from other psychodiagnostic instruments. Adaptation of assessment instruments to fit the specificities of a study population (linguistic, ethno-cultural, etc.) is obligatory so that the instrument can be correctly used and yield valid results. Validation is one of the most frequently used techniques to achieve this.

  14. Validity of anthropometric procedures to estimate body density and body fat percent in military men

    Directory of Open Access Journals (Sweden)

    Ciro Romélio Rodriguez-Añez

    1999-12-01

    The objective of this study was to verify the validity of the Katch and McArdle (1973) equation, which uses the circumferences of the arm, forearm and abdomen to estimate body density, and of the procedure of Cohen (1986), which uses the circumferences of the neck and abdomen to estimate body fat percent (%F), in military men. Data were collected from 50 military men, with a mean age of 20.26 ± 2.04 years, serving in Santa Maria, RS, Brazil. The circumferences were measured according to the Katch and McArdle (1973) and Cohen (1986) procedures. The measured body density (Dm) obtained by underwater weighing was used as the criterion, and its mean value was 1.0706 ± 0.0100 g/ml. The residual lung volume was estimated using the Goldman and Becklake (1959) equation. The %F was obtained with the Siri (1961) equation, and its mean value was 12.70 ± 4.71%. The validation criterion suggested by Lohman (1992) was followed. The analysis of the results indicated that the procedure developed by Cohen (1986) has concurrent validity to estimate %F in military men or in other samples with similar characteristics, with a standard error of estimate of 3.45%.
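
    The Siri (1961) conversion from body density to percent fat referenced in the abstract is a standard two-compartment formula; the short worked example below applies it to the reported mean density (the result is close to, but not exactly, the reported mean %F, since the mean of a nonlinear function differs from the function of the mean).

    ```python
    # Siri (1961) two-compartment conversion: %fat from body density (g/ml).
    def siri_percent_fat(body_density_g_per_ml: float) -> float:
        return (4.95 / body_density_g_per_ml - 4.50) * 100.0

    print(round(siri_percent_fat(1.0706), 1))  # ~12.4 %, close to the reported 12.70 ± 4.71 %
    ```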

  15. Spectral relationships for atmospheric correction. I. Validation of red and near infra-red marine reflectance relationships.

    Science.gov (United States)

    Goyens, C; Jamet, C; Ruddick, K G

    2013-09-09

    The present study provides an extensive overview of red and near infra-red (NIR) spectral relationships found in the literature and used to constrain red or NIR-modeling schemes in current atmospheric correction (AC) algorithms, with the aim of improving water-leaving reflectance retrievals, ρw(λ), in turbid waters. However, most of these spectral relationships have been developed with restricted datasets and, subsequently, may not be globally valid, explaining the need for an accurate validation exercise. Spectral relationships are validated here with turbid in situ data for ρw(λ). Functions estimating ρw(λ) in the red were only valid for moderately turbid waters, while the red bounding equations held over the turbidity ranges presented in the in situ dataset. In the NIR region of the spectrum, the constant NIR reflectance ratio suggested by Ruddick et al. (2006) (Limnol. Oceanogr. 51, 1167-1179) was valid for moderately to very turbid waters but not for extremely turbid waters (ρw(λNIR) > 10⁻²). The results of this study suggest using the red bounding equations and the polynomial NIR function to constrain red or NIR-modeling schemes in AC processes, with the aim of improving ρw(λ) retrievals where current AC algorithms fail.

  16. QIN DAWG Validation of Gradient Nonlinearity Bias Correction Workflow for Quantitative Diffusion-Weighted Imaging in Multicenter Trials.

    Science.gov (United States)

    Malyarenko, Dariya I; Wilmes, Lisa J; Arlinghaus, Lori R; Jacobs, Michael A; Huang, Wei; Helmer, Karl G; Taouli, Bachir; Yankeelov, Thomas E; Newitt, David; Chenevert, Thomas L

    2016-12-01

    Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, -35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI.

  17. Percent Coverage

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Percent Coverage is a spreadsheet that keeps track of and compares the number of vessels that have departed with and without observers to the numbers of vessels...

  18. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations.

    Science.gov (United States)

    Lubow, Bruce C; Ransom, Jason I

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates whose deviations from the known value ranged from -8.5% to +13.7% and were corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.
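
    The double-observer component of the hybrid method rests on mark-recapture logic. The sketch below shows the simple Lincoln-Petersen form of that idea, without the sightability covariates used in the full model; the counts in the example are illustrative only.

    ```python
    # Simple simultaneous double-observer abundance correction (no sightability covariates).
    def double_observer_estimate(seen_by_1, seen_by_2, seen_by_both):
        p1 = seen_by_both / seen_by_2          # detection probability of observer 1
        p2 = seen_by_both / seen_by_1          # detection probability of observer 2
        p_combined = 1 - (1 - p1) * (1 - p2)   # probability at least one observer detects a group
        n_observed = seen_by_1 + seen_by_2 - seen_by_both
        return n_observed / p_combined         # abundance corrected for imperfect detection

    # Example: 60 groups seen by observer 1, 55 by observer 2, 45 by both -> ~73 groups estimated.
    print(round(double_observer_estimate(60, 55, 45)))
    ```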

  19. Validating a virtual source model based in Monte Carlo Method for profiles and percent deep doses calculation

    Energy Technology Data Exchange (ETDEWEB)

    Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)

    2017-07-01

    The Monte Carlo method for radiation transport has been adapted for medical physics applications. More specifically, it has received more attention in clinical treatment planning with the development of more efficient computer simulation techniques. In linear accelerator modeling by the Monte Carlo method, the phase space data file (phsp) is widely used. However, to obtain precise results, detailed information about the accelerator's head is necessary, and commonly the supplier does not provide all the necessary data. An alternative to the phsp is the Virtual Source Model (VSM). This alternative approach presents many advantages for clinical Monte Carlo applications. It is the most efficient method for particle generation and can provide accuracy similar to that obtained when the phsp is used. This research proposes a VSM simulation with the use of a Virtual Flattening Filter (VFF) for profiles and percent depth dose calculations. Two different sizes of open fields (40 x 40 cm² and 40√2 x 40√2 cm²) were used, and two different source-to-surface distances (SSD) were applied: the standard 100 cm and a custom SSD of 370 cm, which is applied in radiotherapy treatments of total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. This current model is easy to build and test. (author)

  20. Validity of proxies and correction for proxy use when evaluating social determinants of health in stroke patients.

    Science.gov (United States)

    Skolarus, Lesli E; Sánchez, Brisa N; Morgenstern, Lewis B; Garcia, Nelda M; Smith, Melinda A; Brown, Devin L; Lisabeth, Lynda D

    2010-03-01

    The purpose of this study was to evaluate stroke patient-proxy agreement with respect to social determinants of health, including depression, optimism, and spirituality, and to explore approaches to minimize proxy-introduced bias. Stroke patient-proxy pairs from the Brain Attack Surveillance in Corpus Christi Project were interviewed (n=34). Evaluation of agreement between patient-proxy pairs included calculation of intraclass correlation coefficients, linear regression models (ProxyResponse = α₀ + α₁·PatientResponse + δ, where α₀ = 0 and α₁ = 1 denotes no bias), and kappa statistics. Bias introduced by proxies was quantified with simulation studies. In the simulated data, we applied 4 approaches to estimate regression coefficients of stroke outcome social determinants of health associations when only proxy data were available for some patients: (1) substituting proxy responses in place of patient responses; (2) including an indicator variable for proxy use; (3) using regression calibration with external validation; and (4) internal validation. Agreement was fair for depression (intraclass correlation coefficient, 0.41) and optimism (intraclass correlation coefficient, 0.48) and moderate for spirituality (kappa, 0.48 to 0.53). Responses of proxies were a biased measure of the patients' responses for depression, with α₀ = 4.88 (CI, 2.24 to 7.52) and α₁ = 0.39 (CI, 0.09 to 0.69), and for optimism, with α₀ = 3.82 (CI, -1.04 to 8.69) and α₁ = 0.81 (CI, 0.41 to 1.22). Regression calibration with internal validation was the most accurate method to correct for proxy-induced bias. Fair/moderate patient-proxy agreement was observed for social determinants of health. Stroke researchers who plan to study social determinants of health may consider performing validation studies so corrections for proxy use can be made.
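
    Regression calibration, the approach the abstract found most accurate, replaces proxy responses with values predicted from a validation subsample where both patient and proxy responses exist. The sketch below is a hedged, generic illustration; data layout and names are assumptions.

    ```python
    # Illustrative regression calibration with an internal validation subsample.
    import numpy as np

    def calibrate_proxy(proxy_all, proxy_val, patient_val):
        """Fit patient ~ proxy on pairs where both responses exist, then map
        proxy-only responses to their calibrated (predicted patient) values."""
        slope, intercept = np.polyfit(proxy_val, patient_val, deg=1)
        return intercept + slope * np.asarray(proxy_all)
    ```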

  1. Correcting ligands, metabolites, and pathways

    Directory of Open Access Journals (Sweden)

    Vriend Gert

    2006-11-01

    Full Text Available Abstract Background A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as data fields in their own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and

  2. Implementation of WirelessHART in the NS-2 Simulator and Validation of Its Correctness

    Directory of Open Access Journals (Sweden)

    Pouria Zand

    2014-05-01

    Full Text Available One of the first standards in the wireless sensor networks domain, WirelessHART (HART: Highway Addressable Remote Transducer), was introduced to address industrial process automation and control requirements. This standard can be used as a reference point to evaluate other wireless protocols in the domain of industrial monitoring and control. This makes it worthwhile to set up a reliable WirelessHART simulator in order to achieve that reference point in a relatively easy manner. Moreover, it offers an alternative to expensive testbeds for testing and evaluating the performance of WirelessHART. This paper explains our implementation of WirelessHART in the NS-2 network simulator. To our knowledge, this is the first implementation that supports the WirelessHART network manager, as well as the whole stack (all OSI (Open Systems Interconnection) model layers) of the WirelessHART standard. It also explains our effort to validate the correctness of our implementation, namely through the validation of the implementation of the WirelessHART stack protocol and of the network manager. We use sniffed traffic from a real WirelessHART testbed installed in the Idrolab plant for these validations. This confirms the validity of our simulator. Empirical analysis shows that the simulated results are nearly comparable to the results obtained from real networks. We also demonstrate the versatility and usability of our implementation by providing some further evaluation results in diverse scenarios. For example, we evaluate the performance of the WirelessHART network by applying incremental interference in a multi-hop network.

  3. Validation of the AMSU-B Bias Corrections Based on Satellite Measurements from SSM/T-2

    Science.gov (United States)

    Kolodner, Marc A.

    1999-01-01

    The NOAA-15 Advanced Microwave Sounding Unit-B (AMSU-B) was designed in the same spirit as the Special Sensor Microwave Water Vapor Profiler (SSM/T-2) on board the DMSP F11-14 satellites, to perform remote sensing of spatial and temporal variations in mid and upper troposphere humidity. While the SSM/T-2 instruments have a 48 km spatial resolution at nadir and 28 beam positions per scan, AMSU-B provides an improvement with a 16 km spatial resolution at nadir and 90 beam positions per scan. The AMSU-B instrument, though, has been experiencing radio frequency interference (RFI) contamination from the NOAA-15 transmitters whose effect is dependent upon channel, geographic location, and current spacecraft antenna configuration. This has led to large cross-track biases reaching as high as 100 Kelvin for channel 17 (150 GHz) and 50 Kelvin for channel 19 (183 +/-3 GHz). NOAA-NESDIS has recently provided a series of bias corrections for AMSU-B data starting from March, 1999. These corrections are available for each of the five channels, for every third field of view, and for three cycles within an eight second period. There is also a quality indicator in each data record to indicate whether or not the bias corrections should be applied. As a precursor to performing retrievals of mid and upper troposphere humidity, a validation study is performed by statistically analyzing the differences between the F14 SSM/T-2 and the bias corrected AMSU-B brightness temperatures for three months in the spring of 1999.
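
    The validation step described above amounts to computing difference statistics between collocated brightness temperatures. The sketch below is only a schematic of such an analysis (the matched arrays tb_ssmt2 and tb_amsub are hypothetical and must already be collocated in space and time); it reports mean bias, standard deviation, and RMS difference in Kelvin.

        import numpy as np

        def difference_stats(tb_ref, tb_test):
            # Bias, standard deviation and RMS of (test - reference) brightness temperatures.
            d = np.asarray(tb_test, float) - np.asarray(tb_ref, float)
            return {"bias": d.mean(), "std": d.std(ddof=1), "rms": np.sqrt((d ** 2).mean())}

        # Toy collocated sample (Kelvin)
        tb_ssmt2 = np.array([245.1, 250.3, 260.8, 255.0])
        tb_amsub = np.array([245.9, 251.1, 261.2, 255.4])
        print(difference_stats(tb_ssmt2, tb_amsub))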

  4. Radiative corrections of O(α) to B{sup -} → V{sup 0}l{sup -} anti ν{sub l} decays

    Energy Technology Data Exchange (ETDEWEB)

    Tostado, S.L. [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional, Departamento de Fisica, Mexico, D.F. (Mexico); Castro, G.L. [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional, Departamento de Fisica, Mexico, D.F. (Mexico); CSIC- Universitat de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain)

    2016-09-15

    The O(α) electromagnetic radiative corrections to the B{sup -} → V{sup 0}l{sup -} anti ν{sub l} (V is a vector meson and l a charged lepton) decay rates are evaluated using the cutoff method to regularize virtual corrections and incorporating intermediate resonance states in the real-photon amplitude to extend the region of validity of the soft-photon approximation. The electromagnetic and weak form factors of hadrons are assumed to vary smoothly over the energies of virtual and real photons under consideration. The cutoff dependence of radiative corrections upon the scale Λ that separates the long- and short-distance regimes is found to be mild and is considered as an uncertainty of the calculation. Owing to partial cancellations of electromagnetic corrections evaluated over the three- and four-body regions of phase space, the photon-inclusive corrected rates are found to be dominated by the short-distance contribution. These corrections will be relevant for a precise determination of the b quark mixing angles by testing isospin symmetry when measurements of semileptonic rates of charged and neutral B mesons at the few percent level become available. For completeness, we also provide numerical values of radiative corrections in the three-body region of the Dalitz plot distributions of these decays. (orig.)

  5. Percent Wetland Cover

    Data.gov (United States)

    U.S. Environmental Protection Agency — Wetlands act as filters, removing or diminishing the amount of pollutants that enter surface water. Higher values for percent of wetland cover (WETLNDSPCT) may be...

  6. Map of percent scleractinian coral cover and sand along camera tows and ROV tracks of West Maui, Hawaii

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral and sand overlaid on bathymetry and landsat imagery. Optical...

  7. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D)

    International Nuclear Information System (INIS)

    Streek, Jacco van de; Neumann, Marcus A.

    2014-01-01

    The accuracy of 215 experimental organic crystal structures from powder diffraction data is validated against a dispersion-corrected density functional theory method. In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The on average slightly less accurate atomic coordinates of XRPD structures do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than for SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms, for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom related issues. Preferred orientation is the most important cause of problems. A preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom
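
    The RMSCD criterion used above is straightforward to compute once the experimental and energy-minimized structures have been overlaid. The sketch below (hypothetical coordinate arrays, non-H atoms only, already in the same Cartesian frame) illustrates the calculation and the 0.35 Å threshold quoted for powder structures; it is not the authors' validation pipeline.

        import numpy as np

        def rmscd(coords_exp, coords_min):
            # Root mean square Cartesian displacement (Å) between matched non-H atoms.
            diff = np.asarray(coords_min, float) - np.asarray(coords_exp, float)
            return np.sqrt((diff ** 2).sum(axis=1).mean())

        exp_xyz = np.array([[0.00, 0.00, 0.00], [1.45, 0.10, 0.02], [2.10, 1.30, -0.05]])
        min_xyz = np.array([[0.02, -0.01, 0.01], [1.50, 0.12, 0.00], [2.15, 1.28, -0.02]])
        value = rmscd(exp_xyz, min_xyz)
        print(value, "suspect" if value > 0.35 else "acceptable for an XRPD structure")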

  8. Inspiration: One Percent and Rising

    Science.gov (United States)

    Walling, Donovan R.

    2009-01-01

    Inventor Thomas Edison once famously declared, "Genius is one percent inspiration and ninety-nine percent perspiration." If that's the case, then the students the author witnessed at the International Student Media Festival (ISMF) last November in Orlando, Florida, are geniuses and more. The students in the ISMF pre-conference workshop…

  9. Automatic color preference correction for color reproduction

    Science.gov (United States)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors of natural objects is one of the methods available to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.
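
    A heavily simplified sketch of the idea described above: extract a representative color from a detected object region, pick the nearest preferred-color category, and nudge the region toward a target color. The category centers, targets, and blending weight are invented for illustration; the published method selects full parameter sets per category rather than applying a single blend.

        import numpy as np

        # Hypothetical category centers and preferred targets in RGB (0-255)
        CATEGORIES = {
            "skin":  (np.array([200.0, 160.0, 140.0]), np.array([205.0, 165.0, 145.0])),
            "grass": (np.array([ 90.0, 140.0,  70.0]), np.array([ 80.0, 150.0,  60.0])),
            "sky":   (np.array([120.0, 170.0, 220.0]), np.array([110.0, 175.0, 230.0])),
        }

        def correct_region(region_pixels, strength=0.5):
            # Shift a region's pixels toward the preferred color of its nearest category.
            rep = region_pixels.reshape(-1, 3).mean(axis=0)   # representative color
            name = min(CATEGORIES, key=lambda k: np.linalg.norm(rep - CATEGORIES[k][0]))
            target = CATEGORIES[name][1]
            shift = strength * (target - rep)
            return np.clip(region_pixels + shift, 0, 255), name

        region = np.ones((4, 4, 3)) * np.array([198.0, 158.0, 139.0])
        corrected, category = correct_region(region)
        print(category, corrected[0, 0])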

  10. Vision impairment and corrective considerations of civil airmen.

    Science.gov (United States)

    Nakagawara, V B; Wood, K J; Montgomery, R W

    1995-08-01

    Civil aviation is a major commercial and technological industry in the United States. The Federal Aviation Administration (FAA) is responsible for the regulation and promotion of aviation safety in the National Airspace System. To guide FAA policy changes and educational programs for aviation personnel about vision impairment and the use of corrective ophthalmic devices, the demographics of the civil airman population were reviewed. Demographic data from 1971-1991 were extracted from FAA publications and databases. Approximately 48 percent of the civil airman population is equal to or older than 40 years of age (average age = 39.8 years). Many of these aviators are becoming presbyopic and will need corrective devices for near and intermediate vision. In fact, there has been approximately a 12 percent increase in the number of aviators with near vision restrictions during the past decade. Ophthalmic considerations for prescribing and dispensing eyewear for civil aviators are discussed. The correction of near and intermediate vision conditions for older pilots will be a major challenge for eye care practitioners in the next decade. Knowledge of the unique vision and environmental requirements of the civilian airman can assist clinicians in suggesting alternative vision corrective devices better suited for a particular aviation activity.

  11. Validation of phenol red versus gravimetric method for water reabsorption correction and study of gender differences in Doluisio's absorption technique.

    Science.gov (United States)

    Tuğcu-Demiröz, Fatmanur; Gonzalez-Alvarez, Isabel; Gonzalez-Alvarez, Marta; Bermejo, Marival

    2014-10-01

    The aim of the present study was to develop a method for water flux reabsorption measurement in Doluisio's Perfusion Technique based on the use of phenol red as a non-absorbable marker and to validate it by comparison with the gravimetric procedure. The compounds selected for the study were metoprolol, atenolol, cimetidine and cefadroxil in order to include low, intermediate and high permeability drugs absorbed by passive diffusion and by carrier-mediated mechanisms. The intestinal permeabilities (Peff) of the drugs were obtained in male and female Wistar rats and calculated using both methods of water flux correction. The absorption rate coefficients of all the assayed compounds did not show statistically significant differences between male and female rats; consequently, all the individual values were combined to compare between reabsorption methods. The absorption rate coefficients and permeability values did not show statistically significant differences between the two strategies of concentration correction. The apparent zero order water absorption coefficients were also similar in both correction procedures. In conclusion, the gravimetric and phenol red methods for water reabsorption correction are accurate and interchangeable for permeability estimation in the closed-loop perfusion method. Copyright © 2014 Elsevier B.V. All rights reserved.
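
    For context, the usual way a non-absorbable marker such as phenol red is used to correct luminal concentrations for water flux is shown below. The formula and variable names are a generic textbook sketch under the stated assumption (the marker is neither absorbed nor secreted), not the exact procedure of this paper.

        def water_flux_corrected_conc(c_drug_t, pr_0, pr_t):
            # Correct a measured luminal drug concentration for water reabsorption:
            # water loss concentrates the marker, so rescale by PR(0)/PR(t).
            return c_drug_t * (pr_0 / pr_t)

        # Toy example: phenol red concentration rose from 50 to 60 units, so water was absorbed
        print(water_flux_corrected_conc(c_drug_t=8.0, pr_0=50.0, pr_t=60.0))  # ~6.67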

  12. Guide to Using the WIND Toolkit Validation Code

    Energy Technology Data Exchange (ETDEWEB)

    Lieberman-Cribbin, W.; Draxl, C.; Clifton, A.

    2014-12-01

    In response to the U.S. Department of Energy's goal of using 20% wind energy by 2030, the Wind Integration National Dataset (WIND) Toolkit was created to provide information on wind speed, wind direction, temperature, surface air pressure, and air density on more than 126,000 locations across the United States from 2007 to 2013. The numerical weather prediction model output, gridded at 2-km and at a 5-minute resolution, was further converted to detail the wind power production time series of existing and potential wind facility sites. For users of the dataset it is important that the information presented in the WIND Toolkit is accurate and that errors are known, as then corrective steps can be taken. Therefore, we provide validation code written in R that will be made public to provide users with tools to validate data of their own locations. Validation is based on statistical analyses of wind speed, using error metrics such as bias, root-mean-square error, centered root-mean-square error, mean absolute error, and percent error. Plots of diurnal cycles, annual cycles, wind roses, histograms of wind speed, and quantile-quantile plots are created to visualize how well observational data compares to model data. Ideally, validation will confirm beneficial locations to utilize wind energy and encourage regional wind integration studies using the WIND Toolkit.
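
    The validation code distributed with the toolkit is written in R; the sketch below merely restates the listed error metrics (bias, RMSE, centered RMSE, MAE, percent error) in a compact form so their definitions are explicit. Function and variable names are illustrative, and the percent-error convention shown is only one of several in use.

        import numpy as np

        def wind_speed_errors(obs, mod):
            # Common validation metrics for modeled vs observed wind speed.
            obs, mod = np.asarray(obs, float), np.asarray(mod, float)
            err = mod - obs
            bias = err.mean()
            rmse = np.sqrt((err ** 2).mean())
            crmse = np.sqrt(((err - bias) ** 2).mean())   # centered RMSE
            mae = np.abs(err).mean()
            pct = 100.0 * bias / obs.mean()                # percent error (one convention)
            return {"bias": bias, "rmse": rmse, "crmse": crmse, "mae": mae, "percent_error": pct}

        print(wind_speed_errors(obs=[6.1, 7.4, 5.0, 9.2], mod=[6.4, 7.1, 5.6, 9.8]))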

  13. Simplified correction of g-value measurements

    DEFF Research Database (Denmark)

    Duer, Karsten

    1998-01-01

    been carried out using a detailed physical model based on ISO9050 and prEN410 but using polarized data for non-normal incidence. This model is only valid for plane, clear glazings and is therefore not suited for corrections of measurements performed on complex glazings. To investigate a more general...... correction procedure, the results from the measurements on the Interpane DGU have been corrected using the principle outlined in (Rosenfeld, 1996). This correction procedure is more general as corrections can be carried out without a correct physical model of the investigated glazing. On the other hand...... the way this “general” correction procedure is used is not always in accordance with the physical conditions....

  14. Map of percent scleractinian coral cover along camera tows and ROV tracks in the Auau Channel, Island of Maui, Hawaii

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry and landsat imagery. Optical data were...

  15. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    Science.gov (United States)

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small
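
    A bare-bones sketch of the detection step described above: fit a smooth trend to each metabolite time-course, compute percent deviations from the fits, and flag time points whose median deviation across metabolites exceeds a threshold. A rolling-mean smoother stands in for the nonparametric fit used in the paper; names, data, and the threshold are illustrative.

        import numpy as np

        def smooth(y, window=3):
            # Simple centered rolling mean as a stand-in for a nonparametric smoother.
            pad = window // 2
            ypad = np.pad(y, pad, mode="edge")
            return np.convolve(ypad, np.ones(window) / window, mode="valid")

        def flag_systematic_errors(trends, threshold_pct=3.0):
            # trends: array (n_metabolites, n_timepoints); returns indices of suspect time points.
            trends = np.asarray(trends, float)
            fits = np.apply_along_axis(smooth, 1, trends)
            pct_dev = 100.0 * (trends - fits) / fits
            median_dev = np.median(pct_dev, axis=0)   # across metabolites, per time point
            return np.where(np.abs(median_dev) > threshold_pct)[0]

        data = np.array([[10.0, 11.0, 12.5, 13.0, 14.0],
                         [ 5.0,  4.8,  5.4,  4.4,  4.2],
                         [ 2.0,  2.1,  2.4,  2.3,  2.4]])
        print(flag_systematic_errors(data))   # flags the time point where all trends jump together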

  16. Beyond Marbles: Percent Change and Social Justice

    Science.gov (United States)

    Denny, Flannery

    2013-01-01

    In the author's eighth year of teaching, she hit a wall teaching percent change. Percent change is one of the few calculations taught in math classes that shows up regularly in the media, and one that she often does in her head to make sense of the world around her. Despite this, she had been teaching percent change using textbook problems about…

  17. Percent Wetland Cover (Future)

    Data.gov (United States)

    U.S. Environmental Protection Agency — Wetlands act as filters, removing or diminishing the amount of pollutants that enter surface water. Higher values for percent of wetland cover (WETLNDSPCT) may be...

  18. The Functional Assessment of Cancer Therapy – General (FACT-G) is valid for monitoring quality of life in non-Hodgkin lymphoma patients

    OpenAIRE

    Yost, KJ; Thompson, CA; Eton, DT; Allmer, C; Ehlers, SL; Habermann, TM; Shanafelt, TD; Maurer, MJ; Slager, SL; Link, BK; Cerhan, JR

    2012-01-01

    Quality of life (QoL) is an important outcome in patients with non-Hodgkin lymphoma (NHL). We assessed the validity of administering the Functional Assessment of Cancer Therapy – General (FACT-G) at 12-month intervals over 3 years in a longitudinal study of 611 prospectively enrolled, newly diagnosed NHL patients. We evaluated corrected item-total correlation and percent missing to identify items that may be less useful in certain NHL patient subgroups. The FACT-G subscales and total score de...
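
    Corrected item-total correlation, mentioned above as one of the screening statistics, is simply the correlation of each item with the sum of the remaining items. The sketch below is a generic illustration with made-up item responses, not the FACT-G scoring code.

        import numpy as np

        def corrected_item_total(scores):
            # scores: array (n_respondents, n_items); returns one correlation per item.
            scores = np.asarray(scores, float)
            total = scores.sum(axis=1)
            out = []
            for j in range(scores.shape[1]):
                rest = total - scores[:, j]   # total excluding the item itself
                out.append(np.corrcoef(scores[:, j], rest)[0, 1])
            return np.array(out)

        items = np.array([[3, 4, 2, 4],
                          [1, 2, 1, 2],
                          [4, 4, 3, 3],
                          [2, 3, 2, 3]])
        print(corrected_item_total(items))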

  19. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
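
    The correction algorithm being tested is, in essence, a calibration of the analyzer readings against chemical reference values. The sketch below shows the generic form of such a linear correction (fit on a calibration set, applied to new readings); the data and coefficients are invented, and the actual published algorithm may differ.

        import numpy as np

        def fit_linear_correction(nir_readings, reference_values):
            # Least-squares fit: reference = slope * NIR + intercept.
            slope, intercept = np.polyfit(np.asarray(nir_readings, float),
                                          np.asarray(reference_values, float), 1)
            return slope, intercept

        def apply_correction(nir_readings, slope, intercept):
            return slope * np.asarray(nir_readings, float) + intercept

        # Toy fat measurements (g/dL): analyzer vs chemical reference on a calibration set
        nir = [3.1, 4.0, 2.5, 5.2]
        ref = [3.4, 4.5, 2.7, 5.8]
        s, b = fit_linear_correction(nir, ref)
        print(apply_correction([3.6, 4.8], s, b))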

  20. Validation of a spectral correction procedure for sun and sky reflections in above-water reflectance measurements.

    Science.gov (United States)

    Groetsch, Philipp M M; Gege, Peter; Simis, Stefan G H; Eleveld, Marieke A; Peters, Steef W M

    2017-08-07

    A three-component reflectance model (3C) is applied to above-water radiometric measurements to derive remote-sensing reflectance Rrs (λ). 3C provides a spectrally resolved offset Δ(λ) to correct for residual sun and sky radiance (Rayleigh- and aerosol-scattered) reflections on the water surface that were not represented by sky radiance measurements. 3C is validated with a data set of matching above- and below-water radiometric measurements collected in the Baltic Sea, and compared against a scalar offset correction Δ. Correction with Δ(λ) instead of Δ consistently reduced the (mean normalized root-mean-square) deviation between Rrs (λ) and reference reflectances to comparable levels for clear (Δ: 14.3 ± 2.5 %, Δ(λ): 8.2 ± 1.7 %), partly clouded (Δ: 15.4 ± 2.1 %, Δ(λ): 6.5 ± 1.4 %), and completely overcast (Δ: 10.8 ± 1.7 %, Δ(λ): 6.3 ± 1.8 %) sky conditions. The improvement was most pronounced under inhomogeneous sky conditions when measurements of sky radiance tend to be less representative of surface-reflected radiance. Accounting for both sun glint and sky reflections also relaxes constraints on measurement geometry, which was demonstrated based on a semi-continuous daytime data set recorded in a eutrophic freshwater lake in the Netherlands. Rrs (λ) that were derived throughout the day varied spectrally by less than 2 % relative standard deviation. Implications on measurement protocols are discussed. An open source software library for processing reflectance measurements was developed and is made publicly available.
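
    For orientation, the baseline above-water processing that both correction variants start from is sketched below: remote-sensing reflectance is formed after removing a fraction rho of the measured sky radiance, and a residual offset is then subtracted. A single scalar offset (here the minimum over a near-infrared window) is shown; 3C instead fits a spectrally resolved offset Δ(λ), which is beyond this sketch. All names and values are illustrative.

        import numpy as np

        def rrs_scalar_offset(lt, lsky, ed, wavelengths, rho=0.028, nir_range=(750, 800)):
            # Above-water Rrs with surface-reflection removal and a scalar residual offset.
            lt, lsky, ed = (np.asarray(a, float) for a in (lt, lsky, ed))
            rrs = (lt - rho * lsky) / ed
            nir = (wavelengths >= nir_range[0]) & (wavelengths <= nir_range[1])
            return rrs - rrs[nir].min()   # scalar Δ estimated from the NIR window

        wl = np.array([450, 550, 650, 760, 780])
        lt = np.array([2.0, 2.6, 1.4, 0.60, 0.55])
        lsky = np.array([20.0, 18.0, 12.0, 8.0, 7.5])
        ed = np.array([120.0, 140.0, 130.0, 100.0, 95.0])
        print(rrs_scalar_offset(lt, lsky, ed, wl))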

  1. Development and validation of an online interactive, multimedia wound care algorithms program.

    Science.gov (United States)

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms, we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage of correct, partially correct, or incorrect algorithm and dressing choices, and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong. Eighty-five percent (85%) of algorithm choices and 87% of dressing choices were fully correct even though some programming design issues were identified. Online study results were consistently better than previously conducted comparable paper-pencil study results. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly, the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received, indicating its "perceived benefits" for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide

  2. Correcting Fallacies in Validity, Reliability, and Classification

    Science.gov (United States)

    Sijtsma, Klaas

    2009-01-01

    This article reviews three topics from test theory that continue to raise discussion and controversy and capture test theorists' and constructors' interest. The first topic concerns the discussion of the methodology of investigating and establishing construct validity; the second topic concerns reliability and its misuse, alternative definitions…

  3. Weak interaction corrections to hadronic top quark pair production; Korrekturen der schwachen Wechselwirkung zur hadronischen Topquark-Paarproduktion

    Energy Technology Data Exchange (ETDEWEB)

    Fuecker, M.

    2007-05-15

    This thesis presents the calculation of the Standard Model weak-interaction corrections of order {alpha}{sub s}{sup 2}{alpha} to hadronic top-quark pair production. The one-loop weak corrections to top antitop production due to gluon fusion and quark antiquark annihilation are computed. Also the order {alpha}{sub s}{sup 2}{alpha} corrections to top antitop production due to quark gluon and antiquark gluon scattering in the Standard Model are calculated. In these complete weak corrections of order {alpha}{sub s}{sup 2}{alpha} to gg, q anti q, gq, and g anti q induced hadronic t anti t production, the top and antitop polarizations and spin-correlations are fully taken into account. For the Tevatron and the LHC the weak contributions to the cross section, to the transverse top-momentum (p{sub T}) distributions, and to the top antitop invariant mass (M{sub t} {sub anti} {sub t}) distributions are analyzed. At the LHC the corrections to the distributions can be of the order of -10 percent compared with the leading-order results, for p{sub T}>1500 GeV and M{sub t} {sub anti} {sub t}>3000 GeV, respectively. At the Tevatron the corrections are -4 percent for p{sub T}>600 GeV and M{sub t} {sub anti} {sub t}>1000 GeV. This thesis also considers parity-even top antitop spin correlations of the form d{sigma}(++)+d{sigma}(--)-d{sigma}(+-)-d{sigma}(-+), where the first and second argument denote the top and antitop spin projection onto a given reference axis. These spin asymmetries are computed as a function of M{sub t} {sub anti} {sub t}. At the LHC the weak corrections are of the order of -10 percent for M{sub t} {sub anti} {sub t}>1000 GeV for all analyzed reference axes. At the Tevatron the corrections are in the range of 5 percent at threshold and -5 percent for M{sub t} {sub anti} {sub t}>1000 GeV. Apart from the parity-even spin asymmetries, the Standard Model predictions for parity-violating effects in top-quark pair production are also calculated. This thesis analyzes parity

  4. Towards clinical application of RayStretch for heterogeneity corrections in LDR permanent 125I prostate brachytherapy.

    Science.gov (United States)

    Hueso-González, Fernando; Ballester, Facundo; Perez-Calatayud, Jose; Siebert, Frank-André; Vijande, Javier

    RayStretch is a simple algorithm proposed for heterogeneity corrections in low-dose-rate brachytherapy. It is built on top of TG-43 consensus data, and it has been validated with Monte Carlo (MC) simulations. In this study, we take a real clinical prostate implant with 71 125I seeds as reference and we apply RayStretch to analyze its performance in worst-case scenarios. To do so, we design two cases where large calcifications are located in the prostate lobules. RayStretch resilience under various calcification density values is also explored. Comparisons against MC calculations are performed. Dose-volume histogram-related parameters like prostate D90, rectum D2cc, or urethra D10 obtained with RayStretch agree within a few percent with the detailed MC results for all cases considered. The robustness and compatibility of RayStretch with commercial treatment planning systems indicate its applicability in clinical practice for dosimetric corrections in prostate calcifications. Its use during intraoperative ultrasound planning is foreseen. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
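
    The dose-volume histogram parameters quoted above (D90, D2cc, D10) are percentile-type quantities of the dose distribution within a structure. The sketch below shows how such a value can be read from a voxel dose array; the voxel volume and dose values are invented, and this is not the RayStretch algorithm itself.

        import numpy as np

        def dose_to_top_fraction(doses, fraction):
            # D_x: minimum dose received by the hottest `fraction` of the structure volume.
            return np.percentile(np.asarray(doses, float), 100.0 * (1.0 - fraction))

        def dose_to_top_volume_cc(doses, voxel_cc, volume_cc):
            # D_Vcc: minimum dose received by the hottest `volume_cc` of the structure.
            fraction = volume_cc / (len(doses) * voxel_cc)
            return dose_to_top_fraction(doses, fraction)

        prostate_dose = np.random.default_rng(0).normal(160.0, 20.0, 5000)   # Gy, toy values
        print("D90 ~", dose_to_top_fraction(prostate_dose, 0.90))
        rectum_dose = np.random.default_rng(1).normal(60.0, 15.0, 2000)
        print("D2cc ~", dose_to_top_volume_cc(rectum_dose, voxel_cc=0.05, volume_cc=2.0))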

  5. 3D Super-Resolution Motion-Corrected MRI: Validation of Fetal Posterior Fossa Measurements.

    Science.gov (United States)

    Pier, Danielle B; Gholipour, Ali; Afacan, Onur; Velasco-Annis, Clemente; Clancy, Sean; Kapur, Kush; Estroff, Judy A; Warfield, Simon K

    2016-09-01

    Current diagnosis of fetal posterior fossa anomalies by sonography and conventional MRI is limited by fetal position, motion, and by two-dimensional (2D), rather than three-dimensional (3D), representation. In this study, we aimed to validate the use of a novel magnetic resonance imaging (MRI) technique, 3D super-resolution motion-corrected MRI, to image the fetal posterior fossa. From a database of pregnant women who received fetal MRIs at our institution, images of 49 normal fetal brains were reconstructed. Six measurements of the cerebellum, vermis, and pons were obtained for all cases on 2D conventional and 3D reconstructed MRI, and the agreement between the two methods was determined using concordance correlation coefficients. Concordance of axial and coronal measurements of the transcerebellar diameter was also assessed within each method. Between the two methods, the concordance of measurements was high for all six structures (P fetal motion and orthogonal slice acquisition. This technique will facilitate further study of fetal abnormalities of the posterior fossa. Copyright © 2016 by the American Society of Neuroimaging.

  6. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    Science.gov (United States)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a modified version of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and the essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process through the advanced wavefront sensing and correction testbed (AWCT): first, the optimization of the DFS hardware of the AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, the fiducial calibration using the Range-Gate-Metrology technique is carried out, and a <10 nm or <1% algorithm accuracy is demonstrated.

  7. Validity of the CT to attenuation coefficient map conversion methods

    International Nuclear Information System (INIS)

    Faghihi, R.; Ahangari Shahdehi, R.; Fazilat Moadeli, M.

    2004-01-01

    The most important commercialized methods of attenuation correction in SPECT are based on an attenuation coefficient map obtained from a transmission imaging method. The transmission imaging system can be a linear radionuclide source or an X-ray CT system. The image from the transmission imaging system is not useful unless the attenuation coefficient or CT number is converted to the attenuation coefficient at the SPECT energy. In this paper we attempt to evaluate the validity and estimate the error of the most widely used methods for this transformation. The final result shows that the methods which use a linear or multi-linear curve accept an error in their estimation. The value of mA is not important, but the patient thickness is very important and can introduce an error of more than 10 percent in the final result.
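
    The conversion methods being evaluated map CT numbers to linear attenuation coefficients at the SPECT photon energy, typically with a piecewise-linear (bilinear) curve. The sketch below uses illustrative breakpoint coefficients for roughly 140 keV; the exact slopes differ between vendors and publications, which is precisely the source of error the abstract discusses.

        import numpy as np

        MU_WATER_140KEV = 0.154   # cm^-1, approximate value used here for illustration
        MU_BONE_140KEV = 0.28     # cm^-1, illustrative cortical-bone value

        def hu_to_mu(hu):
            # Bilinear conversion of CT number (HU) to attenuation coefficient at ~140 keV.
            hu = np.asarray(hu, float)
            mu = np.where(
                hu <= 0,
                MU_WATER_140KEV * (1.0 + hu / 1000.0),                              # air/soft tissue
                MU_WATER_140KEV + hu / 1000.0 * (MU_BONE_140KEV - MU_WATER_140KEV)  # bone segment
            )
            return np.clip(mu, 0.0, None)

        print(hu_to_mu([-1000, -100, 0, 500, 1000]))   # air, fat-like, water, spongy/cortical bone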

  8. A Conceptual Model for Solving Percent Problems.

    Science.gov (United States)

    Bennett, Albert B., Jr.; Nelson, L. Ted

    1994-01-01

    Presents an alternative method to teaching percent problems which uses a 10x10 grid to help students visualize percents. Offers a means of representing information and suggests different approaches for finding solutions. Includes reproducible student worksheet. (MKR)

  9. The Algebra of the Cumulative Percent Operation.

    Science.gov (United States)

    Berry, Andrew J.

    2002-01-01

    Discusses how to help students avoid some pervasive reasoning errors in solving cumulative percent problems. Discusses the meaning of "a% + b%," the additive inverse of "a%," and other useful applications. Emphasizes the operational aspect of the cumulative percent concept. (KHR)
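
    As a quick illustration of the operational point made above: successive percent changes do not add, they compose. Applying a% and then b% is equivalent to a single change of a + b + ab/100 percent, and the additive inverse of a% (the change that undoes it) is -100a/(100 + a) percent. The small sketch below, with invented numbers, makes this concrete.

        def compose_percent(a, b):
            # Combined percent change of applying a% followed by b%.
            return a + b + a * b / 100.0

        def inverse_percent(a):
            # Percent change that exactly undoes a prior change of a%.
            return -100.0 * a / (100.0 + a)

        print(compose_percent(10, -10))                  # -1.0: +10% then -10% is a net 1% loss
        print(inverse_percent(25))                       # -20.0: a 25% rise is undone by a 20% drop
        print(compose_percent(25, inverse_percent(25)))  # 0.0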

  10. Investigation of the range of validity of the pairwise summation method applied to the calculation of the surface roughness correction to the van der Waals force

    Science.gov (United States)

    Gusso, André; Burnham, Nancy A.

    2016-09-01

    It has long been recognized that stochastic surface roughness can considerably change the van der Waals (vdW) force between interacting surfaces and particles. However, few analytical expressions for the vdW force between rough surfaces have been presented in the literature. Because they have been derived using perturbative methods or the proximity force approximation the expressions are valid when the roughness correction is small and for a limited range of roughness parameters and surface separation. In this work, a nonperturbative approach, the effective density method (EDM) is proposed to circumvent some of these limitations. The method simplifies the calculations of the roughness correction based on pairwise summation (PWS), and allows us to derive simple expressions for the vdW force and energy between two semispaces covered with stochastic rough surfaces. Because the range of applicability of PWS and, therefore, of our results, are not known a priori, we compare the predictions based on the EDM with those based on the multilayer effective medium model, whose range of validity can be defined more properly and which is valid when the roughness correction is comparatively large. We conclude that the PWS can be used for roughness characterized by a correlation length of the order of its rms amplitude, when this amplitude is of the order of or smaller than a few nanometers, and only for typically insulating materials such as silicon dioxide, silicon nitride, diamond, and certain glasses, polymers and ceramics. The results are relevant for the correct modeling of systems where the vdW force can play a significant role such as micro and nanodevices, for the calculation of the tip-sample force in atomic force microscopy, and in problems involving adhesion.

  11. 77 FR 59139 - Prompt Corrective Action, Requirements for Insurance, and Promulgation of NCUA Rules and Regulations

    Science.gov (United States)

    2012-09-26

    ... accounting principles and voluntary audits; prompt corrective action for new credit unions; and assistance... in assets accounted for only 18 percent of losses, although accounting for 222, or over 84 percent... to adhere to fundamental federalism principles. This proposed rule and IRPS would not have a...

  12. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable compared to single-biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the need for a greater amount of sample, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
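
    A schematic of the validation-data idea described above: re-measured biomarkers on a subset are used to estimate how noisy the main measurements are, and the noisy predictors are shrunk toward their expected true values before the LASSO is fit. This is a simplified stand-in (per-variable calibration, invented data, scikit-learn's Lasso) for the adapted bias-correction approach in the paper, not the authors' method.

        import numpy as np
        from sklearn.linear_model import Lasso

        def reliability_from_replicates(w1, w2):
            # Estimate the reliability ratio from two replicate measurements of the same subjects.
            w1, w2 = np.asarray(w1, float), np.asarray(w2, float)
            error_var = np.var(w1 - w2, ddof=1) / 2.0   # variance of the measurement error
            total_var = np.var(np.concatenate([w1, w2]), ddof=1)
            return max(1e-6, 1.0 - error_var / total_var)

        def regression_calibrate(w, reliability):
            # Shrink noisy measurements toward their mean by the reliability ratio.
            w = np.asarray(w, float)
            return w.mean() + reliability * (w - w.mean())

        rng = np.random.default_rng(0)
        x_true = rng.normal(size=(200, 3))                                  # latent biomarker levels
        w_main = x_true + rng.normal(scale=0.8, size=x_true.shape)          # main-study measurements
        y = x_true @ np.array([1.0, 0.0, -0.5]) + rng.normal(scale=0.3, size=200)

        # Validation subset: first 50 subjects re-measured
        w_repeat = x_true[:50] + rng.normal(scale=0.8, size=(50, 3))
        w_cal = np.column_stack([
            regression_calibrate(w_main[:, j],
                                 reliability_from_replicates(w_main[:50, j], w_repeat[:, j]))
            for j in range(3)
        ])
        print(Lasso(alpha=0.05).fit(w_cal, y).coef_)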

  13. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  14. Knowledge and application of correct car seat head restraint usage among chiropractic college interns: a cross-sectional study.

    Science.gov (United States)

    Taylor, John Am; Burke, Jeanmarie; Gavencak, John; Panwar, Pervinder

    2005-03-01

    Cervical spine injuries sustained in rear-end crashes cost at least $7 billion in insurance claims annually in the United States alone. When positioned correctly, head restraint systems have been proven effective in reducing the risk of whiplash-associated disorders. Chiropractors should be knowledgeable about the correct use of head restraint systems to educate their patients and thereby prevent or minimize such injuries. The primary objective of this study was to determine the prevalence of correct positioning of car seat head restraints among the interns at our institution. The secondary objective was to determine the same chiropractic interns' knowledge of the correct positioning of car seat head restraints. It was hypothesized that 100 percent of interns would have their head restraint correctly positioned within an acceptable range and that all interns would possess the knowledge to instruct patients in the correct positioning of head restraints. Cross-sectional study of a convenience sample of 30 chiropractic interns from one institution. Interns driving into the parking lot of our health center were asked to volunteer to have measurements taken and to complete a survey. Vertical and horizontal positions of the head restraint were measured using a beam compass. A survey was administered to determine knowledge of correct head restraint position. The results were recorded, entered into a spreadsheet, and analyzed. Only 13.3 percent of subjects knew the recommended vertical distance, and only 20 percent of subjects knew the recommended horizontal distance. Chi Square analyses substantiated that the majority of subjects were unaware of guidelines set forth by the National Highway Traffic Safety Administration (NHTSA) for the correct positioning of the head restraint (chi(2) (vertical) = 16.13, chi(2) (horizontal) = 10.80, p < .05). Interestingly, the 13.3 percent of the subjects who were aware of the vertical plane recommendations did not correctly position their own

  15. THE SECONDARY EXTINCTION CORRECTION

    Energy Technology Data Exchange (ETDEWEB)

    Zachariasen, W. H.

    1963-03-15

    It is shown that Darwin's formula for the secondary extinction correction, which has been universally accepted and extensively used, contains an appreciable error in the x-ray diffraction case. The correct formula is derived. As a first order correction for secondary extinction, Darwin showed that one should use an effective absorption coefficient mu + gQ, where an unpolarized incident beam is presumed. The new derivation shows that the effective absorption coefficient is mu + 2gQ(1 + cos^4 2theta)/(1 + cos^2 2theta)^2, which gives mu + gQ at theta = 0 deg and theta = 90 deg, but mu + 2gQ at theta = 45 deg. Darwin's theory remains valid when applied to neutron diffraction. (auth)

  16. Validation of attenuation-corrected equilibrium radionuclide angiographic determinations of right ventricular volume: comparison with cast-validated biplane cineventriculography

    International Nuclear Information System (INIS)

    Dell'Italia, L.J.; Starling, M.R.; Walsh, R.A.; Badke, F.R.; Lasher, J.C.; Blumhardt, R.

    1985-01-01

    To determine the accuracy of attenuation-corrected equilibrium radionuclide angiographic determinations of right ventricular volumes, the authors initially studied 14 postmortem human right ventricular casts by water displacement and biplane cineventriculography. Biplane cineventriculographic right ventricular cast volumes, calculated by a modification of Simpson's rule algorithm, correlated well with right ventricular cast volumes measured by water displacement (r = .97, y = 8 + 0.88x, SEE = 6 ml). Moreover, the mean volumes obtained by both methods were no different (73 +/- 28 vs 73 +/- 25 ml). Subsequently, they studied 16 patients by both biplane cineventriculography and equilibrium radionuclide angiography. The uncorrected radionuclide right ventricular volumes were calculated by normalizing background corrected end-diastolic and end-systolic counts from hand-drawn regions of interest obtained by phase analysis for cardiac cycles processed, frame rate, and blood sample counts. Attenuation correction was performed by a simple geometric method. The attenuation-corrected radionuclide right ventricular end-diastolic volumes correlated with the cineventriculographic end-diastolic volumes (r = .91, y = 3 + 0.92x, SEE = 27 ml). Similarly, the attenuation-corrected radionuclide right ventricular end-systolic volumes correlated with the cineventriculographic end-systolic volumes (r = .93, y = - 1 + 0.91x, SEE = 16 ml). Also, the mean attenuation-corrected radionuclide end-diastolic and end-systolic volumes were no different than the average cineventriculographic end-diastolic and end-systolic volumes (160 +/- 61 and 83 +/- 44 vs 170 +/- 61 and 86 +/- 43 ml, respectively)

  17. Radiative corrections due to a heavy Higgs-particle

    International Nuclear Information System (INIS)

    Van der Bij, J.J.

    1984-01-01

    The leading two-loop corrections to the rho parameter and to the vector-boson masses were calculated in the limit of large Higgs mass. The corrections appear to be too small to be measured, of the order of a few tenths of a percent. For rho, perturbation theory breaks down for a Higgs mass of 11 TeV or larger; for the vector-boson mass this happens for a Higgs mass of 4 TeV or larger. There is no direct correspondence between these results and the poles at n=3 in the gauged non-linear σ-model

  18. Correction of gynecomastia in body builders and patients with good physique.

    Science.gov (United States)

    Blau, Mordcai; Hazani, Ron

    2015-02-01

    Temporary gynecomastia in the form of breast buds is a common finding in young male subjects. In adults, permanent gynecomastia is an aesthetic impairment that may result in interest in surgical correction. Gynecomastia in body builders creates an even greater distress for patients seeking surgical treatment because of the demands of professional competition. The authors present their experience with gynecomastia in body builders as the largest study of such a group in the literature. Between the years 1980 and 2013, 1574 body builders were treated surgically for gynecomastia. Of those, 1073 were followed up for a period of 1 to 5 years. Ages ranged from 18 to 51 years. Subtotal excision in the form of subcutaneous mastectomy with removal of at least 95 percent of the glandular tissue was used in virtually all cases. In cases where body fat was extremely low, liposuction was performed in fewer than 2 percent of the cases. Aesthetically pleasing results were achieved in 98 percent of the cases based on the authors' patient satisfaction survey. The overall rate of hematomas was 9 percent in the first 15 years of the series and 3 percent in the final 15 years. There were no infections, contour deformities, or recurrences. This study demonstrates the importance of direct excision of the glandular tissue over any other surgical technique when correcting gynecomastia deformities in body builders. The novice surgeon is advised to proceed with cases that are less challenging, primarily with patients that require excision of small to medium glandular tissue. Therapeutic, IV.

  19. Validation of functional calibration and strap-down joint drift correction for computing 3D joint angles of knee, hip, and trunk in alpine skiing.

    Science.gov (United States)

    Fasel, Benedikt; Spörri, Jörg; Schütz, Pascal; Lorenzetti, Silvio; Aminian, Kamiar

    2017-01-01

    To obtain valid 3D joint angles with inertial sensors careful sensor-to-segment calibration (i.e. functional or anatomical calibration) is required and measured angular velocity at each sensor needs to be integrated to obtain segment and joint orientation (i.e. joint angles). Existing functional and anatomical calibration procedures were optimized for gait analysis and calibration movements were impractical to perform in outdoor settings. Thus, the aims of this study were 1) to propose and validate a set of calibration movements that were optimized for alpine skiing and could be performed outdoors and 2) to validate the 3D joint angles of the knee, hip, and trunk during alpine skiing. The proposed functional calibration movements consisted of squats, trunk rotations, hip ad/abductions, and upright standing. The joint drift correction previously proposed for alpine ski racing was improved by adding a second step to reduce separately azimuth drift. The system was validated indoors on a skiing carpet at the maximum belt speed of 21 km/h and for measurement durations of 120 seconds. Calibration repeatability was on average boots. Joint angle precision was <4.9° for all angles and accuracy ranged from -10.7° to 4.2° where the presence of an athlete-specific bias was observed especially for the flexion angle. The improved joint drift correction reduced azimuth drift from over 25° to less than 5°. In conclusion, the system was valid for measuring 3D joint angles during alpine skiing and could be used outdoors. Errors were similar to the values reported in other studies for gait. The system may be well suited for within-athlete analysis but care should be taken for between-athlete analysis because of a possible athlete-specific joint angle bias.

  20. Determination of percent calcium carbonate in calcium chromate

    International Nuclear Information System (INIS)

    Middleton, H.W.

    1979-01-01

    The precision, accuracy and reliability of the macro-combustion method are superior to those of the Knorr alkalimetric method, and it is faster. It also significantly reduces the calcium chromate waste accrual problem. The macro-combustion method has been adopted as the official method for determination of percent calcium carbonate in thermal battery grade anhydrous calcium chromate and percent calcium carbonate in quicklime used in the production of calcium chromate. The apparatus and procedure can be used to measure the percent carbonate in inorganic materials other than calcium chromate. With simple modifications in the basic apparatus and procedure, the percent carbon and hydrogen can be measured in many organic materials, including polymers and polymeric formulations. 5 figures, 5 tables

  1. Convergent validity test, construct validity test and external validity test of the David Liberman algorithm

    Directory of Open Access Journals (Sweden)

    David Maldavsky

    2013-08-01

    Full Text Available The author first presents a complement to a previous test of convergent validity, then a construct validity test, and finally an external validity test of the David Liberman algorithm (DLA). The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA (1) in an external comparison (to other methods) and (2) in an internal comparison (between two ways of using the same method, the DLA). The construct validity test presents the concepts underlying the DLA, their operationalization, and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation to the investigation of a more extended sample.

  2. Near vision correction and work productivity among textile workers

    Directory of Open Access Journals (Sweden)

    Kovin S. Naidoo

    2016-11-01

    Full Text Available Purpose: Uncorrected presbyopia (near vision impairment) is prevalent in approximately 517 million people worldwide; this prevalence ranges from 30% to 80% in Africa. Good near vision is needed for a range of tasks; therefore, uncorrected presbyopia can negatively affect the quality of life of individuals, impact families and society, and potentially have negative implications for employment and labour productivity. This study aimed to determine the impact of near vision correction on the work productivity of clothing factory workers. Methods: We conducted a cross-sectional study and sampled all workers who were aged 40 years and older and who performed near vision tasks (e.g. machinists, cutters, zip sewers, clothing pressers and quality controllers) in seven clothing factories. We included workers who were employed for at least 3 months and whose uncorrected near visual acuity could be improved and corrected to better than 6/9 with spectacle correction. Workers were provided with near vision spectacles, and changes in their work productivity were evaluated after 6 months, using the factories' output records as an indicator for measurement. Results: The final sample comprised 268 individuals, with 56% of African origin (n = 151) and 43% of Indian origin (n = 115). The sample was mainly female (94%), and the average age was 48 years (± 5.5 years, range 40–62 years). The overall post-correction mean production score (70.5 [SD ± 19.9]) was significantly higher than the overall pre-correction mean production score (67.0 [SD ± 20.3]) (p < 0.001). The average change in production score was 3.5 (95% confidence interval [CI] 2.7–4.3), and the percent difference was 6.4% (95% CI 5.2–7.7). The increase in work productivity was significant for individuals of African (p < 0.001) and Indian origins (p < 0.001) but not for those of mixed race (p = 0.364; n = 2). Post-correction, the production scores of women increased significantly by 6.6% (95% CI 5.3

  3. Temperature Data Assimilation with Salinity Corrections: Validation for the NSIPP Ocean Data Assimilation System in the Tropical Pacific Ocean, 1993-1998

    Science.gov (United States)

    Troccoli, Alberto; Rienecker, Michele M.; Keppenne, Christian L.; Johnson, Gregory C.

    2003-01-01

    The NASA Seasonal-to-Interannual Prediction Project (NSIPP) has developed an ocean data assimilation system to initialize the quasi-isopycnal ocean model used in our experimental coupled-model forecast system. Initial tests of the system have focused on the assimilation of temperature profiles in an optimal interpolation framework. It is now recognized that correcting temperature alone often introduces spurious water masses. The resulting density distribution can be statically unstable and also have a detrimental impact on the velocity distribution. Several simple schemes have been developed to try to correct these deficiencies. Here the salinity field is corrected by using a scheme which assumes that the temperature-salinity relationship of the model background is preserved during the assimilation. The scheme was first introduced for a z-level model by Troccoli and Haines (1999). A large set of subsurface observations of salinity and temperature is used to cross-validate two data assimilation experiments run for the 6-year period 1993-1998. In these two experiments only subsurface temperature observations are used, but in one case the salinity field is also updated whenever temperature observations are available.
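
    The salinity scheme summarized above can be caricatured in a few lines: the background model's temperature-salinity relationship at each location is tabulated, and the analysed temperature profile is pushed through that relationship to update salinity. The sketch below uses simple one-dimensional interpolation and invented profile values; the operational scheme works on the model's quasi-isopycnal layers and is considerably more involved.

        import numpy as np

        def salinity_from_ts_relation(t_background, s_background, t_analysis):
            # Update salinity by preserving the background T-S relationship
            # (in the spirit of Troccoli and Haines, 1999).
            order = np.argsort(t_background)   # np.interp needs monotonically increasing x
            return np.interp(t_analysis,
                             np.asarray(t_background, float)[order],
                             np.asarray(s_background, float)[order])

        # Toy background profile (surface to depth) and an analysed temperature profile
        t_bg = np.array([28.0, 26.0, 22.0, 15.0, 8.0])
        s_bg = np.array([34.2, 34.5, 34.9, 35.1, 34.7])
        t_an = np.array([27.5, 25.0, 21.0, 14.0, 8.5])
        print(salinity_from_ts_relation(t_bg, s_bg, t_an))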

  4. Meson-exchange-current corrections to magnetic moments in quantum hadrodynamics

    International Nuclear Information System (INIS)

    Morse, T.M.

    1990-01-01

    Corrections to the magnetic moments of the non-relativistic shell model (Schmidt lines) have a long history. In the early fifties, calculations of pion exchange and core polarization contributions to nuclear magnetic moments were initiated. By the early eighties these calculations had matured to include other mesons and the delta isobar. Relativistic nuclear shell model calculations are relatively recent. Meson exchange and delta isobar current contributions to the magnetic moments of the relativistic shell model have remained largely unexplored. The disagreement between the valence values of spherical relativistic mean-field models and experiment was a major problem with early (1975-1985) quantum hadrodynamics (QHD) calculations of magnetic moments. Core polarization calculations (1986-1988) have been found to resolve the large discrepancy, predicting isoscalar magnetic moments to within typically five percent of experiment. The isovector magnetic moments, however, are about twice as far from experiment, with an average discrepancy of about ten percent. The pion, being the lightest of the mesons, has historically been expected to dominate isovector corrections. Because this has been found to be true in non-relativistic calculations, the author calculated the pion corrections in the framework of QHD. The seagull and in-flight pion exchange current diagram corrections to the magnetic moments of eight finite nuclei (plus or minus one valence nucleon from the magic A = 16 and A = 40 doubly closed shell systems) are calculated in the framework of QHD and compared with earlier non-relativistic calculations and experiment

  5. Characterization, modelling and validation of the radiative transfer of non-standard atmospheres; impact on the atmospheric correction of remote sensing images

    Science.gov (United States)

    Zidane, Shems

    This study is based on data acquired with an airborne multi-altitude sensor in July 2004 during a non-standard atmospheric event in the region of Saint-Jean-sur-Richelieu, Quebec. By non-standard atmospheric event we mean an aerosol atmosphere that does not obey the typical monotonic, scale-height variation employed in virtually all atmospheric correction codes. The surfaces imaged during this field campaign included a diverse variety of targets: agricultural land, water bodies, urban areas and forests. The multi-altitude approach employed in this campaign allowed us to better understand the altitude-dependent influence of the atmosphere over the array of ground targets and thus to better characterize the perturbation induced by a non-standard (smoke) plume. The transformation of the apparent radiance at 3 different altitudes into apparent reflectance and the insertion of the plume optics into an atmospheric correction model permitted an atmospheric correction of the apparent reflectance at the two higher altitudes. The results showed consistency with the validation reflectances derived from the lowest-altitude radiances, effectively confirming the accuracy of our non-standard atmospheric correction approach. This test was particularly relevant at the highest altitude of 3.17 km: the apparent reflectances at this altitude were above most of the plume and therefore represented a good test of our ability to adequately correct for the influence of the perturbation. Standard atmospheric disturbances are obviously taken into account in most atmospheric correction models, but these are based on monotonically decreasing aerosol variations with increasing altitude. When the atmospheric radiation is affected by a plume or a local, non-standard pollution event, one must adapt the existing models to the radiative transfer constraints of the local perturbation and to the reality of the measurable parameters available for ingestion into the model. The

  6. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction, classification, time series forecasting and modelling validation. The key element of PPV is the Theory of Sampling (TOS), which allows insight......) is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one-data-set re-sampling approaches for validation, especially cross-validation and leverage-corrected validation, should be terminated...

  7. 26 CFR 301.6226(b)-1 - 5-percent group.

    Science.gov (United States)

    2010-04-01

    Title 26 (Internal Revenue), Procedure and Administration, Assessment in General, § 301.6226(b)-1 5-percent group. (a) In general. All members of a 5-percent group shall join in filing any petition for judicial review. The...

  8. Reliability and Validity of the Dutch Physical Activity Questionnaires for Children (PAQ-C) and Adolescents (PAQ-A).

    Science.gov (United States)

    Bervoets, Liene; Van Noten, Caroline; Van Roosbroeck, Sofie; Hansen, Dominique; Van Hoorenbeeck, Kim; Verheyen, Els; Van Hal, Guido; Vankerckhoven, Vanessa

    2014-01-01

    This study was designed to validate the Dutch Physical Activity Questionnaires for Children (PAQ-C) and Adolescents (PAQ-A). After adjustment of the original Canadian PAQ-C and PAQ-A (i.e. translation/back-translation and evaluation by an expert committee), content validity of both PAQs was assessed and calculated using item-level (I-CVI) and scale-level (S-CVI) content validity indexes. Inter-item and inter-rater reliability of 196 PAQ-C and 95 PAQ-A questionnaires, filled in by both children or adolescents and their parents, were evaluated. Inter-item reliability was calculated by Cronbach's alpha (α), and inter-rater reliability was examined by percent observed agreement and weighted kappa (κ). Concurrent validity of the PAQ-A was examined in a subsample of 28 obese and 16 normal-weight children by comparing it with concurrently measured physical activity using a maximal cardiopulmonary exercise test for the assessment of peak oxygen uptake (VO2 peak). For both PAQs, I-CVI ranged 0.67-1.00. S-CVI was 0.89 for PAQ-C and 0.90 for PAQ-A. A total of 192 PAQ-C and 94 PAQ-A were fully completed by both child and parent. Cronbach's α was 0.777 for PAQ-C and 0.758 for PAQ-A. Percent agreement ranged 59.9-74.0% for PAQ-C and 51.1-77.7% for PAQ-A, and weighted κ ranged 0.48-0.69 for PAQ-C and 0.51-0.68 for PAQ-A. The correlation between total PAQ-A score and VO2 peak - corrected for age, gender, height and weight - was 0.516 (p = 0.001). Both PAQs have an excellent content validity, an acceptable inter-item reliability and a moderate to good strength of inter-rater agreement. In addition, total PAQ-A score showed a moderate positive correlation with VO2 peak. Both PAQs have an acceptable to good reliability and validity; however, further validity testing is recommended to provide a more complete assessment of both PAQs.
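
    A minimal sketch of the inter-item reliability statistic reported above, Cronbach's alpha; the item-score matrix is hypothetical (rows are respondents, columns are questionnaire items).

        # Sketch: Cronbach's alpha for a set of questionnaire items (hypothetical data).
        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1)
            total_var = scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        items = np.array([[3, 4, 3, 5],
                          [2, 2, 3, 3],
                          [4, 5, 4, 5],
                          [1, 2, 2, 1],
                          [3, 3, 4, 4]])
        print(round(cronbach_alpha(items), 3))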

  9. Addendum to: Corrective Action Decision Document/Corrective Action Plan (CADD/CAP) for Corrective Action Unit (CAU) 443: Central Nevada Test Area (CNTA)-Subsurface Central Nevada Test Area, DOE/NV-977

    International Nuclear Information System (INIS)

    2008-01-01

    The environmental remediation closure process for the nuclear test at the Central Nevada Test Area (CNTA) has progressed from the approved Corrective Action Decision Document/Corrective Action Plan (CADD/CAP) to this addendum. The closure process required the installation of three monitoring/validation (MV) wells and validation analysis of the flow and transport model. The model validation analysis led to the conclusion that the hydraulic heads simulated by the flow model did not adequately predict observed heads at the MV-1, MV-2, and MV-3 validation points (wells and piezometers). The observed heads from screened intervals near the test horizon were higher than the model predicted and are believed to be the result of detonation-related effects that have persisted since the nuclear test. These effects, which include elevated heads out from the detonation zone and lower heads in the immediate vicinity of the detonation, are seen at other nuclear tests and typically dissipate within a few years. These effects were not included in the initial head distribution of the model. The head variations at CNTA are believed to have persisted due to the very low permeability of the material at the detonation level.

  10. Characterization of the uranium--2 weight percent molybdenum alloy

    International Nuclear Information System (INIS)

    Hemperly, V.C.

    1976-01-01

    The uranium-2 wt percent molybdenum alloy was prepared, processed, and age-hardened to meet a minimum 930-MPa yield strength (0.2 percent offset) with a minimum of 10 percent elongation. These mechanical properties were obtained with a carbon level of up to 300 ppm in the alloy. The tensile-test ductility is lowered by the humidity of the laboratory atmosphere

  11. Validation of Physical Activity Tracking via Android Smartphones Compared to ActiGraph Accelerometer: Laboratory-Based and Free-Living Validation Studies.

    Science.gov (United States)

    Hekler, Eric B; Buman, Matthew P; Grieco, Lauren; Rosenberger, Mary; Winter, Sandra J; Haskell, William; King, Abby C

    2015-04-15

    There is increasing interest in using smartphones as stand-alone physical activity monitors via their built-in accelerometers, but there is presently limited data on the validity of this approach. The purpose of this work was to determine the validity and reliability of 3 Android smartphones for measuring physical activity among midlife and older adults. A laboratory (study 1) and a free-living (study 2) protocol were conducted. In study 1, individuals engaged in prescribed activities including sedentary (eg, sitting), light (sweeping), moderate (eg, walking 3 mph on a treadmill), and vigorous (eg, jogging 5 mph on a treadmill) activity over a 2-hour period wearing both an ActiGraph and 3 Android smartphones (ie, HTC MyTouch, Google Nexus One, and Motorola Cliq). In the free-living study, individuals engaged in usual daily activities over 7 days while wearing an Android smartphone (Google Nexus One) and an ActiGraph. Study 1 included 15 participants (age: mean 55.5, SD 6.6 years; women: 56%, 8/15). Correlations between the ActiGraph and the 3 phones were strong to very strong (ρ=.77-.82). Further, after excluding bicycling and standing, cut-point derived classifications of activities yielded a high percentage of activities classified correctly according to intensity level (eg, 78%-91% by phone) that were similar to the ActiGraph's percent correctly classified (ie, 91%). Study 2 included 23 participants (age: mean 57.0, SD 6.4 years; women: 74%, 17/23). Within the free-living context, results suggested a moderate correlation (ie, ρ=.59). Overall, these results suggest that an Android smartphone can provide comparable estimates of physical activity to an ActiGraph in both laboratory-based and free-living contexts for estimating sedentary and MVPA, and that different Android smartphones may reliably confer similar estimates.

  12. Weighted divergence correction scheme and its fast implementation

    Science.gov (United States)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2017-05-01

    Forcing experimental volumetric velocity fields to satisfy mass conservation principles has been proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. The paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS or WDCS is developed, making the correction process significantly less costly to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, which shows that WDCS achieves better performance than DCS in improving some flow statistics.
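
    A minimal sketch of the underlying divergence-correction idea on a periodic 2D toy field: remove the divergent (irrotational) part of a noisy velocity field by an FFT-based projection. This illustrates only the plain, unweighted notion; the paper's WDCS additionally weights the velocity components, and nothing here reproduces its fast algorithm.

        # Sketch: remove the divergent component of a periodic 2D velocity field
        # via an FFT-based projection (unweighted divergence correction idea).
        import numpy as np

        n, L = 64, 2 * np.pi
        x = np.linspace(0, L, n, endpoint=False)
        X, Y = np.meshgrid(x, x, indexing="ij")
        rng = np.random.default_rng(0)
        u = np.cos(X) * np.sin(Y) + 0.05 * rng.standard_normal((n, n))   # noisy "measured" u
        v = -np.sin(X) * np.cos(Y) + 0.05 * rng.standard_normal((n, n))  # noisy "measured" v

        k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
        KX, KY = np.meshgrid(k, k, indexing="ij")
        k2 = KX**2 + KY**2
        k2[0, 0] = 1.0                                   # avoid division by zero for the mean mode

        u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
        ku = KX * u_hat + KY * v_hat                     # k . u_hat
        u_hat -= KX * ku / k2                            # subtract the irrotational part
        v_hat -= KY * ku / k2
        u_corr, v_corr = np.fft.ifft2(u_hat).real, np.fft.ifft2(v_hat).real

        div_corr = np.fft.ifft2(1j * (KX * u_hat + KY * v_hat)).real
        print(np.abs(div_corr).max())                    # ~0 up to round-off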

  13. The correction of the littlest Higgs model to the Higgs production process e+e-→e+e-H at the ILC

    International Nuclear Information System (INIS)

    Wang, Xuelei; Liu, Yaobei; Chen, Jihong; Yang, Hua

    2007-01-01

    The littlest Higgs model is the most economical of the various little Higgs models. In the context of the littlest Higgs model, we study the process e⁺e⁻ → e⁺e⁻H at the ILC and calculate the correction of the littlest Higgs model to the cross section of this process. The results show that, in the favorable parameter space preferred by the electroweak precision data, the value of the relative correction ranges from a few percent to tens of percent. In most cases, the correction is large enough to reach the measurement precision of the ILC. Therefore, the correction of the littlest Higgs model to the process e⁺e⁻ → e⁺e⁻H might be detected at the ILC, which would provide an ideal way to test the model. (orig.)

  14. The development of a statistical procedure to correct the effects of restriction of range on validity coefficients

    Directory of Open Access Journals (Sweden)

    J. M. Scheepers

    1996-06-01

    Full Text Available In the validation of tests used for selection purposes, the obtained validity coefficients are invariably underestimates of the true validities, due to explicit and implicit selection in respect of the relevant variables. Both explicit and implicit selection lead to restriction of range of the relevant variables, and this in turn reduces the obtained validities. A formal proof for this is given. A number of researchers have developed formulae for correcting sample validities in order to get better estimates of the true validities (Pearson, 1903; Thorndike, 1949; Gulliksen, 1950; Rydberg, 1962 and Lord & Novick, 1968). It is, however, virtually impossible to obtain a complete view of the problem of restriction of range in this way. In the present paper a different approach has been followed: population correlations have been computed for various degrees of truncation of the explicit selection variable. This has been done for population correlations ranging from 0.10 to 0.99. A graphical display, indicating the shrinkage of the population correlations for various truncation ratios, has been prepared. Summary: In the validation of tests used for selection purposes, the obtained validity coefficients are without exception underestimates of the true validities as a result of explicit and implicit selection with respect to the relevant variables. Both explicit and implicit selection lead to restriction of range of the relevant variables, and this in turn reduces the obtained validities. A formal proof of this is given in the paper. A number of researchers have developed formulae to correct sample validities in order to obtain better estimates of the true validities (Pearson, 1903; Thorndike, 1949; Gulliksen, 1950; Rydberg, 1962 and Lord & Novick, 1968). It is, however, virtually impossible to form a complete picture of the problem of restriction of range in this way. In the
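
    A minimal sketch of the classical Thorndike Case II correction for direct restriction of range, one standard way of estimating the unrestricted validity from a restricted sample; it is shown purely as an illustration of the problem the paper addresses, not as the paper's own tabulation of population correlations.

        # Sketch: Thorndike Case II correction for direct range restriction.
        # r_restricted: validity observed in the selected (restricted) sample
        # u = SD(unrestricted) / SD(restricted) on the explicit selection variable
        def correct_for_range_restriction(r_restricted: float, u: float) -> float:
            r = r_restricted
            return (r * u) / ((1 - r**2 + (r**2) * (u**2)) ** 0.5)

        # Example: observed validity 0.25 where selection halved the predictor SD (u = 2.0)
        print(round(correct_for_range_restriction(0.25, 2.0), 3))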

  15. The relationships between percent body fat and other ...

    African Journals Online (AJOL)

    The relationships between percent body fat and other anthropometric nutritional predictors among male and female children in Nigeria. ... A weak significant positive correlation was observed between the percent body fat and height – armspan ratio ... There was evidence of overweight and obesity in both children. The mid ...

  16. Analysis of corrections to the eikonal approximation

    Science.gov (United States)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions with the goal of extending the range of validity of this approximation down to beam energies of 10 MeV/nucleon. Wallace's correction does not much improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes for the impact parameter a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.

  17. Reed-Solomon error-correction as a software patch mechanism.

    Energy Technology Data Exchange (ETDEWEB)

    Pendley, Kevin D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-11-01

    This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
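
    A minimal sketch of the idea, using the third-party reedsolo package purely as an illustration (it is not the tool described in the report): parity bytes computed over the updated file can repair an older copy of the same length, provided the two versions differ in fewer byte positions than the code can correct.

        # Sketch of the Reed-Solomon "patch" idea with the third-party 'reedsolo' package.
        # Illustrative only; works when old and new content have equal length and
        # differ in at most nsym/2 byte positions.
        from reedsolo import RSCodec

        nsym = 16                                    # number of parity bytes
        rsc = RSCodec(nsym)

        new_code = bytearray(b"def scale(x):\n    return x * 2  # new \n")
        old_code = bytearray(b"def scale(x):\n    return x + 2  # old \n")

        parity = bytes(rsc.encode(new_code))[-nsym:]  # ship only the parity as the "patch"

        # On the target: append the shipped parity to the old file and decode.
        # Note: recent reedsolo versions return a tuple whose first element is the message.
        result = rsc.decode(bytes(old_code) + parity)
        recovered = result[0] if isinstance(result, tuple) else result
        assert bytes(recovered) == bytes(new_code)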

  18. Motion-corrected whole-heart PET-MR for the simultaneous visualisation of coronary artery integrity and myocardial viability: an initial clinical validation.

    Science.gov (United States)

    Munoz, Camila; Kunze, Karl P; Neji, Radhouene; Vitadello, Teresa; Rischpler, Christoph; Botnar, René M; Nekolla, Stephan G; Prieto, Claudia

    2018-05-12

    Cardiac PET-MR has shown potential for the comprehensive assessment of coronary heart disease. However, image degradation due to physiological motion remains a challenge that could hinder the adoption of this technology in clinical practice. The purpose of this study was to validate a recently proposed respiratory motion-corrected PET-MR framework for the simultaneous visualisation of myocardial viability (18F-FDG PET) and coronary artery anatomy (coronary MR angiography, CMRA) in patients with chronic total occlusion (CTO). A cohort of 14 patients was scanned with the proposed PET-CMRA framework. PET and CMRA images were reconstructed with and without the proposed motion correction approach for comparison purposes. Metrics of image quality including visible vessel length and sharpness were obtained for CMRA for both the right and left anterior descending coronary arteries (RCA, LAD), and the relative increase in 18F-FDG PET signal after motion correction for standard 17-segment polar maps was computed. Resulting coronary anatomy by CMRA and myocardial integrity by PET were visually compared against X-ray angiography and conventional Late Gadolinium Enhancement (LGE) MRI, respectively. Motion correction increased CMRA visible vessel length by 49.9% and 32.6% (RCA, LAD) and vessel sharpness by 12.3% and 18.9% (RCA, LAD) on average compared to uncorrected images. Coronary lumen delineation on motion-corrected CMRA images was in good agreement with X-ray angiography findings. For PET, motion correction resulted in an average 8% increase in 18F-FDG signal in the inferior and inferolateral segments of the myocardial wall. An improved delineation of myocardial viability defects and reduced noise in the 18F-FDG PET images was observed, improving correspondence to subendocardial LGE-MRI findings compared to uncorrected images. The feasibility of the PET-CMRA framework for simultaneous cardiac PET-MR imaging in a short and predictable scan time (~11 min) has been

  19. CRED Cumulative Map of Percent Scleractinian Coral Cover along towed camera sled tracks and AUV dive tracks at Rota Island, Commonwealth of the Northern Mariana Islands

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry. Optical data were collected by CRED...

  20. Real Time MRI Motion Correction with Markerless Tracking

    DEFF Research Database (Denmark)

    Benjaminsen, Claus; Jensen, Rasmus Ramsbøl; Wighton, Paul

    Prospective motion correction for MRI neuroimaging has been demonstrated using MR navigators and external tracking systems using markers. The drawbacks of these two motion estimation methods include prolonged scan time plus lack of compatibility with all image acquisitions, and difficulties...... validating marker attachment, resulting in uncertain estimation of the brain motion, respectively. We have developed a markerless tracking system, and in this work we demonstrate the use of our system for prospective motion correction, and show that despite being computationally demanding, markerless tracking...... can be implemented for real-time motion correction....

  1. NLO supersymmetric QCD corrections to tt-bar h0 associated production at hadron colliders

    International Nuclear Information System (INIS)

    Wu Peng; Ma Wengan; Hou Hongsheng; Zhang Renyou; Han Liang; Jiang Yi

    2005-01-01

    We calculate NLO QCD corrections to production of the lightest neutral Higgs boson in association with a top quark pair at hadron colliders in the minimal supersymmetric standard model (MSSM). Our calculation shows that the total QCD correction significantly reduces the dependence on the renormalization/factorization scale. The relative correction from the SUSY QCD part approaches a constant if either M_S or the gluino mass m_g̃ is heavy enough. The corrections are generally moderate (in the range of a few percent to 20%) and under control in most of the SUSY parameter space. The relative correction depends noticeably on m_g̃, A_t and μ, but is not very sensitive to tanβ or M_S, at both the Tevatron and the LHC with our specified parameters

  2. Visual impairment secondary to congenital glaucoma in children: visual responses, optical correction and use of low vision AIDS

    Directory of Open Access Journals (Sweden)

    Maria Aparecida Onuki Haddad

    2009-01-01

    Full Text Available INTRODUCTION: Congenital glaucoma is frequently associated with visual impairment due to optic nerve damage, corneal opacities, cataracts and amblyopia. Poor vision in childhood is related to global developmental problems, and referral to vision habilitation/rehabilitation services should be without delay to promote efficient management of the impaired vision. OBJECTIVE: To analyze data concerning visual response, the use of optical correction and prescribed low vision aids in a population of children with congenital glaucoma. METHOD: The authors analyzed data from 100 children with congenital glaucoma to assess best corrected visual acuity, prescribed optical correction and low vision aids. RESULTS: Fifty-five percent of the sample were male, 43% female. The mean age was 6.3 years. Two percent presented normal visual acuity levels, 29% mild visual impairment, 28% moderate visual impairment, 15% severe visual impairment, 11% profound visual impairment, and 15% near blindness. Sixty-eight percent received optical correction for refractive errors. Optical low vision aids were adopted for distance vision in 34% of the patients and for near vision in 6%. A manual monocular telescopic system with 2.8 × magnification was the most frequently prescribed low vision aid for distance, and for near vision a +38 diopter illuminated stand magnifier was most frequently prescribed. DISCUSSION AND CONCLUSION: Careful low vision assessment and the appropriate prescription of optical corrections and low vision aids are mandatory in children with congenital glaucoma, since this will assist their global development, improving efficiency in daily life activities and promoting social and educational inclusion.

  3. Validation of geotechnical software for repository performance assessment

    International Nuclear Information System (INIS)

    LeGore, T.; Hoover, J.D.; Khaleel, R.; Thornton, E.C.; Anantatmula, R.P.; Lanigan, D.C.

    1989-01-01

    An important step in the characterization of a high level nuclear waste repository is to demonstrate that geotechnical software used in performance assessment correctly models the physical system of interest. There is another type of validation, called software validation. It is based on meeting the requirements of specifications documents (e.g. IEEE specifications) and does not directly address the correctness of the specifications. The process of comparing physical experimental results with the predicted results should incorporate an objective measure of the level of confidence regarding correctness. This paper reports on a methodology developed that allows the experimental uncertainties to be explicitly included in the comparison process. The methodology also allows objective confidence levels to be associated with the software. In the event of a poor comparison, the method also lays the foundation for improving the software

  4. Attenuation correction for the collimated gamma ray assay of cylindrical samples

    International Nuclear Information System (INIS)

    Patra, Sabyasachi; Agarwal, Chhavi; Goswami, A.; Gathibandhe, M.

    2015-01-01

    The Hybrid Monte Carlo (HMC) method developed earlier for attenuation correction of non-collimated samples [Agarwal et al., 2008, Nucl. Instrum. Methods A 597, 198] has been extended to the segmented gamma-ray assay of cylindrical samples. The method has been validated both experimentally and theoretically. For experimental validation, the results of the HMC calculation have been compared with the experimentally obtained attenuation correction factors. The HMC attenuation correction factors have also been compared with the results obtained from literature-available near-field and far-field formulae at two sample-to-detector distances (10.3 cm and 20.4 cm). The method has been found to be valid at all sample-to-detector distances over a wide range of transmittance. On the other hand, the literature-available near-field and far-field formulae have been found to work over a limited range of sample-to-detector distances and transmittances. The HMC method has been further extended to circular collimated geometries where an analytical formula for attenuation correction does not exist. - Highlights: • Hybrid Monte Carlo method for attenuation correction developed for an SGA system. • The method is found to work for all sample-detector geometries and all transmittances. • The near-field formula is applicable only beyond a certain sample-to-detector distance. • The far-field formula is applicable only for higher transmittances (>18%). • The Hybrid Monte Carlo method is further extended to circular collimated geometry

  5. Phased Acoustic Array Measurements of a 5.75 Percent Hybrid Wing Body Aircraft

    Science.gov (United States)

    Burnside, Nathan J.; Horne, William C.; Elmer, Kevin R.; Cheng, Rui; Brusniak, Leon

    2016-01-01

    Detailed acoustic measurements of the noise from the leading-edge Krueger flap of a 5.75 percent Hybrid Wing Body (HWB) aircraft model were recently acquired with a traversing phased microphone array in the AEDC NFAC (Arnold Engineering Development Complex, National Full Scale Aerodynamics Complex) 40- by 80-Foot Wind Tunnel at NASA Ames Research Center. The spatial resolution of the array was sufficient to distinguish between individual support brackets over the full-scale frequency range of 100 to 2875 Hertz. For conditions representative of landing and take-off configuration, the noise from the brackets dominated other sources near the leading edge. Inclusion of flight-like brackets for select conditions highlights the importance of including the correct number of leading-edge high-lift device brackets with sufficient scale and fidelity. These measurements will support the development of new predictive models.

  6. 7 CFR 762.129 - Percent of guarantee and maximum loss.

    Science.gov (United States)

    2010-01-01

    ... loss. (a) General. The percent of guarantee will not exceed 90 percent based on the credit risk to the lender and the Agency both before and after the transaction. The Agency will determine the percentage of... PLP lenders will not be less than 80 percent. (d) Maximum loss. The maximum amount the Agency will pay...

  7. Higher order QCD corrections in small x physics

    International Nuclear Information System (INIS)

    Chachamis, G.

    2006-11-01

    We study higher order QCD corrections in small x physics. The numerical implementation of the full NLO photon impact factor is the remaining piece necessary for testing the NLO BFKL resummation against data from physical processes, such as γ*γ* collisions. We perform the numerical integration over phase space for the virtual corrections to the NLO photon impact factor. This, along with the previously calculated real corrections, makes feasible in the near future first estimates for the γ*γ* total cross section, since the convolution of the full impact factor with the NLO BFKL gluon Green's function is now straightforward. The NLO corrections to the photon impact factor are sizeable and negative. In the second part of this thesis, we estimate higher order corrections to the BK equation. We are mainly interested in whether partonic saturation is delayed in rapidity when going beyond the leading order. In our investigation, we use the so-called 'rapidity veto', which forbids two emissions from being very close in rapidity, to 'switch on' higher order corrections to the BK equation. From analytic and numerical analysis, we conclude that saturation is indeed delayed in rapidity when higher order corrections are taken into account. In the last part, we investigate higher order QCD corrections as additional corrections to the electroweak (EW) sector. The question of whether BFKL corrections are of any importance in the Regge limit for the EW sector seems natural; although they arise at higher loop level, the accumulation of logarithms in the energy s at high energies cannot be dismissed without an investigation. We focus on the process γγ→ZZ. We calculate the pQCD corrections in the forward region at leading logarithmic (LL) BFKL accuracy, which are of the order of a few percent at the TeV energy scale. (orig.)

  8. Higher order QCD corrections in small x physics

    Energy Technology Data Exchange (ETDEWEB)

    Chachamis, G.

    2006-11-15

    We study higher order QCD corrections in small x physics. The numerical implementation of the full NLO photon impact factor is the remaining piece necessary for testing the NLO BFKL resummation against data from physical processes, such as γ*γ* collisions. We perform the numerical integration over phase space for the virtual corrections to the NLO photon impact factor. This, along with the previously calculated real corrections, makes feasible in the near future first estimates for the γ*γ* total cross section, since the convolution of the full impact factor with the NLO BFKL gluon Green's function is now straightforward. The NLO corrections to the photon impact factor are sizeable and negative. In the second part of this thesis, we estimate higher order corrections to the BK equation. We are mainly interested in whether partonic saturation is delayed in rapidity when going beyond the leading order. In our investigation, we use the so-called 'rapidity veto', which forbids two emissions from being very close in rapidity, to 'switch on' higher order corrections to the BK equation. From analytic and numerical analysis, we conclude that saturation is indeed delayed in rapidity when higher order corrections are taken into account. In the last part, we investigate higher order QCD corrections as additional corrections to the electroweak (EW) sector. The question of whether BFKL corrections are of any importance in the Regge limit for the EW sector seems natural; although they arise at higher loop level, the accumulation of logarithms in the energy s at high energies cannot be dismissed without an investigation. We focus on the process γγ→ZZ. We calculate the pQCD corrections in the forward region at leading logarithmic (LL) BFKL accuracy, which are of the order of a few percent at the TeV energy scale. (orig.)

  9. Quantum gravitational corrections for spinning particles

    International Nuclear Information System (INIS)

    Fröb, Markus B.

    2016-01-01

    We calculate the quantum corrections to the gauge-invariant gravitational potentials of spinning particles in flat space, induced by loops of both massive and massless matter fields of various types. While the corrections to the Newtonian potential induced by massless conformal matter for spinless particles are well known, and the same corrections due to massless minimally coupled scalars http://dx.doi.org/10.1088/0264-9381/27/24/245008, massless non-conformal scalars http://dx.doi.org/10.1103/PhysRevD.87.104027 and massive scalars, fermions and vector bosons http://dx.doi.org/10.1103/PhysRevD.91.064047 have been recently derived, spinning particles receive additional corrections which are the subject of the present work. We give both fully analytic results valid for all distances from the particle, and present numerical results as well as asymptotic expansions. At large distances from the particle, the corrections due to massive fields are exponentially suppressed in comparison to the corrections from massless fields, as one would expect. However, a surprising result of our analysis is that close to the particle itself, on distances comparable to the Compton wavelength of the massive fields running in the loops, these corrections can be enhanced with respect to the massless case.

  10. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    Science.gov (United States)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery is dependent on the spatial agreement between the preoperative image and the intraoperative anatomy. However, brain shift compromises this alignment. Currently, the clinical standard to monitor brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are a consideration for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation `atlas' containing potential deformation solutions derived from a biomechanical model that accounts for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combinatory `atlas' solution that best matches the measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess model accuracy, subsurface shift of targets between preoperative and intraoperative states was measured and compared to the model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the results reported are encouraging.

  11. Detection of overreported psychopathology with the MMPI-2-RF [corrected] validity scales.

    Science.gov (United States)

    Sellbom, Martin; Bagby, R Michael

    2010-12-01

    We examined the utility of the validity scales on the recently released Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2 RF; Ben-Porath & Tellegen, 2008) to detect overreported psychopathology. This set of validity scales includes a newly developed scale and revised versions of the original MMPI-2 validity scales. We used an analogue, experimental simulation in which MMPI-2 RF responses (derived from archived MMPI-2 protocols) of undergraduate students instructed to overreport psychopathology (in either a coached or noncoached condition) were compared with those of psychiatric inpatients who completed the MMPI-2 under standardized instructions. The MMPI-2 RF validity scale Infrequent Psychopathology Responses best differentiated the simulation groups from the sample of patients, regardless of experimental condition. No other validity scale added consistent incremental predictive utility to Infrequent Psychopathology Responses in distinguishing the simulation groups from the sample of patients. Classification accuracy statistics confirmed the recommended cut scores in the MMPI-2 RF manual (Ben-Porath & Tellegen, 2008).

  12. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    International Nuclear Information System (INIS)

    Calderon, E; Siergiej, D

    2014-01-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values from different detectors are known to show large deviations as field sizes decrease. No set standard exists to resolve this difference in measurement. We observed differences of up to 14% between output factors measured with two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from 14.8% to within 3.4%
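
    A minimal sketch of how field-size-dependent correction factors could be applied to measured detector ratios; every reading and correction factor below is a hypothetical placeholder, not a published Monte Carlo value or a measurement from this work.

        # Sketch: apply interpolated correction factors to measured detector ratios
        # to obtain corrected small-field output factors (all numbers hypothetical).
        import numpy as np

        cone_mm = np.array([5.0, 7.5, 10.0, 15.0, 20.0, 30.0])
        reading_ratio = np.array([0.68, 0.76, 0.82, 0.88, 0.91, 0.94])  # M(cone)/M(ref)
        published_cones = np.array([5.0, 10.0, 20.0, 30.0])             # sizes with tabulated factors
        published_k = np.array([0.94, 0.97, 0.99, 1.00])                # hypothetical k factors

        k_interp = np.interp(cone_mm, published_cones, published_k)     # interpolate to measured cones
        output_factor = reading_ratio * k_interp                        # corrected output factors
        for c, of in zip(cone_mm, output_factor):
            print(f"{c:5.1f} mm cone: OF = {of:.3f}")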

  13. Using administrative data to estimate time to breast cancer diagnosis and percent of screen-detected breast cancers – a validation study in Alberta, Canada.

    Science.gov (United States)

    Yuan, Y; Li, M; Yang, J; Winget, M

    2015-05-01

    Appropriate use of administrative data enables the assessment of care quality at the population level. Our objective was to develop and validate methods for assessing the quality of breast cancer diagnostic care using administrative data, specifically by identifying relevant medical tests to estimate the percentage of screen- versus symptom-detected cancers and the time to diagnosis. Two databases were created for all women diagnosed with a first-ever breast cancer in the years 2007-2010 in Alberta, Canada, with dates of medical tests received in the years 2006-2010. One purchased database had test results and was used to determine the 'true' first relevant test of a cancer diagnosis. The other, free administrative database had test types but no test results. Receiver operating characteristic curves and concordance rates were used to assess estimates of the percentage of screen-/symptom-detected breast cancers; the log-rank test was used to assess the time to diagnosis obtained from the two databases. Using a look-back period of 4-6 months from cancer diagnosis to identify relevant tests resulted in over 94% concordance, sensitivity and specificity for classifying patients into screen- versus symptom-detected groups; good agreement between the distributions of time to diagnosis was also achieved. Our findings support the use of administrative data to accurately identify relevant tests for assessing the quality of breast cancer diagnostic care. © 2014 John Wiley & Sons Ltd.
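
    A minimal sketch of the look-back idea: within a fixed window before the diagnosis date, find each patient's first relevant test and classify the cancer as screen- or symptom-detected from that test's type, and measure time to diagnosis. The records, column names and test labels are hypothetical illustrations, not the Alberta data.

        # Sketch: classify cancers by the first relevant test in a look-back window
        # before diagnosis (hypothetical data frames and column names).
        import pandas as pd

        tests = pd.DataFrame({
            "patient_id": [1, 1, 2, 2],
            "test_date": pd.to_datetime(["2009-02-10", "2009-04-01", "2009-11-20", "2010-01-05"]),
            "test_type": ["screening_mammogram", "biopsy", "diagnostic_mammogram", "biopsy"],
        })
        diagnoses = pd.DataFrame({
            "patient_id": [1, 2],
            "dx_date": pd.to_datetime(["2009-05-15", "2010-01-20"]),
        })

        lookback = pd.Timedelta(days=183)                # ~6-month look-back window
        merged = tests.merge(diagnoses, on="patient_id")
        relevant = merged[(merged.test_date <= merged.dx_date) &
                          (merged.test_date >= merged.dx_date - lookback)]
        first_test = relevant.sort_values("test_date").groupby("patient_id").first()
        first_test["days_to_dx"] = (first_test.dx_date - first_test.test_date).dt.days
        first_test["screen_detected"] = first_test.test_type.eq("screening_mammogram")
        print(first_test[["test_type", "days_to_dx", "screen_detected"]])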

  14. A project of X-ray hardening correction in large ICT

    International Nuclear Information System (INIS)

    Fang Min; Liu Yinong; Ni Jianping

    2005-01-01

    This paper presents a means of correcting polychromatic X-ray beam hardening by using a standard function to transform the polychromatic projection into an equivalent monochromatic projection in large Industrial Computed Tomography (ICT). Several parameters were defined and optimized to verify the validity of the hardening correction in large ICT. Simulated experiments were used to prove that, without prior knowledge of the composition of the scanned object, the correction method combined with monochromatic reconstruction arithmetic can largely remove beam-hardening artifacts. (authors)
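
    A minimal sketch of the general linearization idea (not the paper's specific standard function): fit a mapping from measured polychromatic projections onto ideal monochromatic, linear-in-thickness projections using calibration data, then apply it to scan data before reconstruction. All calibration numbers are hypothetical.

        # Sketch: beam-hardening linearization via a polynomial mapping from
        # polychromatic to equivalent monochromatic projections (hypothetical data).
        import numpy as np

        thickness = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])    # step-wedge thicknesses (cm)
        p_poly = np.array([0.0, 0.48, 0.92, 1.70, 2.36, 2.92])  # measured polychromatic projections
        mu_mono = 0.5                                            # reference attenuation coefficient (1/cm)
        p_mono = mu_mono * thickness                             # ideal monochromatic projections

        coeffs = np.polyfit(p_poly, p_mono, deg=3)               # correction polynomial
        correct = np.poly1d(coeffs)

        measured = np.array([0.7, 1.5, 2.5])                     # projections from a scan
        print(correct(measured))                                 # values mapped to the linear scale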

  15. Generalized radiative corrections for hadronic targets

    International Nuclear Information System (INIS)

    Calan, C. de; Navelet, H.; Picard, J.

    1990-02-01

    Besides the theory of radiative corrections at order α² for reactions involving an arbitrary number of particles, this report gives the complete formula for the correction factor δ in dσ = dσ_Born (1 + δ). The only approximation made here - unavoidable in this formulation - is to assume that the Born amplitude can be factorized. This calculation is valid for spin-zero bosons. In the spin-1/2 fermion case, an extra contribution appears, which has been computed analytically using a minor approximation. Special care has been devoted to the 1/v divergence of the amplitude near thresholds

  16. Remote Sensing of Tropical Ecosystems: Atmospheric Correction and Cloud Masking Matter

    Science.gov (United States)

    Hilker, Thomas; Lyapustin, Alexei I.; Tucker, Compton J.; Sellers, Piers J.; Hall, Forrest G.; Wang, Yujie

    2012-01-01

    Tropical rainforests are significant contributors to the global cycles of energy, water and carbon. As a result, monitoring of the vegetation status over regions such as Amazonia has been a long standing interest of Earth scientists trying to determine the effect of climate change and anthropogenic disturbance on the tropical ecosystems and its feedback on the Earth's climate. Satellite-based remote sensing is the only practical approach for observing the vegetation dynamics of regions like the Amazon over useful spatial and temporal scales, but recent years have seen much controversy over satellite-derived vegetation states in Amazônia, with studies predicting opposite feedbacks depending on data processing technique and interpretation. Recent results suggest that some of this uncertainty could stem from a lack of quality in atmospheric correction and cloud screening. In this paper, we assess these uncertainties by comparing the current standard surface reflectance products (MYD09, MYD09GA) and derived composites (MYD09A1, MCD43A4 and MYD13A2 - Vegetation Index) from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua satellite to results obtained from the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm. MAIAC uses a new cloud screening technique, and novel aerosol retrieval and atmospheric correction procedures which are based on time-series and spatial analyses. Our results show considerable improvements of MAIAC processed surface reflectance compared to MYD09/MYD13 with noise levels reduced by a factor of up to 10. Uncertainties in the current MODIS surface reflectance product were mainly due to residual cloud and aerosol contamination which affected the Normalized Difference Vegetation Index (NDVI): During the wet season, with cloud cover ranging between 90 percent and 99 percent, conventionally processed NDVI was significantly depressed due to undetected clouds. A smaller reduction in NDVI due to increased

  17. Sci—Fri AM: Mountain — 01: Validation of a new formulism and the related correction factors on output factor determination for small photon fields

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yizhen; Younge, Kelly; Nielsen, Michelle; Mutanga, Theodore [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Cui, Congwu [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Department of Radiation Oncology, University of Toronto, Toronto, ON (Canada); Das, Indra J. [Radiation Oncology Dept., Indiana University- School of Medicine, Indianapolis, IN (United States)

    2014-08-15

    Small field dosimetry measurements including output factors are difficult due to lack of charged-particle equilibrium, occlusion of the radiation source, the finite size of detectors, and non-water equivalence of detector components. With available detectors significant variations could be measured that will lead to incorrect delivered dose to patients. IAEA/AAPM have provided a framework and formulation to correct the detector response in small photon fields. Monte Carlo derived correction factors for some commonly used small field detectors are now available, however validation has not been performed prior to this study. An Exradin A16 chamber, EDGE detector and SFD detector were used to perform the output factor measurement for a series of conical fields (5–30mm) on a Varian iX linear accelerator. Discrepancies up to 20%, 10% and 6% were observed for 5, 7.5 and 10 mm cones between the initial output factors measured by the EDGE detector and the A16 ion chamber, while the discrepancies for the conical fields larger than 10 mm were less than 4%. After the application of the correction, the output factors agree well with each other to within 1%. Caution is needed when determining the output factors for small photon fields, especially for fields 10 mm in diameter or smaller. More than one type of detector should be used, each with proper corrections applied to the measurement results. It is concluded that with the application of correction factors to appropriately chosen detectors, output can be measured accurately for small fields.

  18. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of k-mers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous k-mer may be frequently observed if it has few nucleotide differences with valid k-mers that occur multiple times in the genome. Error detection and correction have mostly been applied to genomes with low repeat content, and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of k-mers from their observed frequencies by analyzing the misread relationships among observed k-mers. We also propose a method to estimate the threshold useful for validating k-mers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and the Boost Software V1.0 license at http://aluru-sun.ece.iastate.edu/doku.php?id=redeem. We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors
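
    A minimal sketch of the baseline step this work builds on: counting k-mers across reads and flagging those below a frequency threshold as likely errors. The reads and threshold are illustrative; the paper's contribution is a repeat-aware statistical model layered on top of such counts, which this sketch does not implement.

        # Sketch: count k-mers in reads and flag low-frequency k-mers as suspect.
        from collections import Counter

        reads = ["ACGTACGTGA", "ACGTACGTGA", "ACGTTCGTGA", "ACGTACGTGA"]
        k, threshold = 5, 2                              # illustrative values only

        counts = Counter()
        for read in reads:
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1

        suspect = {kmer for kmer, c in counts.items() if c < threshold}
        print(sorted(suspect))   # k-mers observed fewer than `threshold` times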

  19. SRTC Spreadsheet to Determine Relative Percent Difference (RPD) for Duplicate Waste Assay Results and to Perform the RPD Acceptance Test

    International Nuclear Information System (INIS)

    Casella, V.R.

    2002-01-01

    This report documents the calculations and logic used for the Microsoft(R) Excel spreadsheet that is used at the 773-A Solid Waste Assay Facility for evaluating duplicate analyses, and validates that the spreadsheet is performing these functions correctly
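
    A minimal sketch of the relative percent difference calculation and a simple acceptance test of the kind the spreadsheet performs; the 20% acceptance limit is a placeholder, not the facility's actual criterion.

        # Sketch: relative percent difference (RPD) for duplicate assay results
        # and a simple acceptance test (limit value is a placeholder).
        def rpd(a: float, b: float) -> float:
            """RPD = |a - b| / mean(a, b) * 100."""
            return abs(a - b) / ((a + b) / 2.0) * 100.0

        def passes_rpd_test(a: float, b: float, limit_percent: float = 20.0) -> bool:
            return rpd(a, b) <= limit_percent

        print(rpd(10.2, 9.5), passes_rpd_test(10.2, 9.5))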

  20. Does Asset Allocation Policy Explain 40, 90, 100 Percent of Performance?

    OpenAIRE

    Roger G. Ibbotson; Paul D. Kaplan

    2001-01-01

    Does asset allocation policy explain 40 percent, 90 percent, or 100 percent of performance? According to some well-known studies, more than 90 percent of the variability of a typical plan sponsor's performance over time is attributable to asset allocation. However, few people want to explain variability over time. Instead, an analyst might want to know how important it is in explaining the differences in return from one fund to another, or what percentage of the level of a typical fund's retu...

  1. Development and Validation of a Data-Based Food Frequency Questionnaire for Adults in Eastern Rural Area of Rwanda

    Directory of Open Access Journals (Sweden)

    Ayumi Yanagisawa

    2016-01-01

    Full Text Available This study aimed to develop and evaluate the validity of a food frequency questionnaire (FFQ) for rural Rwandans. Since our FFQ was developed to assess malnutrition, it measured energy, protein, vitamin A, and iron intakes only. We collected 260 weighed food records (WFRs) from a total of 162 Rwandans. Based on the WFR data, we developed a tentative FFQ and examined the food list by percent contribution to energy and nutrient intakes. To assess the validity, nutrient intakes estimated from the FFQ were compared with those calculated from three-day WFRs by correlation coefficient and cross-classification for 17 adults. Cumulative contributions of the 18-item FFQ to the total intakes of energy and nutrients reached nearly 100%. Crude and energy-adjusted correlation coefficients ranged from -0.09 (vitamin A) to 0.58 (protein) and from -0.19 (vitamin A) to 0.68 (iron), respectively. About 50%-60% of the participants were classified into the same tertile. Our FFQ provided acceptable validity for energy and iron intakes and could correctly rank Rwandan adults in the eastern rural area according to their energy and iron intakes.

  2. Blast Load Simulator Experiments for Computational Model Validation: Report 2

    Science.gov (United States)

    2017-02-01

    simulations of these explosive events and their effects. These codes are continuously improving, but still require validation against experimental data to... Figure 18. Ninety-five percent confidence intervals on measured peak pressure. Figure 19. Ninety-five percent

  3. Differential Weighting of Items to Improve University Admission Test Validity

    Directory of Open Access Journals (Sweden)

    Eduardo Backhoff Escudero

    2001-05-01

    Full Text Available This paper gives an evaluation of different ways to increase the criterion-related validity of a university admission test by differentially weighting test items. We compared four methods of weighting multiple-choice items of the Basic Skills and Knowledge Examination (EXHCOBA): (1) punishing incorrect responses by a constant factor, (2) weighting incorrect responses, considering the levels of error, (3) weighting correct responses, considering the item's difficulty, based on Classical Measurement Theory, and (4) weighting correct responses, considering the item's difficulty, based on Item Response Theory. Results show that none of these methods increased the instrument's predictive validity, although they did improve its concurrent validity. It was concluded that it is appropriate to score the test by simply adding up correct responses.
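
    A minimal sketch of two simple weighting schemes in the spirit of methods (1) and (3) above, purely as an illustration: a constant penalty for incorrect responses (classical formula scoring) and weighting correct responses by classical item difficulty. The response matrix is hypothetical, and these are not the paper's exact scoring rules.

        # Sketch: two item-weighting schemes for a 0/1 scored response matrix.
        # Rows = examinees, columns = items; data are hypothetical.
        import numpy as np

        responses = np.array([[1, 1, 0, 1],
                              [1, 0, 0, 0],
                              [1, 1, 1, 1],
                              [0, 1, 0, 1]])
        n_options = 4                                    # options per multiple-choice item

        # (1) Constant penalty for incorrect responses (formula scoring R - W/(k-1))
        score_penalty = responses.sum(axis=1) - (1 - responses).sum(axis=1) / (n_options - 1)

        # (3) Weight correct responses by item difficulty: harder items (low p) count more
        p = responses.mean(axis=0)                       # proportion correct per item
        score_difficulty = (responses * (1 - p)).sum(axis=1)

        print(score_penalty, score_difficulty)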

  4. Residual volume on land and when immersed in water: effect on percent body fat.

    Science.gov (United States)

    Demura, Shinichi; Yamaji, Shunsuke; Kitabayashi, Tamotsu

    2006-08-01

    There is a large residual volume (RV) error when assessing percent body fat by means of hydrostatic weighing. Residual volume has generally been measured before hydrostatic weighing. However, an individual's maximal exhalations on land and in the water may not be identical. The aims of this study were to compare residual volumes and vital capacities on land and when immersed to the neck in water, and to examine the influence of the measurement error on percent body fat. The participants were 20 healthy Japanese males and 20 healthy Japanese females. To assess the influence of the RV error on percent body fat in both conditions and to evaluate the cross-validity of the prediction equation, another 20 males and 20 females were measured using hydrostatic weighing. Residual volume was measured on land and in the water using a nitrogen wash-out technique based on an open-circuit approach. In water, residual volume was measured with the participant sitting on a chair while the whole body, except the head, was submerged. The trial-to-trial reliability of residual volume in both conditions was very good (intraclass correlation coefficient > 0.98). Although residual volumes measured under the two conditions did not agree completely, they showed a high correlation (males: 0.880; females: 0.853). The agreement of percent body fat computed using residual volume measured in both conditions was very good for both sexes (males: r = 0.902; females: r = 0.869; differences in percent body fat: -3.4 to 2.2% for males; -6.3 to 4.4% for females). We conclude that if these errors are considered unimportant, residual volume measured on land can be used when assessing body composition.
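
    A minimal sketch of how percent body fat follows from hydrostatic weighing, using the standard body-density equation and the Siri two-compartment conversion; the input values are illustrative, and the 0.1 L gastrointestinal gas allowance is a common convention assumed here rather than a detail taken from this paper.

        # Sketch: body density from hydrostatic weighing and percent fat via Siri (1961).
        # Masses in kg, volumes in litres; inputs are illustrative only.
        def percent_body_fat(mass_air_kg, mass_water_kg, water_density, rv_litres, gi_gas_litres=0.1):
            body_volume = (mass_air_kg - mass_water_kg) / water_density - (rv_litres + gi_gas_litres)
            density = mass_air_kg / body_volume          # numerically g/cm^3 with these units
            return 495.0 / density - 450.0               # Siri two-compartment equation

        # Example: 70 kg in air, 3.0 kg underwater, water density 0.996 kg/L, RV 1.3 L
        print(round(percent_body_fat(70.0, 3.0, 0.996, 1.3), 1))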

  5. Quantum loop corrections of a charged de Sitter black hole

    Science.gov (United States)

    Naji, J.

    2018-03-01

    A charged black hole in de Sitter (dS) space is considered, and logarithmic corrected entropy is used to study its thermodynamics. Logarithmic corrections to the entropy come from thermal fluctuations, which play the role of quantum loop corrections. In that case we are able to study the effect of the quantum loop on black hole thermodynamics and statistics. As a black hole is a gravitational object, this helps us obtain some information about quantum gravity. The first and second laws of thermodynamics are investigated for the logarithmically corrected case, and we find that they are valid only for the charged dS black hole. We show that the black hole phase transition disappears in the presence of the logarithmic correction.

  6. Correction procedures for C-14 dates

    International Nuclear Information System (INIS)

    McKerrell, H.

    1975-01-01

    There are two quite separate criteria to satisfy before accepting as valid the corrections to C-14 dates which have been indicated for some years now by the bristlecone pine calibration. Firstly the correction figures have to be based upon all the available tree-ring data and derived in a manner that is mathematically sound, and secondly the correction figures have to produce accurate results on C-14 dates from archaeological test samples of known historical date, these covering as wide a period as possible. Neither of these basic prerequisites has yet been fully met. Thus the two-fold purpose of this paper is to bring together, and to compare with an independently based procedure, the various correction curves or tables that have been published up to Spring 1974, as well as to detail the correction results on reliable, historically dated Egyptian, Helladic and Minoan test samples from 3100 B.C. The nomenclature followed is strictly that adopted by the primary dating journal Radiocarbon, all C-14 dates quoted thus relate to the 5568 year half-life and the standard AD/BC system. (author)

  7. Contraception services for incarcerated women: a national survey of correctional health providers.

    Science.gov (United States)

    Sufrin, Carolyn B; Creinin, Mitchell D; Chang, Judy C

    2009-12-01

    Incarcerated women have had limited access to health care prior to their arrest. Although their incarceration presents an opportunity to provide them with health care, their reproductive health needs have been overlooked. We performed a cross-sectional study of a nationally representative sample of 950 correctional health providers who are members of the Academy of Correctional Health Providers. A total of 405 surveys (43%) were returned, and 286 (30%) were eligible for analysis. Most ineligible surveys were from clinicians at male-only facilities. Of eligible respondents, 70% reported some degree of contraception counseling for women at their facilities. Only 11% provided routine counseling prior to release. Seventy percent said that their institution had no formal policy on contraception. Thirty-eight percent of clinicians provided birth control methods at their facilities. Although the most frequently counseled and prescribed method was oral contraceptive pills, only 50% of providers rated their oral contraceptive counseling ability as good or very good. Contraception counseling was associated with working at a juvenile facility, and with screening for sexually transmitted infections. Contraception does not appear to be integrated into the routine delivery of clinical services to incarcerated women. Because the correctional health care system can provide important clinical and public health interventions to traditionally marginalized populations, services for incarcerated women should include access to contraception.

  8. The Texas Ten Percent Plan's Impact on College Enrollment

    Science.gov (United States)

    Daugherty, Lindsay; Martorell, Paco; McFarlin, Isaac, Jr.

    2014-01-01

    The Texas Ten Percent Plan (TTP) provides students in the top 10 percent of their high-school class with automatic admission to any public university in the state, including the two flagship schools, the University of Texas at Austin and Texas A&M. Texas created the policy in 1997 after a federal appellate court ruled that the state's previous…

  9. Surgery for the correction of hallux valgus: minimum five-year results with a validated patient-reported outcome tool and regression analysis.

    Science.gov (United States)

    Chong, A; Nazarian, N; Chandrananth, J; Tacey, M; Shepherd, D; Tran, P

    2015-02-01

    This study sought to determine the medium-term patient-reported and radiographic outcomes in patients undergoing surgery for hallux valgus. A total of 118 patients (162 feet) underwent surgery for hallux valgus between January 2008 and June 2009. The Manchester-Oxford Foot Questionnaire (MOXFQ), a validated tool for the assessment of outcome after surgery for hallux valgus, was used and patient satisfaction was sought. The medical records and radiographs were reviewed retrospectively. At a mean of 5.2 years (4.7 to 6.0) post-operatively, the median combined MOXFQ score was 7.8 (IQR: 0 to 32.8). The median domain scores for pain, walking/standing, and social interaction were 10 (IQR: 0 to 45), 0 (IQR: 0 to 32.1) and 6.3 (IQR: 0 to 25) respectively. A total of 119 procedures (73.9%, in 90 patients) were reported as satisfactory but only 53 feet (32.7%, in 43 patients) were completely asymptomatic. The mean (SD) correction of hallux valgus, intermetatarsal, and distal metatarsal articular angles was 18.5° (8.8°), 5.7° (3.3°), and 16.6° (8.8°), respectively. Multivariable regression analysis identified that an American Society of Anesthesiologists grade of >1 (Incident Rate Ratio (IRR) = 1.67, p-value = 0.011) and recurrent deformity (IRR = 1.77, p-value = 0.003) were associated with significantly worse MOXFQ scores. No correlation was found between the severity of deformity, the type, or degree of surgical correction and the outcome. When using a validated outcome score for the assessment of outcome after surgery for hallux valgus, the long-term results are worse than expected when compared with the short- and mid-term outcomes, with 25.9% of patients dissatisfied at a mean follow-up of 5.2 years. ©2015 The British Editorial Society of Bone & Joint Surgery.

  10. Experimental Validation Of An Innovative Procedure For The Rolling Noise Correction

    Directory of Open Access Journals (Sweden)

    Viscardi Massimo

    2017-01-01

    Full Text Available Within the broad context of rolling-noise evaluation for rail vehicles, the aim of this paper is the development, implementation and experimental testing of a new method for rail roughness calculation according to FprCEN/TR 16891:2015, and the subsequent evaluation of correction parameters for the measured rolling noise when the rail roughness is not compliant. In practice, rolling-noise tests are very often executed on standard in-service rails whose roughness profiles differ markedly from the reference profile prescribed by the ISO 3095 procedure. This difference frequently leads to excess noise that needs to be evaluated and corrected for a proper characterization of the phenomenon. In the paper, the implementation of the procedure is presented and then verified in an operational experimental context; predicted and measured data are compared and discussed.

  11. Environmental education curriculum evaluation questionnaire: A reliability and validity study

    Science.gov (United States)

    Minner, Daphne Diane

    The intention of this research project was to bridge the gap between social science research and application to the environmental domain through the development of a theoretically derived instrument designed to give educators a template by which to evaluate environmental education curricula. The theoretical base for instrument development was provided by several developmental theories such as Piaget's theory of cognitive development, Developmental Systems Theory, Life-span Perspective, as well as curriculum research within the area of environmental education. This theoretical base fueled the generation of a list of components which were then translated into a questionnaire with specific questions relevant to the environmental education domain. The specific research question for this project is: Can a valid assessment instrument based largely on human development and education theory be developed that reliably discriminates high, moderate, and low quality in environmental education curricula? The types of analyses conducted to answer this question were interrater reliability (percent agreement, Cohen's Kappa coefficient, Pearson's Product-Moment correlation coefficient), test-retest reliability (percent agreement, correlation), and criterion-related validity (correlation). Face validity and content validity were also assessed through thorough reviews. Overall results indicate that 29% of the questions on the questionnaire demonstrated a high level of interrater reliability and 43% of the questions demonstrated a moderate level of interrater reliability. Seventy-one percent of the questions demonstrated a high test-retest reliability and 5% a moderate level. Fifty-five percent of the questions on the questionnaire were reliable (high or moderate) both across time and raters. Only eight questions (8%) did not show either interrater or test-retest reliability. The global overall rating of high, medium, or low quality was reliable across both coders and time, indicating

  12. Analysis association of milk fat and protein percent in quantitative ...

    African Journals Online (AJOL)

    Analysis of the association of milk fat and protein percent in quantitative trait locus ... African Journal of Biotechnology ... Protein and fat percent in milk are high-priority criteria for economic purposes and selection programs in dairy cattle.

  13. Study of tip loss corrections using CFD rotor computations

    DEFF Research Database (Denmark)

    Shen, Wen Zhong; Zhu, Wei Jun; Sørensen, Jens Nørkær

    2014-01-01

    Tip loss correction is known to play an important role in engineering predictions of wind turbine performance. There are two different types of tip loss corrections: tip corrections applied to the momentum theory and tip corrections applied to the airfoil data. In this paper, we study the latter using detailed CFD computations for wind turbines with a sharp tip. Using the technique of determination of angle of attack and the CFD results for a NordTank 500 kW rotor, airfoil data are extracted and a new tip loss function on airfoil data is derived. To validate, BEM computations with the new tip loss function are carried out and compared with CFD results for the NordTank 500 kW turbine and the NREL 5 MW turbine. Comparisons show that BEM with the new tip loss function can correctly predict the loading near the blade tip.
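
    For context, the classical tip correction applied to momentum theory (the first of the two types mentioned above) is the Prandtl/Glauert tip-loss factor. The sketch below is a minimal illustration of that standard factor as used inside BEM codes, not of the new airfoil-data correction derived in the paper; blade count, radius and flow-angle values are placeholder assumptions.

```python
# Minimal sketch of the classical Prandtl tip-loss factor used in BEM codes.
# This is the standard momentum-theory correction, not the paper's new
# airfoil-data correction; the numbers below are placeholders.
import numpy as np

def prandtl_tip_loss(B, r, R, phi):
    """Prandtl tip-loss factor F for B blades at radius r (rotor radius R),
    with local flow angle phi in radians."""
    f = B * (R - r) / (2.0 * r * np.sin(phi))
    return (2.0 / np.pi) * np.arccos(np.exp(-f))

B, R = 3, 20.0                        # assumed 3-bladed rotor, 20 m radius
r = np.linspace(0.2 * R, 0.999 * R, 5)
phi = np.radians(8.0)                 # assumed constant local flow angle

for ri, F in zip(r, prandtl_tip_loss(B, r, R, phi)):
    print(f"r = {ri:6.2f} m  ->  F = {F:.3f}")
```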

  14. Serum Predictors of Percent Lean Mass in Young Adults.

    Science.gov (United States)

    Lustgarten, Michael S; Price, Lori L; Phillips, Edward M; Kirn, Dylan R; Mills, John; Fielding, Roger A

    2016-08-01

    Lustgarten, MS, Price, LL, Phillips, EM, Kirn, DR, Mills, J, and Fielding, RA. Serum predictors of percent lean mass in young adults. J Strength Cond Res 30(8): 2194-2201, 2016-Elevated lean (skeletal muscle) mass is associated with increased muscle strength and anaerobic exercise performance, whereas low levels of lean mass are associated with insulin resistance and sarcopenia. Therefore, studies aimed at obtaining an improved understanding of mechanisms related to the quantity of lean mass are of interest. Percent lean mass (total lean mass/body weight × 100) in 77 young subjects (18-35 years) was measured with dual-energy x-ray absorptiometry. Twenty analytes and 296 metabolites were evaluated with the use of the standard chemistry screen and mass spectrometry-based metabolomic profiling, respectively. Sex-adjusted multivariable linear regression was used to determine serum analytes and metabolites significantly (p ≤ 0.05 and q ≤ 0.30) associated with the percent lean mass. Two enzymes (alkaline phosphatase and serum glutamate oxaloacetate aminotransferase) and 29 metabolites were found to be significantly associated with the percent lean mass, including metabolites related to microbial metabolism, uremia, inflammation, oxidative stress, branched-chain amino acid metabolism, insulin sensitivity, glycerolipid metabolism, and xenobiotics. Use of sex-adjusted stepwise regression to obtain a final covariate predictor model identified the combination of 5 analytes and metabolites as overall predictors of the percent lean mass (model R = 82.5%). Collectively, these data suggest that a complex interplay of various metabolic processes underlies the maintenance of lean mass in young healthy adults.

  15. Gamma ray auto absorption correction evaluation methodology

    International Nuclear Information System (INIS)

    Gugiu, Daniela; Roth, Csaba; Ghinescu, Alecse

    2010-01-01

    Neutron activation analysis (NAA) is a well established nuclear technique, suited to investigating microstructural or elemental composition, and can be applied to studies of a large variety of samples. Work with large samples involves, besides the development of large irradiation devices with well-known neutron field characteristics, knowledge of the perturbing phenomena and an adequate evaluation of correction factors such as neutron self-shielding, extended source correction and gamma ray auto absorption. The objective of the work presented in this paper is to validate an appropriate methodology for evaluating the gamma ray auto absorption correction for large inhomogeneous samples. For this purpose a benchmark experiment has been defined - a simple gamma ray transmission experiment that is easy to reproduce. The gamma ray attenuation in pottery samples has been measured and computed using the MCNP5 code. The results show a good agreement between the computed and measured values, proving that the proposed methodology is able to evaluate the correction factors. (authors)
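
    As a simple point of reference for the kind of factor being validated, the self-absorption correction for an idealized slab geometry can be written down in closed form. The sketch below evaluates it for an assumed attenuation coefficient and thickness; this is far simpler than the inhomogeneous-sample case treated in the paper.

```python
# Simple reference case (assumed numbers): gamma-ray self-absorption correction
# for a uniform slab source of thickness t and linear attenuation coefficient mu,
# viewed perpendicular to the slab. The mean transmission is (1 - exp(-mu*t))/(mu*t),
# so the correction factor applied to the measured count rate is its inverse.
import math

def slab_self_absorption_correction(mu_cm, t_cm):
    x = mu_cm * t_cm
    if x < 1e-9:
        return 1.0                  # negligible attenuation
    return x / (1.0 - math.exp(-x))

mu = 0.20   # assumed linear attenuation coefficient (1/cm) for the sample matrix
t = 2.5     # assumed sample thickness (cm)
print(f"self-absorption correction factor: {slab_self_absorption_correction(mu, t):.3f}")
```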

  16. Correction of the horizontal closed orbit at all energies

    International Nuclear Information System (INIS)

    Degueurce, L.; Nakach, A.

    The method is accomplished in two steps. At average energy, the closed orbit is corrected by remotely realigning the focusing quadrupoles by a known amount. The closed orbit created by this position adjustment of the quadrupoles is valid during the whole cycle; at low energy, however, an additional closed orbit distortion remains because of constant currents or parasitic fields whose effects decrease as the energy increases. This residual orbit is corrected during injection by dipolar correction fields, located inside the quadrupoles and fed by direct currents. The closed orbit resulting from the superposition of the two types of corrections and defects is therefore brought back to within ±2.5 mm of the center of the quadrupoles.

  17. Practical aspects of data-driven motion correction approach for brain SPECT

    International Nuclear Information System (INIS)

    Kyme, A.Z.; Hutton, B.F.; Hatton, R.L.; Skerrett, D.; Barnden, L.

    2002-01-01

    Full text: Patient motion can cause image artifacts in SPECT despite restraining measures. Data-driven detection and correction of motion can be achieved by comparison of acquired data with the forward-projections. By optimising the orientation of a partial reconstruction, parameters can be obtained for each misaligned projection and applied to update this volume using a 3D reconstruction algorithm. Phantom validation was performed to explore practical aspects of this approach. Noisy projection datasets simulating a patient undergoing at least one fully 3D movement during acquisition were compiled from various projections of the digital Hoffman brain phantom. Motion correction was then applied to the reconstructed studies. Correction success was assessed visually and quantitatively. Resilience with respect to subset order and missing data in the reconstruction and updating stages, detector geometry considerations, and the need for implementing an iterated correction were assessed in the process. Effective correction of the corrupted studies was achieved. Visually, artifactual regions in the reconstructed slices were suppressed and/or removed. Typically the ratio of mean square difference between the corrected and reference studies compared to that between the corrupted and reference studies was > 2. Although components of the motions are missed using a single-head implementation, improvement was still evident in the correction. The need for multiple iterations in the approach was small due to the bulk of misalignment errors being corrected in the first pass. Dispersion of subsets for reconstructing and updating the partial reconstruction appears to give optimal correction. Further validation is underway using triple-head physical phantom data. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc
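
    The data-driven idea described above can be illustrated with a toy problem: estimate a patient shift for one projection by minimising the mean-square difference between the acquired projection and the forward projection of the reconstruction estimate. The sketch below uses a 2D phantom, a parallel-beam "column sum" projector and a single lateral shift as simplifying assumptions; a real SPECT implementation estimates six rigid-body parameters per misaligned projection with a full system model.

```python
# Toy, runnable sketch of data-driven motion estimation for one projection:
# find the shift of the reconstruction whose forward projection best matches
# the acquired projection. A 2D phantom and "projection = column sum" stand in
# for a real SPECT system model; only a 1D lateral shift is estimated.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize_scalar

def forward_project(image):
    return image.sum(axis=0)            # toy parallel projection onto the detector

# Toy "reconstruction": a square phantom.
recon = np.zeros((64, 64))
recon[20:44, 24:40] = 1.0

true_shift = 3.5                        # the unknown patient movement (pixels)
acquired = forward_project(nd_shift(recon, (0.0, true_shift), order=1))

def cost(s):
    predicted = forward_project(nd_shift(recon, (0.0, s), order=1))
    return np.mean((predicted - acquired) ** 2)

res = minimize_scalar(cost, bounds=(-10, 10), method="bounded")
print(f"estimated shift: {res.x:.2f} pixels (true: {true_shift})")
```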

  18. Validation of the Intelligibility in Context Scale for Jamaican Creole-Speaking Preschoolers.

    Science.gov (United States)

    Washington, Karla N; McDonald, Megan M; McLeod, Sharynne; Crowe, Kathryn; Devonish, Hubert

    2017-08-15

    To describe validation of the Intelligibility in Context Scale (ICS; McLeod, Harrison, & McCormack, 2012a) and ICS-Jamaican Creole (ICS-JC; McLeod, Harrison, & McCormack, 2012b) in a sample of typically developing 3- to 6-year-old Jamaicans. One hundred and forty-five preschooler-parent dyads participated in the study. Parents completed the 7-item ICS (n = 145) and ICS-JC (n = 98) to rate children's speech intelligibility (5-point scale) across communication partners (parents, immediate family, extended family, friends, acquaintances, strangers). Preschoolers completed the Diagnostic Evaluation of Articulation and Phonology (DEAP; Dodd, Hua, Crosbie, Holm, & Ozanne, 2006) in English and Jamaican Creole to establish speech-sound competency. For this sample, we examined validity and reliability (interrater, test-retest, internal consistency) evidence using measures of speech-sound production: (a) percentage of consonants correct, (b) percentage of vowels correct, and (c) percentage of phonemes correct. ICS and ICS-JC ratings showed preschoolers were always (5) to usually (4) understood across communication partners (ICS, M = 4.43; ICS-JC, M = 4.50). Both tools demonstrated excellent internal consistency (α = .91) and high interrater and test-retest reliability. Significant correlations between the two tools and between each measure and language-specific percentage of consonants correct, percentage of vowels correct, and percentage of phonemes correct provided criterion-validity evidence. A positive correlation between the ICS and age further strengthened validity evidence for that measure. Both tools show promising evidence of reliability and validity in describing functional speech intelligibility for this group of typically developing Jamaican preschoolers.

  19. Electroweak corrections in the hadronic production of heavy quarks; Elektroschwache Korrekturen bei der hadronischen Erzeugung schwerer Quarks

    Energy Technology Data Exchange (ETDEWEB)

    Scharf, Andreas Bernhard

    2008-06-27

    In this thesis the electroweak corrections to top-quark pair production and to the production of bottom-quark jets were studied. In particular, mixed one-loop amplitudes as well as the interferences of electroweak Born amplitudes with one-loop QCD corrections were calculated. These corrections are of great importance for the experimental analyses at the LHC. For both processes compact analytical results for the virtual and real corrections were obtained. For the Tevatron and the LHC the corrections to the total cross section for top-quark pair production were determined. At the Tevatron these corrections amount only to a few permille and are therefore presumably negligible for the total cross section. For the LHC they amount to a few percent and are thus of the same order of magnitude as the expected next-to-leading-order QCD corrections to the total cross section. For the differential distributions in M_{tt̄} and p_T the relative corrections lie, depending on the Higgs mass, between +4% and -6%. A comparison between the integrated distributions in p_T and M_{tt̄} and the estimated statistical error shows that these corrections are presently not important. At the LHC, for the M_{tt̄} and p_T distributions, large negative corrections of up to -15% and -20%, respectively, were found for M_{tt̄} = 5 TeV (p_T = 2 TeV), depending on the Higgs mass. The comparison between the integrated distributions and the statistical error shows that the weak O(α) corrections at the LHC are phenomenologically relevant. This is especially true for the search for new physics at large M_{tt̄}. For bottom-jet production the weak O(α) corrections to the differential and integrated p_T distributions were calculated for a single and a double b-tag. At the Tevatron the corrections for a single b-tag for the

  20. Gamma camera correction system and method for using the same

    International Nuclear Information System (INIS)

    Inbar, D.; Gafni, G.; Grimberg, E.; Bialick, K.; Koren, J.

    1986-01-01

    A gamma camera is described which consists of: (a) a detector head that includes photodetectors for producing output signals in response to radiation stimuli which are emitted by a radiation field and which interact with the detector head and produce an event; (b) signal processing circuitry responsive to the output signals of the photodetectors for producing a sum signal that is a measure of the total energy of the event; (c) an energy discriminator having a relatively wide window for comparison with the sum signal; (d) the signal processing circuitry including coordinate computation circuitry for operating on the output signals, and calculating an X,Y coordinate of an event when the sum signal lies within the window of the energy discriminator; (e) an energy correction table containing spatially dependent energy windows for producing a validation signal if the total energy of an event lies within the window associated with the X,Y coordinates of the event; (f) the signal processing circuitry including a dislocation correction table containing spatially dependent correction factors for converting the X,Y coordinates of an event to relocated coordinates in accordance with correction factors determined by the X,Y coordinates; (g) a digital memory for storing a map of the radiation field; and (h) means for recording an event at its relocated coordinates in the memory if the energy correction table produces a validation signal

  1. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    Science.gov (United States)

    Sattarivand, Mike; Kusano, Maggie; Poon, Ian; Caldwell, Curtis

    2012-11-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or
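
    As a rough illustration of the region-based GTM framework that both the GTM and sGTM methods build on, the sketch below corrects observed regional mean values by solving the linear system formed by the regional spill-over fractions. The 3-region transfer matrix and observed means are invented numbers, and the sGTM-specific derivation is not reproduced here.

```python
# Minimal sketch of region-based GTM partial volume correction (generic form,
# not the sGTM variant): observed regional means are modelled as a mixing of
# the true regional activities by a transfer matrix whose entry (i, j) is the
# fraction of region j's signal recovered in region i.
# The matrix and observed values below are invented for illustration.
import numpy as np

gtm = np.array([
    [0.80, 0.15, 0.05],   # region 1 receives spill-in from regions 2 and 3
    [0.10, 0.75, 0.15],
    [0.05, 0.10, 0.85],
])

observed = np.array([12.0, 9.5, 6.0])   # measured mean uptake per region

true_activity = np.linalg.solve(gtm, observed)
print("partial-volume-corrected activities:", np.round(true_activity, 2))
```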

  2. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Caldwell, Curtis; Kusano, Maggie; Poon, Ian

    2012-01-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or

  3. CTF Void Drift Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States)

    2015-10-26

    This milestone report is a summary of work performed in support of expansion of the validation and verification (V&V) matrix for the thermal-hydraulic subchannel code, CTF. The focus of this study is on validating the void drift modeling capabilities of CTF and verifying the supporting models that impact the void drift phenomenon. CTF uses a simple turbulent-diffusion approximation to model lateral cross-flow due to turbulent mixing and void drift. The void drift component of the model is based on the Lahey and Moody model. The models are a function of two-phase mass, momentum, and energy distribution in the system; therefore, it is necessary to correctly model the flow distribution in rod bundle geometry as a first step to correctly calculating the void distribution due to void drift.

  4. Power corrections to the HTL effective Lagrangian of QED

    Science.gov (United States)

    Carignano, Stefano; Manuel, Cristina; Soto, Joan

    2018-05-01

    We present compact expressions for the power corrections to the hard thermal loop (HTL) Lagrangian of QED in d space dimensions. These are corrections of order (L/T)², valid for momenta L ≪ T, where T is the temperature. In the limit d → 3 we achieve a consistent regularization of both infrared and ultraviolet divergences, which respects the gauge symmetry of the theory. Dimensional regularization also allows us to witness subtle cancellations of infrared divergences. We also discuss how to generalize our results in the presence of a chemical potential, so as to obtain the power corrections to the hard dense loop (HDL) Lagrangian.

  5. Fission track dating of volcanic glass: experimental evidence for the validity of the Size-Correction Method

    International Nuclear Information System (INIS)

    Bernardes, C.; Hadler Neto, J.C.; Lattes, C.M.G.; Araya, A.M.O.; Bigazzi, G.; Cesar, M.F.

    1986-01-01

    Two techniques may be employed for correcting thermally lowered fission track ages on glass material: the so-called 'size-correction method' and the 'plateau method'. Several results from fission track dating of obsidian were analysed in order to compare the model underlying the size-correction method with experimental evidence. The results from this work can be summarized as follows: 1) The assumption that the mean sizes of spontaneous and induced etched tracks are equal in samples unaffected by partial fading is supported by experimental results. If reactor effects exist, such as an enhancement of the etching rate in the irradiated fraction due to radiation damage and/or to the fact that induced fission releases a slightly greater quantity of energy than spontaneous fission, their influence on the size-correction method is very small. 2) The above two correction techniques produce concordant results. 3) Several samples from the same obsidian, affected to different degrees by 'instantaneous' as well as 'continuous' natural fading, were analysed: the curve showing the decrease of spontaneous track mean size vs. the fraction of spontaneous tracks lost by fading is in close agreement with the correction curve constructed for the same obsidian by imparting artificial thermal treatments to induced tracks. From the above points one can conclude that the assumptions on which the size-correction method is based are well supported, at least to a first approximation. (Author) [pt

  6. Atmospheric correction at AERONET locations: A new science and validation data set

    Science.gov (United States)

    Wang, Y.; Lyapustin, A.I.; Privette, J.L.; Morisette, J.T.; Holben, B.

    2009-01-01

    This paper describes an Aerosol Robotic Network (AERONET)-based Surface Reflectance Validation Network (ASRVN) and its data set of spectral surface bidirectional reflectance and albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 × 50 km² subsets of MODIS level 1B (L1B) data from the MODIS adaptive processing system and AERONET aerosol and water-vapor information. Then, it performs an atmospheric correction (AC) for about 100 AERONET sites based on accurate radiative-transfer theory with complex quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud-mask (CM) algorithm based on a time-series analysis, and an AC algorithm using ancillary AERONET aerosol and water-vapor data. The AC is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated over a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the Li Sparse-Ross Thick (LSRT) model of the bidirectional reflectance factor (BRF). The ASRVN takes several steps to ensure high quality of results: 1) the filtering of opaque clouds by a CM algorithm; 2) the development of an aerosol filter to remove residual semitransparent and subpixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) imposing the requirement of consistency of the new solution with the previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to surface changes using the last day of measurements; and 5) development of a seasonal backup spectral BRF database to increase data coverage. The ASRVN provides gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel. The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo
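
    To illustrate the kind of kernel-model fit the AC step relies on, the sketch below solves a linear least-squares problem for three LSRT-style coefficients given precomputed kernel values. The kernel values and surface reflectances are invented, and the actual ASRVN fitting (with its quality control and 16-day accumulation) is considerably more involved.

```python
# Minimal sketch (invented numbers): fit the three coefficients of a
# kernel-based BRF model, rho ≈ kL + kG * K_geo + kV * K_vol, to a set of
# atmospherically corrected surface reflectances by linear least squares.
# K_geo and K_vol would normally be computed from the sun/view geometry of
# each observation; here they are just placeholder values.
import numpy as np

K_geo = np.array([-0.05, -0.12, -0.20, -0.08, -0.15])  # assumed geometric kernel values
K_vol = np.array([0.10, 0.25, 0.40, 0.18, 0.33])        # assumed volumetric kernel values
rho   = np.array([0.031, 0.036, 0.041, 0.034, 0.039])   # assumed surface reflectances

A = np.column_stack([np.ones_like(K_geo), K_geo, K_vol])
(kL, kG, kV), *_ = np.linalg.lstsq(A, rho, rcond=None)
print(f"kL = {kL:.4f}, kG = {kG:.4f}, kV = {kV:.4f}")
```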

  7. Validating year 2000 compliance

    NARCIS (Netherlands)

    A. van Deursen (Arie); P. Klint (Paul); M.P.A. Sellink

    1997-01-01

    Validating year 2000 compliance involves the assessment of the correctness and quality of a year 2000 conversion. This entails inspecting both the quality of the conversion process followed, and of the result obtained, i.e., the converted system. This document provides an

  8. 12 CFR 741.4 - Insurance premium and one percent deposit.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Insurance premium and one percent deposit. 741... Insurance premium and one percent deposit. (a) Scope. This section implements the requirements of Section... payment of an insurance premium. (b) Definitions. For purposes of this section: (1) Available assets ratio...

  9. Modelling and experimental validation for off-design performance of the helical heat exchanger with LMTD correction taken into account

    Energy Technology Data Exchange (ETDEWEB)

    Phu, Nguyen Minh; Trinh, Nguyen Thi Minh [Vietnam National University, Ho Chi Minh City (Viet Nam)

    2016-07-15

    Today the helical coil heat exchanger is widely employed due to its dominant advantages. In this study, a mathematical model was established to predict the off-design performance of the helical heat exchanger. The model is based on the LMTD and e-NTU methods, where an LMTD correction factor is taken into account to increase accuracy. An experimental apparatus was set up to validate the model. Results showed that the errors in thermal duty, outlet hot fluid temperature, outlet cold fluid temperature, shell-side pressure drop, and tube-side pressure drop were respectively ±5%, ±1%, ±1%, ±5% and ±2%. Diagrams of dimensionless operating parameters and a regression function were also presented as design maps, a fast calculator for use in the design and operation of the exchanger. The study is expected to be a good tool for estimating off-design conditions of single-phase helical heat exchangers.
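
    As a reminder of the rating relation such models build on, the sketch below computes the heat duty from the log-mean temperature difference with a correction factor F applied, Q = U·A·F·LMTD. The UA value, temperatures and F are illustrative assumptions, not data from the study.

```python
# Minimal sketch (assumed numbers): counter-flow LMTD with a correction factor F,
# as used in rating calculations of the form Q = U * A * F * LMTD.
import math

def lmtd_counterflow(T_hot_in, T_hot_out, T_cold_in, T_cold_out):
    dT1 = T_hot_in - T_cold_out
    dT2 = T_hot_out - T_cold_in
    if abs(dT1 - dT2) < 1e-9:        # limiting case: equal end differences
        return dT1
    return (dT1 - dT2) / math.log(dT1 / dT2)

UA = 850.0        # assumed overall conductance U*A in W/K
F = 0.93          # assumed LMTD correction factor for the helical geometry
lmtd = lmtd_counterflow(T_hot_in=80.0, T_hot_out=55.0,
                        T_cold_in=20.0, T_cold_out=40.0)
Q = UA * F * lmtd
print(f"LMTD = {lmtd:.2f} K, duty Q = {Q/1000:.2f} kW")
```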

  10. Higher dimensional operator corrections to the goldstino Goldberger-Treiman vertices

    International Nuclear Information System (INIS)

    Lee, T.

    2000-01-01

    The goldstino-matter interactions given by the Goldberger-Treiman relations can receive higher dimensional operator corrections of O(q²/M²), where M denotes the mass of the mediators through which SUSY breaking is transmitted. These corrections in the gauge mediated SUSY breaking models arise from loop diagrams, and an explicit calculation of such corrections is presented. It is emphasized that the Goldberger-Treiman vertices are valid only below the mediator scale, and at higher energies goldstinos decouple from the MSSM fields. The implication of this fact for gravitino cosmology in GMSB models is mentioned. (orig.)

  11. Error correcting circuit design with carbon nanotube field effect transistors

    Science.gov (United States)

    Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong

    2018-03-01

    In this work, a parallel error correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors (CNTFETs), and its function is validated by simulation in HSpice with the Stanford model. A grouping method able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and with traditional MOSFETs is also compared; the former shows a 34.4% reduction in layout area and a 56.9% reduction in power consumption.
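
    For readers unfamiliar with the underlying code, the sketch below is a plain software encode/decode of the (7, 4) Hamming code with single-bit error correction. It illustrates the logic the circuit realises in hardware, not the CNTFET design itself.

```python
# Minimal software illustration of the (7,4) Hamming code: encode 4 data bits,
# flip one bit, then locate and correct it from the syndrome.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # generator matrix (data bits then parity)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],     # parity-check matrix, H @ G.T = 0 (mod 2)
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return (np.array(data4) @ G) % 2

def correct(received7):
    syndrome = (H @ received7) % 2
    if syndrome.any():                               # nonzero syndrome: one bit in error
        err_pos = next(i for i in range(7)
                       if np.array_equal(H[:, i], syndrome))
        received7 = received7.copy()
        received7[err_pos] ^= 1
    return received7

codeword = encode([1, 0, 1, 1])
corrupted = codeword.copy(); corrupted[5] ^= 1        # inject a single-bit error
print("corrected:", correct(corrupted), "original:", codeword)
```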

  12. Two Challenges of Correct Validation in Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Thomas eNowotny

    2014-09-01

    Full Text Available Supervised pattern recognition is the process of mapping patterns to class labels that define their meaning. The core methods for pattern recognition have been developed by machine learning experts, but due to their broad success an increasing number of non-experts are now employing and refining them. In this perspective I will discuss the challenge of correct validation of supervised pattern recognition systems, in particular when employed by non-experts. To illustrate the problem I will give three examples of common errors that I have encountered in the last year. Much of this challenge can be addressed by strict procedure in validation, but there are remaining problems of correctly interpreting comparative work on exemplary data sets, which I will elucidate on the example of the well-used MNIST data set of handwritten digits.
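
    One common validation error of the kind this perspective warns about is applying data-dependent preprocessing (scaling, feature selection) to the whole data set before cross-validation. The hedged sketch below shows how keeping those steps inside each fold via a pipeline avoids the leak; the data set and model choices are illustrative only, not taken from the article.

```python
# Illustrative sketch of one validation pitfall: data-dependent preprocessing
# must be fitted inside each cross-validation fold, not on the full data set.
# Wrapping the steps in a Pipeline makes scikit-learn refit them per fold.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),              # fitted on training folds only
    ("select", SelectKBest(f_classif, k=10)),
    ("clf", SVC()),
])

scores = cross_val_score(pipe, X, y, cv=5)    # leakage-free estimate
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```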

  13. Matter power spectrum and the challenge of percent accuracy

    International Nuclear Information System (INIS)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Reed, Darren S.; Onions, Julian; Pearce, Frazer R.; Smith, Robert E.; Springel, Volker; Scoccimarro, Roman

    2016-01-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying the main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10⁹ h⁻¹ M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
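
    As a toy illustration of the final measurement step the paper discusses, the sketch below estimates a binned power spectrum from a density-contrast field on a grid via an FFT. The grid, box size and normalisation convention are assumptions for illustration; real pipelines additionally handle mass assignment, window deconvolution and shot noise.

```python
# Toy sketch: binned power spectrum P(k) of a density-contrast field on a
# periodic grid via FFT. Normalisation convention assumed here:
# delta_k = FFT(delta) * V / N_cells and P(k) = <|delta_k|^2> / V.
# Mass-assignment deconvolution and shot-noise subtraction are omitted.
import numpy as np

def power_spectrum(delta, box_size, n_bins=16):
    n = delta.shape[0]
    volume = box_size**3
    delta_k = np.fft.fftn(delta) * (volume / n**3)
    power = (np.abs(delta_k)**2 / volume).ravel()

    k_axis = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k_axis, k_axis, k_axis, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()

    bins = np.linspace(k_axis[1], k_mag.max(), n_bins + 1)
    which = np.digitize(k_mag, bins)
    counts = np.bincount(which, minlength=n_bins + 2)[1:n_bins + 1]
    sums = np.bincount(which, weights=power, minlength=n_bins + 2)[1:n_bins + 1]
    k_centers = 0.5 * (bins[1:] + bins[:-1])
    return k_centers, sums / np.maximum(counts, 1)

rng = np.random.default_rng(0)
delta = rng.normal(size=(32, 32, 32))   # white-noise field stands in for a simulation
k, pk = power_spectrum(delta, box_size=100.0)
print(np.round(k, 3))
print(np.round(pk, 1))
```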

  14. Validation of missed space-group symmetry in X-ray powder diffraction structures with dispersion-corrected density functional theory

    DEFF Research Database (Denmark)

    Hempler, Daniela; Schmidt, Martin U.; Van De Streek, Jacco

    2017-01-01

    More than 600 molecular crystal structures with correct, incorrect and uncertain space-group symmetry were energy-minimized with dispersion-corrected density functional theory (DFT-D, PBE-D3). For the purpose of determining the correct space-group symmetry the required tolerance on the atomic...... with missed symmetry were investigated by dispersion-corrected density functional theory. In 98.5% of the cases the correct space group is found....

  15. 49 CFR 173.182 - Barium azide-50 percent or more water wet.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Barium azide-50 percent or more water wet. 173.182 Section 173.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Class 1 and Class 7 § 173.182 Barium azide—50 percent or more water wet. Barium azide—50 percent or more...

  16. Higher order corrections in quantum electrodynamics

    International Nuclear Information System (INIS)

    Rafael, E.

    1977-01-01

    Theoretical contributions to high-order corrections in purely leptonic systems, such as electrons and muons, muonium (μ⁺e⁻) and positronium (e⁺e⁻), are reviewed to establish the validity of quantum electrodynamics (QED). Two types of QED contributions to the anomalous magnetic moments are considered, from diagrams with lines of one fermion type and those with lines of two fermion types. The contributions up to eighth order are compared to the data available with differing accuracy. Good agreement is stated within the experimental errors. The experimental accuracy of the muonium hyperfine structure and of the radiative corrections to the decay of positronium are compared to that attainable in theoretical calculations. The need for higher precision in both experimental data and theoretical calculations is stated.

  17. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  18. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  19. Data-driven motion correction in brain SPECT

    International Nuclear Information System (INIS)

    Kyme, A.Z.; Hutton, B.F.; Hatton, R.L.; Skerrett, D.W.

    2002-01-01

    Patient motion can cause image artifacts in SPECT despite restraining measures. Data-driven detection and correction of motion can be achieved by comparison of acquired data with the forward-projections. By optimising the orientation of the reconstruction, parameters can be obtained for each misaligned projection and applied to update this volume using a 3D reconstruction algorithm. Digital and physical phantom validation was performed to investigate this approach. Noisy projection data simulating at least one fully 3D patient head movement during acquisition were constructed by projecting the digital Hoffman brain phantom at various orientations. Motion correction was applied to the reconstructed studies. The importance of including attenuation effects in the estimation of motion and the need for implementing an iterated correction were assessed in the process. Correction success was assessed visually for artifact reduction, and quantitatively using a mean square difference (MSD) measure. Physical Hoffman phantom studies with deliberate movements introduced during the acquisition were also acquired and motion corrected. Effective artifact reduction in the simulated corrupt studies was achieved by motion correction. Typically the MSD ratio between the corrected and reference studies compared to the corrupted and reference studies was > 2. Motion correction could be achieved without inclusion of attenuation effects in the motion estimation stage, providing simpler implementation and greater efficiency. Moreover the additional improvement with multiple iterations of the approach was small. Improvement was also observed in the physical phantom data, though the technique appeared limited here by an object symmetry. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc

  20. SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams

    Energy Technology Data Exchange (ETDEWEB)

    Haywood, J [Mercy Health Partners, Muskegon, MI (United States)

    2016-06-15

    Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I’ve calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Method: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. D_max dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second check calculations. Patient curvature was estimated by the largest sphere that aligns to the patient contour, and appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ∼(0.8–1.0) and ∼(0.9–1.01) respectively for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ∼(3–14)% to ∼(0–2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels, < 5%. While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.
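
    To make the correction step concrete, the sketch below applies a geometry factor and a heterogeneity factor to a TG-71-style electron hand calculation. The dose, output and factor values are invented placeholders, and the factor lookup by sphere radius and tissue density follows the idea described above rather than the author's actual tables.

```python
# Illustrative sketch (invented numbers): apply geometry and heterogeneity
# correction factors to an electron monitor-unit hand calculation so that it
# better matches an electron Monte Carlo TPS. The CF values would normally be
# read from tables indexed by energy, cone, curvature (sphere fit) and density.
prescribed_dose = 200.0      # cGy per fraction (assumed)
output_cGy_per_MU = 1.0      # flat-phantom calibration output (assumed)
pdd_at_depth = 0.98          # percent depth dose / 100 at the calc point (assumed)

cf_geometry = 0.93           # assumed factor for patient curvature (sphere fit)
cf_heterogeneity = 0.97      # assumed factor from tissue stopping-power ratio

mu_flat = prescribed_dose / (output_cGy_per_MU * pdd_at_depth)
mu_corrected = mu_flat / (cf_geometry * cf_heterogeneity)

print(f"flat-phantom hand calc: {mu_flat:.1f} MU")
print(f"with corrections:       {mu_corrected:.1f} MU")
```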

  1. Matter power spectrum and the challenge of percent accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Reed, Darren S. [Institute for Computational Science, University of Zurich, Winterthurerstrasse 190, 8057 Zurich (Switzerland); Onions, Julian; Pearce, Frazer R. [School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD (United Kingdom); Smith, Robert E. [Department of Physics and Astronomy, University of Sussex, Brighton, BN1 9QH (United Kingdom); Springel, Volker [Heidelberger Institut für Theoretische Studien, 69118 Heidelberg (Germany); Scoccimarro, Roman, E-mail: aurel@physik.uzh.ch, E-mail: teyssier@physik.uzh.ch, E-mail: dpotter@physik.uzh.ch, E-mail: stadel@physik.uzh.ch, E-mail: julian.onions@nottingham.ac.uk, E-mail: reed@physik.uzh.ch, E-mail: r.e.smith@sussex.ac.uk, E-mail: volker.springel@h-its.org, E-mail: Frazer.Pearce@nottingham.ac.uk, E-mail: rs123@nyu.edu [Center for Cosmology and Particle Physics, Department of Physics, New York University, NY 10003, New York (United States)

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying the main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10⁹ h⁻¹ M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.

  2. United States home births increase 20 percent from 2004 to 2008.

    Science.gov (United States)

    MacDorman, Marian F; Declercq, Eugene; Mathews, T J

    2011-09-01

    After a gradual decline from 1990 to 2004, the percentage of births occurring at home increased from 2004 to 2008 in the United States. The objective of this report was to examine the recent increase in home births and the factors associated with this increase from 2004 to 2008. United States birth certificate data on home births were analyzed by maternal demographic and medical characteristics. In 2008, there were 28,357 home births in the United States. From 2004 to 2008, the percentage of births occurring at home increased by 20 percent from 0.56 percent to 0.67 percent of United States births. This rise was largely driven by a 28 percent increase in the percentage of home births for non-Hispanic white women, for whom more than 1 percent of births occur at home. At the same time, the risk profile for home births has been lowered, with substantial drops in the percentage of home births of infants who are born preterm or at low birthweight, and declines in the percentage of home births that occur to teen and unmarried mothers. Twenty-seven states had statistically significant increases in the percentage of home births from 2004 to 2008; only four states had declines. The 20 percent increase in United States home births from 2004 to 2008 is a notable development that will be of interest to practitioners and policymakers. (BIRTH 38:3 September 2011). © 2011, Copyright the Authors. Journal compilation © 2011, Wiley Periodicals, Inc.

  3. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, is a highly nonlinear effect, known to create high biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non paralyzing), and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Count Per Second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead time on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Result show very good correspondence to empirical results.
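
    The backward-extrapolation idea described above can be prototyped in a few lines: impose increasingly large artificial (non-paralyzing) dead times on a list of event timestamps, record the surviving count rate, fit the trend, and extrapolate to zero imposed dead time. The Poisson event stream and the straight-line fit below are simplifying assumptions for illustration; the actual method works on measured detector data and fission-chain statistics.

```python
# Toy sketch of dead-time correction by backward extrapolation: artificially
# impose growing non-paralyzing dead times on an event list, then extrapolate
# the measured count rate back to zero imposed dead time.
# A Poisson event stream and a linear extrapolation are assumed here.
import numpy as np

rng = np.random.default_rng(1)
true_rate = 5.0e4                      # assumed true count rate (1/s)
t_total = 2.0                          # seconds of simulated acquisition
events = np.cumsum(rng.exponential(1.0 / true_rate, int(true_rate * t_total * 1.2)))
events = events[events < t_total]

def apply_dead_time(times, tau):
    """Keep only events separated by at least tau from the last accepted one."""
    kept, last = 0, -np.inf
    for t in times:
        if t - last >= tau:
            kept += 1
            last = t
    return kept / t_total

taus = np.linspace(0.5e-6, 5e-6, 10)           # imposed artificial dead times (s)
rates = np.array([apply_dead_time(events, tau) for tau in taus])

slope, intercept = np.polyfit(taus, rates, 1)  # extrapolate losses back to tau = 0
print(f"observed rate at smallest tau: {rates[0]:.0f} cps")
print(f"extrapolated rate at tau = 0 : {intercept:.0f} cps (true {true_rate:.0f})")
```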

  4. Continuous quantum error correction for non-Markovian decoherence

    International Nuclear Information System (INIS)

    Oreshkov, Ognyan; Brun, Todd A.

    2007-01-01

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics
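
    The scaling contrast summarized above can be restated compactly (an illustrative paraphrase of the abstract, not an equation quoted from the paper). Writing λ for the system-bath coupling, Γ for a Markovian decoherence rate and κ for the error-correction rate, the effective parameters behave as

        \lambda_{\mathrm{eff}} \;\propto\; \frac{\lambda}{\kappa^{2}} \quad \text{(non-Markovian bath, Zeno regime)},
        \qquad
        \Gamma_{\mathrm{eff}} \;\propto\; \frac{\Gamma}{\kappa} \quad \text{(Markovian noise)},

    with dimensionful prefactors omitted.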

  5. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
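
    A minimal sketch of how age-correction values of this kind could be derived, assuming NHANES-style median better-ear thresholds by age are available as arrays. The polynomial degree, the reference age of 20 years and the function names are assumptions for illustration, not the authors' or OSHA's actual procedure.

        import numpy as np

        def age_correction_table(ages, median_thresholds_db, ref_age=20, degree=3):
            # Fit a polynomial to median better-ear thresholds (dB HL) versus age and
            # return age-correction values relative to the reference age, for ages 20-75.
            fitted = np.poly1d(np.polyfit(ages, median_thresholds_db, degree))
            eval_ages = np.arange(ref_age, 76)
            corrections = fitted(eval_ages) - fitted(ref_age)
            return dict(zip(eval_ages.tolist(), np.round(corrections, 1).tolist()))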

  6. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  7. Wall correction model for wind tunnels with open test section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2006-01-01

    In the paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. In the model...... good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections....

  8. Developing a model for validation and prediction of bank customer ...

    African Journals Online (AJOL)

    Credit risk is the most important risk faced by banks. The main approaches banks use to reduce credit risk are correct validation using the final status and the validation of model parameters. Large bank reserves and lost or outstanding facilities indicate the lack of appropriate validation models in the banking network.

  9. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    Science.gov (United States)

    2014-01-01

    Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling

  10. Well Completion Report for Corrective Action Unit 443 Central Nevada Test Area Nye County, Nevada

    International Nuclear Information System (INIS)

    2009-01-01

    The drilling program described in this report is part of a new corrective action strategy for Corrective Action Unit (CAU) 443 at the Central Nevada Test Area (CNTA). The drilling program included drilling two boreholes, geophysical well logging, construction of two monitoring/validation (MV) wells with piezometers (MV-4 and MV-5), development of monitor wells and piezometers, recompletion of two existing wells (HTH-1 and UC-1-P-1S), removal of pumps from existing wells (MV-1, MV-2, and MV-3), redevelopment of piezometers associated with existing wells (MV-1, MV-2, and MV-3), and installation of submersible pumps. The new corrective action strategy includes initiating a new 5-year proof-of-concept monitoring period to validate the compliance boundary at CNTA (DOE 2007). The new 5-year proof-of-concept monitoring period begins upon completion of the new monitor wells and collection of samples for laboratory analysis. The new strategy is described in the Corrective Action Decision Document/Corrective Action Plan addendum (DOE 2008a) that the Nevada Division of Environmental Protection approved (NDEP 2008)

  11. Well Completion Report for Corrective Action Unit 443 Central Nevada Test Area Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-12-01

    The drilling program described in this report is part of a new corrective action strategy for Corrective Action Unit (CAU) 443 at the Central Nevada Test Area (CNTA). The drilling program included drilling two boreholes, geophysical well logging, construction of two monitoring/validation (MV) wells with piezometers (MV-4 and MV-5), development of monitor wells and piezometers, recompletion of two existing wells (HTH-1 and UC-1-P-1S), removal of pumps from existing wells (MV-1, MV-2, and MV-3), redevelopment of piezometers associated with existing wells (MV-1, MV-2, and MV-3), and installation of submersible pumps. The new corrective action strategy includes initiating a new 5-year proof-of-concept monitoring period to validate the compliance boundary at CNTA (DOE 2007). The new 5-year proof-of-concept monitoring period begins upon completion of the new monitor wells and collection of samples for laboratory analysis. The new strategy is described in the Corrective Action Decision Document/Corrective Action Plan addendum (DOE 2008a) that the Nevada Division of Environmental Protection approved (NDEP 2008).

  12. How I Love My 80 Percenters

    Science.gov (United States)

    Maturo, Anthony J.

    2002-01-01

    Don't ever take your support staff for granted. By support staff, I mean the people in personnel, logistics, and finance; the ones who can make things happen with a phone call or a signature, or by the same token frustrate you to no end by their inaction; these are people you must depend on. I've spent a lot of time thinking about how to cultivate relationships with my support staff that work to the advantage of both of us. The most important thing that I have learned working with people, any people--and I will tell you how I learned this in a minute--is that there are some folks you just can't motivate, so forget it, don't try; others you certainly can with a little psychology and some effort; and the best of the bunch, what I call the 80 percenters, you don't need to motivate because they're already on the team and performing beautifully. The ones you can't change are rocks. Face up to it, and just kick them out of your way. I have a reputation with the people who don't want to perform or be part of the team. They don't come near me. If someone's a rock, I pick up on it right away, and I will walk around him or her to find someone better. The ones who can be motivated I take time to nurture. I consider them my projects. A lot of times these wannabes are people who want to help but don't know how. Listen, you can work with them. Lots of people in organizations have the mindset that all that matters are the regulations. God forbid if you ever work outside those regulations. They've got one foot on that regulation and they're holding it tight like a baby holds a blanket. What you're looking for is that first sign that their minds are opening. Usually you hear it in their vocabulary. What used to sound like "We can't do that ... the regulations won't allow it ... we have never done this before," well, suddenly that changes to "We have options ... let's take a look at the options ... let me research this and get back to you." The 80 percenters you want to nurture too, but

  13. Experimental validation of heterogeneity-corrected dose-volume prescription on respiratory-averaged CT images in stereotactic body radiotherapy for moving tumors

    International Nuclear Information System (INIS)

    Nakamura, Mitsuhiro; Miyabe, Yuki; Matsuo, Yukinori; Kamomae, Takeshi; Nakata, Manabu; Yano, Shinsuke; Sawada, Akira; Mizowaki, Takashi; Hiraoka, Masahiro

    2012-01-01

    The purpose of this study was to experimentally assess the validity of heterogeneity-corrected dose-volume prescription on respiratory-averaged computed tomography (RACT) images in stereotactic body radiotherapy (SBRT) for moving tumors. Four-dimensional computed tomography (CT) data were acquired while a dynamic anthropomorphic thorax phantom with a solitary target moved. The motion pattern was based on cos(t) with a constant respiration period of 4.0 sec along the longitudinal axis of the CT couch. The extent of motion (A1) was set in the range of 0.0–12.0 mm at 3.0-mm intervals. Treatment planning with the heterogeneity-corrected dose-volume prescription was designed on RACT images. A new commercially available Monte Carlo algorithm for a well-commissioned 6-MV photon beam was used for dose calculation. Dosimetric effects of intrafractional tumor motion were then investigated experimentally under the same conditions as the 4D CT simulation using the dynamic anthropomorphic thorax phantom, films, and an ionization chamber. The passing rate of the γ index was 98.18%, with the criteria of 3 mm/3%. The dose error between the planned and the measured isocenter dose in moving condition was within ± 0.7%. From the dose area histograms on the film, the mean ± standard deviation of the dose covering 100% of the cross section of the target was 102.32 ± 1.20% (range, 100.59–103.49%). By contrast, the irradiated areas receiving more than 95% dose for A1 = 12 mm were 1.46 and 1.33 times larger than those for A1 = 0 mm in the coronal and sagittal planes, respectively. This phantom study demonstrated that the cross section of the target received 100% dose under moving conditions in both the coronal and sagittal planes, suggesting that the heterogeneity-corrected dose-volume prescription on RACT images is acceptable in SBRT for moving tumors.
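
    For reference, the film comparison above relies on the gamma index. A simplified one-dimensional, brute-force version is sketched below under the same 3 mm/3% criteria; the actual evaluation was two-dimensional, and the global normalization to the maximum reference dose is our assumption.

        import numpy as np

        def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                           dist_crit_mm=3.0, dose_crit_frac=0.03):
            # Global 1-D gamma: for each evaluated point, take the minimum combined
            # distance/dose penalty over all reference points.
            dose_norm = dose_crit_frac * ref_dose.max()
            gammas = np.empty(len(eval_pos))
            for i, (x, d) in enumerate(zip(eval_pos, eval_dose)):
                dist_term = ((ref_pos - x) / dist_crit_mm) ** 2
                dose_term = ((ref_dose - d) / dose_norm) ** 2
                gammas[i] = np.sqrt(np.min(dist_term + dose_term))
            return gammas

    The passing rate quoted in the abstract then corresponds to 100 * np.mean(gammas <= 1.0).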

  14. Corrected entropy of Friedmann-Robertson-Walker universe in tunneling method

    International Nuclear Information System (INIS)

    Zhu, Tao; Ren, Ji-Rong; Li, Ming-Fan

    2009-01-01

    In this paper, we study the thermodynamic quantities of the Friedmann-Robertson-Walker (FRW) universe by using the tunneling formalism beyond the semiclassical approximation developed by Banerjee and Majhi [25]. For this we first calculate the corrected Hawking-like temperature on the apparent horizon by considering both scalar particle and fermion tunneling. With this corrected Hawking-like temperature, the explicit expressions of the corrected entropy of the apparent horizon for various gravity theories, including Einstein gravity, Gauss-Bonnet gravity, Lovelock gravity, f(R) gravity and scalar-tensor gravity, are computed. Our results show that the corrected entropy formula for different gravity theories can be written in a general expression (4.39) of the same form. It is shown that this expression is also valid for black holes. This might imply that the expression for the corrected entropy derived from the tunneling method is independent of the gravity theory, the spacetime and the dimension of the spacetime. Moreover, it is concluded that the basic thermodynamical property that the corrected entropy on the apparent horizon is a state function is satisfied by the FRW universe.

  15. Validation of hydropower models in ARISTO (Validering av vattenkraftmodeller i ARISTO)

    OpenAIRE

    Lundbäck, Maja

    2013-01-01

    This master's thesis was carried out to validate hydropower models of a turbine governor, a Kaplan turbine and a Francis turbine in the power system simulator ARISTO at Svenska Kraftnät. The validation was made in three steps. The first step was to make sure the models were implemented correctly in the simulator. The second was to compare the simulation results from the Kaplan turbine model to data from a real hydropower plant. The comparison was made to see how the models could generate simulation result ...

  16. Comparison of accuracy of uncorrected and corrected sagittal tomography in detection of mandibular condyle erosions: An ex vivo study

    Directory of Open Access Journals (Sweden)

    Asieh Zamani Naser

    2010-01-01

    Full Text Available Background: Radiographic examination of the TMJ is indicated when there are clinical signs of pathological conditions, mainly bone changes, that may influence the diagnosis and treatment planning. The purpose of this study was to evaluate and compare the validity and diagnostic accuracy of uncorrected and corrected sagittal tomographic images in the detection of simulated mandibular condyle erosions. Methods: Simulated lesions were created in 10 dry mandibles using a dental round bur. Using uncorrected and corrected sagittal tomography techniques, mandibular condyles were imaged by a Cranex Tome X-ray unit before and after creating the lesions. The uncorrected and corrected tomography images were examined by two independent observers for absence or presence of a lesion. The accuracy for detecting mandibular condyle lesions was expressed as sensitivity, specificity, and validity values. Differences between the two radiographic modalities were tested with the Wilcoxon test for paired data. Inter-observer agreement was determined by Cohen's Kappa. Results: The sensitivity, specificity and validity were 45%, 85% and 30% in uncorrected sagittal tomographic images, respectively, and 70%, 92.5% and 60% in corrected sagittal tomographic images, respectively. There was a significant statistical difference between the accuracy of uncorrected and corrected sagittal tomography in detection of mandibular condyle erosions (P = 0.016). The inter-observer agreement was slight for uncorrected sagittal tomography and moderate for corrected sagittal tomography. Conclusion: The accuracy of corrected sagittal tomography is significantly higher than that of uncorrected sagittal tomography. Therefore, corrected sagittal tomography seems to be a better modality in detection of mandibular condyle erosions.

  17. New formula for calculation of cobalt-60 percent depth dose

    International Nuclear Information System (INIS)

    Tahmasebi Birgani, M. J.; Ghorbani, M.

    2005-01-01

    On the basis of percent depth dose calculation, the application of dosimetry in radiotherapy has an important role to play in reducing the chance of tumor recurrence. The aim of this study is to introduce a new formula for calculating the central axis percent depth doses of a Cobalt-60 beam. Materials and Methods: In the present study, based on the British Journal of Radiology table, nine new formulas are developed and evaluated for depths of 0.5-30 cm and fields of (4×4) to (45×45) cm². To evaluate the agreement between the formulas and the table, the average of the absolute differences between the values was used, and the formula with the least average was selected as the best fitted formula. The Microsoft Excel 2000 and DataFit 8.0 software packages were used to perform the calculations. Results: The results of this study indicated that one amongst the nine formulas gave a better agreement with the percent depth doses listed in the table of the British Journal of Radiology. The new formula has two parts in terms of log (A/P): the first part is a linear function of depth in the range of 0.5 to 5 cm, and the other is a second order polynomial of depth in the range of 6 to 30 cm. The average of the differences between the tabulated and the calculated data using the formula (Δ) is equal to 0.3152. Discussion and Conclusion: Therefore, the calculated percent depth dose data based on this formula have a better agreement with the published data for a Cobalt-60 source. This formula could be used to calculate the percent depth dose for depths and field sizes not listed in the British Journal of Radiology table.
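
    A sketch of the two-part functional form described above. The abstract does not give the fitted coefficients, so the numbers below are placeholders only; A/P denotes the field area-to-perimeter ratio in cm and d the depth in cm.

        import math

        def percent_depth_dose(d_cm, a_over_p_cm,
                               c_shallow=(70.0, -3.0, 12.0),       # placeholder coefficients
                               c_deep=(95.0, -4.5, 0.03, 15.0)):   # placeholder coefficients
            # Two-part model in terms of log(A/P): linear in depth for 0.5-5 cm,
            # second-order polynomial in depth for 6-30 cm.
            lg = math.log10(a_over_p_cm)
            if 0.5 <= d_cm <= 5.0:
                a0, a1, a2 = c_shallow
                return a0 + a1 * d_cm + a2 * lg
            if 6.0 <= d_cm <= 30.0:
                b0, b1, b2, b3 = c_deep
                return b0 + b1 * d_cm + b2 * d_cm ** 2 + b3 * lg
            raise ValueError("depth outside the fitted range")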

  18. Ambiguities in the grid-inefficiency correction for Frisch-Grid Ionization Chambers

    International Nuclear Information System (INIS)

    Al-Adili, A.; Hambsch, F.-J.; Bencardino, R.; Oberstedt, S.; Pomp, S.

    2012-01-01

    Ionization chambers with Frisch grids have been very successfully applied to neutron-induced fission-fragment studies during the past 20 years. They are radiation resistant and can be easily adapted to the experimental conditions. The use of Frisch grids has the advantage of removing the angular dependency from the charge induced on the anode plate. However, due to the Grid Inefficiency (GI) in shielding the charges, the anode signal remains slightly angular dependent. The correction for the GI is, however, essential to determine the correct energy of the ionizing particles. GI corrections can amount to a few percent of the anode signal. Presently, two contradicting correction methods are considered in the literature: the first method adds the angular-dependent part of the signal to the signal pulse height; the second method subtracts the former from the latter. Both additive and subtractive approaches were investigated in an experiment where a Twin Frisch-Grid Ionization Chamber (TFGIC) was employed to detect the spontaneous fission fragments (FF) emitted by a ²⁵²Cf source. Two parallel-wire grids with different wire spacing (1 and 2 mm, respectively) were used individually in the same chamber side. All the other experimental conditions were unchanged. The 2 mm grid featured more than double the GI of the 1 mm grid. The induced charge on the anode in both measurements was compared, before and after GI correction. Before GI correction, the 2 mm grid resulted in a lower pulse-height distribution than the 1 mm grid. After applying both GI corrections to both measurements, only the additive approach led to consistent, grid-independent pulse-height distributions. The application of the subtractive correction, on the contrary, led to inconsistent, grid-dependent results. It is also shown that the impact of either of the correction methods is small on the FF mass distributions of ²³⁵U(n_th, f).

  19. Next-to-leading order strong interaction corrections to the ΔF = 2 effective Hamiltonian in the MSSM

    International Nuclear Information System (INIS)

    Ciuchini, Marco; Franco, E.; Guadagnoli, D.; Lubicz, Vittorio; Porretti, V.; Silvestrini, L.

    2006-01-01

    We compute the next-to-leading order strong interaction corrections to gluino-mediated ΔF = 2 box diagrams in the Minimal Supersymmetric Standard Model. These corrections are given by two-loop diagrams which we have calculated in three different regularization schemes in the mass insertion approximation. We obtain the next-to-leading order Wilson coefficients of the ΔF = 2 effective Hamiltonian relevant for neutral meson mixings. We find that the matching scale uncertainty is largely reduced at the next-to-leading order, typically from about 10-15% to a few percent.

  20. Heat transfer corrected isothermal model for devolatilization of thermally-thick biomass particles

    DEFF Research Database (Denmark)

    Luo, Hao; Wu, Hao; Lin, Weigang

    The isothermal model used in current computational fluid dynamics (CFD) models neglects the internal heat transfer during biomass devolatilization. This assumption is not reasonable for thermally-thick particles. To solve this issue, a heat transfer corrected isothermal model is introduced. In this model......, two heat transfer correction coefficients: HT-correction of heat transfer and HR-correction of reaction, are defined to cover the effects of internal heat transfer. A series of single biomass devolatilization cases have been modeled to validate this model; the results show that devolatilization behaviors...... of both thermally-thick and thermally-thin particles are predicted reasonably by using the heat transfer corrected model, while the isothermal model overestimates the devolatilization rate and heating rate for thermally-thick particles. This model probably has better performance than the isothermal model when it is coupled....

  1. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    Science.gov (United States)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    A subset of the attributes of hydrologic feature data in the national geographic census are not clear; the current solution to this problem is manual filling, which is inefficient and liable to mistakes. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on the analysis of the structure characteristics and topological relations, we put forward three basic principles of correction, which include network proximity, structure robustness and topology ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data is used to validate the method. The results show that our method is highly reasonable and efficient.

  2. Validation of a Dish-Based Semiquantitative Food Questionnaire in Rural Bangladesh

    Directory of Open Access Journals (Sweden)

    Pi-I. D. Lin

    2017-01-01

    Full Text Available A locally validated tool was needed to evaluate long-term dietary intake in rural Bangladesh. We assessed the validity of a 42-item dish-based semi-quantitative food frequency questionnaire (FFQ) using two 3-day food diaries (FDs). We selected a random subset of 47 families (190 participants) from a longitudinal arsenic biomonitoring study in Bangladesh to administer the FFQ. Two 3-day FDs were completed by the female head of the households and we used an adult male equivalent method to estimate the FD for the other participants. Food and nutrient intakes measured by FFQ and FD were compared using Pearson's and Spearman's correlation, paired t-test, percent difference, cross-classification, weighted Kappa, and Bland–Altman analysis. Results showed good validity for total energy intake (paired t-test, p < 0.05; percent difference <10%), with no presence of proportional bias (Bland–Altman correlation, p > 0.05). After energy-adjustment and de-attenuation for within-person variation, macronutrient intakes had excellent correlations ranging from 0.55 to 0.70. Validity for micronutrients was mixed. High intraclass correlation coefficients (ICCs) were found for most nutrients between the two seasons, except vitamin A. This dish-based FFQ provided adequate validity to assess and rank long-term dietary intake in rural Bangladesh for most food groups and nutrients, and should be useful for studying dietary-disease relationships.

  3. Magnetic Resonance-based Motion Correction for Quantitative PET in Simultaneous PET-MR Imaging.

    Science.gov (United States)

    Rakvongthai, Yothin; El Fakhri, Georges

    2017-07-01

    Motion degrades image quality and quantitation of PET images, and is an obstacle to quantitative PET imaging. Simultaneous PET-MR offers a tool that can be used for correcting the motion in PET images by using anatomic information from MR imaging acquired concurrently. Motion correction can be performed by transforming a set of reconstructed PET images into the same frame or by incorporating the transformation into the system model and reconstructing the motion-corrected image. Several phantom and patient studies have validated that MR-based motion correction strategies have great promise for quantitative PET imaging in simultaneous PET-MR. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Higher-order conductivity corrections to the Casimir force

    International Nuclear Information System (INIS)

    Bezerra, Valdir Barbosa; Klimchitskaya, Galina; Mostepanenko, Vladimir

    2000-01-01

    Full text follows: Considerable recent attention has been focused on the new experiments on measuring the Casimir force. To be confident that experimental data fit theory at a level of several percent, a variety of corrections to the ideal expression for the Casimir force should be taken into account. One of the main corrections at small separations between interacting bodies is the one due to the finite conductivity of the boundary metal. This correction has its origin in the non-zero penetration depth δ₀ of electromagnetic vacuum oscillations into the metal (for a perfect metal of infinitely large conductivity δ₀ = 0). The other quantity of the dimension of length is the space separation a between two plates or a plate and a sphere. Their ratio δ₀/a is the natural perturbation parameter in powers of which the corrections to the Casimir force due to finite conductivity can be expanded. Such an expansion works well for all separations a >> δ₀ (i.e., for separations larger than 100-150 nm). The first-order term of this expansion was calculated almost forty years ago, and the second-order one in 1985 [1]. These two terms are not sufficient for the comparison of theory with precision modern experiments. In this talk we report the results of paper [2] where the third- and fourth-order terms in the δ₀/a expansion of the Casimir force were calculated for the first time. They gave the possibility to achieve an excellent agreement between theory and experiment. (author)
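
    Schematically, the expansion discussed above has the structure shown below (a sketch only; the coefficients c_k depend on the plasma wavelength of the boundary metal and are given in the cited works, not reproduced here). For ideal parallel plates, the zeroth-order force per unit area is the standard Casimir expression:

        F(a) \;=\; F_{\mathrm{ideal}}(a)\,\Bigl[\,1 + \sum_{k=1}^{4} c_k \,\bigl(\delta_0 / a\bigr)^{k}\Bigr],
        \qquad
        F_{\mathrm{ideal}}(a) \;=\; -\,\frac{\pi^{2}\hbar c}{240\,a^{4}}.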

  5. Validation suite for MCNP

    International Nuclear Information System (INIS)

    Mosteller, Russell D.

    2002-01-01

    Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code [1] has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite have been constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.

  6. NLO QCD corrections to Higgs pair production including dimension-6 operators

    Energy Technology Data Exchange (ETDEWEB)

    Groeber, Ramona [INFN, Sezione di Roma Tre, Roma (Italy); Muehlleitner, Margarete; Streicher, Juraj [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany). Institut fuer Theoretische Physik; Spira, Michael [Paul Scherrer Institut, Villigen (Switzerland)

    2016-07-01

    The role of the Higgs boson has developed from the long-sought particle into a tool for exploring beyond Standard Model (BSM) physics. While the Higgs boson signal strengths are close to the values predicted in the Standard Model (SM), the trilinear Higgs self-coupling can still deviate significantly from the SM expectations in some BSM scenarios. The Effective Field Theory (EFT) framework provides a way to describe these deviations in a rather model-independent way, by including higher-dimensional operators which modify the Higgs boson couplings and induce novel couplings not present in the SM. The trilinear Higgs self-coupling is accessible in Higgs pair production, for which gluon fusion is the dominant production channel. The next-to-leading order (NLO) QCD corrections to this process are important for a proper prediction of the cross section and are known in the limit of heavy top quark masses. In our work, we provide the NLO QCD corrections in the large top quark mass limit to Higgs pair production including dimension-6 operators. The various higher-dimensional contributions are affected differently by the QCD corrections, leading to deviations in the relative NLO QCD corrections of several percent, while modifying the cross section by up to an order of magnitude.

  7. Software for Correcting the Dynamic Error of Force Transducers

    Directory of Open Access Journals (Sweden)

    Naoki Miyashita

    2014-07-01

    Full Text Available Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.

  8. Correction of the angular dependence of MatriXX Evolution detectors and its impact in the IMRT and VMAT treatment validation; Correccion de la dependencia angular de los detectores del MatriXX Evolution y su impacto en la validacion de tratamientos de IMRT y VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Casares Magaz, O.; Seongheon, K.; Hernandez Armas, J.; Papanikolaou, N.

    2014-07-01

    The purpose of the study was to create detector element-specific angular correction factors for each detector of the MatriXX planar ion chamber array and compare them to the vendor-default angular correction factors. Additionally, the impact of both sets of factors on the gamma index was quantified using two proposed corrections. The correction factor of each element is determined by irradiating the detector at different incidences and taking the ratio of the calculated expected dose to the MatriXX measured dose as a function of gantry angle. To evaluate its impact, sixty-five pre-irradiated patient-specific dose validations were re-analyzed using the gamma index with 3%/3 mm, 2%/2 mm and 1%/1 mm criteria. The factors for 6 MV were found to differ (7%) from the default ones for specific angles, particularly from 85° to 95°. For 10 MV, differences (1.0%) existed when correction factors were created using various ROIs. For the two proposed corrections, absolute differences for 3%/3 mm, 2%/2 mm, and 1%/1 mm were up to 1.5%, 4.2% and 4.1% (p < 0.01), respectively. Large differences between the default and specific factors were noted for 6 MV and led to an improvement of the absolute gamma index value of up to 4.2%. In general, the gamma index value increases for patient-specific dose validations when using device-specific factors. (Author)
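
    A minimal sketch of the element-wise angular correction described above; the array shapes, the interpolation across gantry angles and the function names are our assumptions, not the clinical implementation.

        import numpy as np

        def build_angular_correction(expected_dose, measured_dose):
            # expected_dose, measured_dose: arrays of shape (n_angles, n_detectors).
            # The correction factor of each element is the ratio of calculated
            # expected dose to measured dose, as a function of gantry angle.
            return expected_dose / measured_dose

        def apply_correction(measured, correction_factors, gantry_angles, angle):
            # Correct one acquisition by interpolating each detector's factor to the
            # gantry angle at which the measurement was taken.
            cf = np.array([np.interp(angle, gantry_angles, correction_factors[:, i])
                           for i in range(measured.size)])
            return measured * cf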

  9. Process validation for radiation processing

    International Nuclear Information System (INIS)

    Miller, A.

    1999-01-01

    Process validation concerns the establishment of the irradiation conditions that will lead to the desired changes of the irradiated product. Process validation therefore establishes the link between absorbed dose and the characteristics of the product, such as degree of crosslinking in a polyethylene tube, prolongation of shelf life of a food product, or degree of sterility of the medical device. Detailed international standards are written for the documentation of radiation sterilization, such as EN 552 and ISO 11137, and the steps of process validation that are described in these standards are discussed in this paper. They include material testing for the documentation of the correct functioning of the product, microbiological testing for selection of the minimum required dose and dose mapping for documentation of attainment of the required dose in all parts of the product. The process validation must be maintained by reviews and repeated measurements as necessary. This paper presents recommendations and guidance for the execution of these components of process validation. (author)

  10. Validation test of advanced technology for IPV nickel-hydrogen flight cells: Update

    Science.gov (United States)

    Smithrick, John J.; Hall, Stephen W.

    1992-01-01

    Individual pressure vessel (IPV) nickel-hydrogen technology was advanced at NASA Lewis and under Lewis contracts with the intention of improving cycle life and performance. One advancement was to use 26 percent potassium hydroxide (KOH) electrolyte to improve cycle life. Another advancement was to modify the state-of-the-art cell design to eliminate identified failure modes. The modified design is referred to as the advanced design. A breakthrough in the low-earth-orbit (LEO) cycle life of IPV nickel-hydrogen cells has been previously reported. The cycle life of boiler plate cells containing 26 percent KOH electrolyte was about 40,000 LEO cycles compared to 3,500 cycles for cells containing 31 percent KOH. The boiler plate test results are in the process of being validated using flight hardware and real time LEO testing at the Naval Weapons Support Center (NWSC), Crane, Indiana under a NASA Lewis Contract. An advanced 125 Ah IPV nickel-hydrogen cell was designed. The primary function of the advanced cell is to store and deliver energy for long-term, LEO spacecraft missions. The new features of this design are: (1) use of 26 percent rather than 31 percent KOH electrolyte; (2) use of a patented catalyzed wall wick; (3) use of serrated-edge separators to facilitate gaseous oxygen and hydrogen flow within the cell, while still maintaining physical contact with the wall wick for electrolyte management; and (4) use of a floating rather than a fixed stack (state-of-the-art) to accommodate nickel electrode expansion due to charge/discharge cycling. The significant improvements resulting from these innovations are: extended cycle life; enhanced thermal, electrolyte, and oxygen management; and accommodation of nickel electrode expansion. The advanced cell design is in the process of being validated using real time LEO cycle life testing at NWSC, Crane, Indiana. An update of validation test results confirming this technology is presented.

  11. Relationship between breast sound speed and mammographic percent density

    Science.gov (United States)

    Sak, Mark; Duric, Nebojsa; Boyd, Norman; Littrup, Peter; Myc, Lukasz; Faiz, Muhammad; Li, Cuiping; Bey-Knight, Lisa

    2011-03-01

    Despite some shortcomings, mammography is currently the standard of care for breast cancer screening and diagnosis. However, breast ultrasound tomography is a rapidly developing imaging modality that has the potential to overcome the drawbacks of mammography. It is known that women with high breast densities have a greater risk of developing breast cancer. Measuring breast density is accomplished through the use of mammographic percent density, defined as the ratio of fibroglandular to total breast area. Using an ultrasound tomography (UST) prototype, we created sound speed images of the patient's breast, motivated by the fact that sound speed in a tissue is proportional to the density of the tissue. The purpose of this work is to compare the acoustic performance of the UST system with the measurement of mammographic percent density. A cohort of 251 patients was studied using both imaging modalities and the results suggest that the volume averaged breast sound speed is significantly related to mammographic percent density. The Spearman correlation coefficient was found to be 0.73 for the 175 film mammograms and 0.69 for the 76 digital mammograms obtained. Since sound speed measurements do not require ionizing radiation or physical compression, they have the potential to form the basis of a safe, more accurate surrogate marker of breast density.

  12. Validation of missed space-group symmetry in X-ray powder diffraction structures with dispersion-corrected density functional theory.

    Science.gov (United States)

    Hempler, Daniela; Schmidt, Martin U; van de Streek, Jacco

    2017-08-01

    More than 600 molecular crystal structures with correct, incorrect and uncertain space-group symmetry were energy-minimized with dispersion-corrected density functional theory (DFT-D, PBE-D3). For the purpose of determining the correct space-group symmetry the required tolerance on the atomic coordinates of all non-H atoms is established to be 0.2 Å. For 98.5% of 200 molecular crystal structures published with missed symmetry, the correct space group is identified; there are no false positives. Very small, very symmetrical molecules can end up in artificially high space groups upon energy minimization, although this is easily detected through visual inspection. If the space group of a crystal structure determined from powder diffraction data is ambiguous, energy minimization with DFT-D provides a fast and reliable method to select the correct space group.

  13. Psychometric properties of a pictorial scale measuring correct condom use.

    Science.gov (United States)

    Li, Qing; Li, Xiaoming; Stanton, Bonita; Wang, Bo

    2011-02-01

    This study was designed to assess the psychometric properties of a pictorial scale of correct condom use (PSCCU) using data from female sex workers (FSWs) in China. The psychometric properties assessed in this study include construct validity by correlations and known-group validation. The study sample included 396 FSWs in Guangxi, China. The results demonstrate adequate validity of the PSCCU among the study population. FSWs with a higher level of education scored significantly higher on the PSCCU than those with a lower level of education. FSWs who self-reported appropriate condom use with stable partners scored significantly higher on PSCCU than their counterparts. The PSCCU should provide HIV/STI prevention researchers and practitioners with a valid alternative assessment tool among high-risk populations, especially in resource-limited settings.

  14. Isotopic and criticality validation for actinide-only burnup credit

    International Nuclear Information System (INIS)

    Fuentes, E.; Lancaster, D.; Rahimi, M.

    1997-01-01

    The techniques used for actinide-only burnup credit isotopic validation and criticality validation are presented and discussed. Trending analyses have been incorporated into both methodologies, requiring biases and uncertainties to be treated as a function of the trending parameters. The isotopic validation is demonstrated using the SAS2H module of SCALE 4.2, with the 27BURNUPLIB cross section library; correction factors are presented for each of the actinides in the burnup credit methodology. For the criticality validation, the demonstration is performed with the CSAS module of SCALE 4.2 and the 27BURNUPLIB, resulting in a validated upper safety limit

  15. On the Atmospheric Correction of Antarctic Airborne Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Martin Black

    2014-05-01

    Full Text Available The first airborne hyperspectral campaign in the Antarctic Peninsula region was carried out by the British Antarctic Survey and partners in February 2011. This paper presents an insight into the applicability of currently available radiative transfer modelling and atmospheric correction techniques for processing airborne hyperspectral data in this unique coastal Antarctic environment. Results from the Atmospheric and Topographic Correction version 4 (ATCOR-4) package reveal absolute reflectance values somewhat in line with laboratory measured spectra, with Root Mean Square Error (RMSE) values of 5% in the visible near infrared (0.4–1 µm) and 8% in the shortwave infrared (1–2.5 µm). Residual noise remains present due to the absorption by atmospheric gases and aerosols, but certain parts of the spectrum match laboratory measured features very well. This study demonstrates that commercially available packages for carrying out atmospheric correction are capable of correcting airborne hyperspectral data in the challenging environment present in Antarctica. However, it is anticipated that future results from atmospheric correction could be improved by measuring in situ atmospheric data to generate atmospheric profiles and aerosol models, or with the use of multiple ground targets for calibration and validation.

  16. Experience with high percent step load decrease from full power in NPP Krsko

    International Nuclear Information System (INIS)

    Vukovic, V.

    2000-01-01

    The control system of NPP Krsko is designed to automatically control the reactor in the power range between 15 and 100 percent of rated power for the following design transients: a 10 percent step change in load; 5 percent per minute loading and unloading; and a step full load decrease with the aid of automatically initiated and controlled steam dump. Because station operation below 15 percent of rated power is expected only for a period of time during startup or standby conditions, automatic control below 15 percent is not provided. The steam dump accomplishes the following functional tasks: it permits the nuclear plant to accept a sudden 95 percent loss of load without incurring a reactor trip; it removes stored energy and residual heat following a reactor trip and brings the plant to equilibrium no-load conditions without actuation of the steam generator safety valves; and it permits control of the steam generator pressure at no-load conditions and permits a manually controlled cooldown of the plant. The first two functional tasks are controlled by Tavg. The third is controlled by steam pressure. Interlocks minimise any possibility of an inadvertent actuation of the steam dump system. This paper discusses relationships between the designed (described) characteristics of the plant and the data obtained during startup and/or the first ten years of operation. (author)

  17. Electromagnetic corrections to the hadronic vacuum polarization of the photon within QEDL and QEDM

    Science.gov (United States)

    Bussone, Andrea; Della Morte, Michele; Janowski, Tadeusz

    2018-03-01

    We compute the leading QED corrections to the hadronic vacuum polarization (HVP) of the photon, relevant for the determination of leptonic anomalous magnetic moments, a_l. We work in the electroquenched approximation and use dynamical QCD configurations generated by the CLS initiative with two degenerate flavors of nonperturbatively O(a)-improved Wilson fermions. We consider QEDL and QEDM to deal with the finite-volume zero modes. We compare results for the Wilson loops with exact analytical determinations. In addition, we make sure that the volumes and photon masses used in QEDM are such that the correct dispersion relation is reproduced by the energy levels extracted from the charged pion two-point functions. Finally, we compare results for pion masses and the HVP between QEDL and QEDM. For the vacuum polarization, corrections with respect to the pure QCD case, at fixed pion masses, turn out to be at the percent level.

  18. Determining spherical lens correction for astronaut training underwater.

    Science.gov (United States)

    Porter, Jason; Gibson, C Robert; Strauss, Samuel

    2011-09-01

    To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration astronauts while training underwater. The replica space suit's helmet contains curved visors that induce refractive power when submersed in water. Anterior surface powers and thicknesses were measured for the helmet's protective and inside visors. The impact of each visor on the helmet's refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet's total induced spherical power underwater and the astronaut's manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. The helmet's visors induced a total power of -2.737 D when placed underwater. The required underwater spherical correction (F_W) was linearly related to the spectacle plane spherical correction in air (F_Air): F_W = F_Air + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (r = 0.971), with 70% of eyes having a difference in magnitude of ... The model accurately predicts the actual values worn underwater and can be applied (more generally) to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater.
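
    The linear relation quoted above translates directly into a small helper; the function name is ours, and the +2.356 D offset is the value reported in the abstract.

        def underwater_spherical_correction(f_air_diopters: float) -> float:
            # Distance spherical correction (D) to wear under the helmet in water,
            # from the manifest spectacle-plane correction in air: F_W = F_Air + 2.356 D.
            return f_air_diopters + 2.356

        # Example: a -3.00 D correction in air corresponds to about -0.64 D underwater.
        print(round(underwater_spherical_correction(-3.00), 2))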

  19. School Designed To Use 80 Percent Less Energy

    Science.gov (United States)

    American School and University, 1975

    1975-01-01

    The new Terraset Elementary School in Reston, Virginia, uses earth as a cover for the roof area and for about 80 percent of the wall area. A heat recovery system will be used with solar collectors playing a primary role in heating and cooling. (Author/MLF)

  20. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  1. Attenuation correction factors for cylindrical, disc and box geometry

    International Nuclear Information System (INIS)

    Agarwal, Chhavi; Poi, Sanhita; Mhatre, Amol; Goswami, A.; Gathibandhe, M.

    2009-01-01

    In the present study, attenuation correction factors have been experimentally determined for samples having cylindrical, disc and box geometry and compared with the attenuation correction factors calculated by the Hybrid Monte Carlo (HMC) method [C. Agarwal, S. Poi, A. Goswami, M. Gathibandhe, R.A. Agrawal, Nucl. Instr. and Meth. A 597 (2008) 198] and with the near-field and far-field formulations available in the literature. It has been observed that the near-field formulae, although said to be applicable at close sample-detector geometry, do not work at very close sample-detector configurations. The advantage of the HMC method is that it is found to be valid for all sample-detector geometries.

  2. Groundwater Model Validation for the Project Shoal Area, Corrective Action Unit 447

    Energy Technology Data Exchange (ETDEWEB)

    Hassan, Ahmed [Desert Research Inst. (DRI), Las Vegas, NV (United States). Division of Hydrologic Sciences; Chapman, Jenny [Desert Research Inst. (DRI), Las Vegas, NV (United States). Division of Hydrologic Sciences; Lyles, Brad [Desert Research Inst. (DRI), Las Vegas, NV (United States). Division of Hydrologic Sciences

    2008-05-19

    Stoller has examined newly collected water level data in multiple wells at the Shoal site. On the basis of these data and information presented in the report, we are currently unable to confirm that the model is successfully validated. Most of our concerns regarding the model stem from two findings: (1) measured water level data do not provide clear evidence of a prevailing lateral flow direction; and (2) the groundwater flow system has been and continues to be in a transient state, which contrasts with assumed steady-state conditions in the model. The results of DRI's model validation efforts and observations made regarding water level behavior are discussed in the following sections. A summary of our conclusions and recommendations for a path forward are also provided in this letter report.

  3. Bias correction for selecting the minimal-error classifier from many machine learning models.

    Science.gov (United States)

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
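
    A minimal sketch of the learning-curve idea behind the IPL correction, assuming cross-validation error rates have been measured at several training-set sizes. The parameterization e(n) = a + b * n^(-c), the use of scipy's curve_fit and the starting values are illustrative assumptions, not the published MLbias implementation.

        import numpy as np
        from scipy.optimize import curve_fit

        def inverse_power_law(n, a, b, c):
            # Expected error rate as a function of training-set size n.
            return a + b * np.power(n, -c)

        def fit_learning_curve(sample_sizes, cv_error_rates):
            # Fit e(n) = a + b * n^(-c) to observed cross-validation error rates.
            p0 = (min(cv_error_rates), 1.0, 0.5)
            params, _ = curve_fit(inverse_power_law,
                                  np.asarray(sample_sizes, dtype=float),
                                  np.asarray(cv_error_rates, dtype=float),
                                  p0=p0, maxfev=10000)
            return params

        # Example: extrapolate the expected error at n = 200 from fits on smaller subsets.
        # a, b, c = fit_learning_curve([20, 30, 40, 60], [0.38, 0.33, 0.30, 0.27])
        # print(inverse_power_law(200, a, b, c))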

  4. National Land Cover Database (NLCD) Percent Developed Imperviousness Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The National Land Cover Database (NLCD) Percent Developed Imperviousness Collection is produced through a cooperative project conducted by the Multi-Resolution Land...

  5. Development and Validation of the Spanish Numeracy Understanding in Medicine Instrument.

    Science.gov (United States)

    Jacobs, Elizabeth A; Walker, Cindy M; Miller, Tamara; Fletcher, Kathlyn E; Ganschow, Pamela S; Imbert, Diana; O'Connell, Maria; Neuner, Joan M; Schapira, Marilyn M

    2016-11-01

    The Spanish-speaking population in the U.S. is large and growing and is known to have lower health literacy than the English-speaking population. Less is known about the health numeracy of this population due to a lack of health numeracy measures in Spanish. We aimed to develop and validate a short and easy-to-use measure of health numeracy for Spanish-speaking adults: the Spanish Numeracy Understanding in Medicine Instrument (Spanish-NUMi). Items were generated based on qualitative studies in English- and Spanish-speaking adults and translated into Spanish using a group translation and consensus process. Candidate items for the Spanish-NUMi were selected from an eight-item validated English Short NUMi. Differential Item Functioning (DIF) was conducted to evaluate equivalence between English and Spanish items. Cronbach's alpha was computed as a measure of reliability, and a Pearson's correlation was used to evaluate the association between test scores and the Spanish Test of Functional Health Literacy (S-TOFHLA) and education level. Two hundred and thirty-two Spanish-speaking Chicago residents were included in the study. The study population was diverse in age, gender, and level of education, and 70% reported Mexico as their country of origin. Two items of the English eight-item Short NUMi demonstrated DIF and were dropped. The resulting six-item test had a Cronbach's alpha of 0.72, a range of difficulty using classical test statistics (percent correct: 0.48 to 0.86), and adequate discrimination (item-total score correlation: 0.34-0.49). Scores were positively correlated with print literacy as measured by the S-TOFHLA (r = 0.67). The Spanish-NUMi is a reliable and valid measure of important numerical concepts used in communicating health information.

  6. Evaluation and Validation of Assembling Corrected PacBio Long Reads for Microbial Genome Completion via Hybrid Approaches.

    Science.gov (United States)

    Lin, Hsin-Hung; Liao, Yu-Chieh

    2015-01-01

    Despite the ever-increasing output of next-generation sequencing data along with developing assemblers, dozens to hundreds of gaps still exist in de novo microbial assemblies due to uneven coverage and large genomic repeats. Third-generation single-molecule, real-time (SMRT) sequencing technology avoids amplification artifacts and generates kilobase-long reads with the potential to complete microbial genome assembly. However, due to the low accuracy (~85%) of third-generation sequences, a considerable amount of long reads (>50X) is required for self-correction and for subsequent de novo assembly. Recently developed hybrid approaches, using next-generation sequencing data and as few as 5X long reads, have been proposed to improve the completeness of microbial assembly. In this study we have evaluated the contemporary hybrid approaches and demonstrated that assembling corrected long reads (by runCA) produced the best assembly compared to long-read scaffolding (e.g., AHA, Cerulean and SSPACE-LongRead) and gap-filling (SPAdes). For generating corrected long reads, we further examined long-read correction tools, such as ECTools, LSC, LoRDEC, PBcR pipeline and proovread. We have demonstrated that three microbial genomes including Escherichia coli K12 MG1655, Meiothermus ruber DSM1279 and Pedobacter heparinus DSM2366 were successfully hybrid assembled by runCA into near-perfect assemblies using ECTools-corrected long reads. In addition, we developed a tool, Patch, which implements corrected long reads and pre-assembled contigs as inputs, to enhance microbial genome assemblies. With the additional 20X long reads, short reads of S. cerevisiae W303 were hybrid assembled into 115 contigs using the verified strategy, ECTools + runCA. Patch was subsequently applied to upgrade the assembly to a 35-contig draft genome. Our evaluation of the hybrid approaches shows that assembling the ECTools-corrected long reads via runCA generates near complete microbial genomes, suggesting

  7. Validation of dengue infection severity score

    Directory of Open Access Journals (Sweden)

    Pongpan S

    2014-03-01

    Full Text Available Surangrat Pongpan,1,2 Jayanton Patumanond,3 Apichart Wisitwong,4 Chamaiporn Tawichasri,5 Sirianong Namwongprom1,6 1Clinical Epidemiology Program, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand; 2Department of Occupational Medicine, Phrae Hospital, Phrae, Thailand; 3Clinical Epidemiology Program, Faculty of Medicine, Thammasat University, Bangkok, Thailand; 4Department of Social Medicine, Sawanpracharak Hospital, Nakorn Sawan, Thailand; 5Clinical Epidemiology Society at Chiang Mai, Chiang Mai, Thailand; 6Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand Objective: To validate a simple scoring system to classify dengue viral infection severity to patients in different settings. Methods: The developed scoring system derived from 777 patients from three tertiary-care hospitals was applied to 400 patients in the validation data obtained from another three tertiary-care hospitals. Percentage of correct classification, underestimation, and overestimation was compared. The score discriminative performance in the two datasets was compared by analysis of areas under the receiver operating characteristic curves. Results: Patients in the validation data were different from those in the development data in some aspects. In the validation data, classifying patients into three severity levels (dengue fever, dengue hemorrhagic fever, and dengue shock syndrome) yielded 50.8% correct prediction (versus 60.7% in the development data), with clinically acceptable underestimation (18.6% versus 25.7%) and overestimation (30.8% versus 13.5%). Despite the difference in predictive performances between the validation and the development data, the overall prediction of the scoring system is considered high. Conclusion: The developed severity score may be applied to classify patients with dengue viral infection into three severity levels with clinically acceptable under- or overestimation. Its impact when used in routine

  8. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is however a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence their quality, plus the idea of quality itself is application dependent. Thus, concepts for definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
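
    Two of the polygon-level checks mentioned above, closure of the bounding linear ring and planarity within a tolerance, can be sketched as follows. This is not the CityDoctor implementation; the plane-fitting approach and the tolerance values are illustrative assumptions.

```python
# Polygon-level geometry checks: ring closure and planarity within a tolerance.
import numpy as np

def ring_is_closed(ring, tol=1e-6):
    """A linear ring must end where it starts."""
    return np.linalg.norm(np.asarray(ring[0]) - np.asarray(ring[-1])) < tol

def polygon_is_planar(ring, tol=1e-3):
    """Fit a plane through the vertices and check the largest deviation."""
    pts = np.asarray(ring, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # normal of the best-fit plane through the centred vertices.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    distances = np.abs((pts - centroid) @ normal)
    return distances.max() < tol

# A slightly imperfect but acceptable roof polygon (illustrative coordinates).
roof = [(0, 0, 5), (4, 0, 5), (4, 3, 5.0005), (0, 3, 5), (0, 0, 5)]
print(ring_is_closed(roof), polygon_is_planar(roof))
```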

  9. Projection correction for the pixel-by-pixel basis in diffraction enhanced imaging

    International Nuclear Information System (INIS)

    Huang Zhifeng; Kang Kejun; Li Zheng

    2006-01-01

    Theories and methods of x-ray diffraction enhanced imaging (DEI) and computed tomography of the DEI (DEI-CT) have been investigated recently. But the phenomenon of projection offsets which may affect the accuracy of the results of extraction methods of refraction-angle images and reconstruction algorithms of the DEI-CT is seldom of concern. This paper focuses on it. Projection offsets are revealed distinctly according to the equivalent rectilinear propagation model of the DEI. Then, an effective correction method using the equivalent positions of projection data is presented to eliminate the errors induced by projection offsets. The correction method is validated by a computer simulation experiment and extraction methods or reconstruction algorithms based on the corrected data can give more accurate results. The limitations of the correction method are discussed at the end

  10. Validation of cardiovascular diagnoses in the Greenlandic Hospital Discharge Register for epidemiological use

    DEFF Research Database (Denmark)

    Tvermosegaard, Maria; Ronn, Pernille Falberg; Pedersen, Michael Lynge

    2018-01-01

    Cardiovascular disease (CVD) is one of the leading causes of death worldwide. In Greenland, valid estimates of prevalence and incidence of CVD do not exist and can only be calculated if diagnoses of CVD in the Greenlandic Hospital Discharge Register (GHDR) are correct. Diagnoses of CVD in GHDR have not previously been validated specifically. The objective of the study was to validate diagnoses of CVD in GHDR. The study was conducted as a validation study with primary investigator comparing information in GHDR with information in medical records. Diagnoses in GHDR were considered correct and thus valid if they matched the diagnoses or the medical information in the medical records. A total of 432 online accessible medical records with a cardiovascular diagnosis according to GHDR from Queen Ingrid's Hospital from 2001 to 2013 (n=291) and from local health care centres from 2007 to 2013 (n=141) were reviewed...

  11. QED corrections to Planck's radiation law and photon thermodynamics

    International Nuclear Information System (INIS)

    Partovi, M.H.

    1994-01-01

    Leading corrections to Planck's radiation formula and other photon thermodynamic functions arising from the pair-mediated photon-photon interaction are calculated. This interaction is found to be attractive and to cause a small increase in occupation number for all modes and a corresponding correction to the equation of state. The results are valid for the range of temperatures well below T e =5.9 GK, the temperature equivalent to the electron mass, a range for which the photon gas is essentially free of pair-produced electrons and positrons. An interesting effect of these corrections is the behavior of the photon gas as an elastic medium and its ability to propagate density perturbations. It is found that the cosmic photon gas subsequent to electron-positron annihilation would have manifested these elastic properties were it not for the presence of the free electrons and their dominance of the photon thermodynamics

  12. Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments

    Science.gov (United States)

    Satern, Miriam N.

    2011-01-01

    Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

  13. Markerless 3D Head Tracking for Motion Correction in High Resolution PET Brain Imaging

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter

    This thesis concerns application specific 3D head tracking. The purpose is to improve motion correction in positron emission tomography (PET) brain imaging through development of markerless tracking. Currently, motion correction strategies are based on either the PET data itself or tracking devices relying on markers. Data-driven motion correction is problematic due to the physiological dynamics. Marker-based tracking is potentially unreliable, and it is extremely hard to validate when the tracking information is correct. The motion estimation is essential for proper motion correction of the PET images. Incorrect motion correction can in the worst cases result in wrong diagnosis or treatment. The evolution of a markerless custom-made structured light 3D surface tracking system is presented. The system is targeted at state-of-the-art high resolution dedicated brain PET scanners with a resolution...

  14. Dominant two-loop electroweak corrections to the hadroproduction of a pseudoscalar Higgs boson and its photonic decay

    International Nuclear Information System (INIS)

    Brod, J.; Kniehl, B.A.

    2008-01-01

    We present the dominant two-loop electroweak corrections to the partial decay widths to gluon jets and prompt photons of the neutral CP-odd Higgs boson A^0, with mass M_{A^0} < M_W, in the two-Higgs-doublet model for low to intermediate values of the ratio tan β = v_2/v_1 of the vacuum expectation values. They apply as they stand to the production cross sections in hadronic and two-photon collisions, at the Tevatron, the LHC, and a future photon collider. The appearance of three γ_5 matrices in closed fermion loops requires special care in the dimensional regularization of ultraviolet divergences. The corrections are negative and amount to several percent, so that they fully compensate or partly screen the enhancement due to QCD corrections. (orig.)

  15. Measuring coverage in MNCH: a prospective validation study in Pakistan and Bangladesh on measuring correct treatment of childhood pneumonia.

    Directory of Open Access Journals (Sweden)

    Tabish Hazir

    Full Text Available Antibiotic treatment for pneumonia as measured by Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS) is a key indicator for tracking progress in achieving Millennium Development Goal 4. Concerns about the validity of this indicator led us to perform an evaluation in urban and rural settings in Pakistan and Bangladesh. Caregivers of 950 children under 5 y with pneumonia and 980 with "no pneumonia" were identified in urban and rural settings and allocated for DHS/MICS questions 2 or 4 wk later. Study physicians assigned a diagnosis of pneumonia as reference standard; the predictive ability of DHS/MICS questions and additional measurement tools to identify pneumonia versus non-pneumonia cases was evaluated. Results at both sites showed suboptimal discriminative power, with no difference between 2- or 4-wk recall. Individual patterns of sensitivity and specificity varied substantially across study sites (sensitivity 66.9% and 45.5%, and specificity 68.8% and 69.5%, for DHS in Pakistan and Bangladesh, respectively). Prescribed antibiotics for pneumonia were correctly recalled by about two-thirds of caregivers using DHS questions, increasing to 72% and 82% in Pakistan and Bangladesh, respectively, using a drug chart and detailed enquiry. Monitoring antibiotic treatment of pneumonia is essential for national and global programs. Current (DHS/MICS) questions and proposed new (video and pneumonia score) methods of identifying pneumonia based on maternal recall discriminate poorly between pneumonia and children with cough. Furthermore, these methods have a low yield to identify children who have true pneumonia. Reported antibiotic treatment rates among these children are therefore not a valid proxy indicator of pneumonia treatment rates. These results have important implications for program monitoring and suggest that data in its current format from DHS/MICS surveys should not be used for the purpose of monitoring antibiotic

  16. The reliability and validity of the Saliba Postural Classification System.

    Science.gov (United States)

    Collins, Cristiana Kahl; Johnson, Vicky Saliba; Godwin, Ellen M; Pappas, Evangelos

    2016-07-01

    To determine the reliability and validity of the Saliba Postural Classification System (SPCS). Two physical therapists classified pictures of 100 volunteer participants standing in their habitual posture for inter and intra-tester reliability. For validity, 54 participants stood on a force plate in a habitual and a corrected posture, while a vertical force was applied through the shoulders until the clinician felt a postural give. Data were extracted at the time the give was felt and at a time in the corrected posture that matched the peak vertical ground reaction force (VGRF) in the habitual posture. Inter-tester reliability demonstrated 75% agreement with a Kappa = 0.64 (95% CI = 0.524-0.756, SE = 0.059). Intra-tester reliability demonstrated 87% agreement with a Kappa = 0.8 (95% CI = 0.702-0.898, SE = 0.05) and 80% agreement with a Kappa = 0.706 (95% CI = 0.594-0.818, SE = 0.057). The examiner applied a significantly higher (p < 0.001) peak vertical force in the corrected posture prior to a postural give when compared to the habitual posture. Within the corrected posture, the %VGRF was higher when the test was ongoing vs. when a postural give was felt (p < 0.001). The %VGRF was not different between the two postures when comparing the peaks (p = 0.214). The SPCS has substantial agreement for inter- and intra-tester reliability and is largely a valid postural classification system as determined by the larger vertical forces in the corrected postures. Further studies on the correlation between the SPCS and diagnostic classifications are indicated.
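
    As an illustration of the agreement statistics quoted above, the sketch below computes percent agreement and Cohen's kappa from two raters' category assignments. The ten labels are invented and unrelated to the study data.

```python
# Percent agreement and Cohen's kappa for two raters' classifications.
from collections import Counter

rater_a = ["A", "B", "A", "C", "B", "A", "C", "B", "A", "A"]
rater_b = ["A", "B", "A", "C", "A", "A", "C", "B", "B", "A"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected agreement under independence, from each rater's marginal frequencies.
pa, pb = Counter(rater_a), Counter(rater_b)
expected = sum((pa[c] / n) * (pb[c] / n) for c in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)
print(f"agreement = {observed:.0%}, kappa = {kappa:.2f}")
```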

  17. National Land Cover Database (NLCD) Percent Tree Canopy Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The National Land Cover Database (NLCD) Percent Tree Canopy Collection is a product of the U.S. Forest Service (USFS), and is produced through a cooperative project...

  18. Atmospheric scattering corrections to solar radiometry

    International Nuclear Information System (INIS)

    Box, M.A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called Beer-Lambert's law), that is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. In this paper we shall discuss the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distribution from such measurements. For a radiometer with a small field of view (small half-cone angle) and relatively clear skies (optical depths <0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity. It is assumed here that the main contributions to the diffuse radiation within the detector's view cone are due to single scattering by molecules and aerosols and multiple scattering by molecules alone, aerosol multiple scattering contributions being treated as negligibly small. The theory and the numerical results discussed in this paper will be helpful not only in making corrections to the measured optical depth data but also in designing improved solar radiometers

  19. Construct validation of an interactive digital algorithm for ostomy care.

    Science.gov (United States)

    Beitz, Janice M; Gerlach, Mary A; Schafer, Vickie

    2014-01-01

    The purpose of this study was to evaluate construct validity for a previously face and content validated Ostomy Algorithm using digital real-life clinical scenarios. A cross-sectional, mixed-methods Web-based survey design study was conducted. Two hundred ninety-seven English-speaking RNs completed the study; participants practiced in both acute care and postacute settings, with 1 expert ostomy nurse (WOC nurse) and 2 nonexpert nurses. Following written consent, respondents answered demographic questions and completed a brief algorithm tutorial. Participants were then presented with 7 ostomy-related digital scenarios consisting of real-life photos and pertinent clinical information. Respondents used the 11 assessment components of the digital algorithm to choose management options. Participant written comments about the scenarios and the research process were collected. The mean overall percentage of correct responses was 84.23%. Mean percentage of correct responses for respondents with a self-reported basic ostomy knowledge was 87.7%; for those with a self-reported intermediate ostomy knowledge was 85.88% and those who were self-reported experts in ostomy care achieved 82.77% correct response rate. Five respondents reported having no prior ostomy care knowledge at screening and achieved an overall 45.71% correct response rate. No negative comments regarding the algorithm were recorded by participants. The new standardized Ostomy Algorithm remains the only face, content, and construct validated digital clinical decision instrument currently available. Further research on application at the bedside while tracking patient outcomes is warranted.

  20. Development and validation of a septoplasty training model using 3-dimensional printing technology.

    Science.gov (United States)

    AlReefi, Mahmoud A; Nguyen, Lily H P; Mongeau, Luc G; Haq, Bassam Ul; Boyanapalli, Siddharth; Hafeez, Nauman; Cegarra-Escolano, Francois; Tewfik, Marc A

    2017-04-01

    Providing alternative training modalities may improve trainees' ability to perform septoplasty. Three-dimensional printing has been shown to be a powerful tool in surgical training. The objectives of this study were to explain the development of our 3-dimensional (3D) printed septoplasty training model, to assess its face and content validity, and to present evidence supporting its ability to distinguish between levels of surgical proficiency. Imaging data of a patient with a nasal septal deviation was selected for printing. Printing materials reproducing the mechanical properties of human tissues were selected based on literature review and prototype testing. Eight expert rhinologists, 6 senior residents, and 6 junior residents performed endoscopic septoplasties on the model and completed a postsimulation survey. Performance metrics in quality (final product analysis), efficiency (time), and safety (eg, perforation length, nares damage) were recorded and analyzed in a study-blind manner. The model was judged to be anatomically correct and the steps performed realistic, with scores of 4.05 ± 0.82 and 4.2 ± 1, respectively, on a 5-point Likert scale. Ninety-two percent of residents desired the simulator to be integrated into their teaching curriculum. There was a significant difference in performance across the three training levels, supporting the model's ability to distinguish levels of surgical proficiency. Few simulator training models exist for septoplasty. Our model incorporates 2 different materials mixed into the 3 relevant consistencies necessary to simulate septoplasty. Our findings provide evidence supporting the validity of the model. © 2016 ARS-AAOA, LLC.

  1. Validation of Virtual Environments Incorporating Virtual Operators for Procedural Learning

    Science.gov (United States)

    2012-09-01

    [Only fragments of this DRDC Toronto report (TM 2011-132) are captured in this record: part of its abbreviation list (PC, Percent Correct; RADHAZ, Radiation Hazard; RAM, Random Access Memory; RCA, Radio Corporation of America; RSD, ...) and excerpts noting that the simulation used a colour head-mounted display with magnetic head tracking, and that participants' recalled deck-landing counts were, at best, crude estimates.]

  2. Stochastic simulation experiment to assess radar rainfall retrieval uncertainties associated with attenuation and its correction

    Directory of Open Access Journals (Sweden)

    R. Uijlenhoet

    2008-03-01

    Full Text Available As rainfall constitutes the main source of water for the terrestrial hydrological processes, accurate and reliable measurement and prediction of its spatial and temporal distribution over a wide range of scales is an important goal for hydrology. We investigate the potential of ground-based weather radar to provide such measurements through a theoretical analysis of some of the associated observation uncertainties. A stochastic model of range profiles of raindrop size distributions is employed in a Monte Carlo simulation experiment to investigate the rainfall retrieval uncertainties associated with weather radars operating at X-, C-, and S-band. We focus in particular on the errors and uncertainties associated with rain-induced signal attenuation and its correction for incoherent, non-polarimetric, single-frequency, operational weather radars. The performance of two attenuation correction schemes, the (forward) Hitschfeld-Bordan algorithm and the (backward) Marzoug-Amayenc algorithm, is analyzed for both moderate (assuming a 50 km path length) and intense Mediterranean rainfall (for a 30 km path). A comparison shows that the backward correction algorithm is more stable and accurate than the forward algorithm (with a bias in the order of a few percent for the former, compared to tens of percent for the latter), provided reliable estimates of the total path-integrated attenuation are available. Moreover, the bias and root mean square error associated with each algorithm are quantified as a function of path-averaged rain rate and distance from the radar in order to provide a plausible order of magnitude for the uncertainty in radar-retrieved rain rates for hydrological applications.
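
    A compact sketch of the forward (Hitschfeld-Bordan) attenuation correction idea discussed above, applied gate by gate along a single ray. The k-Z power-law coefficients, gate spacing and reflectivity profile are illustrative assumptions (roughly X-band-like), not values from the study, and the clip on the denominator is only a crude guard against the scheme's well-known instability.

```python
# Forward Hitschfeld-Bordan correction of rain-attenuated reflectivity.
import numpy as np

def hitschfeld_bordan(z_measured, gate_km, a=2e-4, b=0.75):
    """Correct attenuated linear reflectivity Z_a (mm^6 m^-3) along one ray.

    Uses Z(r) = Z_a(r) * [1 - 0.46 * b * (path integral of a * Z_a^b ds)]^(-1/b),
    where k = a * Z^b is the two-way specific attenuation in dB/km.
    """
    integrand = a * np.power(z_measured, b)
    path_integral = np.cumsum(integrand) * gate_km      # running integral per gate
    bracket = 1.0 - 0.46 * b * path_integral
    # The forward scheme diverges as the bracket approaches zero; clip it.
    bracket = np.clip(bracket, 1e-3, None)
    return z_measured * np.power(bracket, -1.0 / b)

# Attenuated reflectivity rising from 30 to 42 dBZ over 50 gates of 0.25 km.
z_a = 10 ** (np.linspace(30.0, 42.0, 50) / 10.0)
z_corr = hitschfeld_bordan(z_a, gate_km=0.25)
print("corrected dBZ at the far gate:", 10 * np.log10(z_corr[-1]))
```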

  3. The validated sun exposure questionnaire

    DEFF Research Database (Denmark)

    Køster, B; Søndergaard, J; Nielsen, J B

    2017-01-01

    Few questionnaires used in monitoring sun-related behavior have been tested for validity. We established criterion validity of a developed questionnaire for monitoring population sun-related behavior. During May-August 2013, 664 Danes wore a personal electronic UV-dosimeter for one week that measured the outdoor time and dose of erythemal UVR exposure. In the following week, they answered a questionnaire on their sun-related behavior in the measurement week. Outdoor time measured by dosimetry correlated strongly with both outdoor time and the developed exposure scale measured in the questionnaire. Exposure measured in SED by dosimetry correlated strongly with the exposure scale. In a linear regression model of UVR (SED) received, 41 percent of the variation was explained by skin type, age, week of participation and the exposure scale, with the exposure scale as the main contributor...

  4. The validity of vignettes in cross country health studies

    DEFF Research Database (Denmark)

    Pozzoli, Dario; Gupta, Nabanita Datta; Kristensen, Nicolai

    Cross-country comparisons of subjective assessments may be hampered by sub-population specific response style. To correct for this, the use of vignettes has become increasingly popular - notably within cross-country health studies. However, the validity of vignettes as a means to re-scale across countries rests on the assumption of response consistency (RC). We find that the assumption of RC is not innocuous and that our extended model improves the fit and significantly changes the cross-country rankings of health vis-à-vis the standard Chopit model.

  5. Learning Style Scales: a valid and reliable questionnaire

    Directory of Open Access Journals (Sweden)

    Abdolghani Abdollahimohammad

    2014-08-01

    Full Text Available Purpose: Learning-style instruments assist students in developing their own learning strategies and outcomes, in eliminating learning barriers, and in acknowledging peer diversity. Only a few psychometrically validated learning-style instruments are available. This study aimed to develop a valid and reliable learning-style instrument for nursing students. Methods: A cross-sectional survey study was conducted in two nursing schools in two countries. A purposive sample of 156 undergraduate nursing students participated in the study. Face and content validity was obtained from an expert panel. The LSS construct was established using principal axis factoring (PAF with oblimin rotation, a scree plot test, and parallel analysis (PA. The reliability of LSS was tested using Cronbach’s α, corrected item-total correlation, and test-retest. Results: Factor analysis revealed five components, confirmed by PA and a relatively clear curve on the scree plot. Component strength and interpretability were also confirmed. The factors were labeled as perceptive, solitary, analytic, competitive, and imaginative learning styles. Cronbach’s α was > 0.70 for all subscales in both study populations. The corrected item-total correlations were > 0.30 for the items in each component. Conclusion: The LSS is a valid and reliable inventory for evaluating learning style preferences in nursing students in various multicultural environments.

  6. Learning Style Scales: a valid and reliable questionnaire.

    Science.gov (United States)

    Abdollahimohammad, Abdolghani; Ja'afar, Rogayah

    2014-01-01

    Learning-style instruments assist students in developing their own learning strategies and outcomes, in eliminating learning barriers, and in acknowledging peer diversity. Only a few psychometrically validated learning-style instruments are available. This study aimed to develop a valid and reliable learning-style instrument for nursing students. A cross-sectional survey study was conducted in two nursing schools in two countries. A purposive sample of 156 undergraduate nursing students participated in the study. Face and content validity was obtained from an expert panel. The LSS construct was established using principal axis factoring (PAF) with oblimin rotation, a scree plot test, and parallel analysis (PA). The reliability of LSS was tested using Cronbach's α, corrected item-total correlation, and test-retest. Factor analysis revealed five components, confirmed by PA and a relatively clear curve on the scree plot. Component strength and interpretability were also confirmed. The factors were labeled as perceptive, solitary, analytic, competitive, and imaginative learning styles. Cronbach's α was >0.70 for all subscales in both study populations. The corrected item-total correlations were >0.30 for the items in each component. The LSS is a valid and reliable inventory for evaluating learning style preferences in nursing students in various multicultural environments.

  7. How valid are commercially available medical simulators?

    Science.gov (United States)

    Stunt, JJ; Wulms, PH; Kerkhoffs, GM; Dankelman, J; van Dijk, CN; Tuijthof, GJM

    2014-01-01

    Background Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary Four hundred and thirty-three commercially available simulators were found, from which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Solely simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are insufficient, it is advisable for medical professionals, trainees, medical educators, and companies who manufacture medical simulators to critically judge the available medical simulators for proper validation. This way adequate, safe, and affordable medical psychomotor skills training can be achieved. PMID:25342926

  8. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment have been implemented on a SUN workstation.
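
    The graph-based checking idea can be sketched in a few lines: represent the knowledge base as a directed dependency graph and verify structural properties such as the absence of dangling references and of circular definitions with ordinary graph algorithms. The tiny rule base and function names are invented for illustration; the original toolset is not reproduced here.

```python
# Structural checks on a knowledge base represented as a dependency graph.
def undefined_references(graph):
    """Names that are referenced by some rule but never defined as nodes."""
    defined = set(graph)
    return {ref for refs in graph.values() for ref in refs if ref not in defined}

def has_cycle(graph):
    """Detect circular definitions with an iterative depth-first search."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}
    for start in graph:
        if colour[start] != WHITE:
            continue
        stack = [(start, iter(graph.get(start, ())))]
        colour[start] = GREY
        while stack:
            node, children = stack[-1]
            for child in children:
                if colour.get(child, WHITE) == GREY:
                    return True                      # back edge -> cycle
                if colour.get(child, WHITE) == WHITE:
                    colour[child] = GREY
                    stack.append((child, iter(graph.get(child, ()))))
                    break
            else:
                colour[node] = BLACK
                stack.pop()
    return False

# rule -> names it depends on (hypothetical knowledge base)
kb = {"diagnose": ["symptom_db", "rules_cardiac"],
      "rules_cardiac": ["symptom_db"],
      "symptom_db": []}
print(undefined_references(kb), has_cycle(kb))
```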

  9. Modeling percent tree canopy cover: a pilot study

    Science.gov (United States)

    John W. Coulston; Gretchen G. Moisen; Barry T. Wilson; Mark V. Finco; Warren B. Cohen; C. Kenneth Brewer

    2012-01-01

    Tree canopy cover is a fundamental component of the landscape, and the amount of cover influences fire behavior, air pollution mitigation, and carbon storage. As such, efforts to empirically model percent tree canopy cover across the United States are a critical area of research. The 2001 national-scale canopy cover modeling and mapping effort was completed in 2006,...

  10. Self-absorption corrections of various sample-detector geometries in gamma-ray spectrometry using sample Monte Carlo Simulations

    International Nuclear Information System (INIS)

    Ahmad Saat; Appleby, P.G.; Nolan, P.J.

    1997-01-01

    Corrections for self-absorption in gamma-ray spectrometry have been developed using a simple Monte Carlo simulation technique. The simulation enables the calculation of gamma-ray path lengths in the sample which, using available data, can be used to calculate self-absorption correction factors. The simulation was carried out on three sample geometries: disk, Marinelli beaker, and cylinder (for well-type detectors). Mathematical models and experimental measurements are used to evaluate the simulations. A good agreement of within a few percent was observed. The simulation results are also in good agreement with those reported in the literature. The simulation code was written in FORTRAN 90.
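
    The path-length idea can be illustrated with a toy Monte Carlo for a thin disk source counted from above: emission points and upward directions are sampled, the chord to the top face gives the in-sample path length, and the mean transmission yields a correction factor. The geometry is deliberately simplified (escape through the side wall is ignored) and the attenuation coefficient and dimensions are invented; this is a sketch of the idea, not the paper's FORTRAN code.

```python
# Toy Monte Carlo estimate of gamma self-absorption in a thin disk sample.
import numpy as np

rng = np.random.default_rng(1)

def disk_self_absorption(mu_cm, height_cm, n=200_000):
    # Uniform emission heights within the disk (z measured from the base).
    z = rng.uniform(0.0, height_cm, n)
    # Isotropic emission directions restricted to the upward hemisphere.
    cos_theta = rng.uniform(0.0, 1.0, n)
    cos_theta = np.clip(cos_theta, 0.05, None)   # ignore near-grazing rays
    path = (height_cm - z) / cos_theta           # chord length to the top face
    transmission = np.exp(-mu_cm * path)
    return transmission.mean()

t = disk_self_absorption(mu_cm=0.15, height_cm=1.0)
print(f"mean transmission = {t:.3f}, correction factor = {1.0 / t:.3f}")
```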

  11. Completion of the first approach to critical for the seven percent critical experiment

    International Nuclear Information System (INIS)

    Barber, A. D.; Harms, G. A.

    2009-01-01

    The first approach-to-critical experiment in the Seven Percent Critical Experiment series was recently completed at Sandia. This experiment is part of the Seven Percent Critical Experiment which will provide new critical and reactor physics benchmarks for fuel enrichments greater than five weight percent. The inverse multiplication method was used to determine the state of the system during the course of the experiment. Using the inverse multiplication method, it was determined that the critical experiment went slightly supercritical with 1148 fuel elements in the fuel array. The experiment is described and the results of the experiment are presented. (authors)
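
    The inverse multiplication method mentioned above can be illustrated with a short sketch: neutron count rates recorded after each fuel addition are converted to 1/M, and a straight-line extrapolation to 1/M = 0 predicts the critical loading. The loadings and count rates below are fabricated and are not data from the Seven Percent Critical Experiment.

```python
# Inverse multiplication (1/M) extrapolation during an approach to critical.
import numpy as np

fuel_elements = np.array([200, 400, 600, 800, 1000, 1100])
count_rate = np.array([120.0, 165.0, 250.0, 470.0, 1800.0, 7300.0])

inv_m = count_rate[0] / count_rate            # 1/M relative to the first loading

# Fit a line through the last few points and extrapolate to 1/M = 0.
slope, intercept = np.polyfit(fuel_elements[-3:], inv_m[-3:], 1)
predicted_critical = -intercept / slope
print(f"predicted critical loading ≈ {predicted_critical:.0f} fuel elements")
```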

  12. Topographic Correction of Wind-driven Rainfall for Landslide Analysis in Central Taiwan with Validation from Aerial and Satellite Optical Images

    Directory of Open Access Journals (Sweden)

    Jin-King Liu

    2013-05-01

    Full Text Available Rainfall intensity plays an important role in landslide prediction especially in mountain areas. However, the rainfall intensity of a location is usually interpolated from rainfall recorded at nearby gauges without considering any possible effects of topographic slopes. In order to obtain reliable rainfall intensity for disaster mitigation, this study proposes a rainfall-vector projection method for topographic-corrected rainfall. The topographic-corrected rainfall is derived from wind speed, terminal velocity of raindrops, and topographical factors from digital terrain model. In addition, scatter plot was used to present landslide distribution with two triggering factors and kernel density analysis is adopted to enhance the perception of the distribution. Numerical analysis is conducted for a historic event, typhoon Mindulle, which occurred in 2004, in a location in central Taiwan. The largest correction reaches 11%, which indicates that topographic correction is significant. The corrected rainfall distribution is then applied to the analysis of landslide triggering factors. The result with corrected rainfall distribution provides better agreement with the actual landslide occurrence than the result without correction.
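
    A small sketch of the rainfall-vector projection idea, assuming the common hydrological formulation in which the rain vector is tilted from the vertical by the ratio of horizontal wind speed to raindrop terminal velocity and gauge rainfall is rescaled by its projection onto the slope. The formula and the input values are illustrative assumptions rather than the paper's exact equations.

```python
# Topographic correction of gauge rainfall for a wind-driven, tilted rain vector.
import math

def topographic_rainfall(r_gauge_mm, wind_speed, terminal_velocity,
                         slope_deg, aspect_deg, wind_from_deg):
    """Gauge rainfall rescaled by a slope- and wind-dependent correction factor."""
    rain_tilt = math.atan2(wind_speed, terminal_velocity)   # tilt from vertical
    slope = math.radians(slope_deg)
    # Angle between the direction the rain comes from and the slope aspect.
    delta = math.radians(wind_from_deg - aspect_deg)
    factor = 1.0 + math.tan(slope) * math.tan(rain_tilt) * math.cos(delta)
    return r_gauge_mm * max(factor, 0.0), factor

# A 25 degree slope facing into a storm driven by an 8 m/s wind (6 m/s drops).
rain, f = topographic_rainfall(r_gauge_mm=50.0, wind_speed=8.0,
                               terminal_velocity=6.0, slope_deg=25.0,
                               aspect_deg=270.0, wind_from_deg=270.0)
print(f"correction factor = {f:.2f}, corrected rainfall = {rain:.1f} mm")
```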

  13. WHK Student Internship Enrollment, Mentor Participation Up More than 50 Percent | Poster

    Science.gov (United States)

    By Nancy Parrish, Staff Writer The Werner H. Kirsten Student Internship Program (WHK SIP) has enrolled the largest class ever for the 2013–2014 academic year, with 66 students and 50 mentors. This enrollment reflects a 53 percent increase in students and a 56 percent increase in mentors, compared to 2012–2013 (43 students and 32 mentors), according to Julie Hartman, WHK SIP

  14. Aerial population estimates of wild horses (Equus caballus) in the adobe town and salt wells creek herd management areas using an integrated simultaneous double-count and sightability bias correction technique

    Science.gov (United States)

    Lubow, Bruce C.; Ransom, Jason I.

    2007-01-01

    An aerial survey technique combining simultaneous double-count and sightability bias correction methodologies was used to estimate the population of wild horses inhabiting Adobe Town and Salt Wells Creek Herd Management Areas, Wyoming. Based on 5 surveys over 4 years, we conclude that the technique produced estimates consistent with the known number of horses removed between surveys and an annual population growth rate of 16.2 percent per year. Therefore, evidence from this series of surveys supports the validity of this survey method. Our results also indicate that the ability of aerial observers to see horse groups is very strongly dependent on skill of the individual observer, size of the horse group, and vegetation cover. It is also more modestly dependent on the ruggedness of the terrain and the position of the sun relative to the observer. We further conclude that censuses, or uncorrected raw counts, are inadequate estimates of population size for this herd. Such uncorrected counts were all undercounts in our trials, and varied in magnitude from year to year and observer to observer. As of April 2007, we estimate that the population of the Adobe Town /Salt Wells Creek complex is 906 horses with a 95 percent confidence interval ranging from 857 to 981 horses.
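
    A compact sketch of the simultaneous double-count idea: each observer's detection probability is estimated from the groups seen by both, and the combined detection probability corrects the raw count. The counts are invented, and the sketch omits the sightability-bias covariates (observer skill, group size, vegetation cover, terrain, sun position) that the actual analysis models.

```python
# Two-observer (simultaneous double-count) abundance estimate for one survey.
def double_count_estimate(seen_front, seen_rear, seen_both):
    p_front = seen_both / seen_rear           # front observer's detection prob.
    p_rear = seen_both / seen_front           # rear observer's detection prob.
    p_either = 1.0 - (1.0 - p_front) * (1.0 - p_rear)
    seen_total = seen_front + seen_rear - seen_both
    return seen_total / p_either, p_front, p_rear

n_hat, p1, p2 = double_count_estimate(seen_front=62, seen_rear=55, seen_both=44)
print(f"detection: front {p1:.2f}, rear {p2:.2f}; estimated groups ≈ {n_hat:.0f}")
```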

  15. The effect and correction of coupling generated by the RHIC triplet quadrupoles

    International Nuclear Information System (INIS)

    Pilat, F.; Peggs, S.; Tepikian, S.; Trbojevic, D.; Wei, J.

    1995-01-01

    This study explores the possibility of operating the nominal RHIC coupling correction system in local decoupling mode, where a subset of skew quadrupoles are independently set by minimizing the coupling as locally measured by beam position monitors. The goal is to establish a correction procedure for the skew quadrupole errors in the interaction region triplets that does not rely on a priori knowledge of the individual errors. After a description of the present coupling correction scheme envisioned for RHIC, the basics of the local decoupling method will be briefly recalled in the context of its implementation in the TEAPOT simulation code as well as operationally. The method is then applied to the RHIC lattice: a series of simple tests establish that single triplet skew quadrupole errors can be corrected by local decoupling. More realistic correction schemes are then studied in order to correct distributed sources of skew quadrupole errors: the machine can be decoupled either by pure local decoupling or by a combination of global (minimum tune separation) and local decoupling. The different correction schemes are successively validated and evaluated by standard RHIC simulation runs with the complete set of errors and corrections. The different solutions and results are finally discussed together with their implications for the hardware

  16. Validity of the ages and stages questionnaires in Korean compared to Bayley Scales of infant development-II for screening preterm infants at corrected age of 18-24 months for neurodevelopmental delay.

    Science.gov (United States)

    Kwun, Yoojin; Park, Hye Won; Kim, Min-Ju; Lee, Byong Sop; Kim, Ellen Ai-Rhan

    2015-04-01

    This study aimed to evaluate the validity of the ages and stages questionnaire in Korean (ASQ 1st edition, Korean Questionnaires, Seoul Community Rehabilitation Center, 2000) for premature infants. The study population consisted of 90 premature infants born between January 1, 2005, and December 31, 2011, who were tested using the ASQ (Korean) and Bayley Scales of Infant Development (BSID) (II) at a corrected age of 18-24 months. The validity of the ASQ (Korean) using cut-off values set at < -2 SD was examined by comparing it to the BSID (II) components, namely, the mental developmental index (MDI) or psychomotor developmental index (PDI), which were both set at < 85. The calculation of the sensitivities, specificities, positive predictive values, and negative predictive values of the ASQ (Korean) components revealed that they detected infants with neurodevelopmental delay with low sensitivity and positive predictive values; however, the communication domain showed moderate correlations with the MDI. Failure in more than one domain of the ASQ (Korean) was significantly correlated with failure in the MDI. The ASQ (Korean) showed low validity for screening neurodevelopmentally delayed premature infants.

  17. Metabolic correction for attention deficit/hyperactivity disorder: A biochemical-physiological therapeutic approach

    Directory of Open Access Journals (Sweden)

    Mikirova NA

    2013-01-01

    Full Text Available ABSTRACT Objective: This investigation was undertaken to determine the reference values of specific biochemical markers that have been associated with behavior typical of ADHD in a group of patients before and after metabolic correction. Background: Attention deficit hyperactivity disorder (ADHD) affects approximately two million American children, and this condition has grown to become the most commonly diagnosed behavioral disorder of childhood. According to the National Institute of Mental Health (NIMH), the cause of the condition, once called hyperkinesis, is not known. The cause of ADHD is generally acknowledged to be multifactorial, involving both biological and environmental influence. Molecular, genetic, and pharmacological studies suggest the involvement of the neurotransmitter systems in the pathogenesis of ADHD. Polymorphic variants in several genes involved in regulation of dopamine have been identified, and related neurotransmitter pathways alterations are reported to be associated with the disease. Nutritional deficiencies, including deficiencies in fatty acids (EPA, DHA), the amino acid methionine, and the trace minerals zinc and selenium, have been shown to influence neuronal function and produce defects in neuronal plasticity, as well as impact behavior in children with attention deficit hyperactivity disorder. Materials/Methods: This study was based on data extracted from our patient history database covering a period of over ten years. We performed laboratory tests in 116 patients 2.7-25 years old with a diagnosis of ADHD. Sixty-six percent (66%) of patients were males. Patients were followed from 3 months to 3 years. We compared the distributions of fatty acids, essential metals, and the levels of metabolic stress factors with established reference ranges before and after interventions. In addition, we analyzed the association between toxic metal concentrations and the levels of essential metals. Results: This study was based

  18. Phenomenology of threshold corrections for inclusive jet production at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, M.C. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Moch, S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik

    2013-09-15

    We study one-jet inclusive hadro-production and compute the QCD threshold corrections for large transverse momentum of the jet in the soft-gluon resummation formalism at next-to-leading logarithmic accuracy. We use the resummed result to generate approximate QCD corrections at next-to-next-to leading order, compare with results in the literature and present rapidity integrated distributions of the jet's transverse momentum for Tevatron and LHC. For the threshold approximation we investigate its kinematical range of validity as well as its dependence on the jet's cone size and kinematics.

  19. Novel scatter compensation with energy and spatial dependent corrections in positron emission tomography

    International Nuclear Information System (INIS)

    Guerin, Bastien

    2010-01-01

    We developed and validated a fast Monte Carlo simulation of PET acquisitions based on the SimSET program modeling accurately the propagation of gamma photons in the patient as well as the block-based PET detector. Comparison of our simulation with another well validated code, GATE, and measurements on two GE Discovery ST PET scanners showed that it models accurately energy spectra (errors smaller than 4.6%), the spatial resolution of block-based PET scanners (6.1%), scatter fraction (3.5%), sensitivity (2.3%) and count rates (12.7%). Next, we developed a novel scatter correction incorporating the energy and position of photons detected in list-mode. Our approach is based on the reformulation of the list-mode likelihood function containing the energy distribution of detected coincidences in addition to their spatial distribution, yielding an EM reconstruction algorithm containing spatial and energy dependent correction terms. We also proposed using the energy in addition to the position of gamma photons in the normalization of the scatter sinogram. Finally, we developed a method for estimating primary and scatter photons energy spectra from total spectra detected in different sectors of the PET scanner. We evaluated the accuracy and precision of our new spatio-spectral scatter correction and that of the standard spatial correction using realistic Monte Carlo simulations. These results showed that incorporating the energy in the scatter correction reduces bias in the estimation of the absolute activity level by ∼ 60% in the cold regions of the largest patients and yields quantification errors less than 13% in all regions. (author)

  20. Bias correction for the least squares estimator of Weibull shape parameter with complete and censored data

    International Nuclear Information System (INIS)

    Zhang, L.F.; Xie, M.; Tang, L.C.

    2006-01-01

    Estimation of the Weibull shape parameter is important in reliability engineering. However, commonly used methods such as the maximum likelihood estimation (MLE) and the least squares estimation (LSE) are known to be biased. Bias correction methods for MLE have been studied in the literature. This paper investigates the methods for bias correction when model parameters are estimated with LSE based on the probability plot. The Weibull probability plot is very simple and commonly used by practitioners and hence such a study is useful. The bias of the LS shape parameter estimator for multiple censored data is also examined. It is found that the bias can be modeled as a function of the sample size and the censoring level, and is mainly dependent on the latter. A simple bias function is introduced and bias correcting formulas are proposed for both complete and censored data. Simulation results are also presented. The bias correction methods proposed are very easy to use and they can typically reduce the bias of the LSE of the shape parameter to less than half a percent.
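
    A minimal sketch of the least squares (probability plot) shape estimate and of how the bias can be quantified by simulation for complete data. The median-rank plotting position and the Monte Carlo calibration below are generic illustrations; the paper's own bias function of sample size and censoring level is not reproduced here.

```python
# Least squares estimate of the Weibull shape parameter from a probability
# plot, and an empirical bias ratio obtained by Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(7)

def lse_shape(sample):
    """Least squares slope of the Weibull probability plot (complete data)."""
    t = np.sort(sample)
    n = t.size
    f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's median ranks
    x, y = np.log(t), np.log(-np.log(1.0 - f))
    slope, _ = np.polyfit(x, y, 1)
    return slope

true_shape, n = 2.0, 20
estimates = [lse_shape(rng.weibull(true_shape, n)) for _ in range(2000)]
bias_ratio = np.mean(estimates) / true_shape
print(f"mean LSE shape = {np.mean(estimates):.3f}, bias ratio g(n=20) = {bias_ratio:.3f}")
# A bias-corrected estimate would divide a new LSE estimate by g, where g is
# tabulated or fitted as a function of sample size and censoring level.
```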

  1. Relativistic corrections to η{sub c}-pair production in high energy proton–proton collisions

    Energy Technology Data Exchange (ETDEWEB)

    Martynenko, A.P., E-mail: a.p.martynenko@samsu.ru [Samara State University, Pavlov Street 1, 443011, Samara (Russian Federation); Samara State Aerospace University named after S.P. Korolyov, Moskovskoye Shosse 34, 443086, Samara (Russian Federation); Trunin, A.M., E-mail: amtrnn@gmail.com [Samara State Aerospace University named after S.P. Korolyov, Moskovskoye Shosse 34, 443086, Samara (Russian Federation)

    2013-06-10

    On the basis of perturbative QCD and the relativistic quark model we calculate relativistic corrections to the double η{sub c} meson production in proton–proton interactions at LHC energies. Relativistic terms in the production amplitude connected with the relative motion of heavy quarks and the transformation law of the bound state wave functions to the reference frame of moving charmonia are taken into account. For the gluon and quark propagators entering the amplitude we use a truncated expansion in relative quark momenta up to the second order. Relativistic corrections to the quark bound state wave functions are considered by means of the Breit-like potential. It turns out that the examined effects decrease the total non-relativistic cross section by more than a factor of two, and by 20 percent in the rapidity region of the LHCb detector.

  2. A comparison of methods of determining the 100 percent survival of preserved red cells

    International Nuclear Information System (INIS)

    Valeri, C.R.; Pivacek, L.E.; Ouellet, R.; Gray, A.

    1984-01-01

    Studies were done to compare three methods to determine the 100 percent survival value from which to estimate the 24-hour posttransfusion survival of preserved red cells. The following methods using small aliquots of 51 Cr-labeled autologous preserved red cells were evaluated: First, the 125 I-albumin method, which is an indirect measurement of the recipient's red cell volume derived from the plasma volume measured using 125 I-labeled albumin and the total body hematocrit. Second, the body surface area method (BSA) in which the recipient's red cell volume is derived from a body surface area nomogram. Third, an extrapolation method, which extrapolates to zero time the radioactivity associated with the red cells in the recipient's circulation from 10 to 20 or 15 to 30 minutes after transfusion. The three methods gave similar results in all studies in which less than 20 percent of the transfused red cells were nonviable (24-hour posttransfusion survival values of between 80-100%), but not when more than 20 percent of the red cells were nonviable. When 21 to 35 percent of the transfused red cells were nonviable (24-hour posttransfusion survivals of 65 to 79%), values with the 125 I-albumin method and the body surface area method were about 5 percent lower (p less than 0.001) than values with the extrapolation method. When greater than 35 percent of the red cells were nonviable (24-hour posttransfusion survival values of less than 65%), values with the 125 I-albumin method and the body surface area method were about 10 percent lower (p less than 0.001) than those obtained by the extrapolation method

  3. 77 FR 25152 - Applications for New Awards; Investing in Innovation Fund, Validation Grants

    Science.gov (United States)

    2012-04-27

    ... DEPARTMENT OF EDUCATION Applications for New Awards; Investing in Innovation Fund, Validation Grants Correction In notice document 2012-7365 appearing on pages 18229-18242 in the issue of Tuesday, March 27, 2012 make the following corrections: 1. On page 18238 in the second column, in the second...

  4. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
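
    A sketch of a permutation test comparing the c-statistic of a linear predictor between a development and a validation sample: set membership is repeatedly shuffled and the observed difference in c-statistics is compared with the resulting null distribution. This illustrates the general idea of such a test on simulated data; it is not necessarily the exact procedure evaluated in the paper.

```python
# Permutation test for a difference in c-statistics between two settings.
import numpy as np

rng = np.random.default_rng(3)

def c_statistic(lp, y):
    """Probability that an event has a higher linear predictor than a non-event."""
    pos, neg = lp[y == 1], lp[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (pos.size * neg.size)

def simulate(n, beta):
    """One predictor; outcomes generated with the given (true) effect size."""
    x = rng.normal(size=n)
    p = 1.0 / (1.0 + np.exp(-(beta * x - 0.5)))
    return x, (rng.uniform(size=n) < p).astype(int)

x_dev, y_dev = simulate(300, beta=1.0)    # development setting
x_val, y_val = simulate(300, beta=0.6)    # weaker predictor effects at validation

observed = c_statistic(x_dev, y_dev) - c_statistic(x_val, y_val)

x_all, y_all = np.concatenate([x_dev, x_val]), np.concatenate([y_dev, y_val])
null = []
for _ in range(1000):
    idx = rng.permutation(x_all.size)
    a, b = idx[:300], idx[300:]
    null.append(c_statistic(x_all[a], y_all[a]) - c_statistic(x_all[b], y_all[b]))

p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed c-statistic difference = {observed:.3f}, permutation p = {p_value:.3f}")
```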

  5. An Experimental Evaluation of Blockage Corrections for Current Turbines

    Science.gov (United States)

    Ross, Hannah; Polagye, Brian

    2017-11-01

    Flow confinement has been shown to significantly alter the performance of turbines that extract power from water currents. These performance effects are related to the degree of constraint, defined by the ratio of turbine projected area to channel cross-sectional area. This quantity is referred to as the blockage ratio. Because it is often desirable to adjust experimental observations in water channels to unconfined conditions, analytical corrections for both wind and current turbines have been derived. These are generally based on linear momentum actuator disk theory but have been applied to turbines without experimental validation. This work tests multiple blockage corrections on performance and thrust data from a cross-flow turbine and porous plates (experimental analogues to actuator disks) collected in laboratory flumes at blockage ratios ranging between 10 and 35%. To isolate the effects of blockage, the Reynolds number, Froude number, and submergence depth were held constant while the channel width was varied. Corrected performance data are compared to performance in a towing tank at a blockage ratio of less than 5%. In addition to examining the accuracy of each correction, underlying assumptions are assessed to determine why some corrections perform better than others. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082 and the Naval Facilities Engineering Command (NAVFAC).
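
    As a sketch of how such corrections are commonly applied once an actuator-disk analysis has produced an equivalent freestream velocity, the snippet below rescales measured thrust and power coefficients by powers of the velocity ratio. The coefficient values and the ratio are placeholders; deriving the ratio from a particular correction model is exactly the step the study evaluates experimentally.

```python
# Rescale confined-channel turbine coefficients to equivalent unconfined flow.
def apply_blockage_correction(cp_measured, ct_measured, u_ratio):
    """Rescale coefficients to unconfined conditions.

    u_ratio = U_tunnel / U_equivalent_freestream (typically < 1 at high blockage).
    """
    return cp_measured * u_ratio ** 3, ct_measured * u_ratio ** 2

cp_c, ct_c = apply_blockage_correction(cp_measured=0.45, ct_measured=0.85,
                                       u_ratio=0.93)
print(f"corrected C_P = {cp_c:.3f}, corrected C_T = {ct_c:.3f}")
```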

  6. Data validation report for the 100-D Ponds Operable Unit: 100-D ponds sampling

    International Nuclear Information System (INIS)

    Stankovich, M.T.

    1994-01-01

    Westinghouse-Hanford has requested that 100 percent of the Sample Delivery Groups be validated for the 100-D Ponds Operable Unit Sampling Investigation. Therefore the data from the chemical analysis of all 30 samples from this sampling event and their related quality assurance samples were reviewed and validated to verify that reported sample results were of sufficient quality to support decisions regarding remedial actions performed at this site

  7. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  8. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Orthognathic surgery is performed to correct the misalignment of jaws ...

  9. Site characterization and validation - validation drift fracture data, stage 4

    International Nuclear Information System (INIS)

    Bursey, G.; Gale, J.; MacLeod, R.; Straahle, A.; Tiren, S.

    1991-08-01

    This report describes the mapping procedures and the data collected during fracture mapping in the validation drift. Fracture characteristics examined include orientation, trace length, termination mode, and fracture minerals. These data have been compared and analysed together with fracture data from the D-boreholes to determine the adequacy of the borehole mapping procedures and to assess the nature and degree of orientation bias in the borehole data. The analysis of the validation drift data also includes a series of corrections to account for orientation, truncation, and censoring biases. This analysis has identified at least 4 geologically significant fracture sets in the rock mass defined by the validation drift. An analysis of the fracture orientations in both the good rock and the H-zone has defined groups of 7 clusters and 4 clusters, respectively. Subsequent analysis of the fracture patterns in five consecutive sections along the validation drift further identified heterogeneity through the rock mass, with respect to fracture orientations. These results are in stark contrast to the results from the D-borehole analysis, where a strong orientation bias resulted in a consistent pattern of measured fracture orientations through the rock. In the validation drift, fractures in the good rock also display a greater mean variance in length than those in the H-zone. These results provide strong support for a distinction being made between fractures in the good rock and the H-zone, and possibly between different areas of the good rock itself, for discrete modelling purposes. (au) (20 refs.)

  10. Digital correction of magnification in pelvic x rays for preoperative planning of hip joint replacements: Theoretical development and clinical results of a new protocol

    International Nuclear Information System (INIS)

    The, B.; Diercks, R.L.; Stewart, R.E.; Ooijen, P.M.A. van; Horn, J.R. van

    2005-01-01

    The introduction of digital radiological facilities leads to the necessity of digital preoperative planning, which is an essential part of joint replacement surgery. To avoid errors in the preparation and execution of hip surgery, reliable correction of the magnification of the projected hip is a prerequisite. So far, no validated method exists to accomplish this. We present validated geometrical models of the x-ray projection of spheres, relevant for the calibration procedure to correct for the radiographic magnification. With the help of these models a new calibration protocol was developed. The validity and precision of this procedure were determined in clinical practice. Magnification factors could be predicted with a maximal margin of error of 1.5%. The new calibration protocol is valid and reliable. The clinical tests revealed that correction of magnification has a 95% margin of error of -3% to +3%. Future research might clarify if a strict calibration protocol, as presented in this study, results in more accurate preoperative planning of hip joint replacements
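
    For illustration only (a simple similar-triangles view that ignores the sphere-projection geometry modelled in the paper), the sketch below scales a measurement taken on the digital image back to anatomical size using a calibration sphere of known diameter; all numbers are hypothetical.

      def magnification_factor(projected_diameter_mm, true_diameter_mm):
          """Magnification implied by the projected size of a calibration sphere."""
          return projected_diameter_mm / true_diameter_mm

      def true_size(measured_on_image_mm, magnification):
          """Scale a distance measured on the image back to anatomical size."""
          return measured_on_image_mm / magnification

      m = magnification_factor(projected_diameter_mm=30.5, true_diameter_mm=25.0)
      print(f"magnification factor: {m:.3f}")                 # ~1.22
      print(f"femoral head diameter: {true_size(58.0, m):.1f} mm")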

  11. Reduction of density-modification bias by β correction

    International Nuclear Information System (INIS)

    Skubák, Pavol; Pannu, Navraj S.

    2011-01-01

    A cross-validation-based method for bias reduction in ‘classical’ iterative density modification of experimental X-ray crystallography maps provides significantly more accurate phase-quality estimates and leads to improved automated model building. Density modification often suffers from an overestimation of phase quality, as seen by escalated figures of merit. A new cross-validation-based method to address this estimation bias by applying a bias-correction parameter ‘β’ to maximum-likelihood phase-combination functions is proposed. In tests on over 100 single-wavelength anomalous diffraction data sets, the method is shown to produce much more reliable figures of merit and improved electron-density maps. Furthermore, significantly better results are obtained in automated model building iterated with phased refinement using the more accurate phase probability parameters from density modification

  12. 78 FR 54278 - Proposed Information Collection; Safety Defects; Examination, Correction and Records, (Pertains...

    Science.gov (United States)

    2013-09-03

    ... other unfired pressure vessels must be inspected by inspectors holding a valid National Board Commission... unfired pressure vessels have caused injuries and fatalities in the mining industry. Records of... Defects; Examination, Correction and Records. DATES: All comments must be postmarked or received by...

  13. Towards natural language question generation for the validation of ontologies and mappings.

    Science.gov (United States)

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction are performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mapping validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.

  14. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D).

    Science.gov (United States)

    van de Streek, Jacco; Neumann, Marcus A

    2014-12-01

    In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published in an IUCr journal were energy-minimized with DFT-D and compared to the SX benchmark. The on average slightly less accurate atomic coordinates of XRPD structures do lead to systematically higher root mean square Cartesian displacement (RMSCD) values upon energy minimization than for SX structures, but the RMSCD value is still a good indicator for the detection of structures that deserve a closer look. The upper RMSCD limit for a correct structure must be increased from 0.25 Å for SX structures to 0.35 Å for XRPD structures; the grey area must be extended from 0.30 to 0.40 Å. Based on the energy minimizations, three structures are re-refined to give more precise atomic coordinates. For six structures our calculations provide the missing positions for the H atoms, for five structures they provide corrected positions for some H atoms. Seven crystal structures showed a minor error for a non-H atom. For five structures the energy minimizations suggest a higher space-group symmetry. For the 225 SX structures, the only deviations observed upon energy minimization were three minor H-atom related issues. Preferred orientation is the most important cause of problems. A preferred-orientation correction is the only correction where the experimental data are modified to fit the model. We conclude that molecular crystal structures determined from powder diffraction data that are published in IUCr journals are of high quality, with less than 4% containing an error in a non-H atom.
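
    A minimal sketch of the RMSCD metric used above, assuming the experimental and energy-minimized coordinates of the non-H atoms are already expressed in the same Cartesian frame; the coordinates shown are made up.

      import numpy as np

      def rmscd(coords_experimental, coords_minimized):
          """Root mean square Cartesian displacement, in the units of the input (Å)."""
          diff = np.asarray(coords_experimental) - np.asarray(coords_minimized)
          return np.sqrt((diff ** 2).sum(axis=1).mean())

      expt = np.array([[0.00, 0.00, 0.00], [1.54, 0.00, 0.00], [2.05, 1.45, 0.10]])
      mini = np.array([[0.02, -0.01, 0.00], [1.52, 0.03, -0.02], [2.10, 1.40, 0.12]])
      print(f"RMSCD = {rmscd(expt, mini):.3f} Å")   # values above ~0.35 Å deserve a closer look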

  15. Functional outcomes and patient satisfaction after laser in situ keratomileusis for correction of myopia.

    NARCIS (Netherlands)

    Tahzib, N.G.; Bootsma, S.J.; Eggink, F.A.G.J.; Nabar, V.A.; Nuijts, R.M.

    2005-01-01

    PURPOSE: To determine subjective patient satisfaction and self-perceived quality of vision after laser in situ keratomileusis (LASIK) to correct myopia and myopic astigmatism. SETTING: Department of Ophthalmology, Academic Hospital Maastricht, Maastricht, The Netherlands. METHODS: A validated

  16. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(d^n-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
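
    To make the coherent-versus-Pauli distinction concrete, here is a single-qubit toy comparison (not the repetition-code calculation above): a coherent X-rotation by angle ε is compared with its Pauli-twirled approximation, which keeps only an incoherent X flip with probability sin²(ε/2).

      import numpy as np

      eps = 0.1                                            # rotation angle (radians)
      X = np.array([[0, 1], [1, 0]], dtype=complex)
      I2 = np.eye(2, dtype=complex)
      U = np.cos(eps / 2) * I2 - 1j * np.sin(eps / 2) * X  # coherent error exp(-i*eps*X/2)

      rho = np.array([[1, 0], [0, 0]], dtype=complex)      # qubit prepared in |0>
      rho_coherent = U @ rho @ U.conj().T
      p_flip = np.sin(eps / 2) ** 2
      rho_pauli = (1 - p_flip) * rho + p_flip * (X @ rho @ X)

      # Both channels give the same flip probability (diagonal), but only the
      # coherent error keeps off-diagonal terms that can add up over many cycles.
      print(np.round(rho_coherent, 4))
      print(np.round(rho_pauli, 4))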

  17. Topographic Correction Module at Storm (TC@Storm)

    Science.gov (United States)

    Zaksek, K.; Cotar, K.; Veljanovski, T.; Pehani, P.; Ostir, K.

    2015-04-01

    Different solar positions in combination with terrain slope and aspect result in different illumination of inclined surfaces. Therefore, the retrieved satellite data cannot be accurately transformed to the spectral reflectance, which depends only on the land cover. The topographic correction should remove this effect and enable further automatic processing of higher level products. The topographic correction TC@STORM was developed as a module within the SPACE-SI automatic near-real-time image processing chain STORM. It combines a physical approach with the standard Minnaert method. The total irradiance is modelled as a three-component irradiance: direct (dependent on incidence angle, sun zenith angle and slope), diffuse from the sky (dependent mainly on sky-view factor), and diffuse reflected from the terrain (dependent on sky-view factor and albedo). For computation of diffuse irradiation from the sky we assume an anisotropic brightness of the sky. We iteratively estimate a linear combination of 10 different models to provide the best results. Depending on the data resolution, we mask shadows based on radiometric (image) or geometric properties. The method was tested on RapidEye, Landsat 8, and PROBA-V data. Final results of the correction were evaluated and statistically validated based on various topography settings and land cover classes. Images show great improvements in shaded areas.
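
    As a generic illustration of the ingredients named above (not the TC@STORM implementation), the sketch below computes the local illumination angle from slope, aspect and sun position and applies a simplified Minnaert-style normalization; the Minnaert constant k would normally be fitted per band and land-cover class.

      import numpy as np

      def local_illumination(slope, aspect, sun_zenith, sun_azimuth):
          """cos(i) on inclined terrain; all angles in radians."""
          return (np.cos(slope) * np.cos(sun_zenith)
                  + np.sin(slope) * np.sin(sun_zenith) * np.cos(sun_azimuth - aspect))

      def minnaert_correct(reflectance, cos_i, sun_zenith, k=0.5):
          """Normalize observed reflectance to flat-terrain illumination."""
          return reflectance * (np.cos(sun_zenith) / np.clip(cos_i, 1e-3, None)) ** k

      cos_i = local_illumination(slope=np.radians(25), aspect=np.radians(180),
                                 sun_zenith=np.radians(40), sun_azimuth=np.radians(160))
      print(f"cos(i) = {cos_i:.3f}")
      print(f"corrected reflectance: {minnaert_correct(0.18, cos_i, np.radians(40)):.3f}")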

  18. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    Science.gov (United States)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed
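
    For context, here is a minimal sketch of the per-variable quantile-mapping idea behind the "independent" correction compared above, using made-up gamma-distributed daily precipitation; correcting each variable separately like this leaves the precipitation-temperature correlation untouched, which is exactly what the joint method addresses.

      import numpy as np

      def quantile_map(values, simulated_hist, observed_hist, n_quantiles=100):
          """Map simulated values onto the observed distribution via matching quantiles."""
          probs = np.linspace(0.0, 1.0, n_quantiles)
          sim_q = np.quantile(simulated_hist, probs)
          obs_q = np.quantile(observed_hist, probs)
          return np.interp(values, sim_q, obs_q)

      rng = np.random.default_rng(1)
      obs = rng.gamma(shape=2.0, scale=3.0, size=5000)   # observed daily precipitation (mm)
      sim = rng.gamma(shape=2.0, scale=4.0, size=5000)   # biased model precipitation (mm)
      corrected = quantile_map(sim, sim, obs)
      print(f"mean bias before: {sim.mean() - obs.mean():+.2f} mm,"
            f" after: {corrected.mean() - obs.mean():+.2f} mm")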

  19. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed substantial decrease of the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregular shapes of regions containing contrast medium was developed for wider applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in clinical setting. (orig.)

  20. Age-specific association between percent body fat and pulmonary ...

    African Journals Online (AJOL)

    This study describes the association between percent body fat and pulmonary function (tidal volume) among twenty apparently normal male children aged 4 years and twenty male children aged 10 years in Ogbomoso. The mean functional residual capacity of the lung in male children aged 10 years was significantly higher ...

  1. Identification of a novel percent mammographic density locus at 12q24.

    Science.gov (United States)

    Stevens, Kristen N; Lindstrom, Sara; Scott, Christopher G; Thompson, Deborah; Sellers, Thomas A; Wang, Xianshu; Wang, Alice; Atkinson, Elizabeth; Rider, David N; Eckel-Passow, Jeanette E; Varghese, Jajini S; Audley, Tina; Brown, Judith; Leyland, Jean; Luben, Robert N; Warren, Ruth M L; Loos, Ruth J F; Wareham, Nicholas J; Li, Jingmei; Hall, Per; Liu, Jianjun; Eriksson, Louise; Czene, Kamila; Olson, Janet E; Pankratz, V Shane; Fredericksen, Zachary; Diasio, Robert B; Lee, Adam M; Heit, John A; DeAndrade, Mariza; Goode, Ellen L; Vierkant, Robert A; Cunningham, Julie M; Armasu, Sebastian M; Weinshilboum, Richard; Fridley, Brooke L; Batzler, Anthony; Ingle, James N; Boyd, Norman F; Paterson, Andrew D; Rommens, Johanna; Martin, Lisa J; Hopper, John L; Southey, Melissa C; Stone, Jennifer; Apicella, Carmel; Kraft, Peter; Hankinson, Susan E; Hazra, Aditi; Hunter, David J; Easton, Douglas F; Couch, Fergus J; Tamimi, Rulla M; Vachon, Celine M

    2012-07-15

    Percent mammographic density adjusted for age and body mass index (BMI) is one of the strongest risk factors for breast cancer and has a heritable component that remains largely unidentified. We performed a three-stage genome-wide association study (GWAS) of percent mammographic density to identify novel genetic loci associated with this trait. In stage 1, we combined three GWASs of percent density comprised of 1241 women from studies at the Mayo Clinic and identified the top 48 loci (99 single nucleotide polymorphisms). We attempted replication of these loci in 7018 women from seven additional studies (stage 2). The meta-analysis of stage 1 and 2 data identified a novel locus, rs1265507 on 12q24, associated with percent density, adjusting for age and BMI (P = 4.43 × 10^-8). We refined the 12q24 locus with 459 additional variants (stage 3) in a combined analysis of all three stages (n = 10 377) and confirmed that rs1265507 has the strongest association in the 12q24 region (P = 1.03 × 10^-8). Rs1265507 is located between the genes TBX5 and TBX3, which are members of the phylogenetically conserved T-box gene family and encode transcription factors involved in developmental regulation. Understanding the mechanism underlying this association will provide insight into the genetics of breast tissue composition.

  2. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
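
    As a toy illustration of the one-time-pad ingredient described above (not the Cascade protocol itself), the sketch below shows Alice announcing an encrypted block parity that Bob can decrypt with a pre-shared pad bit, so the reconciliation step reveals nothing extra about the raw key.

      import secrets

      def parity(bits):
          return sum(bits) % 2

      # Pre-shared secret pad: one fresh bit per parity to be announced.
      pad_bit = secrets.randbelow(2)

      alice_block = [1, 0, 1, 1, 0, 1, 0, 0]
      bob_block   = [1, 0, 1, 0, 0, 1, 0, 0]   # one transmission error in Bob's copy

      announced = parity(alice_block) ^ pad_bit   # what an eavesdropper sees
      decrypted = announced ^ pad_bit             # Bob removes the pad

      # A parity mismatch tells Bob the block contains an odd number of errors,
      # which would trigger a bisective search in a Cascade-style protocol.
      print("parity mismatch:", decrypted != parity(bob_block))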

  3. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof

  4. Magnetic corrections to π -π scattering lengths in the linear sigma model

    Science.gov (United States)

    Loewe, M.; Monje, L.; Zamora, R.

    2018-03-01

    In this article, we consider the magnetic corrections to π-π scattering lengths in the frame of the linear sigma model. For this, we consider all the one-loop corrections in the s, t, and u channels, associated to the insertion of a Schwinger propagator for charged pions, working in the region of small values of the magnetic field. Our calculation relies on an appropriate expansion for the propagator. It turns out that the leading scattering length, l = 0 in the S channel, increases for an increasing value of the magnetic field, in the isospin I = 2 case, whereas the opposite effect is found for the I = 0 case. The isospin symmetry is valid because the insertion of the magnetic field occurs through the absolute value of the electric charges. The channel I = 1 does not receive any corrections. These results, for the channels I = 0 and I = 2, are opposite with respect to the thermal corrections found previously in the literature.

  5. Measuring Coverage in MNCH: A Prospective Validation Study in Pakistan and Bangladesh on Measuring Correct Treatment of Childhood Pneumonia

    Science.gov (United States)

    el Arifeen, Shams; Khan, Amira M.; Huque, M. Hamidul; Kazmi, Narjis; Roy, Sushmita; Abbasi, Saleem; Rahman, Qazi Sadeq-ur; Theodoratou, Evropi; Khorshed, Mahmuda Shayema; Rahman, Kazi Mizanur; Bari, Sanwarul; Kaiser, M. Mahfuzul Islam; Saha, Samir K.; Ahmed, A. S. M. Nawshad Uddin; Rudan, Igor; Bryce, Jennifer; Qazi, Shamim Ahmad; Campbell, Harry

    2013-01-01

    Background Antibiotic treatment for pneumonia as measured by Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS) is a key indicator for tracking progress in achieving Millennium Development Goal 4. Concerns about the validity of this indicator led us to perform an evaluation in urban and rural settings in Pakistan and Bangladesh. Methods and Findings Caregivers of 950 children under 5 y with pneumonia and 980 with “no pneumonia” were identified in urban and rural settings and allocated for DHS/MICS questions 2 or 4 wk later. Study physicians assigned a diagnosis of pneumonia as reference standard; the predictive ability of DHS/MICS questions and additional measurement tools to identify pneumonia versus non-pneumonia cases was evaluated. Results at both sites showed suboptimal discriminative power, with no difference between 2- or 4-wk recall. Individual patterns of sensitivity and specificity varied substantially across study sites (sensitivity 66.9% and 45.5%, and specificity 68.8% and 69.5%, for DHS in Pakistan and Bangladesh, respectively). Prescribed antibiotics for pneumonia were correctly recalled by about two-thirds of caregivers using DHS questions, increasing to 72% and 82% in Pakistan and Bangladesh, respectively, using a drug chart and detailed enquiry. Conclusions Monitoring antibiotic treatment of pneumonia is essential for national and global programs. Current (DHS/MICS questions) and proposed new (video and pneumonia score) methods of identifying pneumonia based on maternal recall discriminate poorly between pneumonia and children with cough. Furthermore, these methods have a low yield to identify children who have true pneumonia. Reported antibiotic treatment rates among these children are therefore not a valid proxy indicator of pneumonia treatment rates. These results have important implications for program monitoring and suggest that data in its current format from DHS/MICS surveys should not be used for the

  6. SERIAL PERCENT-FREE PSA IN COMBINATION WITH PSA FOR POPULATION-BASED EARLY DETECTION OF PROSTATE CANCER

    Science.gov (United States)

    Ankerst, Donna Pauler; Gelfond, Jonathan; Goros, Martin; Herrera, Jesus; Strobl, Andreas; Thompson, Ian M.; Hernandez, Javier; Leach, Robin J.

    2016-01-01

    PURPOSE To characterize the diagnostic properties of serial percent-free prostate-specific antigen (PSA) in relation to PSA in a multi-ethnic, multi-racial cohort of healthy men. MATERIALS AND METHODS 6,982 percent-free PSA and PSA measures were obtained from participants in a 12+ year Texas screening study comprising 1625 men who never underwent biopsy, 497 who underwent one or more biopsies negative for prostate cancer, and 61 diagnosed with prostate cancer. The area under the receiver operating characteristic curve (AUC) for percent-free PSA and the proportion of patients with fluctuating values across multiple visits were evaluated according to two thresholds (under 15% versus under 25%). The proportion of cancer cases where percent-free PSA indicated a positive test before PSA > 4 ng/mL did and the number of negative biopsies that would have been spared by percent-free PSA testing negative were computed. RESULTS Percent-free PSA commonly fluctuated around its threshold across visits. Percent-free PSA tested positive earlier than PSA in 71.4% (34.2%) of cancer cases, and among men with multiple negative biopsies and a PSA > 4 ng/mL, percent-free PSA would have tested negative in 31.6% (65.8%) of instances. CONCLUSIONS Percent-free PSA should accompany PSA testing in order to potentially spare unnecessary biopsies or detect cancer earlier. When near the threshold, both tests should be repeated due to commonly observed fluctuation. PMID:26979652

  7. Generalized Second Law of Thermodynamics in Wormhole Geometry with Logarithmic Correction

    International Nuclear Information System (INIS)

    Faiz-ur-Rahman; Salahuddin; Akbar, M.

    2011-01-01

    We construct various cases for validity of the generalized second law (GSL) of thermodynamics by assuming the logarithmic correction to the horizon entropy of an evolving wormhole. It is shown that the GSL is always respected for α₀ ≤ 0, whereas for α₀ > 0 the GSL is respected only if πr²_{A+}/ℏ < α₀. (general)

  8. Development of a Detailed Volumetric Finite Element Model of the Spine to Simulate Surgical Correction of Spinal Deformities

    Directory of Open Access Journals (Sweden)

    Mark Driscoll

    2013-01-01

    Full Text Available A large spectrum of medical devices exists that aim to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient and relevant published data of intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses respected published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated with patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices.

  9. Radar Rainfall Bias Correction based on Deep Learning Approach

    Science.gov (United States)

    Song, Yang; Han, Dawei; Rico-Ramirez, Miguel A.

    2017-04-01

    Radar rainfall measurement errors can be attributed in large part to various sources including intricate synoptic regimes. Temperature, humidity and wind are typically acknowledged as critical meteorological factors in inducing the precipitation discrepancies aloft and on the ground. Conventional practice mainly uses radar-gauge or geostatistical techniques with direct weighted interpolation algorithms as bias correction schemes, while rarely considering atmospheric effects. This study aims to comprehensively quantify those meteorological elements' impacts on radar-gauge rainfall bias correction based on a deep learning approach. The deep learning approach employs deep convolutional neural networks to automatically extract three-dimensional meteorological features for target recognition based on high range resolution profiles. The complex nonlinear relationships between input and target variables can be implicitly detected by such a scheme, which is validated on the test dataset. The proposed bias correction scheme is expected to be a promising improvement in systematically minimizing the synthesized atmospheric effects on rainfall discrepancies between radar and rain gauges, which can be useful in many meteorological and hydrological applications (e.g., real-time flood forecasting) especially for regions with complex atmospheric conditions.
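
    A hypothetical sketch (PyTorch) of the kind of convolutional regression such an approach implies: patches of radar rainfall plus co-located meteorological fields go in, and a gauge rainfall estimate comes out. Channel names, shapes and architecture are illustrative assumptions, not the study's model.

      import torch
      import torch.nn as nn

      class RadarBiasCorrector(nn.Module):
          def __init__(self, in_channels=4):   # e.g. radar rain, temperature, humidity, wind
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.head = nn.Linear(32, 1)     # gauge rainfall at the patch centre

          def forward(self, x):
              return self.head(self.features(x).flatten(1))

      model = RadarBiasCorrector()
      patch = torch.randn(8, 4, 16, 16)        # a batch of 8 hypothetical patches
      print(model(patch).shape)                # torch.Size([8, 1])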

  10. Methods of correcting Anger camera deadtime losses

    International Nuclear Information System (INIS)

    Sorenson, J.A.

    1976-01-01

    Three different methods of correcting for Anger camera deadtime loss were investigated. These included analytic methods (mathematical modeling), the marker-source method, and a new method based on counting "pileup" events appearing in a pulseheight analyzer window positioned above the photopeak of interest. The studies were done with 99mTc on a Searle Radiographics camera with a measured deadtime of about 6 μsec. Analytic methods were found to be unreliable because of unpredictable changes in deadtime with changes in radiation scattering conditions. Both the marker-source method and the pileup-counting method were found to be accurate to within a few percent for true counting rates of up to about 200 K cps, with the pileup-counting method giving better results. This finding applied to sources at depths ranging up to 10 cm of pressed wood. The relative merits of the two methods are discussed
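
    For orientation, here is a sketch of the analytic ("mathematical modeling") approach mentioned above using the standard non-paralyzable dead-time formula; the abstract's point is that such models proved unreliable when scatter conditions changed, so this is purely illustrative.

      def true_rate_nonparalyzable(observed_cps, dead_time_s):
          """Recover the true count rate n from the observed rate m: n = m / (1 - m*tau)."""
          return observed_cps / (1.0 - observed_cps * dead_time_s)

      observed = 90_000        # counts per second registered by the camera
      tau = 6e-6               # ~6 microsecond dead time, as measured in the study
      print(f"estimated true rate: {true_rate_nonparalyzable(observed, tau):,.0f} cps")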

  11. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections

    International Nuclear Information System (INIS)

    Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi

    2017-01-01

    Various kinds of fringe order errors may occur in the absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel-by-pixel with repeating searches, which is inefficient for applications. To improve the efficiency of multiple successive fringe order corrections, in this paper we propose a method to simplify the error detection and correction by the stepwise increasing property of fringe order. In the proposed method, the numbers of pixels in each step are estimated to find the possible true fringe order values, repeating the search in detecting multiple successive errors can be avoided for efficient error correction. The effectiveness of our proposed method is validated by experimental results. (paper)

  12. Development and validation of in vitro-in vivo correlation (IVIVC) for estradiol transdermal drug delivery systems.

    Science.gov (United States)

    Yang, Yang; Manda, Prashanth; Pavurala, Naresh; Khan, Mansoor A; Krishnaiah, Yellela S R

    2015-07-28

    The objective of this study was to develop a level A in vitro-in vivo correlation (IVIVC) for drug-in-adhesive (DIA) type estradiol transdermal drug delivery systems (TDDS). In vitro drug permeation studies across human skin were carried out to obtain the percent of estradiol permeation from marketed products. The in vivo time versus plasma concentration data of three estradiol TDDS at drug loadings of 2.0, 3.8 and 7.6mg (delivery rates of 25, 50 and 100μg/day, respectively) was deconvoluted using Wagner-Nelson method to obtain percent of in vivo drug absorption in postmenopausal women. The IVIVC between the in vitro percent of drug permeation (X) and in vivo percent of drug absorption (Y) for these three estradiol TDDS was constructed using GastroPlus® software. There was a high correlation (R(2)=1.0) with a polynomial regression of Y=-0.227X(2)+0.331X-0.001. These three estradiol TDDS were used for internal validation whereas another two products of the same formulation design (with delivery rates of 60 and 100μg/day) were used for external validation. The predicted estradiol serum concentrations (convoluted from in vitro skin permeation data) were compared with the observed serum concentrations for the respective products. The developed IVIVC model passed both the internal and external validations as the prediction errors (%PE) for Cmax and AUC were less than 15%. When another marketed estradiol TDDS with a delivery rate of 100μg/day but with a slight variation in formulation design was chosen, it did not pass external validation indicating the product-specific nature of IVIVC model. Results suggest that the IVIVC model developed in this study can be used to successfully predict the in vivo performance of the same estradiol TDDS with in vivo delivery rates ranging from 25 to 100μg/day. Published by Elsevier B.V.
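
    A small worked example of the validation criterion used above: the percent prediction error (%PE) between observed and IVIVC-predicted Cmax and AUC, which must stay below 15% for the model to pass. The numbers here are made up.

      def percent_prediction_error(observed, predicted):
          return abs(observed - predicted) / observed * 100.0

      pe_cmax = percent_prediction_error(observed=52.0, predicted=48.5)      # pg/mL
      pe_auc = percent_prediction_error(observed=3780.0, predicted=3510.0)   # pg*h/mL
      print(f"%PE Cmax = {pe_cmax:.1f}%, %PE AUC = {pe_auc:.1f}%")           # both < 15% -> passes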

  13. Validation of hospital discharge diagnoses for hypertensive disorders of pregnancy

    DEFF Research Database (Denmark)

    Møller Luef, Birgitte; Andersen, Louise B; Renault, Kristina Martha

    2016-01-01

    INTRODUCTION: A correct diagnosis of preeclampsia and gestational hypertension is important for treatment and epidemiological studies. Changes in diagnostic criteria and underreporting in certain subsets of patients may hamper validity of the diagnoses. MATERIALS AND METHODS: We validated....... After validation, significantly more patients fulfilled criteria for diagnosis of preeclampsia (n = 163, 7.5%, p = 0.002); more had severe preeclampsia, 14 (0.6%) vs. 70 (3.2%), p hypertension, 62 (2.9%) vs. 46 (2.1%), p = 0.12. The diagnostic sensitivity for preeclampsia...... of hypertensive disorders in pregnancy for research purposes....

  14. How valid are commercially available medical simulators?

    Directory of Open Access Journals (Sweden)

    Stunt JJ

    2014-10-01

    Full Text Available JJ Stunt,1 PH Wulms,2 GM Kerkhoffs,1 J Dankelman,2 CN van Dijk,1 GJM Tuijthof1,2 1Orthopedic Research Center Amsterdam, Department of Orthopedic Surgery, Academic Medical Centre, Amsterdam, the Netherlands; 2Department of Biomechanical Engineering, Faculty of Mechanical, Materials and Maritime Engineering, Delft University of Technology, Delft, the Netherlands Background: Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary: Four hundred and thirty-three commercially available simulators were found, from which 405 (94% were physical models. One hundred and thirty validation studies evaluated 35 (8% commercially available medical simulators for levels of validity ranging from face to predictive validity. Solely simulators that are used for surgical skills training were validated for the highest validity level (predictive validity. Twenty-four (37% simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity. Conclusion: Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of validation depends on the difficulty level of skills training and possible consequences when skills are

  15. COPD characteristics and socioeconomic burden in Hellenic correctional institutions

    Directory of Open Access Journals (Sweden)

    Bania EG

    2016-02-01

    Full Text Available Eleni G Bania,1 Zoe Daniil,1 Chrysa Hatzoglou,1 Evangelos C Alexopoulos,2 Eirini Mitsiki,3 Konstantinos I Gourgoulianis1 1Respiratory Medicine Department, University of Thessaly Medical School, University Hospital of Larissa, Larissa, 2Faculty of Social Sciences, Hellenic Open University, Patras, 3Medical Department, Novartis Hellas, Athens, Greece Background: The high prevalence of smoking (80%) in Greek correctional institutions is anticipated to result in high prevalence of COPD in such settings. Aim: The aim of the Greek obstructive luNg disease epidemiOlogy and health economics Study In corrective institutionS (GNOSIS) is to determine the prevalence of smoking and COPD among inmates and to assess the health-related quality of life. Methods: GNOSIS, a cross-sectional epidemiological study, was conducted between March 2011 and December 2011 in seven correctional institutions in Greece. Results: A total of 552 participants, 91.3% male, median age of 43.0 years (interquartile range: 35–53), were enrolled. COPD prevalence was 6.0% and was found to increase with age (18.6% among those ≥60 years), length of prison stay, and length of sentence. Of the participants diagnosed with COPD, 36.4% were diagnosed with Global initiative for chronic Obstructive Lung Disease (GOLD) stage I and 51.5% were diagnosed with stage II. Dyspnea severity was assessed as grades 0–1 on the medical research council dyspnea scale for 88.3%, while 31% reported ≥2 COPD exacerbations in the past year. Seventy-nine percent of the total number of the participants were smokers, with a median smoking of 20.0 cigarettes per day, while 42.9% were assessed as having a strong addiction to nicotine. The median EuroQol visual analog scale score was 70.0 (interquartile range: 60.0–90.0). Problems in the dimension of anxiety/depression were reported by 82.8%. Conclusion: The results of the study support the notion that the prevalence of COPD among inmates of Greek correctional

  16. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  17. Monte Carlo evaluation of scattering correction methods in 131I studies using pinhole collimator

    International Nuclear Information System (INIS)

    López Díaz, Adlin; San Pedro, Aley Palau; Martín Escuela, Juan Miguel; Rodríguez Pérez, Sunay; Díaz García, Angelina

    2017-01-01

    Scattering is quite important for image activity quantification. In order to study the scattering factors and the efficacy of 3 multiple window energy scatter correction methods during 131I thyroid studies with a pinhole collimator (5 mm hole), a Monte Carlo simulation (MC) was developed. The GAMOS MC code was used to model the gamma camera and the thyroid source geometry. First, to validate the MC gamma camera pinhole-source model, the sensitivity in air and water of the simulated and measured thyroid phantom geometries were compared. Next, simulations to investigate scattering and the result of the triple energy window (TEW), double energy window (DW) and reduced double window (RDW) correction methods were performed for different thyroid sizes and depth thicknesses. The relative discrepancies with respect to the MC true events were evaluated. Results: The accuracy of the GAMOS MC model was verified and validated. The image’s scattering contribution was significant, between 27-40 %. The discrepancies between the 3 multiple window energy correction method results were significant (between 9-86 %). The Reduced Double Window (15%) method provided discrepancies of 9-16 %. Conclusions: For the simulated thyroid geometry with pinhole, the RDW (15 %) was the most effective. (author)
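
    As background, the sketch below shows the standard triple-energy-window (TEW) scatter estimate, one of the multiple-window methods compared above; the window widths and counts are hypothetical.

      def tew_scatter(counts_lower, counts_upper, width_lower_kev, width_upper_kev, width_peak_kev):
          """Trapezoidal estimate of the scatter counts inside the photopeak window."""
          return (counts_lower / width_lower_kev
                  + counts_upper / width_upper_kev) * width_peak_kev / 2.0

      peak_counts = 12_000
      scatter = tew_scatter(counts_lower=500, counts_upper=160,
                            width_lower_kev=6.0, width_upper_kev=6.0, width_peak_kev=73.0)
      primary = peak_counts - scatter
      print(f"scatter estimate: {scatter:.0f} counts, primary: {primary:.0f} counts")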

  18. Verification and validation of RADMODL Version 1.0

    International Nuclear Information System (INIS)

    Kimball, K.D.

    1993-03-01

    RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A), were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident

  19. Verification and validation of RADMODL Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, K.D.

    1993-03-01

    RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A), were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.

  20. Mass-induced sea level variations in the Red Sea from GRACE, steric-corrected altimetry, in situ bottom pressure records, and hydrographic observations

    Science.gov (United States)

    Feng, W.; Lemoine, J.-M.; Zhong, M.; Hsu, H. T.

    2014-08-01

    An annual amplitude of ∼18 cm in mass-induced sea level variations (SLV) in the Red Sea is detected from the Gravity Recovery and Climate Experiment (GRACE) satellites and steric-corrected altimetry from 2003 to 2011. The annual mass variations in the region dominate the mean SLV, and generally reach maximum in late January/early February. The annual steric component of the mean SLV is relatively small compared with the mass-induced SLV. In situ bottom pressure records at the eastern coast of the Red Sea validate the high mass variability observed by steric-corrected altimetry and GRACE. In addition, the horizontal water mass flux of the Red Sea estimated from GRACE and steric-corrected altimetry is validated by hydrographic observations.

  1. Higher order corrections to asymptotic-de Sitter inflation

    Science.gov (United States)

    Mohsenzadeh, M.; Yusofi, E.

    2017-08-01

    Since trans-Planckian considerations can be associated with the re-definition of the initial vacuum, we investigate further the influence of trans-Planckian physics on the spectra produced by the initial quasi-de Sitter (dS) state during inflation. We use the asymptotic-dS mode to study the trans-Planckian correction of the power spectrum to the quasi-dS inflation. The obtained spectra consist of higher order corrections associated with the type of geometry and harmonic terms sensitive to the fluctuations of space-time (or gravitational waves) during inflation. As an important result, the amplitude of the power spectrum is dependent on the choice of c, i.e. the type of space-time in the period of inflation. Also, the results are always valid for any asymptotic dS space-time and particularly coincide with the conventional results for dS and flat space-time.

  2. Implementation of electroweak corrections in the POWHEG BOX: single W production

    CERN Document Server

    Barzè, L; Nason, P; Nicrosini, O; Piccinini, F

    2012-01-01

    We present a fully consistent implementation of electroweak and strong radiative corrections to single W hadroproduction in the POWHEG BOX framework, treating soft and collinear photon emissions on the same ground as coloured parton emissions. This framework can be easily extended to more complex electroweak processes. We describe how next-to-leading order (NLO) electroweak corrections are combined with the NLO QCD calculation, and show how they are interfaced to QCD and QED shower Monte Carlo. The resulting tool fills a gap in the literature and allows to study comprehensively the interplay of QCD and electroweak effects to W production using a single computational framework. Numerical comparisons with the predictions of the electroweak generator HORACE, as well as with existing results on the combination of electroweak and QCD corrections to W production, are shown for the LHC energies, to validate the reliability and accuracy of the approach

  3. An embedded optical tracking system for motion-corrected magnetic resonance imaging at 7T.

    Science.gov (United States)

    Schulz, Jessica; Siegert, Thomas; Reimer, Enrico; Labadie, Christian; Maclaren, Julian; Herbst, Michael; Zaitsev, Maxim; Turner, Robert

    2012-12-01

    Prospective motion correction using data from optical tracking systems has been previously shown to reduce motion artifacts in MR imaging of the head. We evaluate a novel optical embedded tracking system. The home-built optical embedded tracking system performs image processing within a 7 T scanner bore, enabling high speed tracking. Corrected and uncorrected in vivo MR volumes are acquired interleaved using a modified 3D FLASH sequence, and their image quality is assessed and compared. The latency between motion and correction of the slice position was measured to be (19 ± 5) ms, and the tracking noise has a standard deviation no greater than 10 μm/0.005° during conventional MR scanning. Prospective motion correction improved the edge strength by 16 % on average, even though the volunteers were asked to remain motionless during the acquisitions. Using a novel method for validating the effectiveness of in vivo prospective motion correction, we have demonstrated that prospective motion correction using motion data from the embedded tracking system considerably improved image quality.

  4. Validation and correction of rainfall data from the WegenerNet high density network in southeast Austria

    Science.gov (United States)

    O, Sungmin; Foelsche, U.; Kirchengast, G.; Fuchsberger, J.

    2018-01-01

    Eight years of daily rainfall data from WegenerNet were analyzed by comparison with data from Austrian national weather stations. WegenerNet includes 153 ground level weather stations in an area of about 15 km × 20 km in the Feldbach region in southeast Austria. Rainfall has been measured by tipping bucket gauges at 150 stations of the network since the beginning of 2007. Since rain gauge measurements are considered close to true rainfall, there are increasing needs for WegenerNet data for the validation of rainfall data products such as remote sensing based estimates or model outputs. Serving these needs, this paper aims at providing a clearer interpretation of WegenerNet rainfall data for users in hydro-meteorological communities. Five clusters, each consisting of one national weather station and its four closest WegenerNet stations, allowed a close comparison of the datasets between stations. Linear regression analysis and error estimation with statistical indices were conducted to quantitatively evaluate the WegenerNet daily rainfall data. It was found that rainfall data between the stations show good linear relationships with an average correlation coefficient (r) of 0.97, while WegenerNet sensors tend to underestimate rainfall according to the regression slope (0.87). For the five clusters investigated, the bias and relative bias were -0.97 mm d^-1 and -11.5% on average (except data from new sensors). The average of bias and relative bias, however, could be reduced by about 80% through a simple linear regression-slope correction, with the assumption that the underestimation in WegenerNet data was caused by systematic errors. The results from the study have been employed to improve WegenerNet data for user applications so that a new version of the data (v5) is now available at the WegenerNet data portal (www.wegenernet.org).
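
    To illustrate the simple regression-slope correction mentioned above, the sketch below fits the slope between co-located network and reference daily rainfall and divides the network data by it; the values are invented, and the real correction was derived from the full eight-year record.

      import numpy as np

      reference = np.array([2.0, 5.5, 12.3, 0.8, 24.1, 7.7])   # national-station daily rain (mm)
      network = np.array([1.7, 4.9, 10.6, 0.7, 21.2, 6.8])     # co-located WegenerNet gauges (mm)

      slope = np.polyfit(reference, network, deg=1)[0]          # network ≈ slope * reference
      corrected = network / slope

      print(f"regression slope: {slope:.2f}")
      print(f"mean bias before: {(network - reference).mean():+.2f} mm,"
            f" after: {(corrected - reference).mean():+.2f} mm")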

  5. Verification and validation guidelines for high integrity systems. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Hecht, H.; Hecht, M.; Dinsmore, G.; Hecht, S.; Tang, D. [SoHaR, Inc., Beverly Hills, CA (United States)

    1995-03-01

    High integrity systems include all protective (safety and mitigation) systems for nuclear power plants, and also systems for which comparable reliability requirements exist in other fields, such as in the process industries, in air traffic control, and in patient monitoring and other medical systems. Verification aims at determining that each stage in the software development completely and correctly implements requirements that were established in a preceding phase, while validation determines that the overall performance of a computer system completely and correctly meets system requirements. Volume I of the report reviews existing classifications for high integrity systems and for the types of errors that may be encountered, and makes recommendations for verification and validation procedures, based on assumptions about the environment in which these procedures will be conducted. The final chapter of Volume I deals with a framework for standards in this field. Volume II contains appendices dealing with specific methodologies for system classification, for dependability evaluation, and for two software tools that can automate otherwise very labor intensive verification and validation activities.

  6. Verification and validation guidelines for high integrity systems. Volume 1

    International Nuclear Information System (INIS)

    Hecht, H.; Hecht, M.; Dinsmore, G.; Hecht, S.; Tang, D.

    1995-03-01

    High integrity systems include all protective (safety and mitigation) systems for nuclear power plants, and also systems for which comparable reliability requirements exist in other fields, such as in the process industries, in air traffic control, and in patient monitoring and other medical systems. Verification aims at determining that each stage in the software development completely and correctly implements requirements that were established in a preceding phase, while validation determines that the overall performance of a computer system completely and correctly meets system requirements. Volume I of the report reviews existing classifications for high integrity systems and for the types of errors that may be encountered, and makes recommendations for verification and validation procedures, based on assumptions about the environment in which these procedures will be conducted. The final chapter of Volume I deals with a framework for standards in this field. Volume II contains appendices dealing with specific methodologies for system classification, for dependability evaluation, and for two software tools that can automate otherwise very labor intensive verification and validation activities

  7. Analysis association of milk fat and protein percent in quantitative ...

    African Journals Online (AJOL)

    SAM

    2014-05-14

    May 14, 2014 ... African Journal of Biotechnology. Full Length ... quantitative trait locus (QTLs) on chromosomes 1, 6, 7 and 20 in ... Protein and fat percent as content of milk are high-priority criteria for financial aims and selection of programs ...

  8. True coincidence summing correction determination for 214Bi principal gamma lines in NORM samples

    International Nuclear Information System (INIS)

    Haddad, Kh.

    2014-01-01

    The gamma lines 609.3 and 1,120.3 keV are two of the most intense γ emissions of 214Bi, but they have serious true coincidence summing (TCS) effects due to the complex decay schemes with multi-cascading transitions. TCS effects cause inaccurate count rates and hence erroneous results. A simple and easy experimental method for determination of TCS correction of 214Bi gamma lines was developed in this work using naturally occurring radioactive material samples. Height efficiency and self attenuation corrections were determined as well. The developed method has been formulated theoretically and validated experimentally. The correction problems were solved simply, requiring neither an additional standard source nor simulation skills. (author)

  9. Validity and repeatability of inertial measurement units for measuring gait parameters.

    Science.gov (United States)

    Washabaugh, Edward P; Kalyanaraman, Tarun; Adamczyk, Peter G; Claflin, Edward S; Krishnan, Chandramouli

    2017-06-01

    Inertial measurement units (IMUs) are small wearable sensors that have tremendous potential to be applied to clinical gait analysis. They allow objective evaluation of gait and movement disorders outside the clinic and research laboratory, and permit evaluation on large numbers of steps. However, repeatability and validity data of these systems are sparse for gait metrics. The purpose of this study was to determine the validity and between-day repeatability of spatiotemporal metrics (gait speed, stance percent, swing percent, gait cycle time, stride length, cadence, and step duration) as measured with the APDM Opal IMUs and Mobility Lab system. We collected data on 39 healthy subjects. Subjects were tested over two days while walking on a standard treadmill, split-belt treadmill, or overground, with IMUs placed in two locations: both feet and both ankles. The spatiotemporal measurements taken with the IMU system were validated against data from an instrumented treadmill, or using standard clinical procedures. Repeatability and minimally detectable change (MDC) of the system was calculated between days. IMUs displayed high to moderate validity when measuring most of the gait metrics tested. Additionally, these measurements appear to be repeatable when used on the treadmill and overground. The foot configuration of the IMUs appeared to better measure gait parameters; however, both the foot and ankle configurations demonstrated good repeatability. In conclusion, the IMU system in this study appears to be both accurate and repeatable for measuring spatiotemporal gait parameters in healthy young adults. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. An analysis of reliability and validity of the papilla index score of implant-supported single crowns of maxillary central incisors

    DEFF Research Database (Denmark)

    Peng, Min; Fei, Wei; Hosseini, Mandana

    2012-01-01

    Objectives: To test the reliability and validity of the papilla index scores of the implant-supported single crowns (ISSCs) of maxillary central incisors. Materials and Methods: Twenty-five patients with 25 ISSCs were included. Two prosthodontists evaluated the papilla index score (PIS) of three... fill percent (PP) was calculated. The validity of PIS was tested against the corresponding papilla fill percent (PP) by using the Spearman correlation analysis. Results: The intra-observer agreement was >70% in 4/5 and >50% in all observations, and the pooled Cohen's κ was 0.64 and 0.70 for two observers... inter-observer agreement. The PIS score demonstrated significant correlation to the corresponding PP value (rs=.567, p=.000). Conclusions: The feasibility, reliability and validity of the PIS made the parameter useful for quality control of the peri-implant soft tissue of ISSCs.

  11. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that a preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze if the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
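
    To make the quantile-matching idea above concrete, the sketch below fits an empirical transfer function from a calibration period and applies it to a validation period. It is a generic Python illustration with synthetic gamma-distributed "precipitation"; it is not the PTF/QUANT/gQM/GQM code used in the study, and all variable names and data are placeholders.

```python
import numpy as np

def quantile_map(obs_cal, mod_cal, mod_val, n_quantiles=100):
    """Empirical quantile mapping: replace each model value by the observed
    value at the same quantile of the calibration-period distributions."""
    probs = np.linspace(0.01, 0.99, n_quantiles)
    mod_q = np.quantile(mod_cal, probs)   # model quantiles (calibration period)
    obs_q = np.quantile(obs_cal, probs)   # observed quantiles (calibration period)
    return np.interp(mod_val, mod_q, obs_q)

# Synthetic daily series standing in for observations and a biased model run
rng = np.random.default_rng(0)
obs_cal = rng.gamma(shape=0.8, scale=4.0, size=30 * 365)   # 30-year calibration "observations"
mod_cal = rng.gamma(shape=0.8, scale=5.5, size=30 * 365)   # biased "model", calibration period
mod_val = rng.gamma(shape=0.8, scale=5.5, size=10 * 365)   # model output, validation period

corrected = quantile_map(obs_cal, mod_cal, mod_val)
print(f"mean before: {mod_val.mean():.2f}, after: {corrected.mean():.2f}, observed: {obs_cal.mean():.2f}")
```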

  12. Design of respiration averaged CT for attenuation correction of the PET data from PET/CT

    International Nuclear Information System (INIS)

    Chi, Pai-Chun Melinda; Mawlawi, Osama; Nehmeh, Sadek A.; Erdi, Yusuf E.; Balter, Peter A.; Luo, Dershan; Mohan, Radhe; Pan Tinsu

    2007-01-01

    Our previous patient studies have shown that the use of respiration averaged computed tomography (ACT) for attenuation correction of the positron emission tomography (PET) data from PET/CT reduces the potential misalignment in the thorax region by matching the temporal resolution of the CT to that of the PET. In the present work, we investigated other approaches of acquiring ACT in order to reduce the CT dose and to improve the ease of clinical implementation. Four-dimensional CT (4DCT) data sets for ten patients (17 lung/esophageal tumors) were acquired in the thoracic region immediately after the routine PET/CT scan. For each patient, multiple sets of ACTs were generated based on both phase image averaging (phase approach) and fixed cine duration image averaging (cine approach). In the phase approach, the ACTs were calculated from CT images corresponding to the significant phases of the respiratory cycle: ACT 050phs from end-inspiration (0%) and end-expiration (50%), ACT 2070phs from mid-inspiration (20%) and mid-expiration (70%), ACT 4phs from 0%, 20%, 50% and 70%, and ACT 10phs from all ten phases, which was the original approach. In the cine approach, which does not require 4DCT, the ACTs were calculated based on the cine images from cine durations of 1 to 6 s at 1 s increments. PET emission data for each patient were attenuation corrected with each of the above mentioned ACTs and the tumor maximum standard uptake value (SUV max ), average SUV (SUV avg ), and tumor volume measurements were compared. Percent differences were calculated between PET data corrected with various ACTs and that corrected with ACT 10phs . In the phase approach, the ACT 10phs can be approximated by the ACT 4phs to within a mean percent difference of 2% in SUV and tumor volume measurements. In cine approach, ACT 10phs can be approximated to within a mean percent difference of 3% by ACTs computed from cine durations ≥3 s. Acquiring CT images only at the four significant phases for the

  13. Effect of self-absorption correction on surface hardness estimation of Fe-Cr-Ni alloys via LIBS.

    Science.gov (United States)

    Ramezanian, Zahra; Darbani, Seyyed Mohammad Reza; Majd, Abdollah Eslami

    2017-08-20

    The effect of self-absorption on the estimation of surface hardness of Fe-Cr-Ni metallic alloys by the laser-induced breakdown spectroscopy (LIBS) technique was investigated. For this purpose, the linear relationship between the ratio of chromium ionic to atomic line intensities (CrII/CrI) and surface hardness was studied, both before and after correcting for the self-absorption effect. The correlation coefficient increased significantly from 47% to 90% after self-absorption correction. The results showed that surface hardness measurements using LIBS can be made more accurate and valid by correcting for the self-absorption effect.

  14. Experimental and casework validation of ambient temperature corrections in forensic entomology.

    Science.gov (United States)

    Johnson, Aidan P; Wallman, James F; Archer, Melanie S

    2012-01-01

    This paper expands on Archer (J Forensic Sci 49, 2004, 553), examining additional factors affecting ambient temperature correction of weather station data in forensic entomology. Sixteen hypothetical body discovery sites (BDSs) in Victoria and New South Wales (Australia), both in autumn and in summer, were compared to test whether the accuracy of correlation was affected by (i) length of correlation period; (ii) distance between BDS and weather station; and (iii) periodicity of ambient temperature measurements. The accuracy of correlations in data sets from real Victorian and NSW forensic entomology cases was also examined. Correlations increased weather data accuracy in all experiments, but significant differences in accuracy were found only between periodicity treatments. We found that a >5°C difference between average values of body in situ and correlation period weather station data was predictive of correlations that decreased the accuracy of ambient temperatures estimated using correlation. Practitioners should inspect their weather data sets for such differences. © 2011 American Academy of Forensic Sciences.

  15. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363

  16. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  17. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Directory of Open Access Journals (Sweden)

    Haris Akram Bhatti

    2016-06-01

    Full Text Available With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA Climate Prediction Centre (CPC morphing technique (CMORPH satellite rainfall product (CMORPH in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW sizes and for sequential windows (SW’s of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE. To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r and standard deviation (SD. Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.
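
    As a rough illustration of the station-based multiplicative bias factor described above, the sketch below computes one factor per 7-day sequential window and applies it to the satellite series; the spatial interpolation to a bias-factor map is omitted. The series are synthetic placeholders, not the Gilgel Abbey gauge or CMORPH data.

```python
import numpy as np

def sequential_window_bias_factors(gauge, cmorph, window=7, eps=1e-6):
    """Return one multiplicative bias factor per 7-day block: sum(gauge) / sum(cmorph)."""
    n_blocks = len(gauge) // window
    factors = np.empty(n_blocks)
    for b in range(n_blocks):
        s = slice(b * window, (b + 1) * window)
        factors[b] = gauge[s].sum() / max(cmorph[s].sum(), eps)
    return factors

def apply_bias_factors(cmorph, factors, window=7):
    corrected = cmorph.copy()
    for b, f in enumerate(factors):
        corrected[b * window:(b + 1) * window] *= f
    return corrected

# Synthetic daily series standing in for gauge observations and a biased satellite estimate
rng = np.random.default_rng(1)
gauge = rng.gamma(0.7, 6.0, size=364)
cmorph = 0.6 * gauge + rng.gamma(0.5, 2.0, size=364)
corrected = apply_bias_factors(cmorph, sequential_window_bias_factors(gauge, cmorph))

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE before: {rmse(cmorph, gauge):.2f}  after: {rmse(corrected, gauge):.2f}")
```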

  18. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    Science.gov (United States)

    Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377

  19. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

    Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated
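
    The following sketch illustrates the inverse probability weighting (IPW) step discussed above in a simulated setting: untreated individuals are weighted by 1/P(untreated | covariates) before computing a calibration measure. It is a simplified illustration, not the authors' simulation code; the data-generating model, the variable names and the use of scikit-learn are assumptions made here for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=(n, 2))
risk = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1] - 1)))       # "true" untreated risk
pred = np.clip(risk * rng.normal(1.0, 0.05, n), 0, 1)          # model predictions to validate
treated = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))          # treatment depends on a risk factor
outcome = rng.binomial(1, np.where(treated == 1, 0.5 * risk, risk))  # treatment halves risk

# 1) Model the probability of remaining untreated given covariates
tm = LogisticRegression().fit(x, treated)
p_untreated = tm.predict_proba(x)[:, 0]

# 2) Restrict to untreated individuals and weight each by 1 / P(untreated | X)
mask = treated == 0
w = 1.0 / p_untreated[mask]

# 3) Weighted observed:expected ratio (calibration-in-the-large)
oe_naive = outcome.mean() / pred.mean()                        # ignores treatment use
oe_ipw = np.average(outcome[mask], weights=w) / np.average(pred[mask], weights=w)
print(f"O:E ignoring treatment: {oe_naive:.2f}   O:E with IPW in the untreated: {oe_ipw:.2f}")
```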

  20. Validity and Reliability of the Catastrophic Cognitions Questionnaire-Turkish Version

    Directory of Open Access Journals (Sweden)

    Ayse Kart

    2016-01-01

    Full Text Available Aim: The importance of catastrophic cognitions for the development and maintenance of panic disorder is well known. The Catastrophic Cognitions Questionnaire (CCQ) measures thoughts associated with danger and was originally developed by Khawaja (1992). In this study, the aim was to evaluate the validity and reliability of the CCQ Turkish version. Material and Method: The CCQ was administered to 250 patients with panic disorder. The Turkish version of the CCQ was created by translation, back-translation and pilot assessment. The Socio-demographic Data Form and the CCQ Turkish version were administered to participants. Reliability of the CCQ was analyzed by test-retest correlation, the split-half technique and Cronbach's alpha coefficient. Construct validity was evaluated by factor analysis after the Kaiser-Meyer-Olkin (KMO) and Bartlett tests had been performed. Principal component analysis and varimax rotation were used for factor analysis. Results: Fifty-five point six percent (n=139) of the participants were female and forty-four point four percent (n=111) were male. Internal consistency of the questionnaire was calculated as 0.920 by Cronbach's alpha. In the analysis performed by the split-half method, reliability coefficients of the questionnaire halves were found to be 0.917 and 0.832, and the Spearman-Brown coefficient was found to be 0.875 in the same analysis. Factor analysis revealed five basic factors. These five factors explained 66.2% of the total variance. Discussion: The results of this study show that the Turkish version of the CCQ is a reliable and valid scale.
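
    For readers unfamiliar with the internal-consistency statistic reported above, the short sketch below computes Cronbach's alpha for an item-response matrix (rows = respondents, columns = items). The simulated 250 x 21 response matrix is a placeholder; it is not the CCQ data, and the item count is assumed only for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(250, 1))                       # 250 simulated respondents
items = latent + rng.normal(scale=0.8, size=(250, 21))   # 21 correlated placeholder items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```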

  1. Axial geometrical aberration correction up to 5th order with N-SYLC.

    Science.gov (United States)

    Hoque, Shahedul; Ito, Hiroyuki; Takaoka, Akio; Nishi, Ryuji

    2017-11-01

    We present N-SYLC (N-fold symmetric line currents) models to correct 5th order axial geometrical aberrations in electron microscopes. In our previous paper, we showed that 3rd order spherical aberration can be corrected by 3-SYLC doublet. After that, mainly the 5th order aberrations remain to limit the resolution. In this paper, we extend the doublet to quadruplet models also including octupole and dodecapole fields for correcting these higher order aberrations, without introducing any new unwanted ones. We prove the validity of our models by analytical calculations. Also by computer simulations, we show that for beam energy of 5keV and initial angle 10mrad at the corrector object plane, beam size of less than 0.5nm is achieved at the corrector image plane. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A library least-squares approach for scatter correction in gamma-ray tomography

    International Nuclear Information System (INIS)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-01-01

    Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially more pronounced in non-scanning, multiple source systems which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for this. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system. - Highlights: • A LLS approach is proposed for scatter correction in gamma-ray tomography. • The validity of the LLS approach is tested through experiments. • Gain shift and pulse pile-up affect the accuracy of the LLS approach. • The LLS approach successfully estimates scatter profiles
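
    A minimal sketch of the library least-squares idea follows: a measured spectrum is decomposed into a non-negative combination of library spectra for the transmission and scatter components. The Gaussian peak and exponential continuum used here as "library" shapes are placeholders, not the UoB tomograph calibration spectra.

```python
import numpy as np
from scipy.optimize import nnls

channels = np.arange(256)
lib_transmission = np.exp(-0.5 * ((channels - 180) / 8.0) ** 2)   # placeholder full-energy peak shape
lib_scatter = np.exp(-channels / 80.0)                             # placeholder scatter continuum
library = np.column_stack([lib_transmission, lib_scatter])

# A "measured" spectrum: a mixture of the two library shapes plus Poisson noise
rng = np.random.default_rng(4)
measured = rng.poisson(library @ np.array([700.0, 300.0])).astype(float)

coeffs, _ = nnls(library, measured)   # non-negative least-squares amplitudes
print(f"relative weights - transmission: {coeffs[0] / coeffs.sum():.2f}, scatter: {coeffs[1] / coeffs.sum():.2f}")
```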

  3. Study of extraterrestrial disposal of radioactive wastes. Part 3: Preliminary feasibility screening study of space disposal of the actinide radioactive wastes with 1 percent and 0.1 percent fission product contamination

    Science.gov (United States)

    Hyland, R. E.; Wohl, M. L.; Finnegan, P. M.

    1973-01-01

    A preliminary study was conducted of the feasibility of space disposal of the actinide class of radioactive waste material. This waste was assumed to contain 1 and 0.1 percent residual fission products, since it may not be feasible to completely separate the actinides. The actinides are a small fraction of the total waste but they remain radioactive much longer than the other wastes and must be isolated from human encounter for tens of thousands of years. Results indicate that space disposal is promising but more study is required, particularly in the area of safety. The minimum cost of space transportation would increase the consumer electric utility bill by the order of 1 percent for earth escape and 3 percent for solar escape. The waste package in this phase of the study was designed for normal operating conditions only; the design of next phase of the study will include provisions for accident safety. The number of shuttle launches per year required to dispose of all U.S. generated actinide waste with 0.1 percent residual fission products varies between 3 and 15 in 1985 and between 25 and 110 by 2000. The lower values assume earth escape (solar orbit) and the higher values are for escape from the solar system.

  4. Copula-based assimilation of radar and gauge information to derive bias-corrected precipitation fields

    Directory of Open Access Journals (Sweden)

    S. Vogl

    2012-07-01

    Full Text Available This study addresses the problem of combining radar information and gauge measurements. Gauge measurements are the best available source of absolute rainfall intensity albeit their spatial availability is limited. Precipitation information obtained by radar mimics well the spatial patterns but is biased for their absolute values.

    In this study copula models are used to describe the dependence structure between gauge observations and rainfall derived from radar reflectivity at the corresponding grid cells. After appropriate time series transformation to generate "iid" variates, only the positive pairs (radar >0, gauge >0 of the residuals are considered. As not each grid cell can be assigned to one gauge, the integration of point information, i.e. gauge rainfall intensities, is achieved by considering the structure and the strength of dependence between the radar pixels and all the gauges within the radar image. Two different approaches, namely Maximum Theta and Multiple Theta, are presented. They finally allow for generating precipitation fields that mimic the spatial patterns of the radar fields and correct them for biases in their absolute rainfall intensities. The performance of the approach, which can be seen as a bias-correction for radar fields, is demonstrated for the Bavarian Alps. The bias-corrected rainfall fields are compared to a field of interpolated gauge values (ordinary kriging and are validated with available gauge measurements. The simulated precipitation fields are compared to an operationally corrected radar precipitation field (RADOLAN. The copula-based approach performs similarly well as indicated by different validation measures and successfully corrects for errors in the radar precipitation.

  5. Barkas effect, shell correction, screening and correlation in collisional energy-loss straggling of an ion beam

    CERN Document Server

    Sigmund, P

    2003-01-01

    Collisional electronic energy-loss straggling has been treated theoretically on the basis of the binary theory of electronic stopping. In view of the absence of a Bloch correction in straggling the range of validity of the theory includes both the classical and the Born regime. The theory incorporates Barkas effect and projectile screening. Shell correction and electron bunching are added on. In the absence of shell corrections the Barkas effect has a dominating influence on straggling, but much of this is wiped out when the shell correction is included. Weak projectile screening tends to noticeably reduce collisional straggling. Sizable bunching effects are found in particular for heavy ions. Comparisons are made with selected results of the experimental and theoretical literature. (authors)

  6. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty

  7. 46 CFR 42.20-7 - Flooding standard: Type “B” vessel, 60 percent reduction.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Flooding standard: Type "B" vessel, 60 percent reduction... DOMESTIC AND FOREIGN VOYAGES BY SEA Freeboards § 42.20-7 Flooding standard: Type “B” vessel, 60 percent... applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less in length...

  8. Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.

    Science.gov (United States)

    Etemad, S Gh; Thibault, J; Hashemabadi, S H

    2003-10-01

    This paper presents the numerical investigation performed to calculate the correction factor for Pitot tubes. The purely viscous non-Newtonian fluids with the power-law model constitutive equation were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation could lead to large errors, which depend on the magnitude of the kinetic energy and energy friction loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. This correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds number.

  9. Validation of the reactor dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kyrki-Rajamaeki, R.

    1994-05-01

    HEXTRAN is a new three-dimensional, hexagonal reactor dynamics code developed in the Technical Research Centre of Finland (VTT) for VVER type reactors. This report describes the validation work of HEXTRAN. The work has been made with the financing of the Finnish Centre for Radiation and Nuclear Safety (STUK). HEXTRAN is particularly intended for calculation of such accidents, in which radially asymmetric phenomena are included and both good neutron dynamics and two-phase thermal hydraulics are important. HEXTRAN is based on already validated codes. The models of these codes have been shown to function correctly also within the HEXTRAN code. The main new model of HEXTRAN, the spatial neutron kinetics model has been successfully validated against LR-0 test reactor and Loviisa plant measurements. Connected with SMABRE, HEXTRAN can be reliably used for calculation of transients including effects of the whole cooling system of VVERs. Further validation plans are also introduced in the report. (orig.). (23 refs., 16 figs., 2 tabs.)

  10. [Do we always correctly interpret the results of statistical nonparametric tests].

    Science.gov (United States)

    Moczko, Jerzy A

    2014-01-01

    The Mann-Whitney, Wilcoxon, Kruskal-Wallis and Friedman tests form a group of tests commonly used to analyze clinical and laboratory data. These tests are considered to be extremely flexible and their asymptotic relative efficiency exceeds 95 percent. Compared with the corresponding parametric tests, they do not require checking conditions such as normality of the data distribution, homogeneity of variance, the lack of correlation between means and standard deviations, etc. They can be used with both interval and ordinal scales. Using the Mann-Whitney test as an example, the article shows that treating these four nonparametric tests as a kind of gold standard does not in every case lead to correct inference.
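
    As a simple illustration of the test discussed above, the snippet below runs a Mann-Whitney U test in SciPy on two synthetic samples; the data and group labels are placeholders, not clinical measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
group_a = rng.lognormal(mean=1.0, sigma=0.4, size=30)   # placeholder laboratory marker, group A
group_b = rng.lognormal(mean=1.2, sigma=0.4, size=30)   # group B, shifted distribution

u_stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
# Note: a significant result indicates a difference between the two distributions
# (stochastic dominance), not necessarily a difference in medians - one of the
# common misinterpretations the article warns about.
```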

  11. Global cue inconsistency diminishes learning of cue validity

    Directory of Open Access Journals (Sweden)

    Tony Wang

    2016-11-01

    Full Text Available We present a novel two-stage probabilistic learning task that examines the participants’ ability to learn and utilize valid cues across several levels of probabilistic feedback. In the first stage, participants sample from one of three cues that gives predictive information about the outcome of the second stage. Participants are rewarded for correct prediction of the outcome in stage two. Only one of the three cues gives valid predictive information and thus participants can maximise their reward by learning to sample from the valid cue. The validity of this predictive information, however, is reinforced across several levels of probabilistic feedback. A second manipulation involved changing the consistency of the predictive information in stage one and the outcome in stage two. The results show that participants, with higher probabilistic feedback, learned to utilise the valid cue. In inconsistent task conditions, however, participants were significantly less successful in utilising higher validity cues. We interpret this result as implying that learning in probabilistic categorization is based on developing a representation of the task that allows for goal-directed action.

  12. Validation of NAA Method for Urban Particulate Matter

    International Nuclear Information System (INIS)

    Woro Yatu Niken Syahfitri; Muhayatun; Diah Dwiana Lestiani; Natalia Adventini

    2009-01-01

    Nuclear analytical techniques have been applied in many countries for the determination of environmental pollutants. NAA (neutron activation analysis) is a nuclear analytical technique that has low detection limits, high specificity, and high precision and accuracy for the large majority of naturally occurring elements; it offers non-destructive and simultaneous multi-elemental determination and can handle small sample sizes (< 1 mg). To ensure the quality and reliability of the method, validation needs to be performed. A standard reference material, SRM NIST 1648 Urban Particulate Matter, has been used to validate the NAA method. Accuracy and precision tests were used as validation parameters. The particulate matter was validated for 18 elements: Ti, I, V, Br, Mn, Na, K, Cl, Cu, Al, As, Fe, Co, Zn, Ag, La, Cr, and Sm. The results showed that the percent relative standard deviation of the measured elemental concentrations ranged from 2 to 14.8% for most of the elements analyzed, whereas the Horwitz ratio (HorRat) values were in the range 0.3-1.3. Accuracy test results showed that the relative bias ranged from -11.1 to 3.6%. Based on the validation results, it can be stated that the NAA method is reliable for the characterization of particulate matter and other similar matrix samples to support air quality monitoring. (author)
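
    The two validation parameters used above, precision expressed as percent relative standard deviation and accuracy expressed as relative bias against a certified value, can be computed as in the sketch below. The replicate values and the certified concentration are illustrative placeholders, not the study's measurements.

```python
import numpy as np

certified_value = 100.0                                   # certified concentration (placeholder units)
replicates = np.array([97.0, 103.0, 99.0, 102.0, 98.0])   # replicate NAA results (placeholders)

mean = replicates.mean()
rsd_percent = 100.0 * replicates.std(ddof=1) / mean                  # precision (%RSD)
relative_bias = 100.0 * (mean - certified_value) / certified_value   # accuracy (relative bias)
print(f"%RSD = {rsd_percent:.1f}%, relative bias = {relative_bias:+.1f}%")
```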

  13. Two dimensional spatial distortion correction algorithm for scintillation GAMMA cameras

    International Nuclear Information System (INIS)

    Chaney, R.; Gray, E.; Jih, F.; King, S.E.; Lim, C.B.

    1985-01-01

    Spatial distortion in an Anger gamma camera originates fundamentally from the discrete nature of scintillation light sampling with an array of PMT's. Historically digital distortion correction started with the method based on the distortion measurement by using 1-D slit pattern and the subsequent on-line bi-linear approximation with 64 x 64 look-up tables for X and Y. However, the X, Y distortions are inherently two-dimensional in nature, and thus the validity of this 1-D calibration method becomes questionable with the increasing distortion amplitude in association with the effort to get better spatial and energy resolutions. The authors have developed a new accurate 2-D correction algorithm. This method involves the steps of; data collection from 2-D orthogonal hole pattern, 2-D distortion vector measurement, 2-D Lagrangian polynomial interpolation, and transformation to X, Y ADC frame. The impact of numerical precision used in correction and the accuracy of bilinear approximation with varying look-up table size have been carefully examined through computer simulation by using measured single PMT light response function together with Anger positioning logic. Also the accuracy level of different order Lagrangian polynomial interpolations for correction table expansion from hole centroids were investigated. Detailed algorithm and computer simulation are presented along with camera test results
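
    The correction step described above, looking up a per-pixel distortion vector and shifting each event accordingly, can be sketched as a bilinear interpolation of a 64 x 64 look-up table. The table contents, the 4096-channel ADC frame and the sinusoidal "distortion" below are placeholder assumptions, not measured camera data.

```python
import numpy as np

N, ADC = 64, 4096
gx, gy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
dx_table = 10.0 * np.sin(2 * np.pi * gx / N)   # placeholder X-distortion map [ADC channels]
dy_table = 10.0 * np.cos(2 * np.pi * gy / N)   # placeholder Y-distortion map [ADC channels]

def bilinear(table, x, y):
    """Bilinearly interpolate a table defined on integer grid points (x, y in table units)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, table.shape[0] - 1), min(y0 + 1, table.shape[1] - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * table[x0, y0] + fx * (1 - fy) * table[x1, y0]
            + (1 - fx) * fy * table[x0, y1] + fx * fy * table[x1, y1])

def correct_event(x, y):
    """Shift an event position in the ADC frame by the interpolated distortion vector."""
    tx, ty = x * N / ADC, y * N / ADC          # map ADC coordinates to table coordinates
    return x - bilinear(dx_table, tx, ty), y - bilinear(dy_table, tx, ty)

print(correct_event(2048.0, 1024.0))
```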

  14. Trustworthy Variant Derivation with Translation Validation for Safety Critical Product Lines

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej

    2016-01-01

    Software product line (SPL) engineering facilitates development of entire families of software products with systematic reuse. Model driven SPLs use models in the design and development process. In the safety critical domain, validation of models and testing of code increases the quality...... of the products altogether. However, to maintain this trustworthiness it is necessary to know that the SPL tools, which manipulate models and code to derive concrete product variants, do not introduce errors in the process. We propose a general technique of checking correctness of product derivation tools through...... translation validation. We demonstrate it using Featherweight VML—a core language for separate variability modeling relying on a single kind of variation point to define transformations of artifacts seen as object models. We use Featherweight VML with its semantics as a correctness specification...

  15. Production process validation of 2-[18F]-fluoro-2-deoxy-D-glucose

    International Nuclear Information System (INIS)

    Cantero, Miguel; Iglesias, Rocio; Aguilar, Juan; Sau, Pablo; Tardio, Evaristo; Narrillos, Marcos

    2003-01-01

    The main aim of validation of the production process of 2-[18F]-fluoro-2-deoxy-D-glucose (FDG) was to check that: A) the equipment and services involved in the production process were correctly installed, well documented, and worked properly, and B) the production of FDG was done in a repetitive way according to predefined parameters. The main document was the Validation Master Plan, and the steps were: installation qualification, operation qualification, process qualification and the validation report. After finalization of all tests established in the qualification steps without deviations, we concluded that the production process was validated because it is done in a repetitive way according to predefined parameters. (Au)

  16. Segmented attenuation correction using artificial neural networks in positron tomography

    International Nuclear Information System (INIS)

    Yu, S.K.; Nahmias, C.

    1996-01-01

    The measured attenuation correction technique is widely used in cardiac positron tomographic studies. However, the success of this technique is limited because of insufficient counting statistics achievable in practical transmission scan times, and of the scattered radiation in transmission measurement which leads to an underestimation of the attenuation coefficients. In this work, a segmented attenuation correction technique has been developed that uses artificial neural networks. The technique has been validated in phantoms and verified in human studies. The results indicate that attenuation coefficients measured in the segmented transmission image are accurate and reproducible. Activity concentrations measured in the reconstructed emission image can also be recovered accurately using this new technique. The accuracy of the technique is subject independent and insensitive to scatter contamination in the transmission data. This technique has the potential of reducing the transmission scan time, and satisfactory results are obtained if the transmission data contain about 400 000 true counts per plane. It can predict accurately the value of any attenuation coefficient in the range from air to water in a transmission image with or without scatter correction. (author)

  17. Hanford Environmental Restoration data validation process for chemical and radiochemical analyses

    International Nuclear Information System (INIS)

    Adams, M.R.; Bechtold, R.A.; Clark, D.E.; Angelos, K.M.; Winter, S.M.

    1993-10-01

    Detailed procedures for validation of chemical and radiochemical data are used to assure consistent application of validation principles and support a uniform database of quality environmental data. During application of these procedures, it was determined that laboratory data packages were frequently missing certain types of documentation causing subsequent delays in meeting critical milestones in the completion of validation activities. A quality improvement team was assembled to address the problems caused by missing documentation and streamline the entire process. The result was the development of a separate data package verification procedure and revisions to the data validation procedures. This has resulted in a system whereby deficient data packages are immediately identified and corrected prior to validation and revised validation procedures which more closely match the common analytical reporting practices of laboratory service vendors

  18. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    Science.gov (United States)

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
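
    The two-step scheme above can be sketched as follows: group pixels by hydroclimatic class, fit a quantile-mapping transfer function from the gauged pixels of each class, and apply it to every pixel of that class. The class labels, gauge coverage and rainfall series below are synthetic placeholders, not the Guiana Shield data or the authors' implementation.

```python
import numpy as np

def fit_qm(obs, sat, probs=np.linspace(0.01, 0.99, 50)):
    """Return transfer-function nodes (satellite quantiles -> observed quantiles)."""
    return np.quantile(sat, probs), np.quantile(obs, probs)

rng = np.random.default_rng(6)
n_pixels, n_days = 200, 365
classes = rng.integers(0, 3, size=n_pixels)                   # 3 hydroclimatic areas
gauged = rng.random(n_pixels) < 0.2                           # ~20% of pixels have a gauge
sat = rng.gamma(0.6, 8.0, size=(n_pixels, n_days))            # daily SPP rainfall (placeholder)
gauge = 0.8 * sat + rng.gamma(0.4, 2.0, size=sat.shape)       # "observed" rainfall (placeholder)

corrected = sat.copy()
for c in np.unique(classes):
    in_class = classes == c
    train = in_class & gauged                                 # gauged pixels of this class
    if not train.any():
        continue                                              # no gauge in this class: leave as-is
    sat_q, obs_q = fit_qm(gauge[train].ravel(), sat[train].ravel())
    corrected[in_class] = np.interp(sat[in_class].ravel(), sat_q, obs_q).reshape(-1, n_days)

rbias = 100 * (corrected[gauged].mean() - gauge[gauged].mean()) / gauge[gauged].mean()
print(f"relative bias at gauged pixels after correction: {rbias:+.1f}%")
```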

  19. Correcting Bidirectional Effects in Remote Sensing Reflectance from Coastal Waters

    Science.gov (United States)

    Stamnes, K. H.; Fan, Y.; Li, W.; Voss, K. J.; Gatebe, C. K.

    2016-02-01

    Understanding bidirectional effects including sunglint is important for GEO-CAPE for several reasons: (i) correct interpretation of ocean color data; (ii) comparing consistency of spectral radiance data derived from space observations with a single instrument for a variety of illumination and viewing conditions; (iii) merging data collected by different instruments operating simultaneously. We present a new neural network (NN) method to correct bidirectional effects in water-leaving radiance for both Case 1 and Case 2 waters. We also discuss a new BRDF and 2D sun-glint model that was validated by comparing simulated surface reflectances with Cloud Absorption Radiometer (CAR) data. Finally, we present an extension of our marine bio-optical model to the UV range that accounts for the seasonal dependence of the inherent optical properties (IOPs).

  20. Internal and external validation of an ESTRO delineation guideline

    DEFF Research Database (Denmark)

    Eldesoky, Ahmed R.; Yates, Esben Svitzer; Nyeng, Tine B

    2016-01-01

    Background and purpose To internally and externally validate an atlas based automated segmentation (ABAS) in loco-regional radiation therapy of breast cancer. Materials and methods Structures of 60 patients delineated according to the ESTRO consensus guideline were included in four categorized...... and axillary nodal levels and poor agreement for interpectoral, internal mammary nodal regions and LADCA. Correcting ABAS significantly improved all the results. External validation of ABAS showed comparable results. Conclusions ABAS is a clinically useful tool for segmenting structures in breast cancer loco...

  1. Subject positioning in the BOD POD® only marginally affects measurement of body volume and estimation of percent body fat in young adult men.

    Directory of Open Access Journals (Sweden)

    Maarten W Peeters

    Full Text Available INTRODUCTION: The aim of the study was to evaluate whether subject positioning would affect the measurement of raw body volume, thoracic gas volume, corrected body volume and the resulting percent body fat as assessed by air displacement plethysmography (ADP). METHODS: Twenty-five young adult men (20.7±1.1 y, BMI = 22.5±1.4 kg/m(2)) were measured using the BOD POD® system using a measured thoracic gas volume sitting in a 'forward bent' position and sitting up in a straight position in random order. RESULTS: Raw body volume was 58±124 ml (p<0.05) higher in the 'straight' position compared to the 'bent' position. The mean difference in measured thoracic gas volume (bent-straight = -71±211 ml) was not statistically significant. Corrected body volume and percent body fat in the bent position consequently were on average 86±122 ml (p<0.05) and 0.5±0.7% (p<0.05) lower than in the straight position respectively. CONCLUSION: Although the differences reached statistical significance, absolute differences are rather small. Subject positioning should be viewed as a factor that may contribute to between-test variability and hence contribute to (im)precision in detecting small individual changes in body composition, rather than a potential source of systematic bias. It therefore may be advisable to pay attention to standardizing subject positioning when tracking small changes in PF is of interest. The cause of the differences is shown not to be related to changes in the volume of isothermal air in the lungs. It is hypothesized and calculated that the observed direction and magnitude of these differences may arise from the surface area artifact, which does not take into account that a subject in the bent position exposes more skin to the air in the device, therefore potentially creating a larger underestimation of the actual body volume due to the isothermal effect of air close to the skin.

  2. External validation of the use of vignettes in cross-country health studies

    DEFF Research Database (Denmark)

    Datta Gupta, Nabanita; Kristensen, Nicolai; Pozzoli, Dario

    2010-01-01

    Cross-country comparisons of subjective assessments are rendered difficult if not impossible because of sub-population specific response style. To correct for this, the use of vignettes has become increasingly popular, notably within cross-country health studies. However, the validity of vignette...... and vignettes. Our results indicate that the assumption of RC is not innocuous and that our extended model relaxing this assumption improves the fit and significantly changes the cross-country rankings of health vis-a-vis the standard Chopit model.......Cross-country comparisons of subjective assessments are rendered difficult if not impossible because of sub-population specific response style. To correct for this, the use of vignettes has become increasingly popular, notably within cross-country health studies. However, the validity of vignettes...

  3. External Validation of the Use of Vignettes in Cross-Country Health Studies

    DEFF Research Database (Denmark)

    Datta Gupta, Nabanita; Kristensen, Nicolai; Pozzoli, Dario

    Cross-country comparisons of subjective assessments are rendered difficult if not impossible because of sub-population specific response style. To correct for this, the use of vignettes has become increasingly popular, notably within cross-country health studies. However, the validity of vignette...... and vignettes. Our results indicate that the assumption of RC is not innocuous and that our extended model relaxing this assumption improves the fit and significantly changes the cross-country rankings of health vis-à-vis the standard Chopit model.......Cross-country comparisons of subjective assessments are rendered difficult if not impossible because of sub-population specific response style. To correct for this, the use of vignettes has become increasingly popular, notably within cross-country health studies. However, the validity of vignettes...

  4. External validation of the use of vignettes in cross-country health studies

    DEFF Research Database (Denmark)

    Gupta, Nabanita Datta; Kristensen, Nicolai; Pozzoli, Dario

    Cross-country comparisons of subjective assessments are rendered difficult if not impossible because of sub-population specific response style. To correct for this, the use of vignettes has become increasingly popular, notably within cross-country health studies. However, the validity of vignette...... and vignettes. Our results indicate that the assumption of RC is not innocuous and that our extended model relaxing this assumption improves the fit and significantly changes the cross-country rankings of health vis-à-vis the standard Chopit model.......Cross-country comparisons of subjective assessments are rendered difficult if not impossible because of sub-population specific response style. To correct for this, the use of vignettes has become increasingly popular, notably within cross-country health studies. However, the validity of vignettes...

  5. Correction of the equilibrium orbit at the Bonn 3.5 GeV electron stretcher facility ELSA

    International Nuclear Information System (INIS)

    Wenzel, J.

    1990-09-01

    A beam position monitor system is being built for the Bonn electron stretcher facility ELSA. Based on the ELSA monitor system the Closed Orbit Correction Program for Interactive Tasks COCPIT has been developed. It enables an online correction of the closed orbit and is fully integrated into the ELSA operating system which allows control of all steering and diagnostic facilities of ELSA. COCPIT implements least square and harmonic correction methods with choosable harmonic components. A statistical analysis shows which correction method is best under given circumstances. Furthermore data about the current status of the monitor system as well as the corrector system enter the correction. Monitor offsets are added to the position measurements and the corrector-dipoles maximum currents are accounted for as constraints. Computer simulations prove the proper work of COCPIT and the validity of the statistical analysis. Possibilities of a future development of COCPIT are shown. (orig.) [de
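
    The least-squares branch of such a correction can be sketched as solving for corrector kicks that minimise the residual orbit measured at the beam position monitors, subject to a maximum corrector strength. The response matrix, orbit readings and limits below are synthetic placeholders; this is not ELSA machine data and not the actual COCPIT algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
n_bpm, n_corr = 28, 20
R = rng.normal(scale=2.0, size=(n_bpm, n_corr))   # placeholder orbit response matrix [mm/mrad]
x = rng.normal(scale=1.5, size=n_bpm)             # measured closed-orbit distortion at the BPMs [mm]

# Least-squares corrector kicks minimising ||x + R @ theta||
theta, *_ = np.linalg.lstsq(R, -x, rcond=None)
theta = np.clip(theta, -1.0, 1.0)                 # respect an assumed maximum corrector strength [mrad]

residual = x + R @ theta
print(f"rms orbit before: {np.sqrt(np.mean(x**2)):.2f} mm, after: {np.sqrt(np.mean(residual**2)):.2f} mm")
```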

  6. EnviroAtlas - Cleveland, OH - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  7. EnviroAtlas - Austin, TX - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  8. EnviroAtlas - Portland, ME - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  9. EnviroAtlas - Woodbine, IA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  10. EnviroAtlas - Milwaukee, WI - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  11. EnviroAtlas - Tampa, FL - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  12. EnviroAtlas - Pittsburgh, PA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  13. EnviroAtlas - Phoenix, AZ - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  14. EnviroAtlas - Durham, NC - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  15. EnviroAtlas - Portland, OR - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  16. EnviroAtlas - Paterson, NJ - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  17. EnviroAtlas - Memphis, TN - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  18. EnviroAtlas - Fresno, CA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...

  19. EnviroAtlas Estimated Percent Tree Cover Along Walkable Roads Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...

  20. Amazing 7-day, super-simple, scripted guide to teaching or learning percents

    CERN Document Server

    Hernandez, Lisa

    2014-01-01

    Welcome to The Amazing 7-Day, Super-Simple, Scripted Guide to Teaching or Learning Percents. I have attempted to do just what the title says: make learning percents super simple. I have also attempted to make it fun and even ear-catching. The reason for this is not that I am a frustrated stand-up comic, but because in my fourteen years of teaching the subject, I have come to realize that my jokes, even the bad ones, have a crazy way of sticking in my students' heads. And should I use a joke (even a bad one) repetitively, the associations become embedded in their brains, many times to their cha

  1. Escaping the correction for body surface area when calculating glomerular filtration rate in children

    International Nuclear Information System (INIS)

    Piepsz, Amy; Tondeur, Marianne; Ham, Hamphrey

    2008-01-01

    51Cr ethylene diamine tetraacetic acid (51Cr-EDTA) clearance is nowadays considered an accurate and reproducible method for measuring glomerular filtration rate (GFR) in children. Normal values as a function of age, corrected for body surface area, have recently been updated. However, much criticism has been expressed about the validity of the body surface area correction. The aim of the present paper was to present normal GFR values, not corrected for body surface area, with the associated percentile curves. For that purpose, the same patients as in the previous paper were selected, namely those with no recent urinary tract infection, a normal left to right 99mTc-MAG3 uptake ratio, and normal kidney morphology on the early parenchymal images. A single blood sample method was used for the 51Cr-EDTA clearance measurement. Clearance values, not corrected for body surface area, increased progressively up to adolescence. The percentile curves were determined and allow one, for a single patient, to estimate accurately the level of non-corrected clearance and its evolution with time, whatever the age. (orig.)
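
    As an illustration of the body-surface-area convention the record discusses, here is a minimal sketch of how a raw clearance is normalized to 1.73 m2; the DuBois formula and the example numbers are assumptions for illustration, not taken from the paper (which argues against routine correction in children).

        def bsa_dubois(weight_kg, height_cm):
            """Body surface area (m^2) by the DuBois formula (an assumed choice)."""
            return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

        def gfr_per_1p73_m2(raw_gfr_ml_min, weight_kg, height_cm):
            """Scale a raw clearance (mL/min) to the conventional 1.73 m^2 reference."""
            return raw_gfr_ml_min * 1.73 / bsa_dubois(weight_kg, height_cm)

        # Hypothetical child: 20 kg, 110 cm, raw 51Cr-EDTA clearance of 60 mL/min
        print(round(gfr_per_1p73_m2(60.0, 20.0, 110.0), 1))  # about 134 mL/min/1.73 m^2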

  2. [Multifocal phakic intraocular lens implant to correct presbyopia].

    Science.gov (United States)

    Baikoff, G; Matach, G; Fontaine, A; Ferraz, C; Spera, C

    2005-03-01

    Presbyopic surgery is considered the new frontier in refractive surgery. Different solutions are proposed: myopization of one eye, insertion of an accommodative crystalline lens, scleral surgery, the effects of which are still unknown, and finally multifocal phakic implants. We therefore decided to undertake a prospective study under the Huriet law to determine its efficacy and specify the conditions required for an anterior chamber multifocal phakic implant. Fifty-five eyes of 33 patients received an anterior chamber foldable multifocal phakic implant. Twenty-one females and 12 males underwent surgery. Initial refraction was between -5D and +5D. The implant's single addition was +2.50. Recovering a distance uncorrected visual acuity of 0.6 or better and a near uncorrected acuity of Parinaud 3 or better can be considered a very good postoperative result. Average follow-up was 42.6+/-18 weeks. Mean postoperative refraction was -0.12+/-0.51 D. Mean postoperative uncorrected visual acuity was 0.78+/-0.20. Postoperative uncorrected visual acuity was Parinaud 2.3+/-0.6. Eighty-four percent of eyes operated on recovered 0.6 or better without correction and Parinaud 3 or better without correction. Lenses in four eyes were explanted for different reasons, essentially optical, and no severe anatomical complications were observed. Placing an anterior chamber multifocal phakic implant to correct presbyopia is an effective technique with good predictability and has the advantage of being reversible in case of intolerance, parasitic optical effects or undesired complications. Considering the particularity of this surgery, it is imperative to respect very strict inclusion criteria: anterior chamber depth equal to or above 3.1 mm, open angle, endothelial cell count equal to or above 2000 cells/mm2, absence of an incipient cataract or the slightest evidence of macular alteration.

  3. Partial Volume Effects correction in emission tomography

    International Nuclear Information System (INIS)

    Le Pogam, Adrien

    2010-01-01

    Partial Volume Effects (PVE) designates the blur commonly found in nuclear medicine images and this PhD work is dedicated to their correction with the objectives of qualitative and quantitative improvement of such images. PVE arise from the limited spatial resolution of functional imaging with either Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT). They can be defined as a signal loss in tissues of size similar to the Full Width at Half Maximum (FWHM) of the PSF of the imaging device. In addition, PVE induce activity cross contamination between adjacent structures with different tracer uptakes. This can lead to under or over estimation of the real activity of such analyzed regions. Various methodologies currently exist to compensate or even correct for PVE and they may be classified depending on their place in the processing chain: either before, during or after the image reconstruction process, as well as their dependency on co-registered anatomical images with higher spatial resolution, for instance Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). The voxel-based and post-reconstruction approach was chosen for this work to avoid regions of interest definition and dependency on proprietary reconstruction developed by each manufacturer, in order to improve the PVE correction. Two different contributions were carried out in this work: the first one is based on a multi-resolution methodology in the wavelet domain using the higher resolution details of a co-registered anatomical image associated to the functional dataset to correct. The second one is the improvement of iterative deconvolution based methodologies by using tools such as directional wavelets and curvelets extensions. These various developed approaches were applied and validated using synthetic, simulated and clinical images, for instance with neurology and oncology applications in mind. Finally, as currently available PET/CT scanners incorporate more

  4. Percent-level-precision physics at the Tevatron: next-to-next-to-leading order QCD corrections to qq¯→tt¯+X.

    Science.gov (United States)

    Bärnreuther, Peter; Czakon, Michał; Mitov, Alexander

    2012-09-28

    We compute the next-to-next-to-leading order QCD corrections to the partonic reaction that dominates top-pair production at the Tevatron. This is the first ever next-to-next-to-leading order calculation of an observable with more than two colored partons and/or massive fermions at hadron colliders. Augmenting our fixed order calculation with soft-gluon resummation through next-to-next-to-leading logarithmic accuracy, we observe that the predicted total inclusive cross section exhibits a very small perturbative uncertainty, estimated at ±2.7%. We expect that once all subdominant partonic reactions are accounted for, and work in this direction is ongoing, the perturbative theoretical uncertainty for this observable could drop below ±2%. Our calculation demonstrates the power of our computational approach and proves it can be successfully applied to all processes at hadron colliders for which high-precision analyses are needed.

  5. Probabilistic validation of protein NMR chemical shift assignments

    International Nuclear Information System (INIS)

    Dashti, Hesam; Tonelli, Marco; Lee, Woonghee; Westler, William M.; Cornilescu, Gabriel; Ulrich, Eldon L.; Markley, John L.

    2016-01-01

    Data validation plays an important role in ensuring the reliability and reproducibility of studies. NMR investigations of the functional properties, dynamics, chemical kinetics, and structures of proteins depend critically on the correctness of chemical shift assignments. We present a novel probabilistic method named ARECA for validating chemical shift assignments that relies on the nuclear Overhauser effect data. ARECA has been evaluated through its application to 26 case studies and has been shown to be complementary to, and usually more reliable than, approaches based on chemical shift databases. ARECA is available online at http://areca.nmrfam.wisc.edu/ http://areca.nmrfam.wisc.edu/

  6. Validity of Edgeworth expansions for realized volatility estimators

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Veliyev, Bezirgen

    (2009). Second, we show that the validity of the Edgeworth expansions for realized volatility may not cover the optimal two-point distribution wild bootstrap proposed by Gonçalves and Meddahi (2009). Then, we propose a new optimal nonlattice distribution which ensures the second-order correctness...... of the bootstrap. Third, in the presence of microstructure noise, based on our Edgeworth expansions, we show that the new optimal choice proposed in the absence of noise is still valid in noisy data for the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). Finally, we show how...
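
    To make the object under study concrete, the following sketch computes the basic realized volatility estimator and wild-bootstrap replicates of it; the Gaussian external draw is a simplifying assumption, not the optimal two-point or non-lattice distributions analyzed in the record.

        import numpy as np

        def realized_variance(returns):
            """Realized variance: the sum of squared intraday returns."""
            return float(np.sum(np.asarray(returns, dtype=float) ** 2))

        def wild_bootstrap_rv(returns, n_boot=999, seed=0):
            """Wild-bootstrap replicates of realized variance: each return is
            multiplied by an external draw with mean 0 and variance 1
            (standard normal here for simplicity)."""
            rng = np.random.default_rng(seed)
            r = np.asarray(returns, dtype=float)
            return np.array([realized_variance(r * rng.standard_normal(r.size))
                             for _ in range(n_boot)])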

  7. Probabilistic validation of protein NMR chemical shift assignments

    Energy Technology Data Exchange (ETDEWEB)

    Dashti, Hesam [University of Wisconsin-Madison, Graduate Program in Biophysics, Biochemistry Department (United States); Tonelli, Marco; Lee, Woonghee; Westler, William M.; Cornilescu, Gabriel [University of Wisconsin-Madison, Biochemistry Department, National Magnetic Resonance Facility at Madison (United States); Ulrich, Eldon L. [University of Wisconsin-Madison, BioMagResBank, Biochemistry Department (United States); Markley, John L., E-mail: markley@nmrfam.wisc.edu, E-mail: jmarkley@wisc.edu [University of Wisconsin-Madison, Biochemistry Department, National Magnetic Resonance Facility at Madison (United States)

    2016-01-15

    Data validation plays an important role in ensuring the reliability and reproducibility of studies. NMR investigations of the functional properties, dynamics, chemical kinetics, and structures of proteins depend critically on the correctness of chemical shift assignments. We present a novel probabilistic method named ARECA for validating chemical shift assignments that relies on the nuclear Overhauser effect data. ARECA has been evaluated through its application to 26 case studies and has been shown to be complementary to, and usually more reliable than, approaches based on chemical shift databases. ARECA is available online at http://areca.nmrfam.wisc.edu/ http://areca.nmrfam.wisc.edu/.

  8. MFT homogeneity study at TNX: Final report on the low weight percent solids concentration

    International Nuclear Information System (INIS)

    Jenkins, W.J.

    1993-01-01

    A statistical design and analysis of both elemental analyses and weight percent solids analyses data was utilized to evaluate the MFT homogeneity at low heel levels and low agitator speed at both high and low solids feed concentrations. The homogeneity was also evaluated at both low and high agitator speed at the 6000+ gallons static level. The dynamic level portion of the test simulated feeding the Melter from the MFT to evaluate the uniformity of the solids slurry composition (Frit-PHA-Sludge) entering the melter from the MFT. This final report provides the results and conclusions from the second half of the study, the low weight percent solids concentration portion, as well as a comparison with the results from the first half of the study, the high weight percent solids portion

  9. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    Science.gov (United States)

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
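
    A minimal sketch of the percent relative error idea, assuming the usual kernel-equating definition in terms of the first raw moments of the equated and target score distributions; the chained/NEAT specifics of the study are not reproduced here.

        import numpy as np

        def percent_relative_error(equated_scores, target_scores, max_moment=10):
            """PRE(p) = 100 * (mu_p(equated) - mu_p(target)) / mu_p(target)
            for the first `max_moment` raw moments (equal weights assumed)."""
            eq = np.asarray(equated_scores, dtype=float)
            ty = np.asarray(target_scores, dtype=float)
            return [100.0 * (np.mean(eq ** p) - np.mean(ty ** p)) / np.mean(ty ** p)
                    for p in range(1, max_moment + 1)]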

  10. Person fit and criterion-related validity: an extension of the Schmitt, Cortina, and Whitney study

    NARCIS (Netherlands)

    Meijer, R.R.

    1997-01-01

    The effect on criterion-related validity of nonfitting response vectors (NRVs) on a predictor test was investigated. Using simulated data, it was shown that there was a substantial decrease in validity when the type of misfit was severe (i.e., guessing the correct answers to all test items), when

  11. A validated methodology for evaluating burnup credit in spent fuel casks

    International Nuclear Information System (INIS)

    Brady, M.C.; Sanders, T.L.

    1991-01-01

    The concept of allowing reactivity credit for the transmuted state of spent fuel offers both economic and risk incentives. This paper presents a general overview of the technical work being performed in support of the US Department of Energy (DOE) program to resolve issues related to the implementation of burnup credit. An analysis methodology is presented along with information representing the validation of the method against available experimental data. The experimental data that are applicable to burnup credit include chemical assay data for the validation of the isotopic prediction models, fresh fuel critical experiments for the validation of criticality calculations for various cask geometries, and reactor restart critical data to validate criticality calculations with spent fuel. The methodology has been specifically developed to be simple and generally applicable, therefore giving rise to uncertainties or sensitivities which are identified and quantified in terms of a percent bias in k-eff. Implementation issues affecting licensing requirements and operational procedures are discussed briefly.
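
    The record expresses method uncertainties as a percent bias in k-eff; the arithmetic of such a bias is simply the relative deviation of a calculated multiplication factor from a benchmark value, as in this generic sketch (the numbers are illustrative, not from the methodology itself).

        def percent_bias_keff(k_calculated, k_benchmark):
            """Percent bias of a calculated k-eff against a benchmark value
            (e.g. a critical experiment)."""
            return 100.0 * (k_calculated - k_benchmark) / k_benchmark

        # Hypothetical example: calculated k-eff of 0.9950 against a critical benchmark of 1.0000
        print(round(percent_bias_keff(0.9950, 1.0000), 3))  # -0.5 (percent)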

  12. A validated methodology for evaluating burnup credit in spent fuel casks

    International Nuclear Information System (INIS)

    Brady, M.C.; Sanders, T.L.

    1991-01-01

    The concept of allowing reactivity credit for the transmuted state of spent fuel offers both economic and risk incentives. This paper presents a general overview of the technical work being performed in support of the U.S. Department of Energy (DOE) program to resolve issues related to the implementation of burnup credit. An analysis methodology is presented along with information representing the validation of the method against available experimental data. The experimental data that are applicable to burnup credit include chemical assay data for the validation of the isotopic prediction models, fresh fuel critical experiments for the validation of criticality calculations for various cask geometries, and reactor restart critical data to validate criticality calculations with spent fuel. The methodology has been specifically developed to be simple and generally applicable, therefore giving rise to uncertainties or sensitivities which are identified and quantified in terms of a percent bias in k-eff. Implementation issues affecting licensing requirements and operational procedures are discussed briefly. (Author)

  13. A validated methodology for evaluating burnup credit in spent fuel casks

    International Nuclear Information System (INIS)

    Brady, M.C.; Sanders, T.L.

    1991-01-01

    The concept of allowing reactivity credit for the transmuted state of spent fuel offers both economic and risk incentives. This paper presents a general overview of the technical work being performed in support of the US Department of Energy (DOE) program to resolve issues related to the implementation of burnup credit. An analysis methodology is presented along with information representing the validation of the method against available experimental data. The experimental data that are applicable to burnup credit include chemical assay data for the validation of the isotopic prediction models, fresh fuel critical experiments for the validation of criticality calculations for various cask geometries, and reactor restart critical data to validate criticality calculations with spent fuel. The methodology has been specifically developed to be simple and generally applicable, therefore giving rise to uncertainties or sensitivities which are identified and quantified in terms of a percent bias in k-eff. Implementation issues affecting licensing requirements and operational procedures are discussed briefly. 24 refs., 3 tabs

  14. Self-Excited Single-Stage Power Factor Correction Driving Circuit for LED Lighting

    Directory of Open Access Journals (Sweden)

    Yong-Nong Chang

    2014-01-01

    Full Text Available This paper proposes a self-excited single-stage high-power-factor LED lighting driving circuit. Featuring power factor correction without the need for any control devices, the proposed circuit structure is low-cost and suitable for commercial production. The power factor correction function is accomplished by using an inductor in combination with a half-bridge quasi-resonant converter to achieve active switching and output voltage regulation according to the load requirement. Furthermore, zero-voltage switching in the half-bridge converter can be attained to improve the overall efficiency of the proposed circuit. Finally, the validity and suitability for production of the proposed circuit are verified.
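
    For readers unfamiliar with the figure of merit involved, a small sketch of how true power factor is computed from sampled voltage and current waveforms follows; this is the textbook definition only, not a model of the proposed self-excited converter.

        import numpy as np

        def power_factor(v, i):
            """True power factor: real power (mean of v*i) over apparent power (Vrms * Irms)."""
            v, i = np.asarray(v, dtype=float), np.asarray(i, dtype=float)
            real_power = np.mean(v * i)
            apparent_power = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))
            return real_power / apparent_power

        # One 50 Hz cycle with the current lagging the voltage by 30 degrees
        t = np.linspace(0.0, 0.02, 2000, endpoint=False)
        v = 325.0 * np.sin(2 * np.pi * 50 * t)
        i = 2.0 * np.sin(2 * np.pi * 50 * t - np.pi / 6)
        print(round(power_factor(v, i), 3))  # ~0.866, i.e. cos(30 deg)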

  15. Cultural adaptation and content validation of the Single-Question for screening alcohol abuse

    Directory of Open Access Journals (Sweden)

    Marjorie Ester Dias Maciel

    2018-03-01

    Full Text Available ABSTRACT Objective Describing the stages of translation, cultural adaptation and content validation of the Single-Question into Brazilian Portuguese, which will be named Questão Chave. Method This study is a cultural adaptation. The instrument was translated into Portuguese as two independent versions which led to a synthesis of translations (S1) and later to the synthesis (S2), which was then submitted to evaluation by a Committee of Expert Judges in the area of alcohol use and instrument validation. The Content Validity Index and Kappa agreement coefficient were calculated from this evaluation. Results The judges evaluated the Questão Chave regarding the clarity of the sentence and aspects related to the quality of the translation (cultural adaptation, preservation of original meaning, and correct use of technical terms). The Content Validity Index was 1 for clarity of sentence and correct use of technical terms, and 0.8 for adaptation and preservation of the original meaning. The Kappa index for concordance among the judges was 0.83. After an adjustment proposed by the judges, the S3 version was produced. Conclusion The Questão Chave had its content validity confirmed, which supports future studies that aim for its application in the target population to verify its psychometric properties.
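
    The content validity index reported in the record is, per item, just the proportion of expert judges giving the item a favorable rating; a minimal sketch of that calculation is shown below (the 4-point scale and the 3-4 cutoff are conventional assumptions, not details taken from the study).

        def item_cvi(ratings, favorable=(3, 4)):
            """Item-level content validity index: proportion of judges rating the
            item in the favorable categories (3 or 4 on a 4-point scale)."""
            return sum(r in favorable for r in ratings) / len(ratings)

        # Hypothetical panel of five judges rating one item's clarity
        print(item_cvi([4, 4, 3, 4, 3]))  # 1.0
        print(item_cvi([4, 2, 3, 4, 3]))  # 0.8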

  16. Well Completion Report for Corrective Action Unit 447, Project Shoal Area, Churchill County, Nevada

    International Nuclear Information System (INIS)

    Rick Findlay

    2006-01-01

    This Well Completion Report is being provided as part of the implementation of the Corrective Action Decision Document (CADD)/Corrective Action Plan (CAP) for Corrective Action Unit (CAU) 447 (NNSA/NSO, 2006a). The CADD/CAP is part of an ongoing U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) funded project for the investigation of CAU 447 at the Project Shoal Area (PSA). All work performed on this project was conducted in accordance with the "Federal Facility Agreement and Consent Order" (FFACO) (1996), and all applicable Nevada Division of Environmental Protection (NDEP) policies and regulations. Investigation activities included the drilling, construction, and development of three monitoring/validation (MV) wells at the PSA. This report summarizes the field activities and data collected during the investigation.

  17. Neutral current Drell-Yan with combined QCD and electroweak corrections in the POWHEG BOX

    CERN Document Server

    Barze', Luca; Nason, Paolo; Nicrosini, Oreste; Piccinini, Fulvio; Vicini, Alessandro

    2013-01-01

    Following recent work on the combination of electroweak and strong radiative corrections to single W-boson hadroproduction in the POWHEG BOX framework, we generalize the above treatment to cover the neutral current Drell-Yan process. According to the POWHEG method, we combine both the next-to-leading order (NLO) electroweak and QED multiple photon corrections with the native NLO and Parton Shower QCD contributions. We show comparisons with the predictions of the electroweak generator HORACE, to validate the reliability and accuracy of the approach. We also present phenomenological results obtained with the new tool for physics studies at the LHC.

  18. Production process validation of 2-[18F]-fluoro-2-deoxy-D-glucose

    International Nuclear Information System (INIS)

    Cantero, Miguel; Iglesias, Rocio; Aguilar, Juan; Sau, Pablo; Tardio, Evaristo; Narrillos, Marcos

    2003-01-01

    The aim of the production process validation of 2-[18F]-fluoro-2-deoxy-D-glucose (FDG) was to check that: A) the equipment and services involved in the production process were correctly installed, well documented, and worked properly, and B) FDG was produced reproducibly according to predefined parameters. The main document was the Validation Master Plan, and the steps were: installation qualification, operational qualification, performance qualification and the final validation report. After finalization of all tests established in the qualification steps without deviations, we concluded that the production process was validated because it consistently produced FDG meeting its pre-determined specifications and quality characteristics. (author)

  19. Accounting for treatment use when validating a prognostic model: a simulation study

    Directory of Open Access Journals (Sweden)

    Romin Pajouheshnia

    2017-07-01

    Full Text Available Abstract Background Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. Methods We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Results Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. Conclusions When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and
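
    A small sketch of the inverse-probability-weighting step described above: untreated individuals are weighted by the inverse probability of remaining untreated before performance measures such as the observed:expected ratio are computed. The logistic propensity model and the scikit-learn usage are assumptions for illustration; the simulation specifics of the paper are not reproduced.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def ipw_oe_ratio(X, treated, outcome, predicted_risk):
            """Observed:expected ratio among untreated individuals, re-weighted so the
            untreated subset resembles the whole validation population (assumes
            positivity and no unmeasured confounding)."""
            propensity = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
            untreated = np.asarray(treated) == 0
            w = 1.0 / (1.0 - propensity[untreated])           # weight for staying untreated
            observed = np.average(np.asarray(outcome)[untreated], weights=w)
            expected = np.average(np.asarray(predicted_risk)[untreated], weights=w)
            return observed / expected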

  20. Analytic Validation of Immunohistochemistry Assays: New Benchmark Data From a Survey of 1085 Laboratories.

    Science.gov (United States)

    Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    Context: A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. Objective: To establish new benchmark data on IHC laboratory practices. Design: A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. Results: The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. Conclusions: Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.

  1. Guidance and Control Design for a Class of Spin-Stabilized Projectiles with a Two-Dimensional Trajectory Correction Fuze

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2015-01-01

    Full Text Available A guidance and control strategy for a class of 2D trajectory correction fuzes with fixed canards is developed in this paper. First, the correction control mechanism is examined by studying the deviation motion, the key quantity of which is the dynamic equilibrium angle. The phase lag of the swerve response is the dominant factor for correction control, and a formula for it is deduced with the Mach number as the argument. Second, an impact point deviation prediction based on perturbation theory is proposed, and the numerical solution and its application are introduced. Finally, the guidance and control strategy is developed, and simulations to validate the strategy are conducted.

  2. Attenuation correction in pulmonary and myocardial single photon emission computed tomography

    International Nuclear Information System (INIS)

    Almquist, H.

    2000-01-01

    The objective was to develop and validate methods for single photon emission computed tomography, SPECT, allowing quantitative physiologic and diagnostic studies of lung and heart. A method for correction of variable attenuation in SPECT, based on transmission measurements before administration of an isotope to the subject, was developed and evaluated. A protocol based upon geometrically well defined phantoms was developed. In a mosaic pattern phantom, count rates were corrected from 39-43% to 101-110% of reference. In healthy subjects non-gravitational pulmonary perfusion gradients observed without attenuation correction were artefacts caused by attenuation. Pulmonary density in the centre of the right lung, obtained from the transmission measurement, was 0.28 ± 0.03 g/ml in normal subjects. Mean density was lower in large lungs compared to smaller ones. We also showed that regional ventilation/perfusion ratios could be measured with SPECT, using the readily available tracer 133Xe. Because of the low energy of 133Xe this relies heavily upon attenuation correction. A commercially available system for attenuation correction with simultaneous emission and transmission, considered to improve myocardial SPECT, performed erroneously. This could lead to clinical misjudgement. We considered that manufacturer-independent pre-clinical tests are required. In a test of two other commercial systems, based on different principles, an adapted variant of our initial protocol was proven useful. Only one of the systems provided correct emission count rates independently of phantom configuration. Errors in the other system were related to inadequate compensation of the influence of emission activity on the transmission study.

  3. Translation, Cultural Adaptation and Validation of the Simple Shoulder Test to Spanish

    OpenAIRE

    Arcuri, Francisco; Barclay, Fernando; Nacul, Ivan

    2015-01-01

    Background: The validation of widely used scales facilitates the comparison across international patient samples. Objective: The objective was to translate, culturally adapt and validate the Simple Shoulder Test into Argentinian Spanish. Methods: The Simple Shoulder Test was translated from English into Argentinian Spanish by two independent translators, translated back into English and evaluated for accuracy by an expert committee to correct the possible discrepancies. It was then administer...

  4. Universal opt-out screening for hepatitis C virus (HCV) within correctional facilities is an effective intervention to improve public health.

    Science.gov (United States)

    Morris, Meghan D; Brown, Brandon; Allen, Scott A

    2017-09-11

    Purpose: Worldwide efforts to identify individuals infected with the hepatitis C virus (HCV) focus almost exclusively on community healthcare systems, thereby failing to reach high-risk populations and those with poor access to primary care. In the USA, community-based HCV testing policies and guidelines overlook correctional facilities, where HCV rates are believed to be as high as 40 percent. This is a missed opportunity: more than ten million Americans move through correctional facilities each year. Herein, the purpose of this paper is to examine HCV testing practices in the US correctional system, California and describe how universal opt-out HCV testing could expand early HCV detection, improve public health in correctional facilities and communities, and prove cost-effective over time. Design/methodology/approach: A commentary on the value of standardizing screening programs across facilities by mandating all facilities (universal) to implement opt-out testing policies for all prisoners upon entry to the correctional facilities. Findings: Current variability in facility-level testing programs results in inconsistent testing levels across correctional facilities, and therefore makes estimating the actual number of HCV-infected adults in the USA difficult. The authors argue that universal opt-out testing policies ensure earlier diagnosis of HCV among a population most affected by the disease and is more cost-effective than selective testing policies. Originality/value: The commentary explores the current limitations of selective testing policies in correctional systems and provides recommendations and implications for public health and correctional organizations.

  5. [Impact of corrective measures on fluoroquinolones prescriptions for urinary tract infections during a 2-round relevance study].

    Science.gov (United States)

    Gendrin, V; Letranchant, L; Hénard, S; Frentiu, E; Demore, B; Burty, C; May, T; Doco-Lecompte, T

    2012-01-01

    Evaluating the impact of corrective measures on fluoroquinolone (FQ) prescriptions for urinary tract infections (UTI) during a 2-round relevance study on a regional scale. FQ prescriptions of voluntary hospitals were checked by an infectious diseases physician and a pharmacist according to regional guidelines. A first round (R1) took place in January 2008, with feedback and proposals for personalized corrective measures in January 2009. A second round (R2) was organized in June 2009. UTI data were extracted and the results of the two rounds were compared. Four hundred and thirty-five and 302 FQ prescriptions for UTI, coming from 28 and 24 different hospitals, were analyzed at R1 and R2, respectively. Thirty-six percent and 55% of these prescriptions were entirely in accordance with regional guidelines at R1 and R2, respectively. FQ prescriptions for UTI thus improved between the two rounds through better adhesion to the regional guidelines, probably owing to the feedback of first-round results and the suggestion of corrective measures. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  6. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    International Nuclear Information System (INIS)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N

    2015-01-01

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm2 to 0.6×0.6 cm2, normalized to values at 5×5 cm2. Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent change of correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm2 fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm2 demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference
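
    A minimal sketch of the output-correction-factor arithmetic described in the Methods: field output factors for the detector under test and for the PSD reference are formed against the 5×5 cm2 field, and their ratio gives the correction factor for the clinical field. The ratio convention (PSD over detector) and the readings below are illustrative assumptions, not measured values from the study.

        def field_output_correction(m_det_field, m_det_ref, m_psd_field, m_psd_ref):
            """Output correction factor for a small field, with the PSD taken as the
            reference detector and both output factors normalized to the reference
            (e.g. 5x5 cm2) field."""
            of_det = m_det_field / m_det_ref
            of_psd = m_psd_field / m_psd_ref
            return of_psd / of_det

        # Hypothetical readings for a 6x6 mm2 field (illustrative numbers only)
        print(round(field_output_correction(0.52, 1.00, 0.57, 1.00), 3))  # ~1.096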

  7. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N [University of Toledo Medical Center, Toledo, OH (United States)

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm2 to 0.6×0.6 cm2, normalized to values at 5×5 cm2. Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent change of correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm2 fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates a 5%, and 3% difference in 6FFF and 6MV fields at the same field size respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm2 demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chamber. For diode systems, the correction factor was substantially similar and may be useful for class

  8. 27 CFR 30.64 - Table 4, showing the fractional part of a gallon per pound at each percent and each tenth percent...

    Science.gov (United States)

    2010-04-01

    ... of the liquid and its apparent proof (hydrometer indication, corrected to 60 degrees Fahrenheit). An example for blended whisky containing added solids: temperature 75.0 °F; hydrometer reading 92.0°; apparent ...

  9. Contribution of Lateral Column Lengthening to Correction of Forefoot Abduction in Stage IIb Adult Acquired Flatfoot Deformity Reconstruction.

    Science.gov (United States)

    Chan, Jeremy Y; Greenfield, Stephen T; Soukup, Dylan S; Do, Huong T; Deland, Jonathan T; Ellis, Scott J

    2015-12-01

    Correction of forefoot abduction in stage IIb adult acquired flatfoot likely depends on the amount of lateral column lengthening (LCL) performed, although this represents only one aspect of a successful reconstruction. The purpose of this study was to evaluate the correlation between common reconstructive variables and the observed change in forefoot abduction. Forty-one patients who underwent flatfoot reconstruction involving an Evans-type LCL were assessed retrospectively. Preoperative and postoperative anteroposterior (AP) radiographs of the foot at a minimum of 40 weeks (mean, 2 years) after surgery were reviewed to determine correction in forefoot abduction as measured by talonavicular coverage (TNC) angle, talonavicular uncoverage percent, talus-first metatarsal (T-1MT) angle, and lateral incongruency angle. Fourteen demographic and intraoperative variables were evaluated for association with change in forefoot abduction including age, gender, height, weight, body mass index, as well as the amount of LCL and medializing calcaneal osteotomy performed, LCL graft type, Cotton osteotomy, first tarsometatarsal fusion, flexor digitorum longus transfer, spring ligament repair, gastrocnemius recession and any one of the modified McBride/Akin/Silver procedures. Two variables significantly affected the change in lateral incongruency angle. These were weight (P = .04) and the amount of LCL performed (P < .001). No variables were associated with the change in TNC angle, talonavicular uncoverage percent, or T-1MT angle. Multivariate regression analysis revealed that LCL was the only significant predictor of the change in lateral incongruency angle. The final regression model for LCL showed a good fit (R2 = 0.70, P < .001). Each millimeter of LCL corresponded to a 6.8-degree change in lateral incongruency angle. Correction of forefoot abduction in flatfoot reconstruction was primarily determined by the LCL procedure and could be modeled linearly. We believe that the

  10. Noninvasive assessment of mitral inertness [correction of inertance]: clinical results with numerical model validation.

    Science.gov (United States)

    Firstenberg, M S; Greenberg, N L; Smedira, N G; McCarthy, P M; Garcia, M J; Thomas, J D

    2001-01-01

    Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
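
    As a worked illustration of the decomposition described above, the simplified Bernoulli relation dP = 4*v^2 + M*dv/dt (pressure in mmHg, velocity in m/s) lets the inertial part be read off once the convective term is subtracted. The numbers below are hypothetical and the relation is the standard clinical approximation, not the authors' exact processing.

        def inertial_component(tm_gradient_mmhg, doppler_velocity_m_s):
            """Inertial part of the transmitral gradient, M*dv/dt, as the remainder
            after subtracting the convective term 4*v^2 of the simplified Bernoulli
            equation (mmHg, m/s)."""
            return tm_gradient_mmhg - 4.0 * doppler_velocity_m_s ** 2

        # Hypothetical beat: 6 mmHg TM gradient with a 0.7 m/s transmitral velocity
        print(round(inertial_component(6.0, 0.7), 2))  # 4.04 mmHg, i.e. ~67% of the gradient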

  11. NLO corrections to the photon impact factor: Combining real and virtual corrections

    International Nuclear Information System (INIS)

    Bartels, J.; Colferai, D.; Kyrieleis, A.; Gieseke, S.

    2002-08-01

    In this third part of our calculation of the QCD NLO corrections to the photon impact factor we combine our previous results for the real corrections with the singular pieces of the virtual corrections and present finite analytic expressions for the quark-antiquark-gluon intermediate state inside the photon impact factor. We begin with a list of the infrared singular pieces of the virtual correction, obtained in the first step of our program. We then list the complete results for the real corrections (longitudinal and transverse photon polarization). In the next step we defined, for the real corrections, the collinear and soft singular regions and calculate their contributions to the impact factor. We then subtract the contribution due to the central region. Finally, we combine the real corrections with the singular pieces of the virtual corrections and obtain our finite results. (orig.)

  12. Validation of general job satisfaction in the Korean Labor and Income Panel Study.

    Science.gov (United States)

    Park, Shin Goo; Hwang, Sang Hee

    2017-01-01

    The purpose of this study is to assess the validity and reliability of general job satisfaction (JS) in the Korean Labor and Income Panel Study (KLIPS). We used the data from the 17th wave (2014) of the nationwide KLIPS, which selected a representative panel sample of Korean households and individuals aged 15 or older residing in urban areas. We included in this study 7679 employed subjects (4529 males and 3150 females). The general JS instrument consisted of five items rated on a scale from 1 (strongly disagree) to 5 (strongly agree). The general JS reliability was assessed using the corrected item-total correlation and Cronbach's alpha coefficient. The validity of general JS was assessed using confirmatory factor analysis (CFA) and Pearson's correlation. The corrected item-total correlations ranged from 0.736 to 0.837. Therefore, no items were removed. Cronbach's alpha for general JS was 0.925, indicating excellent internal consistency. The CFA of the general JS model showed a good fit. Pearson's correlation coefficients for convergent validity showed moderate or strong correlations. The results obtained in our study confirm the validity and reliability of general JS.
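
    The two reliability statistics named in the record are straightforward to compute; a small sketch follows, using the standard formulas for Cronbach's alpha and the corrected item-total correlation (illustrative only, not the KLIPS processing itself).

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_respondents x n_items) score matrix:
            k/(k-1) * (1 - sum of item variances / variance of total score)."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

        def corrected_item_total(items, j):
            """Correlation of item j with the sum of the remaining items."""
            items = np.asarray(items, dtype=float)
            rest = np.delete(items, j, axis=1).sum(axis=1)
            return float(np.corrcoef(items[:, j], rest)[0, 1])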

  13. Matter power spectrum and the challenge of percent accuracy

    OpenAIRE

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2015-01-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day $N$-body methods, identifying main potential error sources from the set-up of initial conditions to...

  14. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Corrective Jaw Surgery: Orthognathic surgery is performed to correct the misalignment ...

  15. Spacecraft early design validation using formal methods

    International Nuclear Information System (INIS)

    Bozzano, Marco; Cimatti, Alessandro; Katoen, Joost-Pieter; Katsaros, Panagiotis; Mokos, Konstantinos; Nguyen, Viet Yen; Noll, Thomas; Postma, Bart; Roveri, Marco

    2014-01-01

    The size and complexity of software in spacecraft is increasing exponentially, and this trend complicates its validation within the context of the overall spacecraft system. Current validation methods are labor-intensive as they rely on manual analysis, review and inspection. For future space missions, we developed – with challenging requirements from the European space industry – a novel modeling language and toolset for a (semi-)automated validation approach. Our modeling language is a dialect of AADL and enables engineers to express the system, the software, and their reliability aspects. The COMPASS toolset utilizes state-of-the-art model checking techniques, both qualitative and probabilistic, for the analysis of requirements related to functional correctness, safety, dependability and performance. Several pilot projects have been performed by industry, with two of them having focused on the system-level of a satellite platform in development. Our efforts resulted in a significant advancement of validating spacecraft designs from several perspectives, using a single integrated system model. The associated technology readiness level increased from level 1 (basic concepts and ideas) to early level 4 (laboratory-tested)

  16. SMAP RADAR Calibration and Validation

    Science.gov (United States)

    West, R. D.; Jaruwatanadilok, S.; Chaubel, M. J.; Spencer, M.; Chan, S. F.; Chen, C. W.; Fore, A.

    2015-12-01

    The Soil Moisture Active Passive (SMAP) mission launched on Jan 31, 2015. The mission employs L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Immediately following launch, there was a three month instrument checkout period, followed by six months of level 1 (L1) calibration and validation. In this presentation, we will discuss the calibration and validation activities and results for the L1 radar data. Early SMAP radar data were used to check commanded timing parameters, and to work out issues in the low- and high-resolution radar processors. From April 3-13 the radar collected receive only mode data to conduct a survey of RFI sources. Analysis of the RFI environment led to a preferred operating frequency. The RFI survey data were also used to validate noise subtraction and scaling operations in the radar processors. Normal radar operations resumed on April 13. All radar data were examined closely for image quality and calibration issues which led to improvements in the radar data products for the beta release at the end of July. Radar data were used to determine and correct for small biases in the reported spacecraft attitude. Geo-location was validated against coastline positions and the known positions of corner reflectors. Residual errors at the time of the beta release are about 350 m. Intra-swath biases in the high-resolution backscatter images are reduced to less than 0.3 dB for all polarizations. Radiometric cross-calibration with Aquarius was performed using areas of the Amazon rain forest. Cross-calibration was also examined using ocean data from the low-resolution processor and comparing with the Aquarius wind model function. Using all a-priori calibration constants provided good results with co-polarized measurements matching to better than 1 dB, and cross-polarized measurements matching to about 1 dB in the beta release. During the

  17. Validation of simulation codes for future systems: motivations, approach, and the role of nuclear data

    International Nuclear Information System (INIS)

    Palmiotti, G.; Salvatores, M.; Aliberti, G.

    2007-01-01

    The validation of advanced simulation tools will still play a very significant role in several areas of reactor system analysis. This is the case of reactor physics and neutronics, where nuclear data uncertainties still play a crucial role for many core and fuel cycle parameters. The present paper gives a summary of validation motivations, objectives and approach. A validation effort is in particular necessary in the frame of advanced (e.g. Generation-IV or GNEP) reactors and associated fuel cycles assessment and design. Validation of simulation codes is complementary to the 'verification' process. In fact, 'verification' addresses the question 'are we solving the equations correctly' while validation addresses the question 'are we solving the correct equations with the correct parameters'. Verification implies comparisons with 'reference' equation solutions or with analytical solutions, when they exist. Most of what is called 'numerical validation' falls in this category. Validation strategies differ according to the relative weight of the methods and of the parameters that enter into the simulation tools. Most validation is based on experiments, and the field of neutronics where a 'robust' physics description model exists and which is function of 'input' parameters not fully known, will be the focus of this paper. In fact, in the case of reactor core, shielding and fuel cycle physics the model (theory) is well established (the Boltzmann and Bateman equations) and the parameters are the nuclear cross-sections, decay data etc. Two types of validation approaches can and have been used: (a) Mock-up experiments ('global' validation): need for a very close experimental simulation of a reference configuration. Bias factors cannot be extrapolated beyond reference configuration; (b) Use of 'clean', 'representative' integral experiments ('bias factor and adjustment' method). Allows to define bias factors, uncertainties and can be used for a wide range of applications. It

  18. Nuclear security. Improving correction of security deficiencies at DOE's weapons facilities

    International Nuclear Information System (INIS)

    Wells, James E.; Cannon, Doris E.; Fenzel, William F.; Lightner, Kenneth E. Jr.; Curtis, Lois J.; DuBois, Julia A.; Brown, Gail W.; Trujillo, Charles S.; Tumler, Pamela K.

    1992-11-01

    deficiencies have problems that limit the effectiveness of DOE's oversight. Also, DOE's review of contractors' plans to correct deficiencies is sometimes untimely, potentially resulting in prolonged security risks. Finally, some DOE field offices' validation of corrective actions was inadequate

  19. Calculation and measurement of radiation corrections for plasmon resonances in nanoparticles

    Science.gov (United States)

    Hung, L.; Lee, S. Y.; McGovern, O.; Rabin, O.; Mayergoyz, I.

    2013-08-01

    The problem of plasmon resonances in metallic nanoparticles can be formulated as an eigenvalue problem under the condition that the wavelengths of the incident radiation are much larger than the particle dimensions. As the nanoparticle size increases, the quasistatic condition is no longer valid. For this reason, the accuracy of the electrostatic approximation may be compromised and appropriate radiation corrections for the calculation of resonance permittivities and resonance wavelengths are needed. In this paper, we present the radiation corrections in the framework of the eigenvalue method for plasmon mode analysis and demonstrate that the computational results accurately match analytical solutions (for nanospheres) and experimental data (for nanorings and nanocubes). We also demonstrate that the optical spectra of silver nanocube suspensions can be fully assigned to dipole-type resonance modes when radiation corrections are introduced. Finally, our method is used to predict the resonance wavelengths for face-to-face silver nanocube dimers on glass substrates. These results may be useful for the indirect measurements of the gaps in the dimers from extinction cross-section observations.

  20. Sexual and Overall Quality of Life Improvements After Surgical Correction of "Buried Penis".

    Science.gov (United States)

    Hughes, Duncan B; Perez, Edgar; Garcia, Ryan M; Aragón, Oriana R; Erdmann, Detlev

    2016-05-01

    "Buried penis" is an increasing burden in our population with many possible etiologies. Although surgical correction of buried penis can be rewarding and successful for the surgeon, the psychological and functional impact of buried penis on the patient is less understood. The study's aim was to evaluate the sexual satisfaction and overall quality of life before and after buried penis surgery in a single-surgeon's patient population using a validated questionnaire (Changes in Sexual Functioning Questionnaire short-form). Using Likert scales generated from the questionnaire and 1-tailed paired t test analysis, we found that there was significantly improved sexual function after correction of a buried penis. Variables individually showed that there was significant improvement with sexual pleasure, urinating, and with genital hygiene postoperatively. There were no significant differences concerning frequency of pain with orgasms. Surgical correction of buried penis significantly improves the functional, sexual, and psychological aspects of patient's lives.

  1. Pile-up correction by Genetic Algorithm and Artificial Neural Network

    Science.gov (United States)

    Kafaee, M.; Saramad, S.

    2009-08-01

    Pile-up distortion is a common problem in high-count-rate radiation spectroscopy in many fields such as industrial, nuclear and medical applications. It is possible to reduce pulse pile-up using hardware-based pile-up rejection. However, this phenomenon may not be eliminated completely by that approach, and the spectrum distortion caused by pile-up rejection can increase as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy-dispersive X-ray (EDX) spectrometers can lead to loss of counts, poor quantitative results and even false element identification. Therefore, it is highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches to pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared and show excellent agreement with data measured with a 60Co source and a NaI detector. Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.
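
    For orientation on the size of the problem being corrected, a first-order textbook estimate of the pile-up fraction for Poisson arrivals is sketched below; this is only background arithmetic (with the +/- tau pairing convention assumed), not the GA or ANN correction described in the record.

        import math

        def pileup_fraction(count_rate_hz, resolving_time_s):
            """Fraction of pulses expected to overlap at least one other pulse for
            Poisson arrivals and a fixed pulse-pair resolving time tau
            (another event within +/- tau): 1 - exp(-2 * rate * tau)."""
            return 1.0 - math.exp(-2.0 * count_rate_hz * resolving_time_s)

        # Example: 100 kcps with a 1 microsecond resolving time
        print(round(pileup_fraction(1.0e5, 1.0e-6), 3))  # ~0.181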

  2. Patient motion correction for single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Geckle, W.J.; Becker, L.C.; Links, J.M.; Frank, T.

    1986-01-01

    An investigation has been conducted to develop and validate techniques for the correction of projection images in SPECT studies of the myocardium subject to misalignment due to voluntary patient motion. The problem is frequently encountered due to the uncomfortable position the patient must assume during the 30 minutes required to obtain a 180 degree set of projection images. The reconstruction of misaligned projections can lead to troublesome artifacts in reconstructed images and degrade the diagnostic potential of the procedure. Significant improvement in the quality of heart reconstructions has been realized with the implementation of an algorithm to provide detection of and correction for patient motion. Normal, involuntary motion is not corrected for, however, since such movement is below the spatial resolution of the thallium imaging system under study. The algorithm is based on a comparison of the positions of an object in a set of projection images to the known, sinusoidal trajectory of an off-axis fixed point in space. Projection alignment, therefore, is achieved by shifting the position of a point or set of points in a projection image to the sinusoid of a fixed position in space
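
    The core of such an alignment scheme can be illustrated with a short sketch. The fragment below is not the authors' implementation; it only assumes that an object centroid (in detector pixels) has already been extracted from each projection, fits the expected sinusoidal trajectory of a fixed off-axis point by ordinary least squares, and then shifts each projection by the nearest whole pixel.

      import numpy as np

      def fit_sinusoid(angles_rad, centroids):
          # Least-squares fit of x(theta) = c + a*cos(theta) + b*sin(theta),
          # the trajectory of a fixed off-axis point seen in the projections.
          X = np.column_stack([np.ones_like(angles_rad),
                               np.cos(angles_rad),
                               np.sin(angles_rad)])
          coef, *_ = np.linalg.lstsq(X, centroids, rcond=None)
          return X @ coef            # expected (motion-free) centroid per projection

      def align_projections(projections, angles_rad, centroids):
          # Shift each projection so its measured centroid lands on the fitted sinusoid.
          expected = fit_sinusoid(angles_rad, centroids)
          shifts = expected - centroids
          corrected = np.empty_like(projections)
          for i, (proj, s) in enumerate(zip(projections, shifts)):
              corrected[i] = np.roll(proj, int(round(s)), axis=-1)  # nearest-pixel shift
          return corrected, shifts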

  3. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    Science.gov (United States)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which themselves have only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates encountered in practical applications.

  4. Reliability and validity of a nutrition and physical activity environmental self-assessment for child care

    Directory of Open Access Journals (Sweden)

    Ammerman Alice S

    2007-07-01

    Full Text Available Abstract Background Few assessment instruments have examined the nutrition and physical activity environments in child care, and none are self-administered. Given the emerging focus on child care settings as a target for intervention, a valid and reliable measure of the nutrition and physical activity environment is needed. Methods To measure inter-rater reliability, 59 child care center directors and 109 staff completed the self-assessment concurrently, but independently. Three weeks later, a repeat self-assessment was completed by a sub-sample of 38 directors to assess test-retest reliability. To assess criterion validity, a researcher-administered environmental assessment was conducted at 69 centers and was compared to a self-assessment completed by the director. A weighted kappa test statistic and percent agreement were calculated to assess agreement for each question on the self-assessment. Results For inter-rater reliability, kappa statistics ranged from 0.20 to 1.00 across all questions. Test-retest reliability of the self-assessment yielded kappa statistics that ranged from 0.07 to 1.00. The inter-quartile kappa statistic ranges for inter-rater and test-retest reliability were 0.45 to 0.63 and 0.27 to 0.45, respectively. When percent agreement was calculated, questions ranged from 52.6% to 100% for inter-rater reliability and 34.3% to 100% for test-retest reliability. Kappa statistics for validity ranged from -0.01 to 0.79, with an inter-quartile range of 0.08 to 0.34. Percent agreement for validity ranged from 12.9% to 93.7%. Conclusion This study provides estimates of criterion validity, inter-rater reliability and test-retest reliability for an environmental nutrition and physical activity self-assessment instrument for child care. Results indicate that the self-assessment is a stable and reasonably accurate instrument for use with child care interventions. We therefore recommend the Nutrition and Physical Activity Self-Assessment for
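
    As an illustration of the two agreement statistics reported above, the following sketch computes percent agreement and a linearly weighted Cohen's kappa for a single ordinal question. The ratings and category count are hypothetical; this is not the instrument's scoring software.

      import numpy as np

      def percent_agreement(r1, r2):
          r1, r2 = np.asarray(r1), np.asarray(r2)
          return 100.0 * np.mean(r1 == r2)

      def weighted_kappa(r1, r2, n_categories, weights="linear"):
          # Cohen's weighted kappa for two raters on one ordinal item.
          r1, r2 = np.asarray(r1), np.asarray(r2)
          obs = np.zeros((n_categories, n_categories))
          for a, b in zip(r1, r2):                 # observed contingency table
              obs[a, b] += 1
          obs /= obs.sum()
          exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # expected under independence
          i, j = np.indices((n_categories, n_categories))
          d = np.abs(i - j) if weights == "linear" else (i - j) ** 2
          w = d / d.max()                          # disagreement weights
          return 1.0 - (w * obs).sum() / (w * exp).sum()

      # illustrative ratings on a 4-point item from a director and a staff member
      director = [3, 2, 2, 1, 0, 3, 2]
      staff    = [3, 2, 1, 1, 0, 2, 2]
      print(percent_agreement(director, staff))
      print(weighted_kappa(director, staff, n_categories=4))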

  5. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
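
    The classical correction described here can be shown with a minimal sketch (not the authors' software): the reliability ratio lambda is estimated from duplicate measurements in the reliability study, and the observed slope is divided by lambda. Variable names and the data layout are assumptions. For example, with lambda of about 0.7, an observed slope of 0.50 would be corrected to roughly 0.71.

      import numpy as np

      def reliability_ratio(x1, x2):
          # Estimate lambda = true variance / observed variance from a reliability
          # study with two replicate measurements x1, x2 of the risk factor.
          x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
          within_var = np.mean((x1 - x2) ** 2) / 2.0       # measurement error variance
          total_var = np.var(np.concatenate([x1, x2]), ddof=1)
          return 1.0 - within_var / total_var

      def corrected_slope(slope_observed, lam):
          # Classical regression dilution correction: beta_true ~ beta_observed / lambda.
          return slope_observed / lam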

  6. Escaping the correction for body surface area when calculating glomerular filtration rate in children

    Energy Technology Data Exchange (ETDEWEB)

    Piepsz, Amy; Tondeur, Marianne [CHU St. Pierre, Department of Radioisotopes, Brussels (Belgium); Ham, Hamphrey [University Hospital Ghent, Department of Nuclear Medicine, Ghent (Belgium)

    2008-09-15

    {sup 51}Cr ethylene diamine tetraacetic acid ({sup 51}Cr EDTA) clearance is nowadays considered an accurate and reproducible method for measuring glomerular filtration rate (GFR) in children. Normal values as a function of age, corrected for body surface area, have recently been updated. However, much criticism has been expressed about the validity of body surface area correction. The aim of the present paper was to present the normal GFR values, not corrected for body surface area, with the associated percentile curves. For that purpose, the same patients as in the previous paper were selected, namely those with no recent urinary tract infection, having a normal left to right {sup 99m}Tc MAG3 uptake ratio and a normal kidney morphology on the early parenchymal images. A single blood sample method was used for {sup 51}Cr EDTA clearance measurement. Clearance values, not corrected for body surface area, increased progressively up to adolescence. The percentile curves were determined and allow, for a single patient, an accurate estimate of the non-corrected clearance level and of its evolution with time, whatever the age. (orig.)

  7. Assay Validation For Quantitation of Sn 2+ In Radiopharmaceutical Kits

    International Nuclear Information System (INIS)

    Muthalib, A; Ramli, Martalena; Herlina; Sarmini, Endang; Suharmadi; Besari, Canti

    1998-01-01

    An assay validation for quantitation of Sn2+ in radiopharmaceutical kits based on indirect iodometric titration is described. The method is based on the oxidation of Sn2+ using a known excess of iodine, the unreacted excess iodine being titrated with thiosulphate. Typical analytical parameters considered in this assay validation are precision, accuracy, selectivity or specificity, range, and linearity. The precision of the analytical method is quite good, represented by coefficients of variation in the range of 1.0% to 6.9% for 10 runs of analysis, except for one analysis which showed a coefficient of 10.2%. The method has an accuracy of 95.6%-99%, expressed as percent recoveries at theoretical Sn2+ amounts of 463 μg to 2318 μg

  8. Assessment of Atmospheric Correction Methods for Sentinel-2 MSI Images Applied to Amazon Floodplain Lakes

    Directory of Open Access Journals (Sweden)

    Vitor Souza Martins

    2017-03-01

    Full Text Available Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (RW). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of the cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%-96%) and blue (84%-92%) bands. The atmospheric correction results for the visible bands illustrate the limitation of the methods over dark lakes (RW < 1%), and a better match of the RW shape compared with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, RW was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in RW (RMSE < 0.006). Finally, an extensive validation of the methods is required for

  9. Exploitation of jet properties for energy scale corrections for the CMS calorimeters

    International Nuclear Information System (INIS)

    Kirschenmann, Henning

    2011-02-01

    Jets form important event signatures in proton-proton collisions at the Large Hadron Collider (LHC) and the precise measurement of their energy is a crucial premise for a manifold of physics studies. Jets, which are reconstructed exclusively from calorimeter information, have been widely used within the CMS collaboration. However, the response of the calorimeters to incident particles depends heavily on their energy. In addition, it has been observed at previous experiments that the charged particle multiplicity and the radial distribution of constituents differ for jets induced by light quarks or by gluons. In conjunction with the non-linearity of the CMS calorimeters, this contributes to a mean energy response deviating from unity for calorimeter jets, depending on the jet-flavour. This thesis describes a jet-energy correction to be applied in addition to the default corrections within the CMS collaboration. This correction aims at decreasing the flavour dependence of the jet-energy response and improving the energy resolution. As many different effects contribute to the observed jet-energy response, a set of observables are introduced and corrections based on these observables are tested with respect to the above aims. A jet-width variable, which is defined from energy measured in the calorimeter, shows the best performance: A correction based on this observable improves the energy resolution by up to 20% at high transverse momenta in the central detector region and decreases the flavour dependence of the jet-energy response by a factor of two. A parametrisation of the correction is both derived from and validated on simulated data. First results from experimental data, to which the correction has been applied, are presented. The proposed jet-width correction shows a promising level of performance. (orig.)

  10. Exploitation of jet properties for energy scale corrections for the CMS calorimeters

    Energy Technology Data Exchange (ETDEWEB)

    Kirschenmann, Henning

    2011-02-15

    Jets form important event signatures in proton-proton collisions at the Large Hadron Collider (LHC) and the precise measurement of their energy is a crucial premise for a manifold of physics studies. Jets, which are reconstructed exclusively from calorimeter information, have been widely used within the CMS collaboration. However, the response of the calorimeters to incident particles depends heavily on their energy. In addition, it has been observed at previous experiments that the charged particle multiplicity and the radial distribution of constituents differ for jets induced by light quarks or by gluons. In conjunction with the non-linearity of the CMS calorimeters, this contributes to a mean energy response deviating from unity for calorimeter jets, depending on the jet-flavour. This thesis describes a jet-energy correction to be applied in addition to the default corrections within the CMS collaboration. This correction aims at decreasing the flavour dependence of the jet-energy response and improving the energy resolution. As many different effects contribute to the observed jet-energy response, a set of observables are introduced and corrections based on these observables are tested with respect to the above aims. A jet-width variable, which is defined from energy measured in the calorimeter, shows the best performance: A correction based on this observable improves the energy resolution by up to 20% at high transverse momenta in the central detector region and decreases the flavour dependence of the jet-energy response by a factor of two. A parametrisation of the correction is both derived from and validated on simulated data. First results from experimental data, to which the correction has been applied, are presented. The proposed jet-width correction shows a promising level of performance. (orig.)

  11. Consistency of FMEA used in the validation of analytical procedures.

    Science.gov (United States)

    Oldenhof, M T; van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Vredenbregt, M J; Weda, M; Barends, D M

    2011-02-20

    In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection-Mass Spectrometry (HPLC-DAD-MS) analytical procedure used in the quality control of medicines. Each team was free to define its own ranking scales for the severity (S), occurrence (O), and detection (D) of failure modes. We calculated Risk Priority Numbers (RPNs) and identified the failure modes above the 90th percentile of RPN values as failure modes needing urgent corrective action; failure modes falling between the 75th and 90th percentile of RPN values were identified as failure modes needing necessary corrective action. Team 1 and Team 2 identified five and six failure modes needing urgent corrective action respectively, with two being commonly identified. Of the failure modes needing necessary corrective actions, about a third were commonly identified by both teams. These results show inconsistency in the outcome of the FMEA. To improve consistency, we recommend that FMEA is always carried out under the supervision of an experienced FMEA facilitator and that the FMEA team has at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating that this inconsistency is not always a drawback. Copyright © 2010 Elsevier B.V. All rights reserved.
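
    A minimal sketch of the ranking logic described above (illustrative failure modes and scores, not the teams' actual FMEAs): RPN = S x O x D, with the 90th and 75th percentiles of the RPN distribution used as the action thresholds.

      import numpy as np

      # illustrative (S, O, D) scores for a handful of failure modes
      failure_modes = {
          "wrong mobile phase composition": (7, 4, 3),
          "column degradation":             (6, 5, 4),
          "MS tuning drift":                (8, 3, 5),
          "sample mislabelling":            (9, 2, 2),
          "integration parameter error":    (5, 6, 3),
      }

      rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
      values = np.array(list(rpn.values()))
      urgent_cut = np.percentile(values, 90)       # above 90th percentile: urgent action
      necessary_cut = np.percentile(values, 75)    # 75th-90th percentile: necessary action

      for name, v in sorted(rpn.items(), key=lambda kv: -kv[1]):
          if v > urgent_cut:
              level = "urgent"
          elif v > necessary_cut:
              level = "necessary"
          else:
              level = "monitor"
          print(f"{name:35s} RPN={v:3d}  {level}")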

  12. Verification and validation of decision support software: Expert Choice{trademark} and PCM{trademark}

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Q.H.; Martin, J.D.

    1994-11-04

    This report documents the verification and validation of two decision support programs: EXPERT CHOICE{trademark} and PCM{trademark}. Both programs use the Analytic Hierarchy Process (AHP) -- or pairwise comparison technique -- developed by Dr. Thomas L. Saaty. In order to provide an independent method for validating the two programs, the pairwise comparison algorithm was implemented in a standard mathematical program. A standard data set -- selecting a car to purchase -- was used with each of the three programs for validation. The results show that both commercial programs performed correctly.
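
    An independent check of the pairwise comparison technique can be sketched as follows; the comparison matrix is illustrative and this is not the validation program used in the report. Priorities are taken as the normalized principal eigenvector of the reciprocal comparison matrix, and Saaty's consistency ratio flags inconsistent judgments.

      import numpy as np

      def ahp_priorities(pairwise):
          # Priority weights from a reciprocal pairwise comparison matrix (AHP).
          # Returns the normalized principal eigenvector and Saaty's consistency ratio.
          A = np.asarray(pairwise, float)
          n = A.shape[0]
          eigvals, eigvecs = np.linalg.eig(A)
          k = np.argmax(eigvals.real)
          w = np.abs(eigvecs[:, k].real)
          w /= w.sum()
          lam_max = eigvals[k].real
          ci = (lam_max - n) / (n - 1)                  # consistency index
          ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # commonly quoted random indices
          return w, ci / ri

      # illustrative car-selection criteria: price vs. reliability vs. fuel economy
      A = [[1,   3,   5],
           [1/3, 1,   2],
           [1/5, 1/2, 1]]
      weights, cr = ahp_priorities(A)
      print(weights, cr)   # a CR below ~0.10 is conventionally considered acceptable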

  13. Development of an approach to correcting MicroPEM baseline drift.

    Science.gov (United States)

    Zhang, Ting; Chillrud, Steven N; Pitiranggon, Masha; Ross, James; Ji, Junfeng; Yan, Beizhan

    2018-07-01

    Fine particulate matter (PM 2.5 ) is associated with various adverse health outcomes. The MicroPEM (RTI, NC), a miniaturized real-time portable particulate sensor with an integrated filter for collecting particles, has been widely used for personal PM 2.5 exposure assessment. Five-day deployments were targeted on a total of 142 deployments (personal or residential) to obtain real-time PM 2.5 levels from children living in New York City and Baltimore. Among these 142 deployments, 79 applied high-efficiency particulate air (HEPA) filters in the field at the beginning and end of each deployment to adjust the zero level of the nephelometer. However, unacceptable baseline drift was observed in a large fraction (> 40%) of acquisitions in this study even after HEPA correction. This drift issue has been observed in several other studies as well. The purpose of the present study is to develop an algorithm to correct the baseline drift in MicroPEM based on central site ambient data during inactive time periods. A running baseline & gravimetric correction (RBGC) method was developed based on the comparison of MicroPEM readings during inactive periods to ambient PM 2.5 levels provided by fixed monitoring sites and the gravimetric weight of PM 2.5 collected on the MicroPEM filters. The results after RBGC correction were compared with those using HEPA approach and gravimetric correction alone. Seven pairs of duplicate acquisitions were used to validate the RBGC method. The percentages of acquisitions with baseline drift problems were 42%, 53% and 10% for raw, HEPA corrected, and RBGC corrected data, respectively. Pearson correlation analysis of duplicates showed an increase in the coefficient of determination from 0.75 for raw data to 0.97 after RBGC correction. In addition, the slope of the regression line increased from 0.60 for raw data to 1.00 after RBGC correction. The RBGC approach corrected the baseline drift issue associated with MicroPEM data. The algorithm developed
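
    The full RBGC algorithm is described in the paper; the sketch below only illustrates the general idea under simplifying assumptions: the baseline drift is estimated as the nephelometer-minus-ambient offset during inactive periods, interpolated over the whole deployment, and the corrected series is then rescaled so that its time-integrated mass matches the gravimetric filter mass. All names, units and the interpolation choice are illustrative, not the published algorithm.

      import numpy as np

      def rbgc_correct(times, neph, inactive_mask, ambient, filter_mass_ug, flow_lpm):
          # times          : seconds since start (increasing)
          # neph           : raw nephelometer PM2.5 readings (ug/m3)
          # inactive_mask  : True where the wearer was inactive (e.g., overnight at home)
          # ambient        : central-site PM2.5 matched to the same timestamps (ug/m3)
          # filter_mass_ug : gravimetric mass collected on the integrated filter
          # flow_lpm       : sampling flow rate (litres per minute)

          # 1) running baseline: drift estimated only during inactive periods
          drift_pts = neph[inactive_mask] - ambient[inactive_mask]
          drift = np.interp(times, times[inactive_mask], drift_pts)
          corrected = neph - drift

          # 2) gravimetric rescaling so the time-integrated mass matches the filter
          dt_min = np.gradient(times) / 60.0
          sampled_m3 = flow_lpm * dt_min / 1000.0
          estimated_mass = np.sum(np.clip(corrected, 0, None) * sampled_m3)
          scale = filter_mass_ug / estimated_mass if estimated_mass > 0 else 1.0
          return corrected * scale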

  14. The Bioelectromagnetics Society Annual Meeting (11th) Held at Tucson, Arizona on June 18-22, 1989: Abstracts

    Science.gov (United States)

    1989-06-01

    ...blocker or a Ca++ chelator) to validate the test procedure. Behaviors assessed include the percent correct of first 8 choices, the number of choices... enough sleep, asthenia, sleepiness, eye pain, ear noise, and eyelid tremor). The estimation of the possible harmful EMF influence on the population's...

  15. Comparative study of chance coincidence correction in measuring 223Ra and 224Ra by delay coincidence method

    International Nuclear Information System (INIS)

    Yan Yongjun; Huang Derong; Zhou Jianliang; Qiu Shoukang

    2013-01-01

    The delay coincidence measurement of 220Rn and 219Rn has been proved to be a valid indirect method for measuring 224Ra and 223Ra extracted from natural water, which can provide valuable information on estuarine/ocean mixing, submarine groundwater discharge, and water/soil interactions. In practical operation a chance coincidence correction must be applied, most commonly Moore's correction method; however, both Moore's and Giffin's methods are incomplete in some respects. In this paper a modification of Moore's method (method 1) and a new chance coincidence correction formula (method 2) are provided. Experimental results are presented to support these conclusions. The results show that precision is improved when the counting rate is less than 70 min-1. (authors)

  16. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    Science.gov (United States)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which corresponds much more closely to normal human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented in an FPGA. The experimental results show that the proposed algorithm for unwrapping and distortion correction has lower computational complexity, and that the architecture for dynamic panoramic image processing has lower hardware cost and power consumption. The validity of the proposed algorithm is demonstrated.
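
    The geometric mapping at the heart of such unwrapping can be sketched in a few lines. This is a software illustration only; the paper's implementation evaluates the trigonometric terms with CORDIC in FPGA hardware, and the image and parameter names here are assumptions.

      import numpy as np

      def unwrap_panorama(annular, center, r_inner, r_outer, out_w, out_h):
          # Map an annular (donut-shaped) panoramic image to a rectangular strip
          # by inverse mapping with bilinear interpolation.
          cx, cy = center
          theta = 2.0 * np.pi * np.arange(out_w) / out_w            # column -> azimuth
          radius = r_inner + (r_outer - r_inner) * np.arange(out_h) / (out_h - 1)
          R, T = np.meshgrid(radius, theta, indexing="ij")
          xs = cx + R * np.cos(T)        # source coordinates in the annular image
          ys = cy + R * np.sin(T)

          x0 = np.clip(np.floor(xs).astype(int), 0, annular.shape[1] - 2)
          y0 = np.clip(np.floor(ys).astype(int), 0, annular.shape[0] - 2)
          dx = np.clip(xs - x0, 0.0, 1.0)
          dy = np.clip(ys - y0, 0.0, 1.0)
          # bilinear interpolation between the four neighbouring pixels
          return (annular[y0,     x0]     * (1 - dx) * (1 - dy) +
                  annular[y0,     x0 + 1] * dx       * (1 - dy) +
                  annular[y0 + 1, x0]     * (1 - dx) * dy +
                  annular[y0 + 1, x0 + 1] * dx       * dy)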

  17. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    Science.gov (United States)

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
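
    The details of the SVD-BC technique are given in the paper; as a rough, hedged illustration of how an SVD-derived background model might be removed from a (time x wavelength) diode-array data matrix, one could project each spectrum onto the leading singular vectors of peak-free regions and subtract that component. The choice of blank rows and the number of components is exactly the kind of setting that trades residual background against the added noise noted above.

      import numpy as np

      def svd_background_correct(sample, blank_rows, n_components=2):
          # sample     : 2-D array, rows = retention times, columns = wavelengths
          # blank_rows : indices (or boolean mask) of rows known to contain no peaks
          background = sample[blank_rows, :]                  # peak-free spectra
          U, s, Vt = np.linalg.svd(background, full_matrices=False)
          B = Vt[:n_components]                               # leading background spectra
          coeffs = sample @ B.T                               # projection coefficients
          return sample - coeffs @ B                          # remove background component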

  18. Evaluation of speech after correction of rhinophonia with pushback palatoplasty combined with pharyngeal flap.

    Science.gov (United States)

    Dixon, V L; Bzoch, K R; Habal, M B

    1979-07-01

    A comparison is made of the preoperative and postoperative speech evaluations of 15 selected subjects who had pharyngeal flap operations combined with palatal pushback. Postoperatively, 13 of the 15 patients (86 percent) showed no abnormal nasal emission and no evidence of significant hypernasality during word production. Gross substitution errors were also corrected by the surgical repair. While the number of patients is small, this study indicates equal effectiveness of the surgical technique described--regardless of the sex, the medical diagnosis, whether the procedure was primary or secondary, or the amount of postoperative time--providing there is good function of the muscles of the soft palate.

  19. Validation of the XLACS code related to contribution of resolved and unresolved resonances and background cross sections

    International Nuclear Information System (INIS)

    Anaf, J.; Chalhoub, E.S.

    1990-01-01

    The procedures for calculating contributions of resolved and unresolved resonances and background cross sections, in XLACS code, were revised. Constant weighting function and zero Kelvin temperature were considered. Discrepancies found were corrected and now the validated XLACS code generates results that are correct and in accordance with its originally established procedures. (author)

  20. Implementation of Coupled Skin Temperature Analysis and Bias Correction in a Global Atmospheric Data Assimilation System

    Science.gov (United States)

    Radakovich, Jon; Bosilovich, M.; Chern, Jiun-dar; daSilva, Arlindo

    2004-01-01

    The NASA/NCAR Finite Volume GCM (fvGCM) with the NCAR CLM (Community Land Model) version 2.0 was integrated into the NASA/GMAO Finite Volume Data Assimilation System (fvDAS). A new method was developed for coupled skin temperature assimilation and bias correction where the analysis increment and bias correction term is passed into the CLM2 and considered a forcing term in the solution to the energy balance. For our purposes, the fvDAS CLM2 was run at 1 deg. x 1.25 deg. horizontal resolution with 55 vertical levels. We assimilate the ISCCP-DX (30 km resolution) surface temperature product. The atmospheric analysis was performed 6-hourly, while the skin temperature analysis was performed 3-hourly. The bias correction term, which was updated at the analysis times, was added to the skin temperature tendency equation at every timestep. In this presentation, we focus on the validation of the surface energy budget at the in situ reference sites for the Coordinated Enhanced Observation Period (CEOP). We will concentrate on sites that include independent skin temperature measurements and complete energy budget observations for the month of July 2001. In addition, MODIS skin temperature will be used for validation. Several assimilations were conducted and preliminary results will be presented.

  1. EnviroAtlas - Des Moines, IA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...

  2. EnviroAtlas - Green Bay, WI - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...

  3. EnviroAtlas - New York, NY - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...

  4. EnviroAtlas - New Bedford, MA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...

  5. Hallux valgus angle as main predictor for correction of hallux valgus

    Directory of Open Access Journals (Sweden)

    Malefijt Maarten

    2008-05-01

    Full Text Available Abstract Background It is recognized that different types of hallux valgus exist. Classification is based on radiographic and clinical parameters. The severity of different parameters is used in algorithms to choose between different surgical procedures. Because there is no consensus about each parameter nor their cut-off points, we conducted this study to analyze the influence of these variables on the postoperative hallux valgus angle. Methods After informed consent, 115 patients (136 feet) were included. Bunionectomy, osteotomy, lateralization of the distal fragment, lateral release and medial capsulorrhaphy were performed in all patients. Data were collected on preoperative and postoperative HVA, IMA and DMAA measurements. Forty cases have been added since our findings in a previous article; therefore, the current data concern an expanded study group with longer follow-up and have not been published before. At least two-year follow-up data were evaluated with logistic regression and independent t-tests. Results Preoperative HVA was significant for prediction of postoperative HVA in logistic regression. IMA and DMAA were not significant for prediction of postoperative HVA in logistic regression, although they were significantly increased in larger deformities. In patients with a preoperative HVA of 37 degrees or more, satisfactory correction could be obtained in 65 percent; the other nine of these 26 patients developed subluxation. Conclusion The preoperative HVA was the main radiological predictor for correction of hallux valgus; the correction rate declined for preoperative HVA of 37 degrees or more. IMA and DMAA had only a minor role in patients with preoperative HVA lower than 37 degrees, but likely contributed in those with preoperative HVA of 37 degrees or more.

  6. 40 CFR 60.1445 - What are the emission limits for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 60.1445 Section 60.1445 Protection of Environment... Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1445 What are the emission limits for air curtain incinerators that burn 100 percent yard waste? If your air curtain incinerator combusts...

  7. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative test of the performance of a model through the comparison of model predictions with independent sets of observations from the system being simulated. The ability to show that the model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the testing of assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  8. Higher order corrections to holographic black hole chemistry

    Science.gov (United States)

    Sinamuli, Musema; Mann, Robert B.

    2017-10-01

    We investigate the holographic Smarr relation beyond the large N limit. By making use of the holographic dictionary, we find that the bulk correlates of subleading 1 /N corrections to this relation are related to the couplings in Lovelock gravity theories. We likewise obtain a holographic equation of state and check its validity for a variety of interesting and nontrivial black holes, including rotating planar black holes in Gauss-Bonnet-Born-Infeld gravity, and nonextremal rotating black holes in minimal five-dimensional gauged supergravity. We provide an explanation of the N -dependence of the holographic Smarr relation in terms of contributions due to planar and nonplanar diagrams in the dual theory.

  9. Psychometric properties of the national eye institute refractive error correction quality-of-life questionnaire among Iranian patients

    Directory of Open Access Journals (Sweden)

    Amir H Pakpour

    2013-01-01

    Conclusions: The Iranian version of the NEI-RQL-42 is a valid and reliable instrument to assess refractive error correction quality-of-life in Iranian patients. Moreover this questionnaire can be used to evaluate the effectiveness of interventions in patients with refractive errors.

  10. Asynchronous and corrected-asynchronous numerical solutions of parabolic PDES on MIMD multiprocessors

    Science.gov (United States)

    Amitai, Dganit; Averbuch, Amir; Itzikowitz, Samuel; Turkel, Eli

    1991-01-01

    A major problem in achieving significant speed-up on parallel machines is the overhead involved with synchronizing the concurrent processes. Removing the synchronization constraint has the potential of speeding up the computation. The authors present asynchronous (AS) and corrected-asynchronous (CA) finite difference schemes for the multi-dimensional heat equation. Although the discussion concentrates on the Euler scheme for the solution of the heat equation, it has the potential for being extended to other schemes and other parabolic partial differential equations (PDEs). These schemes are analyzed and implemented on the shared memory multi-user Sequent Balance machine. Numerical results for one and two dimensional problems are presented. It is shown experimentally that the synchronization penalty can be about 50 percent of run time: in most cases, the asynchronous scheme runs twice as fast as the parallel synchronous scheme. In general, the efficiency of the parallel schemes increases with processor load, with the time level, and with the problem dimension. The efficiency of the AS may reach 90 percent and over, but it provides accurate results only for steady-state values. The CA, on the other hand, is less efficient, but provides more accurate results for intermediate (non steady-state) values.

  11. 40 CFR 62.15375 - What are the emission limits for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 62.15375 Section 62.15375 Protection of Environment... Combustion Units Constructed on or Before August 30, 1999 Air Curtain Incinerators That Burn 100 Percent Yard Waste § 62.15375 What are the emission limits for air curtain incinerators that burn 100 percent yard...

  12. 40 CFR 62.15380 - How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 62.15380 Section 62.15380 Protection of Environment... Combustion Units Constructed on or Before August 30, 1999 Air Curtain Incinerators That Burn 100 Percent Yard Waste § 62.15380 How must I monitor opacity for air curtain incinerators that burn 100 percent yard...

  13. 40 CFR 60.1920 - What are the emission limits for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 60.1920 Section 60.1920 Protection of Environment... or Before August 30, 1999 Model Rule-Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1920 What are the emission limits for air curtain incinerators that burn 100 percent yard waste? If...

  14. Validation of molecular crystal structures from powder diffraction data with dispersion-corrected density functional theory (DFT-D)

    DEFF Research Database (Denmark)

    van de Streek, Jacco; Neumann, Marcus A

    2014-01-01

    In 2010 we energy-minimized 225 high-quality single-crystal (SX) structures with dispersion-corrected density functional theory (DFT-D) to establish a quantitative benchmark. For the current paper, 215 organic crystal structures determined from X-ray powder diffraction (XRPD) data and published...

  15. French validation of the Foot Function Index (FFI).

    Science.gov (United States)

    Pourtier-Piotte, C; Pereira, B; Soubrier, M; Thomas, E; Gerbaud, L; Coudeyre, E

    2015-10-01

    French validation of the Foot Function Index (FFI), a self-administered questionnaire designed to evaluate the rheumatoid foot according to 3 domains: pain, disability and activity restriction. The first step consisted of translation/back-translation and cultural adaptation according to the validated methodology. The second stage was a prospective validation on 53 patients with rheumatoid arthritis who filled out the FFI. The following data were collected: pain (Visual Analog Scale), disability (Health Assessment Questionnaire) and activity restrictions (McMaster Toronto Arthritis questionnaire). A test/retest procedure was performed 15 days later. The statistical analyses focused on acceptability, internal consistency (Cronbach's alpha and Principal Component Analysis), test-retest reproducibility (concordance coefficients), external validity (correlation coefficients) and responsiveness to change. The FFI-F is a culturally acceptable version for French patients with rheumatoid arthritis. The Cronbach's alpha ranged from 0.85 to 0.97. Reproducibility was acceptable (correlation coefficients > 0.56). External validity and responsiveness to change were good. The use of a rigorous methodology allowed the validation of the FFI in the French language (FFI-F). This tool can be used in routine practice and clinical research for evaluating the rheumatoid foot. The FFI-F could be used in other pathologies with foot-related functional impairments. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  16. Is BMI a valid measure of obesity in postmenopausal women?

    Science.gov (United States)

    Banack, Hailey R; Wactawski-Wende, Jean; Hovey, Kathleen M; Stokes, Andrew

    2018-03-01

    Body mass index (BMI) is a widely used indicator of obesity status in clinical settings and population health research. However, there are concerns about the validity of BMI as a measure of obesity in postmenopausal women. Unlike BMI, which is an indirect measure of obesity and does not distinguish lean from fat mass, dual-energy x-ray absorptiometry (DXA) provides a direct measure of body fat and is considered a gold standard of adiposity measurement. The goal of this study is to examine the validity of using BMI to identify obesity in postmenopausal women relative to total body fat percent measured by DXA scan. Data from 1,329 postmenopausal women participating in the Buffalo OsteoPerio Study were used in this analysis. At baseline, women ranged in age from 53 to 85 years. Obesity was defined as BMI ≥ 30 kg/m² and body fat percent (BF%) greater than 35%, 38%, or 40%. We calculated sensitivity, specificity, positive predictive value, and negative predictive value to evaluate the validity of BMI-defined obesity relative to BF%. We further explored the validity of BMI relative to BF% using graphical tools, such as scatterplots and receiver-operating characteristic curves. Youden's J index was used to determine the empirical optimal BMI cut-point for each level of BF%-defined obesity. The sensitivity of BMI-defined obesity was 32.4% for 35% body fat, 44.6% for 38% body fat, and 55.2% for 40% body fat. Corresponding specificity values were 99.3%, 97.1%, and 94.6%, respectively. The empirical optimal BMI cut-point to define obesity is 24.9 kg/m² for 35% BF, 26.49 kg/m² for 38% BF, and 27.05 kg/m² for 40% BF according to Youden's index. Results demonstrate that a BMI cut-point of 30 kg/m² does not appear to be an appropriate indicator of true obesity status in postmenopausal women. Empirical estimates of the validity of BMI from this study may be used by other investigators to account for BMI-related misclassification in older women.
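
    The validity measures reported above can be reproduced from individual-level data with a short sketch (illustrative variable names; not the study's analysis code). Sweeping candidate BMI cut-points and keeping the one with the largest Youden's J is how empirical optimal cut-points of this kind are typically obtained.

      import numpy as np

      def classification_validity(bmi, body_fat_pct, bmi_cut=30.0, bf_cut=35.0):
          # Validity of BMI-defined obesity against a DXA body-fat criterion.
          bmi = np.asarray(bmi, float)
          bf = np.asarray(body_fat_pct, float)
          test = bmi >= bmi_cut          # BMI-defined obesity
          truth = bf > bf_cut            # criterion (DXA) obesity
          tp = np.sum(test & truth)
          tn = np.sum(~test & ~truth)
          fp = np.sum(test & ~truth)
          fn = np.sum(~test & truth)
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          return {"sensitivity": sens, "specificity": spec,
                  "ppv": tp / (tp + fp), "npv": tn / (tn + fn),
                  "youden_j": sens + spec - 1}

      def optimal_cutpoint(bmi, body_fat_pct, bf_cut=35.0):
          # BMI cut-point that maximizes Youden's J for a given body-fat criterion.
          candidates = np.arange(20.0, 40.0, 0.05)
          js = [classification_validity(bmi, body_fat_pct, c, bf_cut)["youden_j"]
                for c in candidates]
          return candidates[int(np.argmax(js))]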

  17. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    Jiang Xiaomo; Mahadevan, Sankaran

    2007-01-01

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment
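
    A deliberately simplified sketch of the decision rule (not the paper's formulation): if measurement errors are assumed Gaussian with different spreads under the "model valid" and "model invalid" hypotheses, a likelihood ratio can be computed from the residuals and compared with a threshold that would, in the full method, be derived from the decision costs and priors.

      import numpy as np
      from scipy.stats import norm

      def likelihood_ratio(data, prediction, sigma_valid, sigma_invalid):
          # Likelihood of the residuals under "model valid" (errors ~ N(0, sigma_valid))
          # divided by the likelihood under "model invalid" (errors ~ N(0, sigma_invalid)).
          resid = np.asarray(data, float) - prediction
          log_lr = (norm.logpdf(resid, scale=sigma_valid).sum()
                    - norm.logpdf(resid, scale=sigma_invalid).sum())
          return np.exp(log_lr)

      def accept_model(data, prediction, threshold=1.0, **kw):
          # threshold stands in for the cost- and prior-dependent decision boundary
          return likelihood_ratio(data, prediction, **kw) >= threshold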

  18. Correction factor to determine total hydrogen+deuterium concentration obtained by inert gas fusion-thermal conductivity detection (IGF- TCD) technique

    International Nuclear Information System (INIS)

    Ramakumar, K.L.; Sesha Sayi, Y.; Shankaran, P.S.; Chhapru, G.C; Yadav, C.S.; Venugopal, V.

    2004-01-01

    The limitation of commercially available dedicated equipment based on Inert Gas Fusion-Thermal Conductivity Detection (IGF-TCD) for the determination of hydrogen+deuterium is described. For a given molar concentration, deuterium is underestimated vis-a-vis hydrogen because of its lower thermal conductivity and because its molecular weight is not considered in the calculations. An empirical correction factor based on the differences between the thermal conductivities of hydrogen, deuterium and the carrier gas argon, and on the mole fraction of deuterium in the sample, has been derived to correct the observed hydrogen+deuterium concentration. The corrected results obtained by the IGF-TCD technique have been validated by determining hydrogen and deuterium contents in a few samples using an independent method based on hot vacuum extraction-quadrupole mass spectrometry (HVE-QMS). Knowledge of the mole fraction of deuterium (XD) is necessary to effect the correction. The correction becomes insignificant at low XD values (XD < 0.2) as the precision of the IGF measurements is comparable with the extent of the correction. (author)

  19. Does One Know the Properties of a MICE Solid or Liquid Absorber to Better than 0.3 Percent?

    International Nuclear Information System (INIS)

    Green, Michael A.; Yang, Stephanie Q.

    2006-01-01

    This report discusses whether the MICE absorbers can be characterized to ±0.3 percent, so that one can predict the ionization cooling within the absorber. This report shows that most solid absorbers can be characterized to much better than ±0.3 percent. The two issues that dominate the characterization of the liquid cryogen absorbers are the dimensions of the liquid in the vessel and the density of the cryogenic liquid. The thickness of the window also plays a role. This report will show that a liquid hydrogen absorber can be characterized to better than ±0.3 percent, but a liquid helium absorber cannot be characterized to better than ±1 percent

  20. 40 CFR 60.1450 - How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 60.1450 Section 60.1450 Protection of Environment... Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1450 How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste? (a) Use EPA Reference Method 9 in appendix A of...

  1. 40 CFR 60.1925 - How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 60.1925 Section 60.1925 Protection of Environment... or Before August 30, 1999 Model Rule-Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1925 How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste? (a) Use...

  2. Solar-Diesel Hybrid Power System Optimization and Experimental Validation

    Science.gov (United States)

    Jacobus, Headley Stewart

    As of 2008, 1.46 billion people, or 22 percent of the world's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but new hybrid power systems are becoming a reliable method to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable operational costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system are used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies that frequently lack subsequent validation and experimental hybrid system performance studies.

  3. Stray light correction on array spectroradiometers for optical radiation risk assessment in the workplace

    International Nuclear Information System (INIS)

    Barlier-Salsi, A

    2014-01-01

    The European directive 2006/25/EC requires the employer to assess and, if necessary, measure the levels of exposure to optical radiation in the workplace. Array spectroradiometers can measure optical radiation from various types of sources; however, poor stray light rejection affects their accuracy. A stray light correction matrix, using a tunable laser, was developed at the National Institute of Standards and Technology (NIST). As tunable lasers are very expensive, the purpose of this study was to implement this method using only nine low power lasers, the other elements of the correction matrix being completed by interpolation and extrapolation. The correction efficiency was evaluated by comparing CCD spectroradiometers with and without correction against a scanning double monochromator device as reference. Similar to findings recorded by NIST, these experiments show that it is possible to reduce the spectral stray light by one or two orders of magnitude. In terms of workplace risk assessment, this spectral stray light correction method helps determine exposure levels, with an acceptable degree of uncertainty, for the majority of workplace situations. The level of uncertainty depends upon the model of spectroradiometer used; the best results are obtained with CCD detectors having an enhanced spectral sensitivity in the UV range. Corrected spectroradiometers thus require validation against a scanning double monochromator spectroradiometer before being used for risk assessment in the workplace. (paper)

  4. Stray light correction on array spectroradiometers for optical radiation risk assessment in the workplace.

    Science.gov (United States)

    Barlier-Salsi, A

    2014-12-01

    The European directive 2006/25/EC requires the employer to assess and, if necessary, measure the levels of exposure to optical radiation in the workplace. Array spectroradiometers can measure optical radiation from various types of sources; however, poor stray light rejection affects their accuracy. A stray light correction matrix, using a tunable laser, was developed at the National Institute of Standards and Technology (NIST). As tunable lasers are very expensive, the purpose of this study was to implement this method using only nine low power lasers, the other elements of the correction matrix being completed by interpolation and extrapolation. The correction efficiency was evaluated by comparing CCD spectroradiometers with and without correction against a scanning double monochromator device as reference. Similar to findings recorded by NIST, these experiments show that it is possible to reduce the spectral stray light by one or two orders of magnitude. In terms of workplace risk assessment, this spectral stray light correction method helps determine exposure levels, with an acceptable degree of uncertainty, for the majority of workplace situations. The level of uncertainty depends upon the model of spectroradiometer used; the best results are obtained with CCD detectors having an enhanced spectral sensitivity in the UV range. Corrected spectroradiometers thus require validation against a scanning double monochromator spectroradiometer before being used for risk assessment in the workplace.
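
    In the NIST-style formulation, the measured spectrum is modelled as the true spectrum contaminated through a stray-light distribution matrix D, so the correction amounts to solving a linear system. The sketch below shows only that final step; building D from the laser measurements plus interpolation and extrapolation is the substantive work and is not reproduced here. Names are illustrative.

      import numpy as np

      def apply_stray_light_correction(measured, D):
          # Correct one measured spectrum (counts per pixel) given the stray-light
          # distribution matrix D, where column j holds the normalized stray-light
          # signal produced in the other pixels by light belonging to pixel j.
          # Model: measured = (I + D) @ true, hence true = solve(I + D, measured).
          A = np.eye(len(measured)) + np.asarray(D, float)
          return np.linalg.solve(A, np.asarray(measured, float))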

  5. CORRECTION OF SEVERE STIFF SCOLIOSIS THROUGH EXTRAPLEURAL INTERBODY RELEASE AND OSTEOTOMY (LIEPO

    Directory of Open Access Journals (Sweden)

    Cleiton Dias Naves

    Full Text Available ABSTRACT Objective: To report a new technique for extrapleural interbody release with transcorporal osteotomy of the inferior vertebral plateau (LIEPO) and to evaluate the correction potential of this technique and its complications. Method: We included patients with scoliosis with a Cobb angle greater than 90° and flexibility less than 25% submitted to surgical treatment between 2012 and 2016 by the LIEPO technique at the National Institute of Traumatology and Orthopedics (INTO). Sagittal and coronal alignment and the translation of the apical vertebra were measured, the degree of correction of the deformity was calculated from the pre- and postoperative radiographs, and the complications were described. Results: Patients had an average blood loss of 1,525 ml, 8.8 hours of surgical time, 123° of scoliosis in the preoperative period, and a mean correction of 66%. There was no case of permanent neurological damage and no surgical revision. Conclusion: The LIEPO technique proved to be effective and safe in the treatment of severe stiff scoliosis, reaching a correction potential close to that of the PEISR (posterior extrapleural intervertebral space release) technique and superior to that of pVCR (posterior vertebral column resection), with no infection or permanent neurological deficit. New studies are needed to validate this promising technique.

  6. Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System

    Science.gov (United States)

    Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances

    2006-01-01

    In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time-mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm which allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias caused degradation of the diurnal amplitude of background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget. In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m-2. At the Rondonia station in Amazonia, the Bowen ratio changes direction in an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.

  7. Attenuation correction in pulmonary and myocardial single photon emission computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Almquist, H

    2000-01-01

    The objective was to develop and validate methods for single photon emission computed tomography, SPECT, allowing quantitative physiologic and diagnostic studies of the lung and heart. A method for correction of variable attenuation in SPECT, based on transmission measurements before administration of an isotope to the subject, was developed and evaluated. A protocol based upon geometrically well defined phantoms was developed. In a mosaic pattern phantom, count rates were corrected from 39-43% to 101-110% of the reference. In healthy subjects, non-gravitational pulmonary perfusion gradients observed without attenuation correction were artefacts caused by attenuation. Pulmonary density in the centre of the right lung, obtained from the transmission measurement, was 0.28 {+-} 0.03 g/ml in normal subjects. Mean density was lower in large lungs compared to smaller ones. We also showed that regional ventilation/perfusion ratios could be measured with SPECT, using the readily available tracer {sup 133}Xe. Because of the low energy of {sup 133}Xe this relies heavily upon attenuation correction. A commercially available system for attenuation correction with simultaneous emission and transmission, considered to improve myocardial SPECT, performed erroneously. This could lead to clinical misjudgement. We considered that manufacturer-independent pre-clinical tests are required. In a test of two other commercial systems, based on different principles, an adapted variant of our initial protocol proved useful. Only one of the systems provided correct emission count rates independently of the phantom configuration. Errors in the other system were related to inadequate compensation for the influence of emission activity on the transmission study.

  8. A validation of DRAGON based on lattice experiments

    International Nuclear Information System (INIS)

    Marleau, G.

    1996-01-01

    Here we address the validation of DRAGON using the Chalk River Laboratory experimental database, which has already been used for the validation of other codes. Because of the large variety of information for different fuel and moderator types compiled on this database, the most basic modules of DRAGON are thoroughly tested. The general behaviour observed with DRAGON is very good. Its main weakness is seen in the self-shielding calculation, where the correction applied to the inner fuel pin seems to be overestimated with respect to the outer fuel pins. One question which is left open in this paper concerns the need for inserting end-regions in the DRAGON cells when the heterogeneous B1 leakage model is used. (author)

  9. Basal area or stocking percent: which works best in controlling density in natural shortleaf pine stands

    Science.gov (United States)

    Ivan L. Sander

    1986-01-01

    Results from a shortleaf pine thinning study in Missouri show that continually thinning a stand to the same basal area will eventually create an understocked stand and reduce yields. Using stocking percent to control thinning intensity allows basal area to increase as stands get older. The best yield should occur when shortleaf pine is repeatedly thinned to 60 percent...

  10. Validation, Proof-of-Concept, and Postaudit of the Groundwater Flow and Transport Model of the Project Shoal Area

    International Nuclear Information System (INIS)

    Ahmed Hassan

    2004-01-01

    The groundwater flow and radionuclide transport model characterizing the Shoal underground nuclear test has been accepted by the State of Nevada Division of Environmental Protection. According to the Federal Facility Agreement and Consent Order (FFACO) between DOE and the State of Nevada, the next steps in the closure process for the site are then model validation (or postaudit), the proof-of-concept, and the long-term monitoring stage. This report addresses the development of the validation strategy for the Shoal model, needed for preparing the subsurface Corrective Action Decision Document-Corrective Action Plan and the development of the proof-of-concept tools needed during the five-year monitoring/validation period. The approach builds on a previous model, but is adapted and modified to the site-specific conditions and challenges of the Shoal site.

  11. Validation, Proof-of-Concept, and Postaudit of the Groundwater Flow and Transport Model of the Project Shoal Area

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed Hassan

    2004-09-01

    The groundwater flow and radionuclide transport model characterizing the Shoal underground nuclear test has been accepted by the State of Nevada Division of Environmental Protection. According to the Federal Facility Agreement and Consent Order (FFACO) between DOE and the State of Nevada, the next steps in the closure process for the site are then model validation (or postaudit), the proof-of-concept, and the long-term monitoring stage. This report addresses the development of the validation strategy for the Shoal model, needed for preparing the subsurface Corrective Action Decision Document-Corrective Action Plan and the development of the proof-of-concept tools needed during the five-year monitoring/validation period. The approach builds on a previous model, but is adapted and modified to the site-specific conditions and challenges of the Shoal site.

  12. A Technique for Real-Time Ionospheric Ranging Error Correction Based On Radar Dual-Frequency Detection

    Science.gov (United States)

    Lyu, Jiang-Tao; Zhou, Chen

    2017-12-01

    Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, the ionospheric model and ionospheric detection instruments, like ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the requirements of correction accuracy for advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which can generate an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
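
    A minimal sketch of the underlying dual-frequency relation (the standard first-order ionospheric group delay, delta_R ≈ 40.3·TEC/f^2; the frequencies and range values below are illustrative, not values from the paper):

```python
# Hedged sketch: first-order ionospheric range correction from dual-frequency
# radar range measurements (illustrative values, not the authors' code).
K = 40.3  # m^3/s^2, first-order ionospheric constant

def slant_tec(r1_m, r2_m, f1_hz, f2_hz):
    """Slant total electron content (electrons/m^2) inferred from the range
    difference measured at two adjacent carrier frequencies."""
    return (r1_m - r2_m) / (K * (1.0 / f1_hz**2 - 1.0 / f2_hz**2))

def range_correction(tec, f_hz):
    """Ionospheric group-delay range error (m) at the operating frequency."""
    return K * tec / f_hz**2

# Example with assumed P-band frequencies and measured ranges:
f1, f2 = 430e6, 445e6                   # Hz (illustrative)
r1, r2 = 1_000_130.9, 1_000_123.7       # m (illustrative measured ranges)
tec = slant_tec(r1, r2, f1, f2)
print(f"TEC = {tec:.3e} el/m^2, correction at f1 = {range_correction(tec, f1):.1f} m")
```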

  13. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  14. Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm

    Science.gov (United States)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.

    2011-01-01

    An aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on a time series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; this quantity is prescribed in the MODIS operational Dark Target algorithm based on a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and a fine-mode fraction at resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces due to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Due to its generic nature and developed angular correction, MAIAC performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.

  15. TEM investigation of irradiated U-7 weight percent Mo dispersion fuel

    International Nuclear Information System (INIS)

    Van den Berghe, S.

    2009-01-01

    In the FUTURE experiment, fuel plates containing U-7 weight percent Mo atomized powder were irradiated in the BR2 reactor. At a burn-up of approximately 33 percent 235U (6.5 percent FIMA, or 1.41 x 10^21 fissions/cm^3 of meat), the fuel plates showed significant deformation and the irradiation was stopped. The plates were submitted to detailed PIE at the Laboratory for High and Medium level Activity. The results of these examinations were reported in the scientific report of last year and published in the open literature. Since then, the microstructural aspects of the FUTURE fuel were studied in more detail using transmission electron microscopy (TEM), in an attempt to understand the nature of the interaction phase and the fission gas behavior in the atomized U(Mo) fuel. The FUTURE experiment is regarded as the definitive proof that the classical atomized U(Mo) dispersion fuel is not stable under irradiation, at least in the conditions required for normal operation of plate-type fuel. The main cause of the instability was identified as the irradiation behavior of the U(Mo)-Al interaction phase which is formed between the U(Mo) particles and the pure aluminum matrix during irradiation. It is assumed to become amorphous under irradiation and as such cannot retain the fission gas in stable bubbles. As a consequence, gas-filled voids are generated between the interaction layer and the matrix, resulting in fuel plate pillowing and failure. The objective of the TEM investigation was the confirmation of this assumed amorphisation of the interaction phase. A deeper understanding of the actual nature of this layer, and of the fission gas behaviour in these fuels in general, can allow a more targeted search for a solution to the fuel failures.

  16. Refined shear correction factor for very thick simply supported and uniformly loaded isosceles right triangular auxetic plates

    International Nuclear Information System (INIS)

    Lim, Teik-Cheng

    2016-01-01

    For moderately thick plates, the use of First order Shear Deformation Theory (FSDT) with a constant shear correction factor of 5/6 is sufficient to take into account the plate deflection arising from transverse shear deformation. For very thick plates, the use of Third order Shear Deformation Theory (TSDT) is preferred as it allows the shear strain distribution to be varied through the plate thickness. Therefore no correction factor is required in TSDT, unlike FSDT. Due to the complexity involved in TSDT, this paper obtains a more accurate shear correction factor for use in FSDT of very thick simply supported and uniformly loaded isosceles right triangular plates based on the TSDT. By matching the maximum deflections for this plate according to FSDT and TSDT, a variable shear correction factor is obtained. Results show that the shear correction factor for the simplified TSDT, i.e. 14/17, is least accurate. The commonly adopted shear correction factor of 5/6 in FSDT is valid only for very thin or highly auxetic plates. This paper provides a variable shear correction for FSDT deflection that matches the plate deflection by TSDT. This variable shear correction factor allows designers to justify the use of a commonly adopted shear correction factor of 5/6 even for very thick plates as long as the Poisson’s ratio of the plate material is sufficiently negative. (paper)

  17. Reliability and Validity of Qualitative and Operational Research Paradigm

    Directory of Open Access Journals (Sweden)

    Muhammad Bashir

    2008-01-01

    Full Text Available Both qualitative and quantitative paradigms try to find the same result: the truth. Qualitative studies are tools used in understanding and describing the world of human experience. Since we maintain our humanity throughout the research process, it is largely impossible to escape the subjective experience, even for the most experienced of researchers. Reliability and validity are issues that have been described in great detail by advocates of quantitative research. The validity and the norms of rigor that are applied to quantitative research are not entirely applicable to qualitative research. Validity in qualitative research means the extent to which the data is plausible, credible and trustworthy, and thus can be defended when challenged. Reliability and validity remain appropriate concepts for attaining rigor in qualitative research. Qualitative researchers have to salvage responsibility for reliability and validity by implementing verification strategies that are integral and self-correcting during the conduct of inquiry itself. This ensures the attainment of rigor using strategies inherent within each qualitative design, and moves the responsibility for incorporating and maintaining reliability and validity from external reviewers' judgments to the investigators themselves. There are different opinions on validity, with some suggesting that the concept of validity is incompatible with qualitative research and should be abandoned, while others argue that efforts should be made to ensure validity so as to lend credibility to the results. This paper is an attempt to clarify the meaning and use of reliability and validity in the qualitative research paradigm.

  18. Random access to mobile networks with advanced error correction

    Science.gov (United States)

    Dippold, Michael

    1990-01-01

    A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit in the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that a high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft-decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.

  19. Transanal repair of rectocele corrects obstructed defecation if it is not associated with anismus.

    Science.gov (United States)

    Tjandra, J J; Ooi, B S; Tang, C L; Dwyer, P; Carey, M

    1999-12-01

    Rectocele is often associated with anorectal symptoms. Various surgical techniques have been described to repair the rectocele. The surgical results are variable. This study evaluated the results of transanal repair of rectocele, with particular emphasis on the impact of concomitant anismus on postoperative functional outcome. Fifty-nine consecutive females who underwent transanal repair of rectocele for obstructed defecation were prospectively reviewed. All 59 patients were parous with a median parity of 2 (range, 1-6) and a median age of 58 (range, 46-68) years. The median length of follow-up was 19 (range, 6-40) months. Anismus was detected by anorectal physiology and defecography. The functional outcome was assessed by a standard questionnaire, physical examination, anorectal manometry, neurophysiology, and defecography. The quality-of-life index was obtained using a visual analog scale (from 1-10, with 10 being the best). The functional outcome of transanal repair of rectocele was superior in patients without anismus. Forty (93 percent) of the 43 patients without anismus showed improved evacuation after repair, compared with 6 (38 percent) of the 16 patients with anismus. Outcomes, including the quality-of-life index, were better when anismus was not present. Transanal repair of rectocele, when it is not associated with anismus, effectively corrects obstructed defecation.

  20. 40 CFR 63.5885 - How do I calculate percent reduction to demonstrate compliance for continuous lamination/casting...

    Science.gov (United States)

    2010-07-01

    40 CFR Protection of Environment, National Emission Standards for Hazardous Air Pollutants: Reinforced Plastic Composites Production, Testing and Initial Compliance Requirements, § 63.5885 - How do I calculate percent reduction to demonstrate compliance for continuous lamination/casting...

  1. p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results.

    Science.gov (United States)

    Simonsohn, Uri; Nelson, Leif D; Simmons, Joseph P

    2014-11-01

    Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the "choice overload" literature. © The Author(s) 2014.
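
    One simple way to implement the core idea is sketched below for two-group t-tests (a hedged illustration, not the authors' published code; the alpha level, optimizer bounds, and Kolmogorov-Smirnov loss are assumptions):

```python
# Hedged sketch of p-curve-style effect-size estimation for two-group t-tests.
import numpy as np
from scipy import stats, optimize

def pp_values(t_obs, n_per_group, d):
    """Probability of a result at least as extreme as t_obs, conditional on
    significance (alpha = .05, two-sided), under candidate effect size d."""
    t_obs = np.asarray(t_obs, float)
    n = np.asarray(n_per_group, float)
    df = 2 * n - 2
    ncp = d * np.sqrt(n / 2.0)                  # noncentrality parameter
    t_crit = stats.t.ppf(0.975, df)             # two-sided .05 cutoff
    num = stats.nct.sf(t_obs, df, ncp)          # P(T > t_obs | d)
    den = stats.nct.sf(t_crit, df, ncp)         # P(significant | d)
    return num / den

def pcurve_estimate(t_obs, n_per_group):
    """Effect size whose conditional p-value distribution is closest to uniform."""
    def loss(d):
        return stats.kstest(pp_values(t_obs, n_per_group, d), "uniform").statistic
    return optimize.minimize_scalar(loss, bounds=(0.0, 2.0), method="bounded").x

# Toy example: observed significant t values and per-group sample sizes.
t_vals = [2.3, 2.8, 2.1, 3.0, 2.5]
n_vals = [20, 25, 30, 22, 28]
print(f"p-curve effect-size estimate d = {pcurve_estimate(t_vals, n_vals):.2f}")
```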

  2. A Comparison of Three Approaches to Correct for Direct and Indirect Range Restrictions: A Simulation Study

    Science.gov (United States)

    Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane

    2016-01-01

    A common methodological problem in the evaluation of the predictive validity of selection methods, e.g. in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
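
    For reference, Thorndike's Case II correction for direct range restriction on the predictor can be written as a one-line function (a sketch; the variable names are ours):

```python
# Illustrative implementation of Thorndike's (1949) Case II correction for
# direct range restriction on the predictor (a sketch, not the study's code).
import math

def thorndike_case2(r_restricted: float, sd_unrestricted: float, sd_restricted: float) -> float:
    """Correct a predictor-criterion correlation observed in a selected
    (range-restricted) sample back to the unrestricted applicant pool."""
    u = sd_unrestricted / sd_restricted          # degree of range restriction
    r = r_restricted
    return (r * u) / math.sqrt(1.0 + r * r * (u * u - 1.0))

# Example: r = .30 in the selected group, predictor SD halved by selection.
print(round(thorndike_case2(0.30, sd_unrestricted=1.0, sd_restricted=0.5), 3))  # ~0.532
```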

  3. Validation test of advanced technology for IPV nickel-hydrogen flight cells - Update

    Science.gov (United States)

    Smithrick, John J.; Hall, Stephen W.

    1992-01-01

    Individual pressure vessel (IPV) nickel-hydrogen technology was advanced at NASA Lewis and under Lewis contracts with the intention of improving cycle life and performance. One advancement was to use 26 percent potassium hydroxide (KOH) electrolyte to improve cycle life. Another advancement was to modify the state-of-the-art cell design to eliminate identified failure modes. The modified design is referred to as the advanced design. A breakthrough in the LEO cycle life of IPV nickel-hydrogen cells has been previously reported. The cycle life of boiler plate cells containing 26 percent KOH electrolyte was about 40,000 LEO cycles compared to 3,500 cycles for cells containing 31 percent KOH. The boiler plate test results are in the process of being validated using flight hardware and real time LEO testing. The primary function of the advanced cell is to store and deliver energy for long-term, LEO spacecraft missions. The new features of this design are: (1) use of 26 percent rather than 31 percent KOH electrolyte; (2) use of a patented catalyzed wall wick; (3) use of serrated-edge separators to facilitate gaseous oxygen and hydrogen flow within the cell, while still maintaining physical contact with the wall wick for electrolyte management; and (4) use of a floating rather than a fixed stack (state-of-the-art) to accommodate nickel electrode expansion due to charge/discharge cycling. The significant improvements resulting from these innovations are: extended cycle life; enhanced thermal, electrolyte, and oxygen management; and accommodation of nickel electrode expansion.

  4. Paper-pen peer-correction versus wiki-based peer-correction

    Directory of Open Access Journals (Sweden)

    Froldova Vladimira

    2016-01-01

    Full Text Available This study reports on the comparison of the students’ achievement and their attitudes towards the use of paper-pen peer-correction and wiki-based peer-correction within English language lessons and CLIL Social Science lessons at the higher secondary school in Prague. Questionnaires and semi-structured interviews were utilized to gather information. The data suggests that students made considerable use of wikis and showed higher degrees of motivation in wiki-based peer-correction during English language lessons than in CLIL Social Science lessons. In both cases wikis not only contributed to developing students’ writing skills, but also helped students recognize the importance of collaboration.

  5. Validation in the Absence of Observed Events.

    Science.gov (United States)

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three pitfalls in the best use of available data: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  6. Author Correction

    DEFF Research Database (Denmark)

    Grundle, D S; Löscher, C R; Krahmann, G

    2018-01-01

    A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.

  7. An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations

    KAUST Repository

    Xu, Zhongfeng; Yang, Zong-Liang

    2012-01-01

    An improved dynamical downscaling method (IDD) with general circulation model (GCM) bias corrections is developed and assessed over North America. A set of regional climate simulations is performed with the Weather Research and Forecasting Model

  8. Real-Gas Correction Factors for Hypersonic Flow Parameters in Helium

    Science.gov (United States)

    Erickson, Wayne D.

    1960-01-01

    The real-gas hypersonic flow parameters for helium have been calculated for stagnation temperatures from 0 F to 600 F and stagnation pressures up to 6,000 pounds per square inch absolute. The results of these calculations are presented in the form of simple correction factors which must be applied to the tabulated ideal-gas parameters. It has been shown that the deviations from the ideal-gas law which exist at high pressures may cause a corresponding significant error in the hypersonic flow parameters when calculated as an ideal gas. For example, the ratio of the free-stream static to stagnation pressure as calculated from the thermodynamic properties of helium for a stagnation temperature of 80 F and pressure of 4,000 pounds per square inch absolute was found to be approximately 13 percent greater than that determined from the ideal-gas tabulation with a specific heat ratio of 5/3.
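
    The way such tabulated correction factors are applied can be illustrated with the ideal-gas isentropic relation for helium (gamma = 5/3); the Mach number and the 1.13 factor below are only an illustrative pairing based on the 13 percent example quoted above, not values from the report's tables.

```python
# Minimal sketch of applying a real-gas correction factor to an ideal-gas value
# (illustrative; the NASA report's tabulated factors are not reproduced here).
GAMMA_HE = 5.0 / 3.0

def ideal_pressure_ratio(mach: float, gamma: float = GAMMA_HE) -> float:
    """Ideal-gas isentropic free-stream static to stagnation pressure ratio."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** (-gamma / (gamma - 1.0))

def corrected_parameter(ideal_value: float, correction_factor: float) -> float:
    """Apply a tabulated real-gas correction factor to an ideal-gas parameter."""
    return correction_factor * ideal_value

# Example: an assumed Mach 10 free stream, with the ~13% real-gas increase
# reported for 80 F and 4,000 psia stagnation conditions.
p_ratio_ideal = ideal_pressure_ratio(10.0)
print(p_ratio_ideal, corrected_parameter(p_ratio_ideal, 1.13))
```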

  9. A model for growth of beta-phase particles in zirconium-2.5 wt percent niobium

    International Nuclear Information System (INIS)

    Chow, C.K.; Liner, Y.; Rigby, G.L.

    1984-08-01

    The kinetics of the α → β phase change in Zr-2.5 percent Nb pressure-tube material at constant temperature have been studied. The volume-fraction change of the β phase due to diffusion in an infinite α-phase matrix was considered, and a mathematical model with a numerical solution was developed to predict the transient spherical growth of the β-phase region. This model has been applied to Zr-2.5 wt percent Nb, and the calculated results were compared to experiment.

  10. Dark energy homogeneity in general relativity: Are we applying it correctly?

    Science.gov (United States)

    Duniya, Didam G. A.

    2016-04-01

    Thus far, there does not appear to be an agreed (or adequate) definition of homogeneous dark energy (DE). This paper seeks to define a valid, adequate homogeneity condition for DE. Firstly, it is shown that as long as w_x ≠ -1, DE must have perturbations. It is then argued, independent of w_x, that a correct definition of homogeneous DE is one whose density perturbation vanishes in the comoving gauge and, hence, in the DE rest frame. Using phenomenological DE, the consequence of this approach is then investigated in the observed galaxy power spectrum, with the power spectrum being normalized on small scales at the present epoch z=0. It is found that for high magnification bias, relativistic corrections in the galaxy power spectrum are able to distinguish the concordance model from both a homogeneous DE and a clustering DE on super-horizon scales.

  11. 26 CFR 1.46-9 - Requirements for taxpayers electing an extra one-half percent additional investment credit.

    Science.gov (United States)

    2010-04-01

    Requirements for taxpayers electing an extra one-half percent additional investment credit for property described in section 46(a)(2)(D). (26 CFR 1.46-9; Internal Revenue Service, Department of the Treasury; Income Taxes, Rules for Computing Credit for Investment in...)

  12. Automatic EEG-assisted retrospective motion correction for fMRI (aE-REMCOR).

    Science.gov (United States)

    Wong, Chung-Ki; Zotev, Vadim; Misaki, Masaya; Phillips, Raquel; Luo, Qingfei; Bodurka, Jerzy

    2016-04-01

    Head motions during functional magnetic resonance imaging (fMRI) impair fMRI data quality and introduce systematic artifacts that can affect interpretation of fMRI results. Electroencephalography (EEG) recordings performed simultaneously with fMRI provide high-temporal-resolution information about ongoing brain activity as well as head movements. Recently, an EEG-assisted retrospective motion correction (E-REMCOR) method was introduced. E-REMCOR utilizes EEG motion artifacts to correct the effects of head movements in simultaneously acquired fMRI data on a slice-by-slice basis. While E-REMCOR is an efficient motion correction approach, it involves an independent component analysis (ICA) of the EEG data and identification of motion-related ICs. Here we report an automated implementation of E-REMCOR, referred to as aE-REMCOR, which we developed to facilitate the application of E-REMCOR in large-scale EEG-fMRI studies. The aE-REMCOR algorithm, implemented in MATLAB, enables an automated preprocessing of the EEG data, an ICA decomposition, and, importantly, an automatic identification of motion-related ICs. aE-REMCOR has been used to perform retrospective motion correction for 305 fMRI datasets from 16 subjects, who participated in EEG-fMRI experiments conducted on a 3T MRI scanner. Performance of aE-REMCOR has been evaluated based on improvement in temporal signal-to-noise ratio (TSNR) of the fMRI data, as well as correction efficiency defined in terms of spike reduction in fMRI motion parameters. The results show that aE-REMCOR is capable of substantially reducing head motion artifacts in fMRI data. In particular, when there are significant rapid head movements during the scan, a large TSNR improvement and high correction efficiency can be achieved. Depending on a subject's motion, an average TSNR improvement over the brain upon the application of aE-REMCOR can be as high as 27%, with top ten percent of the TSNR improvement values exceeding 55%. The average
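
    The TSNR metric used to quantify the improvement is conventionally the voxelwise temporal mean divided by the temporal standard deviation; a minimal sketch, assuming that definition and a 4-D array layout, is:

```python
# Sketch of the temporal signal-to-noise ratio (TSNR) metric used to judge the
# motion correction (assumed definition: temporal mean over temporal std per voxel).
import numpy as np

def tsnr(timeseries: np.ndarray) -> np.ndarray:
    """TSNR per voxel for a 4-D fMRI array shaped (x, y, z, time)."""
    mean = timeseries.mean(axis=-1)
    std = timeseries.std(axis=-1)
    out = np.zeros_like(mean)
    np.divide(mean, std, out=out, where=std > 0)   # avoid division by zero outside the head
    return out

def tsnr_improvement_percent(raw: np.ndarray, corrected: np.ndarray, mask: np.ndarray) -> float:
    """Average percent TSNR gain over in-brain voxels after motion correction."""
    t_raw, t_cor = tsnr(raw)[mask], tsnr(corrected)[mask]
    valid = t_raw > 0
    return 100.0 * ((t_cor[valid] - t_raw[valid]) / t_raw[valid]).mean()
```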

  13. Validation of biomarkers for the study of environmental carcinogens: a review

    DEFF Research Database (Denmark)

    Gallo, Valentina; Khan, Aneire; Gonzales, Carlos

    2008-01-01

    There is a need for validation of biomarkers. Our aim is to review published work on the validation of selected biomarkers: bulky DNA adducts, N-nitroso compounds, 1-hydroxypyrene, and oxidative damage to DNA. A systematic literature search in PubMed was performed. Information on the variability and reliability of the laboratory tests used for biomarker measurements was collected. For the evaluation of the evidence on validation we referred to the ACCE criteria. Little is known about intraindividual variation of DNA adduct measurements, but measurements have good repeatability irrespective of the technique used for their identification; reproducibility improved after correction for a laboratory factor. A high-sensitivity method is available for the measurement of 1-hydroxypyrene in urine. There is consensus on validation of biomarkers of oxidative damage to DNA based on the comet assay...

  14. Genetic Correction and Hepatic Differentiation of Hemophilia B-specific Human Induced Pluripotent Stem Cells.

    Science.gov (United States)

    He, Qiong; Wang, Hui-Hui; Cheng, Tao; Yuan, Wei-Ping; Ma, Yu-Po; Jiang, Yong-Ping; Ren, Zhi-Hua

    2017-09-27

    Objective To genetically correct a disease-causing point mutation in human induced pluripotent stem cells (iPSCs) derived from a hemophilia B patient. Methods First, the disease-causing mutation was detected by sequencing the coding region of the human coagulation factor IX (FIX) gene. Genomic DNA was extracted from the iPSCs, and primers were designed to amplify the eight exons of FIX. Next, the point mutation in those iPSCs was genetically corrected using CRISPR/Cas9 technology in the presence of a 129-nucleotide homologous repair template that contained two synonymous mutations. Then, the top 8 potential off-target sites were analyzed using Sanger sequencing. Finally, the corrected clones were differentiated into hepatocyte-like cells, and the secretion of FIX was validated by immunocytochemistry and ELISA assay. Results The cell line bore a missense mutation in the 6th coding exon (c.676 C>T) of the FIX gene. Correction of the point mutation was achieved via CRISPR/Cas9 technology in situ with a high efficacy of about 22% (10/45) and no off-target effects detected in the corrected iPSC clones. FIX secretion, which was further visualized by immunocytochemistry and quantified by ELISA in vitro, reached about 6 ng/ml on day 21 of the differentiation procedure. Conclusions Mutations in human disease-specific iPSCs could be precisely corrected by CRISPR/Cas9 technology, and corrected cells still maintained hepatic differentiation capability. Our findings might shed light on iPSC-based personalized therapies in clinical applications, especially for hemophilia B.

  15. A validated methodology for evaluating burn-up credit in spent fuel casks

    International Nuclear Information System (INIS)

    Brady, M.C.; Sanders, T.L.

    1992-01-01

    The concept of allowing reactivity credit for the transmuted state of spent fuel offers both economic and risk incentives. This paper presents a general overview of the technical work being performed in support of the US Department of Energy (USDOE) programme to resolve issues related to the implementation of burn-up credit in spent fuel cask design. An analysis methodology is presented along with information representing the validation of the method against available experimental data. The experimental data that are applicable to burn-up credit include chemical assay data for the validation of the isotopic prediction models, fresh fuel critical experiments for the validation of criticality calculations for various cask geometries, and reactor re-start critical data to validate criticality calculations with spent fuel. The methodology has been specifically developed to be simple and generally applicable, therefore giving rise to uncertainties or sensitivities which are identified and quantified in terms of a percent bias in the effective multiplication factor (keff). Implementation issues affecting licensing requirements and operational procedures are discussed briefly. (Author)

  16. Examination of temperature-induced shape memory of uranium--5.3-to 6.9 weight percent niobium alloys

    International Nuclear Information System (INIS)

    Hemperly, V.C.

    1976-01-01

    The uranium-niobium alloy system was examined in the range of 5.3 to 6.9 weight percent niobium with respect to shape memory, mechanical properties, metallography, coefficients of linear thermal expansion, and differential thermal analysis. Shape memory increased with increasing niobium levels in the study range. There were no useful correlations found between shape memory and the other tests. Thermal expansion tests showed that as-quenched 5.8 and 6.2 weight percent niobium specimens, but not the 5.3 and 6.9 weight percent specimens, had a contraction component on heating; however, this phenomenon did not contribute to shape memory.

  17. Publisher Correction

    DEFF Research Database (Denmark)

    Turcot, Valérie; Lu, Yingchang; Highland, Heather M

    2018-01-01

    In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.

  18. An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations

    KAUST Repository

    Xu, Zhongfeng

    2012-09-01

    An improved dynamical downscaling method (IDD) with general circulation model (GCM) bias corrections is developed and assessed over North America. A set of regional climate simulations is performed with the Weather Research and Forecasting Model (WRF) version 3.3 embedded in the National Center for Atmospheric Research's (NCAR's) Community Atmosphere Model (CAM). The GCM climatological means and the amplitudes of interannual variations are adjusted based on the National Centers for Environmental Prediction (NCEP)-NCAR global reanalysis products (NNRP) before using them to drive WRF. In this study, the WRF downscaling experiments are identical except for the initial and lateral boundary conditions, derived from the NNRP, original GCM output, and bias-corrected GCM output, respectively. The analysis finds that the IDD greatly improves the downscaled climate in both climatological means and extreme events relative to the traditional dynamical downscaling approach (TDD). The errors of downscaled climatological mean air temperature, geopotential height, wind vector, moisture, and precipitation are greatly reduced when the GCM bias corrections are applied. In the meantime, IDD also improves the downscaled extreme events, characterized by the reduced errors in 2-yr return levels of surface air temperature and precipitation. In comparison with TDD, IDD is also able to produce a more realistic probability distribution in summer daily maximum temperature over the central U.S.-Canada region as well as in summer and winter daily precipitation over the middle and eastern United States. © 2012 American Meteorological Society.
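
    A schematic reading of the bias correction step (my interpretation of the description above, not the authors' code): replace the GCM climatological mean with the reanalysis mean and rescale the interannual anomaly amplitude before the fields are used to drive WRF.

```python
# Schematic GCM bias correction for one calendar month / set of gridpoints
# (assumes nonzero interannual variance; arrays shaped (year, ...)).
import numpy as np

def bias_correct(gcm: np.ndarray, reanalysis: np.ndarray) -> np.ndarray:
    """Adjust the GCM climatological mean and interannual amplitude to reanalysis."""
    gcm_mean = gcm.mean(axis=0)
    rea_mean = reanalysis.mean(axis=0)
    scale = reanalysis.std(axis=0) / gcm.std(axis=0)   # interannual amplitude ratio
    return rea_mean + (gcm - gcm_mean) * scale          # corrected field driving WRF
```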

  19. Using individual differences to predict job performance: correcting for direct and indirect restriction of range.

    Science.gov (United States)

    Sjöberg, Sofia; Sjöberg, Anders; Näswall, Katharina; Sverke, Magnus

    2012-08-01

    The present study investigates the relationship between individual differences, indicated by personality (FFM) and general mental ability (GMA), and job performance applying two different methods of correction for range restriction. The results, derived by analyzing meta-analytic correlations, show that the more accurate method of correcting for indirect range restriction increased the operational validity of individual differences in predicting job performance and that this increase primarily was due to general mental ability being a stronger predictor than any of the personality traits. The estimates for single traits can be applied in practice to maximize prediction of job performance. Further, differences in the relative importance of general mental ability in relation to overall personality assessment methods was substantive and the estimates provided enables practitioners to perform a correct utility analysis of their overall selection procedure. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.

  20. Correction to: Multiple Score Comparison: a network meta-analysis approach to comparison and external validation of prognostic scores

    Directory of Open Access Journals (Sweden)

    Sarah R. Haile

    2018-02-01

    Full Text Available Correction Following publication of the original article [1], a member of the writing group reported that his name is misspelt. The paper should appear in PubMed under "Ter Riet G", not as "Riet GT".

  1. Correction of incomplete charge collection in CdTe detectors using the correlation with the rise time distribution

    International Nuclear Information System (INIS)

    Horovitz, Yossi.

    1994-01-01

    Experimentally and theoretically it was found that there is a correlation between the pulse rise time and the amount of charge that is collected at the detector contacts. As the rise time becomes longer, less charge is collected. In this thesis it has been proven that one can find from this correlation, with the aid of a mathematical function, the theoretical amount of charge that would have been collected at the contacts if no trapping had taken place. This mathematical function, called the correction function f(t), depends on the rise time and the material quality (the trap concentration). In order to find the correction function, a computer simulation was written. This computer program simulates, based on a phenomenological theoretical model, the charge collection in the detector. The model depends on three parameters (for the holes and for the electrons) that characterize the charge collection quality of the detector. The parameters are: the mean free time before trapping, the detrapping time, and the transit time, which depends on the electric field. By a comparison between the simulation output and experimental data, these parameters were found. The correction function was found to be linear with rise time. This conclusion is confirmed experimentally. In this work, experiments were carried out that measured the correlation between these two parameters: for each photon that interacts with the detector, the pulse rise time and the pulse amplitude were measured. A computer program accepts these spectra, substitutes each element into the correction function, and corrects for the incomplete charge collection. It was found that the correction function does not depend on the energy of the radiation source or the source-detector geometry, but does depend on the material quality. The application of the correction function to the two-dimensional spectra gives a correction of tens of percent in charge collection and provides an improvement in the resolution and the peak
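
    Schematically, and with hypothetical fit coefficients, the rise-time-dependent correction described above amounts to dividing each measured amplitude by a linearly decreasing collection fraction f(t):

```python
# Illustrative sketch of a rise-time-based charge correction; the linear
# coefficients a and b are detector-specific and assumed here for illustration.
def corrected_amplitude(measured_amplitude: float, rise_time_ns: float,
                        a: float = 1.0, b: float = 0.004) -> float:
    """Divide the measured pulse amplitude by a correction function f(t) that
    decreases linearly with rise time (hypothetical fit parameters a, b)."""
    f_t = a - b * rise_time_ns          # assumed fraction of charge actually collected
    return measured_amplitude / f_t
```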

  2. Variations in depth-dose data between open and wedge fields for 4-MV x-rays

    International Nuclear Information System (INIS)

    Sewchand, W.; Khan, F.M.; Williamson, J.

    1978-01-01

    Central-axis depth-dose data for 4-MV x rays, including tissue-maximum ratios, were measured for wedge fields. Comparison with corresponding open-field data revealed differences in magnitude that increased with depth, field size, and wedge thickness. However, phantom scatter correction factors for the wedge fields differed less than 1% from corresponding open-field factors. The differences in central-axis percent depth doses between the two types of fields indicate beam hardening by the wedge filter. This study also implies that the derivation of tissue-maximum ratios from central-axis percent depth doses is as valid for wedge fields as for open fields.

  3. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    International Nuclear Information System (INIS)

    Barrett, J C; Knill, C

    2016-01-01

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no corrections except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to
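
    In the Alfonso formalism referenced above, the field output factor is the detector reading ratio multiplied by the small-field k correction factor; a minimal sketch follows (the reading values are illustrative, while 0.969 is the 4 mm k factor reported in this work).

```python
# Minimal illustration of applying an Alfonso-style k factor to detector
# readings (a sketch of the formalism named above, not vendor software).
def field_output_factor(reading_field: float, reading_reference: float, k_factor: float) -> float:
    """Output factor = detector reading ratio corrected by the small-field k factor."""
    return (reading_field / reading_reference) * k_factor

# Example: 4 mm helmet reading ratio measured with the microDiamond (illustrative
# readings), corrected with the k = 0.969 derived in this work.
print(field_output_factor(0.842, 1.000, 0.969))
```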

  4. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, J C [Wayne State University, Detroit, MI (United States); Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI (United States); Knill, C [Wayne State University, Detroit, MI (United States); Beaumont Hospital, Canton, MI (United States)

    2016-06-15

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no corrections except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to

  5. Correction factors for photon spectrometry in nuclear parameters study

    International Nuclear Information System (INIS)

    Patrao, Karla Cristina de Souza

    2004-10-01

    The goal of this work was to determine, with metrological rigor, the correction factors for XX, Xγ and γγ coincidences and the efficiency transfer factors for use in gamma spectrometry. To this end, the nuclear parameters of a nuclide used in diagnostic medicine (201Tl) were determined, and two environmental samples of regular and irregular geometry, originating from residues (ashes and slag) of the nuclear industry, were standardized. The results show that the adopted methodology is valid and can be applied to many different nuclides, including those with complex decay schemes, using only photon spectrometry techniques with semiconductor detectors. (author)

  6. The usefulness and the problems of attenuation correction using simultaneous transmission and emission data acquisition method. Studies on normal volunteers and phantom

    International Nuclear Information System (INIS)

    Kijima, Tetsuji; Kumita, Shin-ichiro; Mizumura, Sunao; Cho, Keiichi; Ishihara, Makiko; Toba, Masahiro; Kumazaki, Tatsuo; Takahashi, Munehiro.

    1997-01-01

    Attenuation correction using a simultaneous transmission data (TCT) and emission data (ECT) acquisition method was applied to 201Tl myocardial SPECT in ten normal adults and a phantom in order to validate the efficacy of attenuation correction using this method. The normal adult study demonstrated improved 201Tl accumulation in the septal wall and the posterior wall of the left ventricle and relatively decreased activities in the lateral wall with attenuation correction. High 201Tl-uptake organs such as the liver and the stomach pushed up the activities in the septal wall and the posterior wall. Cardiac dynamic phantom studies showed that the partial volume effect due to cardiac motion contributed to under-correction of the apex, which might be overcome using gated SPECT. Although simultaneous TCT and ECT acquisition was conceived as an advantageous method for attenuation correction, mis-correction of specific myocardial segments should be taken into account in the assessment of attenuation-corrected images. (author)

  7. Three-dimensional photon dose distributions with and without lung corrections for tangential breast intact treatments

    International Nuclear Information System (INIS)

    Chin, L.M.; Cheng, C.W.; Siddon, R.L.; Rice, R.K.; Mijnheer, B.J.; Harris, J.R.

    1989-01-01

    The influence of lung volume and photon energy on the 3-dimensional dose distribution for patients treated by intact breast irradiation is not well established. To investigate this issue, we studied the 3-dimensional dose distributions calculated for an 'average' breast phantom for 60Co, 4 MV, 6 MV, and 8 MV photon beams. For the homogeneous breast, areas of high dose ('hot spots') lie along the periphery of the breast near the posterior plane and near the apex of the breast. The highest dose occurs at the inferior margin of the breast tissue, and this may exceed 125% of the target dose for lower photon energies. The magnitude of these 'hot spots' decreases for higher energy photons. When lung correction is included in the dose calculation, the doses to areas at the left and right margin of the lung volume increase. The magnitude of the increase depends on energy and the patient anatomy. For the 'average' breast phantom (lung density 0.31 g/cm3), the correction factors are between 1.03 and 1.06 depending on the energy used. Higher energy is associated with lower correction factors. Both the ratio-of-TMR and the Batho lung correction methods can predict these corrections within a few percent. The range of depths of the 100% isodose from the skin surface, measured along the perpendicular to the tangent of the skin surface, was also energy dependent. The range was 0.1-0.4 cm for 60Co and 0.5-1.4 cm for 8 MV. We conclude that the use of higher energy photons in the range used here provides lower values of the 'hot spots' compared to lower energy photons, but this needs to be balanced against a possible disadvantage in decreased dose delivered to the skin and superficial portion of the breast.
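
    The ratio-of-TMR method mentioned above scales the homogeneous-phantom dose by the TMR evaluated at the water-equivalent (density-scaled) depth divided by the TMR at the physical depth; a hedged sketch, with the TMR lookup assumed to come from measured beam data:

```python
# Hedged sketch of the ratio-of-TMR inhomogeneity correction (illustrative;
# the tmr lookup is assumed to interpolate measured tissue-maximum ratio data).
def radiological_depth(thicknesses_cm, densities):
    """Density-scaled (water-equivalent) depth along the ray."""
    return sum(t * rho for t, rho in zip(thicknesses_cm, densities))

def ratio_of_tmr_correction(tmr, physical_depth_cm, water_equivalent_depth_cm, field_size_cm):
    """Correction factor multiplying the homogeneous-phantom dose.
    tmr(depth, field_size) is an assumed callable over measured beam data."""
    return tmr(water_equivalent_depth_cm, field_size_cm) / tmr(physical_depth_cm, field_size_cm)

# Example geometry: 3 cm breast tissue + 5 cm lung (density 0.31) + 1 cm tissue.
d_eff = radiological_depth([3.0, 5.0, 1.0], [1.0, 0.31, 1.0])  # = 5.55 cm vs 9 cm physical
```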

  8. EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...

  9. Well Completion Report for Corrective Action Unit 447, Project Shoal Area, Churchill County, Nevada, Rev. No.: 0

    Energy Technology Data Exchange (ETDEWEB)

    Rick Findlay

    2006-09-01

    This Well Completion Report is being provided as part of the implementation of the Corrective Action Decision Document (CADD)/Corrective Action Plan (CAP) for Corrective Action Unit (CAU) 447 (NNSA/NSO, 2006a). The CADD/CAP is part of an ongoing U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) funded project for the investigation of CAU 447 at the Project Shoal Area (PSA). All work performed on this project was conducted in accordance with the "Federal Facility Agreement and Consent Order" (FFACO) (1996), and all applicable Nevada Division of Environmental Protection (NDEP) policies and regulations. Investigation activities included the drilling, construction, and development of three monitoring/validation (MV) wells at the PSA. This report summarizes the field activities and data collected during the investigation.

  10. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
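
    For orientation, the conventional two-step alternative that BCC-PLS is designed to improve on (explicit polynomial baseline removal followed by ordinary PLS) can be sketched as follows; this is not the BCC-PLS algorithm itself, whose baseline constraint is embedded in the PLS weight selection.

```python
# Conventional baseline-then-PLS pipeline (a reference sketch, not BCC-PLS).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def remove_polynomial_baseline(spectra: np.ndarray, degree: int = 2) -> np.ndarray:
    """Subtract a least-squares polynomial baseline from each spectrum (rows)."""
    x = np.arange(spectra.shape[1])
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        coeffs = np.polyfit(x, s, degree)
        corrected[i] = s - np.polyval(coeffs, x)
    return corrected

# spectra: (n_samples, n_wavenumbers) ATR-FTIR absorbances; y: analyte content (% w/w).
# X = remove_polynomial_baseline(spectra)
# model = PLSRegression(n_components=8).fit(X, y)
# y_pred = model.predict(remove_polynomial_baseline(new_spectra))
```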

  11. Simulation of dynamic pile-up corrections in the ATLAS level-1 calorimeter trigger

    Energy Technology Data Exchange (ETDEWEB)

    Narrias-Villar, Daniel; Wessels, Martin; Brandt, Oleg [Heidelberg University, Heidelberg (Germany)

    2015-07-01

    The Level-1 Calorimeter Trigger is a crucial part of the ATLAS trigger effort to select only relevant physics events out of the large number of interactions at the LHC. In Run II, in which the LHC will double the centre-of-mass energy and further increase the instantaneous luminosity, pile-up is a key limiting factor for the triggering and reconstruction of relevant events. The upgraded L1Calo Multi-Chip-Modules (nMCM) will address this problem by applying dynamic pile-up corrections in real time, of which a precise simulation is crucial for physics analysis. Therefore, pile-up effects are studied in order to provide a predictable parametrised baseline correction for the Monte Carlo simulation. Physics validation plots, such as trigger rates and turn-on curves, are presented.

  12. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Boehlecke, Robert

    2004-01-01

    The six bunkers included in CAU 204 were primarily used to monitor atmospheric testing or store munitions. The 'Corrective Action Investigation Plan (CAIP) for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada' (NNSA/NV, 2002a) provides information relating to the history, planning, and scope of the investigation; therefore, it will not be repeated in this CADD. This CADD identifies potential corrective action alternatives and provides a rationale for the selection of a recommended corrective action alternative for each CAS within CAU 204. The evaluation of corrective action alternatives is based on process knowledge and the results of investigative activities conducted in accordance with the CAIP (NNSA/NV, 2002a) that was approved prior to the start of the Corrective Action Investigation (CAI). Record of Technical Change (ROTC) No. 1 to the CAIP (approval pending) documents changes to the preliminary action levels (PALs) agreed to by the Nevada Division of Environmental Protection (NDEP) and DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO). This ROTC specifically discusses the radiological PALs and their application to the findings of the CAU 204 corrective action investigation. The scope of this CADD consists of the following: (1) Develop corrective action objectives; (2) Identify corrective action alternative screening criteria; (3) Develop corrective action alternatives; (4) Perform detailed and comparative evaluations of corrective action alternatives in relation to corrective action objectives and screening criteria; and (5) Recommend and justify a preferred corrective action alternative for each CAS within CAU 204

  13. Correction of TRMM 3B42V7 Based on Linear Regression Models over China

    Directory of Open Access Journals (Sweden)

    Shaohua Liu

    2016-01-01

    Full Text Available High temporal-spatial precipitation is necessary for hydrological simulation and water resource management, and remotely sensed precipitation products (RSPPs) play a key role in supporting high temporal-spatial precipitation, especially in sparse gauge regions. TRMM 3B42V7 data (TRMM precipitation) is an essential RSPP outperforming other RSPPs. Yet the utilization of TRMM precipitation is still limited by its inaccuracy and low spatial resolution at regional scale. In this paper, linear regression models (LRMs) have been constructed to correct and downscale the TRMM precipitation based on the gauge precipitation at 2257 stations over China from 1998 to 2013. Then, the corrected TRMM precipitation was validated by gauge precipitation at 839 out of 2257 stations in 2014 at station and grid scales. The results show that both monthly and annual LRMs have clearly improved the accuracy of corrected TRMM precipitation with acceptable error, and the monthly LRM performs slightly better than the annual LRM in Mideastern China. Although the performance of corrected TRMM precipitation from the LRMs has increased in Northwest China and the Tibetan Plateau, the error of corrected TRMM precipitation is still significant due to the large deviation between TRMM precipitation and low-density gauge precipitation.
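
    A hedged sketch of a monthly linear-regression correction of the kind described (gauge precipitation regressed on collocated TRMM values, one model per calendar month; details such as the non-negativity clamp are assumptions):

```python
# Monthly linear-regression correction of TRMM precipitation (illustrative only).
import numpy as np

def fit_monthly_lrm(trmm: np.ndarray, gauge: np.ndarray, months: np.ndarray):
    """Return {month: (slope, intercept)} fitted on collocated station samples."""
    params = {}
    for m in range(1, 13):
        sel = months == m
        slope, intercept = np.polyfit(trmm[sel], gauge[sel], 1)
        params[m] = (slope, intercept)
    return params

def correct_trmm(trmm_value: float, month: int, params) -> float:
    slope, intercept = params[month]
    return max(slope * trmm_value + intercept, 0.0)   # precipitation cannot be negative
```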

  14. Six-dimensional correction of intra-fractional prostate motion with CyberKnife stereotactic body radiation therapy

    Directory of Open Access Journals (Sweden)

    Sean eCollins

    2011-12-01

    Large fraction radiation therapy offers a shorter course of treatment and radiobiological advantages for prostate cancer treatment. The CyberKnife is an attractive technology for delivering large fraction doses based on the ability to deliver highly conformal radiation therapy to moving targets. In addition to intra-fractional translational motion (left-right, superior-inferior and anterior-posterior), prostate rotation (pitch, roll and yaw) can increase geographical miss risk. We describe our experience with six-dimensional (6D) intrafraction prostate motion correction using CyberKnife stereotactic body radiation therapy (SBRT). Eighty-eight patients were treated by SBRT alone or with supplemental external radiation therapy. Trans-perineal placement of four gold fiducials within the prostate accommodated X-ray guided prostate localization and beam adjustment. Fiducial separation and non-overlapping positioning permitted the orthogonal imaging required for 6D tracking. Fiducial placement accuracy was assessed using the CyberKnife fiducial extraction algorithm. Acute toxicities were assessed using the Common Toxicity Criteria (CTC v3). There were no Grade 3 or higher complications and acute morbidity was minimal. Ninety-eight percent of patients completed treatment employing 6D prostate motion tracking with intrafractional beam correction. Suboptimal fiducial placement limited treatment to 3D tracking in 2 patients. Our experience may guide others in performing 6D correction of prostate motion with CyberKnife SBRT.
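
    The sketch below shows, in generic form, how a 6D correction (three translations plus three rotations) can be derived from tracked fiducial coordinates with a least-squares rigid-body (Kabsch) fit; it is an illustration under assumed inputs, not the CyberKnife tracking algorithm itself.

        import numpy as np

        def rigid_fit(planned, observed):
            # Least-squares rigid-body fit: find R, t such that
            # R @ planned_i + t ~= observed_i for Nx3 fiducial coordinate arrays.
            cp, co = planned.mean(axis=0), observed.mean(axis=0)
            H = (planned - cp).T @ (observed - co)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = co - R @ cp
            return R, t            # pitch, roll and yaw can be extracted from R

        # Hypothetical fiducial coordinates in mm (planned vs. tracked positions).
        planned = np.array([[0, 0, 0], [10, 0, 0], [0, 12, 0], [0, 0, 9]], float)
        observed = planned + np.array([1.5, -0.8, 0.4])   # translation-only example
        R, t = rigid_fit(planned, observed)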

  15. Validation of KENO-based criticality calculations at Rocky Flats

    International Nuclear Information System (INIS)

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P.

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods to evaluate the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code performs the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results with accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG and G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum k-eff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation.
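
    For illustration only, the bias determination step described above can be sketched as follows, assuming a set of benchmark experiments that are critical by construction (expected k-eff of 1.0); the k-eff values are invented, and a real validation per ANSI/ANS 8.1 involves a more careful statistical treatment.

        import numpy as np

        # Invented calculated k-eff values for benchmark experiments that are
        # critical by construction (expected k-eff = 1.0).
        k_calc = np.array([0.9951, 0.9987, 1.0012, 0.9968, 0.9990])

        bias = k_calc.mean() - 1.0          # systematic error of code + cross sections
        spread = k_calc.std(ddof=1)         # scatter of the benchmark results

        # Illustrative upper limit on an acceptable calculated k-eff: an
        # administrative margin of 0.05, reduced further by any negative bias.
        k_limit = 0.95 + min(bias, 0.0)
        print(f"bias = {bias:+.4f}, spread = {spread:.4f}, k-eff limit = {k_limit:.4f}")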

  16. Enhancement of chemical entity identification in text using semantic similarity validation.

    Directory of Open Access Journals (Sweden)

    Tiago Grego

    With the amount of chemical data being produced and reported in the literature growing at a fast pace, it is increasingly important to retrieve this information efficiently. Text mining tools have been applied to tackle this issue but, despite their good performance, they still produce many errors that we believe can be filtered out using semantic similarity. This paper therefore proposes a novel method that receives the results of chemical entity identification systems, such as Whatizit, and exploits the semantic relationships in ChEBI to measure the similarity between the entities found in the text. The method assigns a single validation score to each entity based on its similarities with the other entities identified in the same text. Then, using a given threshold, the method selects a set of validated entities and a set of outlier entities. We evaluated our method using the results of two state-of-the-art chemical entity identification tools, three semantic similarity measures and two text window sizes. The method was able to increase precision without filtering out a significant number of correctly identified entities; that is, it can effectively discriminate the correctly identified chemical entities while discarding a significant number of identification errors. For example, selecting a validation set containing 75% of all identified entities, we increased the precision by 28% for one of the chemical entity identification tools (Whatizit) while retaining 97% of the correctly identified entities in that subset. Our method can be used directly as an add-on by any state-of-the-art entity identification tool that provides mappings to a database, in order to improve its results. The proposed method is included in a freely accessible web tool at www.lasige.di.fc.ul.pt/webtools/ice/.
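
    A minimal sketch of the scoring idea described above: each identified entity receives a score equal to its mean semantic similarity to the other entities found in the same text, and entities scoring below a chosen threshold are flagged as outliers. The similarity callable stands in for a ChEBI-based measure and is purely illustrative.

        from itertools import combinations

        def validation_scores(entities, similarity):
            # Entities are assumed unique; similarity(a, b) returns a value in [0, 1].
            scores = {e: 0.0 for e in entities}
            for a, b in combinations(entities, 2):
                s = similarity(a, b)
                scores[a] += s
                scores[b] += s
            n = len(entities) - 1
            return {e: (v / n if n > 0 else 0.0) for e, v in scores.items()}

        def split_by_threshold(scores, threshold):
            # Entities scoring at or above the threshold are kept as validated.
            validated = {e for e, s in scores.items() if s >= threshold}
            return validated, set(scores) - validated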

  17. Higher Order Corrections in the CoLoRFulNNLO Framework

    Science.gov (United States)

    Somogyi, G.; Kardos, A.; Szőr, Z.; Trócsányi, Z.

    We discuss the CoLoRFulNNLO method for computing higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the calculation of event shapes and jet rates in three-jet production in electron-positron annihilation. We validate our code by comparing our predictions to previous results in the literature and present the jet cone energy fraction distribution at NNLO accuracy. We also present preliminary NNLO results for the three-jet rate using the Durham jet clustering algorithm matched to resummed predictions at NLL accuracy, and a comparison to LEP data.

  18. Measurement and correction of transverse chromatic offsets for multi-wavelength retinal microscopy in the living eye.

    Science.gov (United States)

    Harmening, Wolf M; Tiruveedhula, Pavan; Roorda, Austin; Sincich, Lawrence C

    2012-09-01

    A special challenge arises when pursuing multi-wavelength imaging of retinal tissue in vivo, because the eye's optics must be used as the main focusing elements, and they introduce significant chromatic dispersion. Here we present an image-based method to measure and correct for the eye's transverse chromatic aberrations rapidly, non-invasively, and with high precision. We validate the technique against hyperacute psychophysical performance and the standard chromatic human eye model. In vivo correction of chromatic dispersion will enable confocal multi-wavelength images of the living retina to be aligned, and allow targeted chromatic stimulation of the photoreceptor mosaic to be performed accurately with sub-cellular resolution.
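
    As a generic illustration of an image-based offset measurement (not the authors' implementation), the transverse offset between retinal images recorded at two wavelengths can be estimated from the peak of their cross-correlation, computed here via the FFT; the input images are assumed to be same-sized numpy arrays.

        import numpy as np

        def image_offset(img_a, img_b):
            # Estimate the integer-pixel (dy, dx) shift of img_b relative to img_a
            # from the peak of the FFT-based cross-correlation.
            a = img_a - img_a.mean()
            b = img_b - img_b.mean()
            corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Convert wrap-around peak indices to signed shifts.
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))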

  19. Temperature corrected-calibration of GRACE's accelerometer

    Science.gov (United States)

    Encarnacao, J.; Save, H.; Siemes, C.; Doornbos, E.; Tapley, B. D.

    2017-12-01

    Since April 2011, the thermal control of the accelerometers on board the GRACE satellites has been turned off. The time series of the along-track bias clearly show a drastic change in the behaviour of this parameter, while the calibration model has remained unchanged throughout the entire mission lifetime. In an effort to improve the quality of the gravity field models produced at CSR in a future mission-long re-processing of GRACE data, we quantify the added value of different calibration strategies. In one approach, the temperature effects that distort the raw accelerometer measurements collected without thermal control are corrected using the housekeeping temperature readings. In this way, a single calibration strategy can be applied consistently over the whole mission lifetime, since it is valid for the thermal conditions both before and after April 2011. Finally, we illustrate that the resulting calibrated accelerations are suitable for neutral thermospheric density studies.
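
    A generic sketch of the kind of temperature-dependent calibration described above (not the CSR processing chain): a temperature-sensitive bias term is estimated together with the usual bias and scale factor by linear least squares against a reference acceleration; all arrays and the reference temperature are hypothetical.

        import numpy as np

        def fit_temperature_calibration(a_raw, temperature, a_ref, t_ref=20.0):
            # Fit a_ref ~= scale * a_raw + bias + k_T * (temperature - t_ref)
            # by ordinary least squares (a_raw, temperature, a_ref are 1-D arrays).
            A = np.column_stack([a_raw, np.ones_like(a_raw), temperature - t_ref])
            (scale, bias, k_T), *_ = np.linalg.lstsq(A, a_ref, rcond=None)
            return scale, bias, k_T

        def apply_calibration(a_raw, temperature, scale, bias, k_T, t_ref=20.0):
            # Calibrated acceleration with the temperature-dependent bias included.
            return scale * a_raw + bias + k_T * (temperature - t_ref)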

  20. Development and validation of a toddler silhouette scale.

    Science.gov (United States)

    Hager, Erin R; McGill, Adrienne E; Black, Maureen M

    2010-02-01

    The purpose of this study is to develop and validate a toddler silhouette scale. A seven-point scale was developed by an artist based on photographs of 15 toddlers (6 males, 9 females) varying in race/ethnicity and body size, and a list of phenotypic descriptions of toddlers of varying body sizes. Content validity, age-appropriateness, and gender and race/ethnicity neutrality were assessed among 180 pediatric health professionals and 129 parents of toddlers. Inter- and intrarater reliability and concurrent validity were assessed by having 138 pediatric health professionals match the silhouettes with photographs of toddlers. Assessments of content validity revealed that most health professionals (74.6%) and parents of toddlers (63.6%) ordered all seven silhouettes correctly, and interobserver agreement for weight status classification was high (kappa = 0.710, r = 0.827). The scale was also judged to be gender (68.5%) and race/ethnicity (77.3%) neutral. The inter-rater reliability, based on matching silhouettes with photographs, was 0.787 (Cronbach's alpha) and the intrarater reliability was 0.855. The scale can thus be used to measure parents' perception of and satisfaction with their toddler's body size, and interventions can be targeted toward parents who have inaccurate perceptions of or are dissatisfied with their toddler's body size.
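
    The agreement statistics quoted above (Cohen's kappa for inter-observer agreement, Cronbach's alpha for reliability) can be computed along the following lines; the rating arrays are invented for illustration.

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        # Invented weight-status classifications (0 = under, 1 = healthy, 2 = over)
        # from two raters for the same set of toddlers.
        rater_a = [1, 1, 2, 0, 1, 2, 1, 0]
        rater_b = [1, 2, 2, 0, 1, 2, 1, 1]
        kappa = cohen_kappa_score(rater_a, rater_b)

        def cronbach_alpha(ratings):
            # ratings: 2-D array with one row per rater and one column per subject.
            ratings = np.asarray(ratings, dtype=float)
            k = ratings.shape[0]
            item_var = ratings.var(axis=1, ddof=1).sum()
            total_var = ratings.sum(axis=0).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)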