WorldWideScience

Sample records for diagnosis bias correction

  1. Bias-correction in vector autoregressive models

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    2014-01-01

    We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study......, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable...... improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find...

  2. Skin-deep diagnosis: affective bias and zebra retreat complicating the diagnosis of systemic sclerosis.

    Science.gov (United States)

    Miller, Chad S

    2013-01-01

    Nearly half of medical errors can be attributed to an error of clinical reasoning or decision making. It is estimated that the correct diagnosis is missed or delayed in 5% to 14% of acute hospital admissions. Through understanding why and how physicians make these errors, it is hoped that strategies can be developed to decrease the number of these errors. In the present case, a patient presented with dyspnea, gastrointestinal symptoms and weight loss; the diagnosis was initially missed when the treating physicians took mental shortcuts and relied on heuristics. Heuristics have an inherent bias that can lead to faulty reasoning or conclusions, especially in complex or difficult cases. Affective bias, which is the overinvolvement of emotion in clinical decision making, limited the available information for diagnosis because of the hesitancy to acquire a full history and perform a complete physical examination in this patient. Zebra retreat, another type of bias, occurs when a rare diagnosis figures prominently on the differential diagnosis but the physician retreats from it for various reasons. Zebra retreat also factored into the delayed diagnosis. Through the description of these clinical reasoning errors in an actual case, it is hoped that future errors can be prevented and that additional research in this area will be inspired.

  3. Estimation bias and bias correction in reduced rank autoregressions

    DEFF Research Database (Denmark)

    Nielsen, Heino Bohn

    2017-01-01

    This paper characterizes the finite-sample bias of the maximum likelihood estimator (MLE) in a reduced rank vector autoregression and suggests two simulation-based bias corrections. One is a simple bootstrap implementation that approximates the bias at the MLE. The other is an iterative root...

  4. Determination and Correction of Persistent Biases in Quantum Annealers

    Science.gov (United States)

    2016-08-25

    for all of the qubits. Narrowing of the bias distribution. To show the correctability of the persistent biases, we ran the experiment described above... this is a promising application for bias correction. Importantly, while the J biases determined here are in general smaller than the h biases, numerical... (Scientific Reports 6:18628, doi:10.1038/srep18628)

  5. Combination of biased forecasts: Bias correction or bias based weights?

    OpenAIRE

    Wenzel, Thomas

    1999-01-01

    Most of the literature on combination of forecasts deals with the assumption of unbiased individual forecasts. Here, we consider the case of biased forecasts and discuss two different combination techniques resulting in an unbiased forecast. On the one hand we correct the individual forecasts, and on the other we calculate bias-based weights. A simulation study gives some insight into the situations where each of the methods should be used.

  6. Implementation of linear bias corrections for calorimeters at Mound

    International Nuclear Information System (INIS)

    Barnett, T.M.

    1993-01-01

    In the past, Mound has generally made relative bias corrections as part of the calibration of individual calorimeters. The correction made was the same over the entire operating range of the calorimeter, regardless of the magnitude of the range. Recently, an investigation was performed to check the relevance of using linear bias corrections to calibrate the calorimeters. The bias is obtained by measuring calibrated plutonium and/or electrical heat standards over the operating range of the calorimeter. The bias correction is then calculated using a simple least squares fit (y = mx + b) of the bias in milliwatts over the operating range of the calorimeter in watts. The equation used is $B_i = B_0 + (B_w \times W_m)$, where $B_i$ is the bias at any given power in milliwatts, $B_0$ is the intercept (absolute bias in milliwatts), $B_w$ is the slope (relative bias in milliwatts per watt), and $W_m$ is the measured power in watts. The results of the study showed a decrease in the random error of bias-corrected data for most of the calorimeters which are operated over a large wattage range (greater than an order of magnitude). The linear technique for bias correction has been fully implemented at Mound and has been included in the Technical Manual, "A Measurement Control Program for Radiometric Calorimeters at Mound" (MD-21900).
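
    The linear bias model quoted above lends itself to a short numerical illustration. The sketch below fits the slope and intercept by least squares and applies the correction to a new measurement; the calibration values and function names are made up for illustration and are not Mound's actual software or data.

```python
import numpy as np

# Illustrative sketch (not Mound's actual software): fit a linear bias model
# B_i = B_0 + B_w * W_m from calibration runs against heat standards, then
# correct subsequent measurements.

# Hypothetical calibration data: measured power (W) and observed bias (mW)
w_measured = np.array([1.0, 5.0, 10.0, 25.0, 50.0])      # watts
bias_mw = np.array([0.4, 1.1, 2.3, 5.2, 10.1])           # milliwatts

# Simple least-squares fit: slope B_w (mW per W) and intercept B_0 (mW)
B_w, B_0 = np.polyfit(w_measured, bias_mw, deg=1)

def corrected_power(w_m):
    """Subtract the predicted bias (converted to watts) from a measurement."""
    predicted_bias_w = (B_0 + B_w * w_m) / 1000.0
    return w_m - predicted_bias_w

print(corrected_power(20.0))
```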

  7. Assessing atmospheric bias correction for dynamical consistency using potential vorticity

    International Nuclear Information System (INIS)

    Rocheta, Eytan; Sharma, Ashish; Evans, Jason P

    2014-01-01

    Correcting biases in atmospheric variables prior to impact studies or dynamical downscaling can lead to new biases as dynamical consistency between the ‘corrected’ fields is not maintained. Use of these bias corrected fields for subsequent impact studies and dynamical downscaling provides input conditions that do not appropriately represent intervariable relationships in atmospheric fields. Here we investigate the consequences of the lack of dynamical consistency in bias correction using a measure of model consistency—the potential vorticity (PV). This paper presents an assessment of the biases present in PV using two alternative correction techniques—an approach where bias correction is performed individually on each atmospheric variable, thereby ignoring the physical relationships that exist between the multiple variables that are corrected, and a second approach where bias correction is performed directly on the PV field, thereby keeping the system dynamically coherent throughout the correction process. In this paper we show that bias correcting variables independently results in increased errors above the tropopause in the mean and standard deviation of the PV field, which are improved when using the proposed alternative. Furthermore, patterns of spatial variability are improved over nearly all vertical levels when applying the alternative approach. Results point to a need for a dynamically consistent atmospheric bias correction technique which results in fields that can be used as dynamically consistent lateral boundaries in follow-up downscaling applications. (letter)

  8. Bias-correction in vector autoregressive models: A simulation study

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple...... and easy-to-use analytical bias formula compares very favorably to the more standard but also more computer intensive bootstrap bias-correction method, both in terms of bias and mean squared error. Both methods yield a notable improvement over both OLS and a recently proposed WLS estimator. We also...... of pushing an otherwise stationary model into the non-stationary region of the parameter space during the process of correcting for bias....

  9. Bias-Correction in Vector Autoregressive Models: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tom Engsted

    2014-03-01

    Full Text Available We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
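
    As a rough illustration of the bootstrap bias-correction idea discussed in these records, the sketch below applies it to a univariate AR(1) coefficient rather than a full VAR; the sample size, true coefficient, and number of bootstrap replications are arbitrary choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_ar1(y):
    """OLS estimate of the AR(1) coefficient (demeaned data, no intercept)."""
    y = y - y.mean()
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

# Simulate a stationary AR(1) series with true coefficient 0.9
T, phi_true = 100, 0.9
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

phi_hat = ols_ar1(y)

# Bootstrap bias correction: resample residuals, rebuild series from the
# estimated model, re-estimate, and subtract the average estimation bias.
resid = y[1:] - phi_hat * y[:-1]
boot = []
for _ in range(999):
    e = rng.choice(resid, size=T, replace=True)
    yb = np.zeros(T)
    for t in range(1, T):
        yb[t] = phi_hat * yb[t - 1] + e[t]
    boot.append(ols_ar1(yb))

bias_estimate = np.mean(boot) - phi_hat
phi_corrected = phi_hat - bias_estimate
print(phi_hat, phi_corrected)
```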

  10. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363

  11. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  12. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Directory of Open Access Journals (Sweden)

    Haris Akram Bhatti

    2016-06-01

    Full Text Available With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.
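
    To make the window-based bias-factor idea concrete, here is a minimal sketch of a multiplicative bias factor computed over 7-day sequential windows at a single gauge location and applied back to the satellite series. The synthetic data, the small guard against division by zero, and the non-overlapping-window simplification are illustrative assumptions, not the paper's exact implementation (which also interpolates station factors spatially).

```python
import numpy as np

def sequential_window_bias_factors(gauge, cmorph, window=7, eps=0.1):
    """Ratio of gauge to CMORPH totals per non-overlapping window of days."""
    n = len(gauge) // window * window
    g = gauge[:n].reshape(-1, window).sum(axis=1)
    c = cmorph[:n].reshape(-1, window).sum(axis=1)
    return g / np.maximum(c, eps)

rng = np.random.default_rng(1)
gauge = rng.gamma(2.0, 3.0, size=70)          # daily gauge rainfall (mm)
cmorph = 0.7 * gauge + rng.normal(0, 1, 70)   # biased satellite estimate
cmorph = np.clip(cmorph, 0, None)

factors = sequential_window_bias_factors(gauge, cmorph)
# Apply each window's factor back to the daily satellite values it covers
corrected = cmorph[:len(factors) * 7] * np.repeat(factors, 7)
print(factors.round(2))
```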

  13. On the Limitations of Variational Bias Correction

    Science.gov (United States)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all of these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.

  14. Improved Correction of Misclassification Bias With Bootstrap Imputation.

    Science.gov (United States)

    van Walraven, Carl

    2018-07-01

    Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias and requires only code sensitivity and specificity, but it may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. Prevalence of severe renal failure in 86 patient strata and its association with 43 covariates was determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
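
    One simple form of quantitative bias analysis for a prevalence is the Rogan-Gladen style correction using code sensitivity and specificity; the sketch below also shows how it can return invalid (out-of-range) values, the failure mode noted above. The observed prevalence in the example is illustrative; only the code sensitivity and specificity are taken from the abstract.

```python
def corrected_prevalence(p_obs, sensitivity, specificity):
    """Rogan-Gladen style correction of an observed prevalence for
    misclassification; returns None when the correction is invalid
    (outside [0, 1]), the failure mode noted in the abstract."""
    denom = sensitivity + specificity - 1.0
    if denom == 0:
        return None
    p_true = (p_obs + specificity - 1.0) / denom
    return p_true if 0.0 <= p_true <= 1.0 else None

# Example using the reported code accuracy (sensitivity 71.3%, specificity 96.2%)
print(corrected_prevalence(0.10, sensitivity=0.713, specificity=0.962))
```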

  15. Bias Correction with Jackknife, Bootstrap, and Taylor Series

    OpenAIRE

    Jiao, Jiantao; Han, Yanjun; Weissman, Tsachy

    2017-01-01

    We analyze the bias correction methods using jackknife, bootstrap, and Taylor series. We focus on the binomial model, and consider the problem of bias correction for estimating $f(p)$, where $f \in C[0,1]$ is arbitrary. We characterize the supremum norm of the bias of general jackknife and bootstrap estimators for any continuous functions, and demonstrate that in the delete-$d$ jackknife, different values of $d$ may lead to drastically different behavior. We show that in the binomial ...
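
    For readers unfamiliar with the delete-1 jackknife mentioned here, the following sketch shows the standard bias-corrected estimator on a toy example (the plug-in variance); it illustrates the general construction rather than the binomial-model analysis of the paper.

```python
import numpy as np

def jackknife_bias_corrected(estimator, x):
    """Delete-1 jackknife bias correction of a plug-in estimator."""
    x = np.asarray(x)
    n = len(x)
    theta_full = estimator(x)
    # Leave-one-out estimates
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    bias = (n - 1) * (loo.mean() - theta_full)
    return theta_full - bias

# Example: correcting the (downward-biased) plug-in variance estimator
rng = np.random.default_rng(2)
sample = rng.normal(size=30)
print(np.var(sample), jackknife_bias_corrected(np.var, sample))
```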

  16. Evaluation of bias-correction methods for ensemble streamflow volume forecasts

    Directory of Open Access Journals (Sweden)

    T. Hashino

    2007-01-01

    Full Text Available Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived with simulated and observed flows from a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.

  17. Efficient bias correction for magnetic resonance image denoising.

    Science.gov (United States)

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
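
    A commonly used closed-form baseline for Rician bias is to subtract the noise contribution in quadrature from the magnitude signal; the sketch below shows that baseline on synthetic data. It is not the regression/Monte Carlo formula proposed in the paper, and the signal and noise levels are arbitrary.

```python
import numpy as np

def rician_bias_corrected(magnitude, sigma):
    """Classic closed-form correction for Rician-distributed magnitudes:
    subtract the noise contribution in quadrature (clipped at zero). A common
    baseline, not the regression/Monte-Carlo formula of the paper."""
    m2 = np.square(magnitude) - 2.0 * sigma ** 2
    return np.sqrt(np.clip(m2, 0.0, None))

# Toy example: true intensity 100, noise standard deviation 20
rng = np.random.default_rng(3)
true_signal, sigma = 100.0, 20.0
noisy = np.hypot(true_signal + rng.normal(0, sigma, 10000),
                 rng.normal(0, sigma, 10000))
print(noisy.mean(), rician_bias_corrected(noisy, sigma).mean())
```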

  18. Recent advances in precipitation-bias correction and application

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Significant progress has been made in recent years in precipitation data analyses at regional to global scales. This paper reviews and synthesizes recent advances in precipitation-bias corrections and applications in many countries and over the cold regions. The main objective of this review is to identify and examine gaps in regional and national precipitation-error analyses. This paper also discusses and recommends future research needs and directions. More effort and coordination are necessary in the determination of precipitation biases over large regions across national borders. It is important to emphasize that bias corrections of precipitation measurements affect both water budget and energy balance calculations, particularly over the cold regions.

  19. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  20. Spatial uncertainty in bias corrected climate change projections and hydrogeological impacts

    DEFF Research Database (Denmark)

    Seaby, Lauren Paige; Refsgaard, Jens Christian; Sonnenborg, Torben

    2015-01-01

    The question of which climate model bias correction methods and spatial scales for correction are optimal for both projecting future hydrological changes as well as removing initial model bias has so far received little attention. For 11 climate models (CMs), or GCM/RCM (Global/Regional Climate Model) pairings, this paper analyses the relationship between complexity and robustness of three distribution-based scaling (DBS) bias correction methods applied to daily precipitation at various spatial scales. Hydrological simulations are forced by CM inputs to assess the spatial uncertainty... signals. The magnitude of spatial bias seen in precipitation inputs does not necessarily correspond to the magnitude of biases seen in hydrological outputs. Variables that integrate basin responses over time and space are more sensitive to mean spatial biases and less so to extremes. Hydrological...

  1. Magnification bias corrections to galaxy-lensing cross-correlations

    International Nuclear Information System (INIS)

    Ziour, Riad; Hui, Lam

    2008-01-01

    Galaxy-galaxy or galaxy-quasar lensing can provide important information on the mass distribution in the Universe. It consists of correlating the lensing signal (either shear or magnification) of a background galaxy/quasar sample with the number density of a foreground galaxy sample. However, the foreground galaxy density is inevitably altered by the magnification bias due to the mass between the foreground and the observer, leading to a correction to the observed galaxy-lensing signal. The aim of this paper is to quantify this correction. The single most important determining factor is the foreground redshift $z_f$: the correction is small if the foreground galaxies are at low redshifts but can become non-negligible for sufficiently high redshifts. For instance, we find that for the multipole $\ell = 1000$, the correction is above $1\% \times (5s_f - 2)/b_f$ for $z_f \gtrsim 0.37$, and above $5\% \times (5s_f - 2)/b_f$ for $z_f \gtrsim 0.67$, where $s_f$ is the number count slope of the foreground sample and $b_f$ its galaxy bias. These considerations are particularly important for geometrical measures, such as the Jain and Taylor ratio or its generalization by Zhang et al. Assuming $(5s_f - 2)/b_f = 1$, we find that the foreground redshift should be limited to $z_f \lesssim 0.45$ in order to avoid biasing the inferred dark energy equation of state $w$ by more than 5%, and that even for a low foreground redshift ($z_f \lesssim 0.45$), the background samples must be well separated from the foreground to avoid incurring a bias of similar magnitude. Lastly, we briefly comment on the possibility of obtaining these geometrical measures without using galaxy shapes, using instead magnification bias itself.

  2. Bias correction of daily satellite precipitation data using genetic algorithm

    Science.gov (United States)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

    Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) was produced by blending satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process was aimed at reducing the bias of CHIRP. However, biases of CHIRPS in statistical moments and quantile values were high during the wet season over Java Island. This paper presents a bias correction scheme to adjust the statistical moments of CHIRP using observed precipitation data. The scheme combined a genetic algorithm with a nonlinear power transformation, and the results were evaluated across different seasons and elevation levels. The experimental results revealed that the scheme robustly reduced the bias in variance (around 100% reduction) and led to reductions in the first- and second-quantile biases. However, the bias in the third quantile was only reduced during dry months. Across elevation levels, the performance of the bias correction process differed significantly only for the skewness indicator.
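
    A toy version of such a scheme, fitting a nonlinear power transformation y = a·x^b with a very small genetic algorithm so that the transformed satellite series matches the observed mean and standard deviation, might look like the sketch below; the population size, mutation scale, and synthetic data are all illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "observed" rainfall and a biased satellite-like series
obs = rng.gamma(2.0, 5.0, 1000)
sat = 0.6 * obs ** 0.9 + rng.normal(0, 1, 1000).clip(0)

def fitness(params):
    """Negative squared mismatch of mean and standard deviation after y = a*x**b."""
    a, b = params
    y = a * np.power(sat, b)
    return -((y.mean() - obs.mean()) ** 2 + (y.std() - obs.std()) ** 2)

pop = rng.uniform([0.1, 0.5], [5.0, 2.0], size=(50, 2))   # initial population (a, b)
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-25:]]                # keep the best half
    children = parents[rng.integers(0, 25, 25)] + rng.normal(0, 0.05, (25, 2))
    pop = np.vstack([parents, np.abs(children)])           # mutation-only offspring

a_best, b_best = pop[np.argmax([fitness(p) for p in pop])]
print(a_best, b_best)
```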

  3. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four...... methods: Brain Extraction Tool (BET; Smith [2002]: Hum Brain Mapp 17:143-155); 3dIntracranial (Ward [1999] Milwaukee: Biophysics Research Institute, Medical College of Wisconsin; in AFNI); a Hybrid Watershed algorithm (HWA, Segonne et al. [2004] Neuroimage 22:1060-1075; in FreeSurfer); and Brain Surface...... Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...

  4. Practical estimate of gradient nonlinearity for implementation of apparent diffusion coefficient bias correction.

    Science.gov (United States)

    Malyarenko, Dariya I; Chenevert, Thomas L

    2014-12-01

    To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-average trace image correction was adequate only for isotropic medium. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. Observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
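
    For the isotropic-media case, the correction can be read as dividing the measured ADC by the local nonlinearity scalar, since an ADC computed with the nominal b-value is scaled by the ratio of effective to nominal b. The sketch below illustrates this with a made-up scalar map; it is a simplified reading of the idea, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the isotropic case: if the true (spatially varying) b-value
# is b_eff(r) = c(r) * b_nominal, an ADC computed with b_nominal is scaled by
# c(r) and the scaling can be divided out. The scalar map c(r) below is a
# made-up placeholder for an empirically estimated nonlinearity map.

def adc_from_signals(s0, sb, b_nominal):
    return np.log(s0 / sb) / b_nominal

def correct_adc(adc_measured, c_map):
    """Divide out the gradient-nonlinearity scaling of the b-value."""
    return adc_measured / c_map

b_nom = 1000.0                          # s/mm^2
c_map = np.array([0.95, 1.00, 1.10])    # nonlinearity scalars at three offsets
adc_true = 1.1e-3                       # mm^2/s, ice-water-like
s0 = 1000.0
sb = s0 * np.exp(-b_nom * c_map * adc_true)   # signals generated with true b_eff

adc_meas = adc_from_signals(s0, sb, b_nom)
print(adc_meas, correct_adc(adc_meas, c_map))
```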

  5. Significance of Bias Correction in Drought Frequency and Scenario Analysis Based on Climate Models

    Science.gov (United States)

    Aryal, Y.; Zhu, J.

    2015-12-01

    Assessment of future drought characteristics is difficult as climate models usually have bias in simulating precipitation frequency and intensity. To overcome this limitation, output from climate models needs to be bias corrected based on the specific purpose of the application. In this study, we examine the significance of bias correction in the context of drought frequency and scenario analysis using output from climate models. In particular, we investigate the performance of three widely used bias correction techniques: (1) monthly bias correction (MBC), (2) nested bias correction (NBC), and (3) equidistance quantile mapping (EQM). The effect of bias correction on future scenarios of drought frequency is also analyzed. The characteristics of drought are investigated in terms of frequency and severity in nine representative locations in different climatic regions across the United States using regional climate model (RCM) output from the North American Regional Climate Change Assessment Program (NARCCAP). The Standardized Precipitation Index (SPI) is used as the means to compare and forecast drought characteristics at different timescales. Systematic biases in the RCM precipitation output are corrected against the National Centers for Environmental Prediction (NCEP) North American Regional Reanalysis (NARR) data. The results demonstrate that bias correction significantly decreases the RCM errors in reproducing drought frequency derived from the NARR data. Preserving the mean and standard deviation is essential for climate models in drought frequency analysis. RCM biases have both regional and timescale dependence. Different timescales of input precipitation in the bias corrections show similar results. Drought frequency obtained from the RCM future (2040-2070) scenarios is compared with that from the historical simulations. The changes in drought characteristics occur in all climatic regions. The relative changes in drought frequency in future scenario in relation to
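
    The quantile-mapping family of corrections (including EQM) shares a common core: map each model value through the model's empirical CDF and invert it through the observed CDF. A minimal sketch of that core on synthetic data is given below; it omits the "equidistant" adjustment that EQM applies to future periods, and all data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
obs_hist = rng.gamma(2.0, 6.0, 5000)    # observations, calibration period
mod_hist = rng.gamma(2.0, 4.0, 5000)    # model, calibration period (dry bias)
mod_fut = rng.gamma(2.0, 4.5, 5000)     # model, future period

def quantile_map(x, mod_ref, obs_ref):
    """Empirical CDF of x under the model, inverted through the observed CDF."""
    probs = np.searchsorted(np.sort(mod_ref), x) / len(mod_ref)
    probs = np.clip(probs, 0.0, 1.0)
    return np.quantile(obs_ref, probs)

corrected_fut = quantile_map(mod_fut, mod_hist, obs_hist)
print(mod_fut.mean(), corrected_fut.mean(), obs_hist.mean())
```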

  6. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...... distances, and an Expectation-Maximization algorithm. Methods tended to perform better on contemporary datasets; bias correction did not significantly improve method performance. Mesial sections were most difficult for all methods. Although AD image sets were most difficult to strip, HWA and BSE were more...

  7. Estimation and correction of visibility bias in aerial surveys of wintering ducks

    Science.gov (United States)

    Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.

    2008-01-01

    Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1–100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36–42%, and associated standard errors increased 38–55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.

  8. Noise Induces Biased Estimation of the Correction Gain.

    Directory of Open Access Journals (Sweden)

    Jooeun Ahn

    Full Text Available The detection of an error in the motor output and the correction in the next movement are critical components of any form of motor learning. Accordingly, a variety of iterative learning models have assumed that a fraction of the error is adjusted in the next trial. This critical fraction, the correction gain, learning rate, or feedback gain, has been frequently estimated via least-square regression of the obtained data set. Such data contain not only the inevitable noise from motor execution, but also noise from measurement. It is generally assumed that this noise averages out with large data sets and does not affect the parameter estimation. This study demonstrates that this is not the case and that in the presence of noise the conventional estimate of the correction gain has a significant bias, even with the simplest model. Furthermore, this bias does not decrease with increasing length of the data set. This study reveals this limitation of current system identification methods and proposes a new method that overcomes this limitation. We derive an analytical form of the bias from a simple regression method (Yule-Walker) and develop an improved identification method. This bias is discussed as one example of how the dynamics of noise can introduce significant distortions in data analysis.
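
    The effect described above is easy to reproduce in simulation: measurement noise attenuates the lag-1 regression slope and therefore inflates the inferred correction gain. The sketch below uses arbitrary parameter values and a naive least-squares slope; it is not the authors' Yule-Walker analysis or their improved estimator.

```python
import numpy as np

# Toy simulation of why measurement noise biases the regression estimate of a
# trial-to-trial correction gain B in e[n+1] = (1 - B) * e[n] + execution noise.
# Parameter values are arbitrary illustrations.

rng = np.random.default_rng(6)
B_true, n_trials = 0.4, 200
sigma_exec, sigma_meas = 1.0, 1.0

def one_experiment():
    e = np.zeros(n_trials)
    for t in range(1, n_trials):
        e[t] = (1 - B_true) * e[t - 1] + sigma_exec * rng.standard_normal()
    y = e + sigma_meas * rng.standard_normal(n_trials)   # measured errors
    # Naive least-squares slope of y[n+1] on y[n] -> estimate of (1 - B)
    slope = np.polyfit(y[:-1], y[1:], 1)[0]
    return 1 - slope

estimates = np.array([one_experiment() for _ in range(500)])
print("true gain:", B_true, " mean estimate:", estimates.mean().round(3))
```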

  9. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  10. An introduction to Bartlett correction and bias reduction

    CERN Document Server

    Cordeiro, Gauss M

    2014-01-01

    This book presents a concise introduction to Bartlett and Bartlett-type corrections of statistical tests and bias correction of point estimators. The underlying idea behind both groups of corrections is to obtain higher accuracy in small samples. While the main focus is on corrections that can be analytically derived, the authors also present alternative strategies for improving estimators and tests based on bootstrap, a data resampling technique, and discuss concrete applications to several important statistical models.

  11. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups.

    Science.gov (United States)

    Zwaan, Laura; Monteiro, Sandra; Sherbino, Jonathan; Ilgen, Jonathan; Howey, Betty; Norman, Geoffrey

    2017-02-01

    Many authors have implicated cognitive biases as a primary cause of diagnostic error. If this is so, then physicians already familiar with common cognitive biases should consistently identify biases present in a clinical workup. The aim of this paper is to determine whether physicians agree on the presence or absence of particular biases in a clinical case workup and how case outcome knowledge affects bias identification. We conducted a web survey of 37 physicians. Each participant read eight cases and listed which biases were present from a list provided. In half the cases the outcome implied a correct diagnosis; in the other half, it implied an incorrect diagnosis. We compared the number of biases identified when the outcome implied a correct or incorrect primary diagnosis. Additionally, the agreement among participants about presence or absence of specific biases was assessed. When the case outcome implied a correct diagnosis, an average of 1.75 cognitive biases were reported; when incorrect, 3.45 biases (F=71.3, p<0.00001). Individual biases were reported from 73% to 125% more often when an incorrect diagnosis was implied. There was no agreement on presence or absence of individual biases, with κ ranging from 0.000 to 0.044. Individual physicians are unable to agree on the presence or absence of individual cognitive biases. Their judgements are heavily influenced by hindsight bias; when the outcome implies a diagnostic error, twice as many biases are identified. The results present challenges for current error reduction strategies based on identification of cognitive biases. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  12. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline on how to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling of records seems to be the most efficient method for correcting sampling bias and should be advised in most cases.
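
    As a concrete reading of "systematic sampling of records", the sketch below thins occurrences to at most one per grid cell so that densely sampled areas do not dominate model training; the grid resolution and coordinates are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

def systematic_sample(lon, lat, cell_size=0.5, seed=0):
    """Keep at most one occurrence record per grid cell of width cell_size."""
    rng = np.random.default_rng(seed)
    cells = {}
    for i, (x, y) in enumerate(zip(lon, lat)):
        key = (int(np.floor(x / cell_size)), int(np.floor(y / cell_size)))
        cells.setdefault(key, []).append(i)
    # Keep one randomly chosen record per occupied cell
    return np.array([rng.choice(idx) for idx in cells.values()])

rng = np.random.default_rng(7)
lon = rng.uniform(-5, 5, 1000)
lat = rng.uniform(40, 50, 1000)
kept = systematic_sample(lon, lat)
print(len(kept), "records retained out of", len(lon))
```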

  13. Correction of gene expression data: Performance-dependency on inter-replicate and inter-treatment biases.

    Science.gov (United States)

    Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren

    2014-10-20

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of the bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Analysis and correction of gradient nonlinearity bias in apparent diffusion coefficient measurements.

    Science.gov (United States)

    Malyarenko, Dariya I; Ross, Brian D; Chenevert, Thomas L

    2014-03-01

    Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10-20%) in apparent diffusion coefficient (ADC) measurements over clinically relevant fields-of-view. This work seeks a practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. An all-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Spatial dependence of the nonlinearity correction terms accounts for the bulk (75-95%) of ADC bias for FA = 0.3-0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for a full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic medium is achieved with non-lab-based diffusion gradients. Copyright © 2013 Wiley Periodicals, Inc.

  15. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms other alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.

  16. Bias correction for magnetic resonance images via joint entropy regularization.

    Science.gov (United States)

    Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang

    2014-01-01

    Due to imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms have been proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the proposed two methods can effectively remove the bias field and perform comparably to state-of-the-art methods.

  17. Estimation of satellite position, clock and phase bias corrections

    Science.gov (United States)

    Henkel, Patrick; Psychas, Dimitrios; Günther, Christoph; Hugentobler, Urs

    2018-05-01

    Precise point positioning with integer ambiguity resolution requires precise knowledge of satellite position, clock and phase bias corrections. In this paper, a method for the estimation of these parameters with a global network of reference stations is presented. The method processes uncombined and undifferenced measurements of an arbitrary number of frequencies such that the obtained satellite position, clock and bias corrections can be used for any type of differenced and/or combined measurements. We perform a clustering of reference stations. The clustering enables a common satellite visibility within each cluster and an efficient fixing of the double difference ambiguities within each cluster. Additionally, the double difference ambiguities between the reference stations of different clusters are fixed. We use an integer decorrelation for ambiguity fixing in dense global networks. The performance of the proposed method is analysed with both simulated Galileo measurements on E1 and E5a and real GPS measurements of the IGS network. We defined 16 clusters and obtained satellite position, clock and phase bias corrections with a precision of better than 2 cm.

  18. Radar Rainfall Bias Correction based on Deep Learning Approach

    Science.gov (United States)

    Song, Yang; Han, Dawei; Rico-Ramirez, Miguel A.

    2017-04-01

    Radar rainfall measurement errors can be attributed to various sources, including intricate synoptic regimes. Temperature, humidity and wind are typically acknowledged as critical meteorological factors in inducing precipitation discrepancies aloft and on the ground. Conventional practice mainly uses radar-gauge or geostatistical techniques with direct weighted interpolation algorithms as bias correction schemes, and rarely considers these atmospheric effects. This study aims to comprehensively quantify the impacts of those meteorological elements on radar-gauge rainfall bias correction based on a deep learning approach. The deep learning approach employs deep convolutional neural networks to automatically extract three-dimensional meteorological features for target recognition based on high range resolution profiles. The complex nonlinear relationships between input and target variables can be implicitly detected by such a scheme, which is validated on the test dataset. The proposed bias correction scheme is expected to be a promising improvement in systematically minimizing the synthesized atmospheric effects on rainfall discrepancies between radar and rain gauges, which can be useful in many meteorological and hydrological applications (e.g., real-time flood forecasting), especially for regions with complex atmospheric conditions.

  19. Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method

    Science.gov (United States)

    Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil

    2014-05-01

    Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN ESCWA, CORDEX RCM projections for the Middle East and North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate too low temperatures and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.

  20. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
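
    The classic correction for regression dilution divides the naive slope by the reliability ratio, which can be estimated from duplicate measurements in a reliability study. The simulated sketch below illustrates that calculation; it is not the authors' software tools, and the error variances are arbitrary.

```python
import numpy as np

# Sketch of the classic regression-dilution correction: divide the naive slope
# by the reliability ratio lambda = var(true X) / var(observed X), estimated
# from duplicate measurements. Data are simulated for illustration.

rng = np.random.default_rng(8)
n = 500
x_true = rng.normal(0, 1, n)
y = 2.0 * x_true + rng.normal(0, 1, n)          # true slope = 2
x1 = x_true + rng.normal(0, 0.8, n)             # error-prone measurement
x2 = x_true + rng.normal(0, 0.8, n)             # repeat measurement

naive_slope = np.polyfit(x1, y, 1)[0]

# Reliability ratio from duplicates: between-subject variance over total
lam = np.cov(x1, x2)[0, 1] / (0.5 * (x1.var(ddof=1) + x2.var(ddof=1)))
corrected_slope = naive_slope / lam
print(naive_slope.round(2), corrected_slope.round(2))
```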

  1. A new dynamical downscaling approach with GCM bias corrections and spectral nudging

    Science.gov (United States)

    Xu, Zhongfeng; Yang, Zong-Liang

    2015-04-01

    To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31-year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first set except spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging where the nudging strength is progressively reduced. All RCM simulations are assessed against the North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to the other GCM-driven RCM downscaling approaches in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of GCM bias corrections throughout the RCM domain rather than just limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both the GCM and RCM biases.
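
    The GCM bias-correction step described above (adjusting climatological means and variances toward a reanalysis) can be sketched as a simple shift-and-rescale of the GCM series over a reference period; the synthetic temperature-like series below are illustrative, and the actual method operates on the full 3D fields used as RCM boundary conditions.

```python
import numpy as np

rng = np.random.default_rng(9)
reanalysis = rng.normal(285.0, 5.0, 31 * 365)       # reference-period "truth" (K)
gcm = rng.normal(283.0, 7.0, 31 * 365)              # biased GCM series (K)

def mean_variance_correct(gcm_series, ref_series):
    """Shift and rescale the GCM series to match the reference mean and variance."""
    anomalies = gcm_series - gcm_series.mean()
    scaled = anomalies * (ref_series.std() / gcm_series.std())
    return scaled + ref_series.mean()

gcm_corrected = mean_variance_correct(gcm, reanalysis)
print(gcm_corrected.mean().round(2), gcm_corrected.std().round(2))
```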

  2. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    Science.gov (United States)

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but rather the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.

  3. Can the variability in precipitation simulations across GCMs be reduced through sensible bias correction?

    Science.gov (United States)

    Nguyen, Ha; Mehrotra, Rajeshwar; Sharma, Ashish

    2017-11-01

    This work investigates the performance of four bias correction alternatives for representing persistence characteristics of precipitation across 37 General Circulation Models (GCMs) from the CMIP5 data archive. The first three correction approaches are the Simple Monthly Bias Correction (SMBC), Equidistance Quantile Mapping (EQM), and Nested Bias Correction (NBC), all of which operate in the time domain, with a focus on representing distributional and moment attributes in the observed precipitation record. The fourth approach corrects for the biases in high- and low-frequency variability or persistence of the GCM time series in the frequency domain and is named Frequency-based Bias Correction (FBC). The Climatic Research Unit (CRU) gridded precipitation data covering the global land surface is used as a reference dataset. The assessment focuses on current and future means, variability, and drought-related characteristics at different temporal and spatial scales. For the current climate, all bias correction approaches perform reasonably well at the global scale by reproducing the observed precipitation statistics. For the future climate, focus is drawn on the agreement of the attributes across the GCMs considered. The inter-model difference/spread of each attribute across the GCMs is used as a measure of this agreement. Our results indicate that out of the four bias correction approaches used, FBC provides the lowest inter-model spreads, specifically for persistence attributes, over most parts of the global land surface. This has significant implications for most hydrological studies where the effect of low-frequency variability is of considerable importance.
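
    As a simplified illustration of a frequency-domain correction of the kind FBC performs, the sketch below rescales the Fourier amplitudes of a model series toward an observed spectrum while keeping the model phases. The actual FBC method is more elaborate; this is only to convey the idea, and it assumes both series have equal length.

        import numpy as np

        def spectral_rescale(gcm_series, obs_series):
            G = np.fft.rfft(gcm_series - gcm_series.mean())
            O = np.fft.rfft(obs_series - obs_series.mean())
            # Ratio of observed to modelled amplitude at each frequency (phases of G are kept).
            ratio = np.abs(O) / np.where(np.abs(G) > 0, np.abs(G), 1.0)
            corrected = np.fft.irfft(G * ratio, n=len(gcm_series))
            return corrected + obs_series.mean()

        rng = np.random.default_rng(3)
        t = np.arange(1200)
        obs = 5 + 2.0 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 1, t.size)   # strong low-frequency cycle
        gcm = 4 + 0.5 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 1, t.size)   # cycle too weak in the model
        corr = spectral_rescale(gcm, obs)
        print(round(float(np.std(obs)), 2), round(float(np.std(gcm)), 2), round(float(np.std(corr)), 2))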

  4. A Realization of Bias Correction Method in the GMAO Coupled System

    Science.gov (United States)

    Chang, Yehui; Koster, Randal; Wang, Hailan; Schubert, Siegfried; Suarez, Max

    2018-01-01

    Over the past several decades, a tremendous effort has been made to improve model performance in the simulation of the climate system. The cold or warm sea surface temperature (SST) bias in the tropics is still a problem common to most coupled ocean-atmosphere general circulation models (CGCMs). The precipitation biases in CGCMs are also accompanied by SST and surface wind biases. The deficiencies and biases over the equatorial oceans, through their influence on the Walker circulation, likely contribute to the precipitation biases over land surfaces. In this study, we introduce an approach to CGCM modeling to correct model biases. This approach utilizes the history of the model's short-term forecasting errors and their seasonal dependence to modify the model's tendency term and to minimize its climate drift. The study shows that such an approach removes most of the model's climate biases. A number of other aspects of the model simulation (e.g. extratropical transient activities) are also improved considerably due to the imposed pre-processed initial 3-hour model drift corrections. Because many regional biases in the GEOS-5 CGCM are common amongst other current models, our approaches and findings are applicable to these other models as well.

  5. Fat fraction bias correction using T1 estimates and flip angle mapping.

    Science.gov (United States)

    Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A

    2014-01-01

    To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.

  6. Bias Correction in a Stable AD (1,1) Model: Weak versus Strong Exogeneity

    NARCIS (Netherlands)

    van Giersbergen, N.P.A.

    2001-01-01

    This paper compares the behaviour of a bias-corrected estimator assuming strongly exogenous regressors to the behaviour of a bias-corrected estimator assuming weakly exogenous regressors, when in fact the marginal model contains a feedback mechanism. To this end, the effects of a feedback mechanism

  7. Bias-corrected estimation of stable tail dependence function

    DEFF Research Database (Denmark)

    Beirlant, Jan; Escobar-Bach, Mikael; Goegebeur, Yuri

    2016-01-01

    We consider the estimation of the stable tail dependence function. We propose a bias-corrected estimator and we establish its asymptotic behaviour under suitable assumptions. The finite sample performance of the proposed estimator is evaluated by means of an extensive simulation study where...

  8. Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island

    Science.gov (United States)

    Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.

    2018-04-01

    Rainfall is an element of climate that strongly influences the agricultural sector. Rainfall pattern and distribution largely determine the sustainability of agricultural activities. Therefore, information on rainfall is very useful for the agricultural sector and for farmers in anticipating the possibility of extreme events, which often cause failures of agricultural production. This research aims to identify the biases in seasonal forecast products from the ECMWF (European Centre for Medium-Range Weather Forecasts) rainfall forecast and to build a transfer function that corrects the distributional biases, yielding a new prediction model based on a quantile mapping approach. We apply this approach to the case of Bali Island, and as a result, the use of bias correction methods in correcting systematic biases from the model gives better results: the corrected prediction model outperforms the raw forecast. We generally found that the bias correction approach performs better during the rainy season than in the dry season.

  9. Incorporating circulation statistics in bias correction of GCM ensembles: Hydrological application for the Rhine basin

    NARCIS (Netherlands)

    Photiadou, C.; van den Hurk, B.J.J.M.; Delden, A. van; Weerts, A.

    2016-01-01

    An adapted statistical bias correction method is introduced to incorporate circulation-dependence of the model precipitation bias, and its influence on estimated discharges for the Rhine basin is analyzed for a historical period. The bias correction method is tailored to time scales relevant to

  10. Incorporating circulation statistics in bias correction of GCM ensembles: hydrological application for the Rhine basin

    NARCIS (Netherlands)

    Photiadou, C.; Hurk, van den B.; Delden, van A.; Weerts, A.H.

    2016-01-01

    An adapted statistical bias correction method is introduced to incorporate circulation-dependence of the model precipitation bias, and its influence on estimated discharges for the Rhine basin is analyzed for a historical period. The bias correction method is tailored to time scales relevant to

  11. Incorporating circulation statistics in bias correction of GCM ensembles: hydrological application for the Rhine basin

    NARCIS (Netherlands)

    Photiadou, Christiana; van den Hurk, Bart; van Delden, Aarnout; Weerts, Albrecht

    2015-01-01

    An adapted statistical bias correction method is introduced to incorporate circulation-dependence of the model precipitation bias, and its influence on estimated discharges for the Rhine basin is analyzed for a historical period. The bias correction method is tailored to time scales relevant to

  12. Effect of precipitation bias correction on water budget calculation in Upper Yellow River, China

    International Nuclear Information System (INIS)

    Ye Baisheng; Yang Daqing; Ma Lijuan

    2012-01-01

    This study quantifies the effect of precipitation bias corrections on basin water balance calculations for the Yellow River Source region (YRS). We analyse long-term (1959–2001) monthly and yearly data of precipitation, runoff, and ERA-40 water budget variables and define a water balance regime. Basin precipitation, evapotranspiration and runoff are high in summer and low in winter. The basin water storage change is positive in summer and negative in winter. Monthly precipitation bias corrections, ranging from 2 to 16 mm, do not significantly alter the pattern of the seasonal water budget. The annual bias correction of precipitation is about 98 mm (19%); this increase leads to the same amount of evapotranspiration increase, since yearly runoff remains unchanged and the long-term storage change is assumed to be zero. Annual runoff and evapotranspiration coefficients change, due to precipitation bias corrections, from 0.33 and 0.67 to 0.28 and 0.72, respectively. These changes will impact the parameterization and calibration of land surface and hydrological models. The bias corrections of precipitation data also improve the relationship between annual precipitation and runoff. (letter)
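
    The budget arithmetic in the abstract can be reproduced approximately as follows; the precipitation total is an assumed value chosen so that a 98 mm correction is about 19%, and the numbers are illustrative rather than taken from the paper's tables.

        # Back-of-the-envelope reconstruction of the annual water budget described above.
        P_measured = 516.0          # mm/yr, assumed gauge total consistent with 98 mm being ~19%
        bias_correction = 98.0      # mm/yr
        P_corrected = P_measured + bias_correction

        R = 0.33 * P_measured                   # runoff is observed and does not change
        ET_measured = P_measured - R            # long-term storage change assumed zero
        ET_corrected = P_corrected - R          # all of the correction goes into evapotranspiration

        print(round(R / P_measured, 2), round(ET_measured / P_measured, 2))     # 0.33, 0.67
        print(round(R / P_corrected, 2), round(ET_corrected / P_corrected, 2))  # ~0.28, ~0.72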

  13. Local linear density estimation for filtered survival data, with bias correction

    DEFF Research Database (Denmark)

    Nielsen, Jens Perch; Tanggaard, Carsten; Jones, M.C.

    2009-01-01

    it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a 'pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias-correction methods...... within our framework. The multiplicative bias-correction method proves to be the best in a simulation study comparing the performance of the considered estimators. An example concerning old-age mortality demonstrates the importance of the improvements provided....

  14. Local Linear Density Estimation for Filtered Survival Data with Bias Correction

    DEFF Research Database (Denmark)

    Tanggaard, Carsten; Nielsen, Jens Perch; Jones, M.C.

    it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a ‘pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias correction methods...... within our framework. The multiplicative bias correction method proves to be best in a simulation study comparing the performance of the considered estimators. An example concerning old age mortality demonstrates the importance of the improvements provided....

  15. Bias correction of surface downwelling longwave and shortwave radiation for the EWEMBI dataset

    Science.gov (United States)

    Lange, Stefan

    2018-05-01

    Many meteorological forcing datasets include bias-corrected surface downwelling longwave and shortwave radiation (rlds and rsds). Methods used for such bias corrections range from multi-year monthly mean value scaling to quantile mapping at the daily timescale. An additional downscaling is necessary if the data to be corrected have a higher spatial resolution than the observational data used to determine the biases. This was the case when EartH2Observe (E2OBS; Calton et al., 2016) rlds and rsds were bias-corrected using more coarsely resolved Surface Radiation Budget (SRB; Stackhouse Jr. et al., 2011) data for the production of the meteorological forcing dataset EWEMBI (Lange, 2016). This article systematically compares various parametric quantile mapping methods designed specifically for this purpose, including those used for the production of EWEMBI rlds and rsds. The methods vary in the timescale at which they operate, in their way of accounting for physical upper radiation limits, and in their approach to bridging the spatial resolution gap between E2OBS and SRB. It is shown how temporal and spatial variability deflation related to bilinear interpolation and other deterministic downscaling approaches can be overcome by downscaling the target statistics of quantile mapping from the SRB to the E2OBS grid such that the sub-SRB-grid-scale spatial variability present in the original E2OBS data is retained. Cross validations at the daily and monthly timescales reveal that it is worthwhile to take empirical estimates of physical upper limits into account when adjusting either radiation component and that, overall, bias correction at the daily timescale is more effective than bias correction at the monthly timescale if sampling errors are taken into account.

  16. Bias correction of surface downwelling longwave and shortwave radiation for the EWEMBI dataset

    Directory of Open Access Journals (Sweden)

    S. Lange

    2018-05-01

    Full Text Available Many meteorological forcing datasets include bias-corrected surface downwelling longwave and shortwave radiation (rlds and rsds). Methods used for such bias corrections range from multi-year monthly mean value scaling to quantile mapping at the daily timescale. An additional downscaling is necessary if the data to be corrected have a higher spatial resolution than the observational data used to determine the biases. This was the case when EartH2Observe (E2OBS; Calton et al., 2016) rlds and rsds were bias-corrected using more coarsely resolved Surface Radiation Budget (SRB; Stackhouse Jr. et al., 2011) data for the production of the meteorological forcing dataset EWEMBI (Lange, 2016). This article systematically compares various parametric quantile mapping methods designed specifically for this purpose, including those used for the production of EWEMBI rlds and rsds. The methods vary in the timescale at which they operate, in their way of accounting for physical upper radiation limits, and in their approach to bridging the spatial resolution gap between E2OBS and SRB. It is shown how temporal and spatial variability deflation related to bilinear interpolation and other deterministic downscaling approaches can be overcome by downscaling the target statistics of quantile mapping from the SRB to the E2OBS grid such that the sub-SRB-grid-scale spatial variability present in the original E2OBS data is retained. Cross validations at the daily and monthly timescales reveal that it is worthwhile to take empirical estimates of physical upper limits into account when adjusting either radiation component and that, overall, bias correction at the daily timescale is more effective than bias correction at the monthly timescale if sampling errors are taken into account.

  17. Bias correction for the least squares estimator of Weibull shape parameter with complete and censored data

    International Nuclear Information System (INIS)

    Zhang, L.F.; Xie, M.; Tang, L.C.

    2006-01-01

    Estimation of the Weibull shape parameter is important in reliability engineering. However, commonly used methods such as the maximum likelihood estimation (MLE) and the least squares estimation (LSE) are known to be biased. Bias correction methods for MLE have been studied in the literature. This paper investigates the methods for bias correction when model parameters are estimated with LSE based on probability plot. The Weibull probability plot is very simple and commonly used by practitioners, and hence such a study is useful. The bias of the LS shape parameter estimator for multiple censored data is also examined. It is found that the bias can be modeled as a function of the sample size and the censoring level, and is mainly dependent on the latter. A simple bias function is introduced and bias-correcting formulas are proposed for both complete and censored data. Simulation results are also presented. The bias correction methods proposed are very easy to use and they can typically reduce the bias of the LSE of the shape parameter to less than half a percent.

  18. Correction of stream quality trends for the effects of laboratory measurement bias

    Science.gov (United States)

    Alexander, Richard B.; Smith, Richard A.; Schwarz, Gregory E.

    1993-01-01

    We present a statistical model relating measurements of water quality to associated errors in laboratory methods. Estimation of the model allows us to correct trends in water quality for long-term and short-term variations in laboratory measurement errors. An illustration of the bias correction method for a large national set of stream water quality and quality assurance data shows that reductions in the bias of estimates of water quality trend slopes are achieved at the expense of increases in the variance of these estimates. Slight improvements occur in the precision of estimates of trend in bias by using correlative information on bias and water quality to estimate random variations in measurement bias. The results of this investigation stress the need for reliable, long-term quality assurance data and efficient statistical methods to assess the effects of measurement errors on the detection of water quality trends.

  19. Reduction of density-modification bias by β correction

    International Nuclear Information System (INIS)

    Skubák, Pavol; Pannu, Navraj S.

    2011-01-01

    A cross-validation-based method for bias reduction in ‘classical’ iterative density modification of experimental X-ray crystallography maps provides significantly more accurate phase-quality estimates and leads to improved automated model building. Density modification often suffers from an overestimation of phase quality, as seen by escalated figures of merit. A new cross-validation-based method to address this estimation bias by applying a bias-correction parameter ‘β’ to maximum-likelihood phase-combination functions is proposed. In tests on over 100 single-wavelength anomalous diffraction data sets, the method is shown to produce much more reliable figures of merit and improved electron-density maps. Furthermore, significantly better results are obtained in automated model building iterated with phased refinement using the more accurate phase probability parameters from density modification

  20. Bias correction for the estimation of sensitivity indices based on random balance designs

    International Nuclear Information System (INIS)

    Tissot, Jean-Yves; Prieur, Clémentine

    2012-01-01

    This paper deals with the random balance design method (RBD) and its hybrid approach, RBD-FAST. Both these global sensitivity analysis methods originate from Fourier amplitude sensitivity test (FAST) and consequently face the main problems inherent to discrete harmonic analysis. We present here a general way to correct a bias which occurs when estimating sensitivity indices (SIs) of any order – except total SI of single factor or group of factors – by the random balance design method (RBD) and its hybrid version, RBD-FAST. In the RBD case, this positive bias has been recently identified in a paper by Xu and Gertner [1]. Following their work, we propose a bias correction method for first-order SIs estimates in RBD. We then extend the correction method to the SIs of any order in RBD-FAST. At last, we suggest an efficient strategy to estimate all the first- and second-order SIs using RBD-FAST. - Highlights: ► We provide a bias correction method for the global sensitivity analysis methods: RBD and RBD-FAST. ► In RBD, first-order sensitivity estimates are corrected. ► In RBD-FAST, sensitivity indices of any order and closed sensitivity indices are corrected. ► We propose an efficient strategy to estimate all the first- and second-order indices of a model.

  1. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.
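
    The averaging bias arises because the IPDA retrieval is nonlinear (a logarithm of averaged pulse energies), so the retrieval of the averaged signal differs from the average of shot-by-shot retrievals. A toy numerical illustration with invented numbers:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 500                                              # individual shots averaged over ~50 km
        ratio = 0.8 * (1 + 0.2 * rng.standard_normal(n))     # noisy on/off pulse-energy ratios
        ratio = np.clip(ratio, 0.2, 1.5)

        dAOD_per_shot = -np.log(ratio)                       # shot-by-shot differential optical depth
        mean_of_retrievals = dAOD_per_shot.mean()
        retrieval_of_mean = -np.log(ratio.mean())            # what averaging the signals first gives

        print(round(float(mean_of_retrievals), 4), round(float(retrieval_of_mean), 4))
        # The difference is the averaging bias; to first order it scales with the
        # relative variance of the averaged signal (Jensen's inequality).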

  2. Sequence-specific bias correction for RNA-seq data using recurrent neural networks.

    Science.gov (United States)

    Zhang, Yao-Zhong; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2017-01-25

    The recent success of deep learning techniques in machine learning and artificial intelligence has stimulated a great deal of interest among bioinformaticians, who now wish to bring the power of deep learning to bear on a host of bioinformatics problems. Deep learning is ideally suited for biological problems that require automatic or hierarchical feature representation for biological data when prior knowledge is limited. In this work, we address the sequence-specific bias correction problem for RNA-seq data using Recurrent Neural Networks (RNNs) to model nucleotide sequences without pre-determining sequence structures. The sequence-specific bias of a read is then calculated based on the sequence probabilities estimated by RNNs, and used in the estimation of gene abundance. We explore the application of two popular RNN recurrent units for this task and demonstrate that RNN-based approaches provide a flexible way to model nucleotide sequences without knowledge of predetermined sequence structures. Our experiments show that training an RNN-based nucleotide sequence model is efficient and RNN-based bias correction methods compare well with the state-of-the-art sequence-specific bias correction method on the commonly used MAQC-III data set. RNNs provide an alternative and flexible way to calculate sequence-specific bias without explicitly pre-determining sequence structures.

  3. Improved Model for Depth Bias Correction in Airborne LiDAR Bathymetry Systems

    Directory of Open Access Journals (Sweden)

    Jianhu Zhao

    2017-07-01

    Full Text Available Airborne LiDAR bathymetry (ALB) is efficient and cost effective in obtaining shallow water topography, but often produces a low-accuracy sounding solution due to the effects of ALB measurements and ocean hydrological parameters. In bathymetry estimates, peak shifting of the green bottom return caused by pulse stretching induces depth bias, which is the largest error source in ALB depth measurements. The traditional depth bias model is often applied to reduce the depth bias, but it is insufficient when used with various ALB system parameters and ocean environments. Therefore, an accurate model that considers all of the influencing factors must be established. In this study, an improved depth bias model is developed through stepwise regression in consideration of the water depth, laser beam scanning angle, sensor height, and suspended sediment concentration. The proposed improved model and a traditional one are used in an experiment. The results show that the systematic deviation of depth bias corrected by the traditional and improved models is reduced significantly. Standard deviations of 0.086 and 0.055 m are obtained with the traditional and improved models, respectively. The accuracy of the ALB-derived depth corrected by the improved model is better than that corrected by the traditional model.
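
    A hedged sketch of fitting a multi-predictor depth-bias model of the kind described above, using ordinary least squares on synthetic data; the predictors follow the abstract (depth, scan angle, sensor height, suspended sediment concentration), but the coefficients are invented and the stepwise selection step is omitted.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 400
        depth = rng.uniform(2, 25, n)            # m
        angle = rng.uniform(10, 20, n)           # deg
        height = rng.uniform(300, 500, n)        # m
        ssc = rng.uniform(1, 30, n)              # mg/L
        # Invented "true" bias relation plus noise, just to exercise the fit.
        bias = 0.01 * depth + 0.005 * angle + 0.0001 * height + 0.004 * ssc + rng.normal(0, 0.03, n)

        X = np.column_stack([np.ones(n), depth, angle, height, ssc])
        coef, *_ = np.linalg.lstsq(X, bias, rcond=None)
        residual_bias = bias - X @ coef          # depth bias remaining after the model correction
        print(np.round(coef, 4))
        print(round(float(bias.std()), 3), round(float(residual_bias.std()), 3))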

  4. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    Science.gov (United States)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.

  5. Correction of bias in belt transect studies of immotile objects

    Science.gov (United States)

    Anderson, D.R.; Pospahala, R.S.

    1970-01-01

    Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects are not counted. An approach, useful for correcting this bias when sampling immotile populations using transects of a fixed width, is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method utilizes a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
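
    A sketch in the spirit of the correction described above: detection is assumed near-perfect at the transect centerline, a smooth curve is fitted to counts by distance band, and the shortfall of the observed total relative to perfect detection gives a correction factor. The band layout and counts are invented, not taken from the waterfowl data.

        import numpy as np

        band_mid = np.array([1.0, 3.0, 5.0, 7.0, 9.0])      # distance-band midpoints (m)
        counts = np.array([48, 45, 38, 30, 22])             # objects found per band

        # Quadratic fit of counts against distance from the centerline.
        coef = np.polyfit(band_mid, counts, 2)
        fitted_at_zero = np.polyval(coef, 0.0)               # expected count if detection were perfect
        expected_total = fitted_at_zero * len(band_mid)      # perfect detection in every band
        observed_total = counts.sum()

        correction_factor = expected_total / observed_total  # multiply raw density estimates by this
        print(round(float(correction_factor), 3))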

  6. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.

  7. Model Consistent Pseudo-Observations of Precipitation and Their Use for Bias Correcting Regional Climate Models

    Directory of Open Access Journals (Sweden)

    Peter Berg

    2015-01-01

    Full Text Available Lack of suitable observational data makes bias correction of high space and time resolution regional climate models (RCM) problematic. We present a method to construct pseudo-observational precipitation data by merging a large scale constrained RCM reanalysis downscaling simulation with coarse time and space resolution observations. The large scale constraint synchronizes the inner domain solution to the driving reanalysis model, such that the simulated weather is similar to observations on a monthly time scale. Monthly biases for each single month are corrected to the corresponding month of the observational data, and applied to the finer temporal resolution of the RCM. A low-pass filter is applied to the correction factors to retain the small spatial scale information of the RCM. The method is applied to a 12.5 km RCM simulation and proven successful in producing a reliable pseudo-observational data set. Furthermore, the constructed data set is applied as reference in a quantile mapping bias correction, and is proven skillful in retaining small scale information of the RCM, while still correcting the large scale spatial bias. The proposed method allows bias correction of high resolution model simulations without changing the fine scale spatial features, i.e., retaining the very information required by many impact models.

  8. A New Bias Corrected Version of Heteroscedasticity Consistent Covariance Estimator

    Directory of Open Access Journals (Sweden)

    Munir Ahmed

    2016-06-01

    Full Text Available In the presence of heteroscedasticity, different available flavours of the heteroscedasticity consistent covariance estimator (HCCME) are used. However, the available literature shows that these estimators can be considerably biased in small samples. Cribari-Neto et al. (2000) introduce a bias adjustment mechanism and give the modified White estimator that becomes almost bias-free even in small samples. Extending these results, Cribari-Neto and Galvão (2003) present a similar bias adjustment mechanism that can be applied to a wide class of HCCMEs. In the present article, we follow the same mechanism proposed by Cribari-Neto and Galvão to give a bias-corrected version of the HCCME, but we use an adaptive HCCME rather than the conventional one. A Monte Carlo study is used to evaluate the performance of our proposed estimators.
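
    For reference, the sketch below implements two standard members of the HCCME family that the record builds on, the basic White estimator (HC0) and the leverage-adjusted HC3; the adaptive, bias-corrected variant proposed in the article is not reproduced here.

        import numpy as np

        def hccme(X, y, kind="HC3"):
            n, k = X.shape
            XtX_inv = np.linalg.inv(X.T @ X)
            beta = XtX_inv @ X.T @ y
            resid = y - X @ beta
            h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverages (hat-matrix diagonal)
            if kind == "HC0":
                omega = resid ** 2
            elif kind == "HC3":
                omega = (resid / (1.0 - h)) ** 2
            else:
                raise ValueError(kind)
            meat = X.T @ (omega[:, None] * X)
            return beta, XtX_inv @ meat @ XtX_inv          # coefficients, covariance matrix

        rng = np.random.default_rng(7)
        n = 60
        x = rng.uniform(0, 10, n)
        X = np.column_stack([np.ones(n), x])
        y = 1.0 + 0.5 * x + rng.normal(0, 0.2 + 0.1 * x, n)    # heteroscedastic errors
        beta, cov = hccme(X, y, "HC3")
        print(np.round(beta, 3), np.round(np.sqrt(np.diag(cov)), 3))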

  9. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    Science.gov (United States)

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.

  10. MRI Non-Uniformity Correction Through Interleaved Bias Estimation and B-Spline Deformation with a Template*

    Science.gov (United States)

    Fletcher, E.; Carmichael, O.; DeCarli, C.

    2013-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843

  11. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a gamma (γ) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
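
    A rough sketch of the multi-scale idea described above (Gaussian smoothing at several scales, detail extraction by differencing, weighted recombination, and a final gamma step); the scales, weights, and gamma value are assumptions for illustration, not the article's settings.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multiscale_bias_removal(image, sigmas=(2, 4, 8, 16), weights=(0.4, 0.3, 0.2, 0.1)):
            # Image details at each scale: original minus its Gaussian-smoothed version.
            details = [image - gaussian_filter(image, s) for s in sigmas]
            restored = sum(w * d for w, d in zip(weights, details))
            restored -= restored.min()
            restored /= max(restored.max(), 1e-12)            # normalise to [0, 1]
            return restored ** 0.8                            # simple gamma correction

        # Toy image: tissue-like pattern multiplied by a smooth bias field.
        rng = np.random.default_rng(6)
        yy, xx = np.mgrid[0:128, 0:128]
        tissue = (np.sin(xx / 5.0) > 0).astype(float) + 0.1 * rng.standard_normal((128, 128))
        bias_field = 0.5 + 0.5 * (xx + yy) / 254.0
        corrected = multiscale_bias_removal(tissue * bias_field)
        print(corrected.shape, round(float(corrected.mean()), 3))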

  12. CD-SEM real time bias correction using reference metrology based modeling

    Science.gov (United States)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. Reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM, but both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling is in finding robust correlation between SEM waveform features and bias of CD-SEM, as well as in minimizing the RM inputs needed to create an accurate (within the design and process space) model. The new approach was applied to improve CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases the MU of the state-of-the-art CD-SEM was improved by 3x and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.

  13. Bias-corrected estimation in potentially mildly explosive autoregressive models

    DEFF Research Database (Denmark)

    Haufmann, Hendrik; Kruse, Robinson

    This paper provides a comprehensive Monte Carlo comparison of different finite-sample bias-correction methods for autoregressive processes. We consider classic situations where the process is either stationary or exhibits a unit root. Importantly, the case of mildly explosive behaviour is studied...... that the indirect inference approach offers a valuable alternative to other existing techniques. Its performance (measured by its bias and root mean squared error) is balanced and highly competitive across many different settings. A clear advantage is its applicability for mildly explosive processes. In an empirical...

  14. The Detection and Correction of Bias in Student Ratings of Instruction.

    Science.gov (United States)

    Haladyna, Thomas; Hess, Robert K.

    1994-01-01

    A Rasch model was used to detect and correct bias in Likert rating scales used to assess student perceptions of college teaching, using a database of ratings. Statistical corrections were significant, supporting the model's potential utility. Recommendations are made for a theoretical rationale and further research on the model. (Author/MSE)

  15. Multisite bias correction of precipitation data from regional climate models

    Czech Academy of Sciences Publication Activity Database

    Hnilica, Jan; Hanel, M.; Puš, V.

    2017-01-01

    Roč. 37, č. 6 (2017), s. 2934-2946 ISSN 0899-8418 R&D Projects: GA ČR GA16-05665S Grant - others:Grantová agentura ČR - GA ČR(CZ) 16-16549S Institutional support: RVO:67985874 Keywords : bias correction * regional climate model * correlation * covariance * multivariate data * multisite correction * principal components * precipitation Subject RIV: DA - Hydrology ; Limnology OBOR OECD: Climatic research Impact factor: 3.760, year: 2016

  16. An experimental verification of laser-velocimeter sampling bias and its correction

    Science.gov (United States)

    Johnson, D. A.; Modarress, D.; Owen, F. K.

    1982-01-01

    The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.

  17. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    Science.gov (United States)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. The investigated CLIC method

  18. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    International Nuclear Information System (INIS)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction

  19. Correction of Selection Bias in Survey Data: Is the Statistical Cure Worse Than the Bias?

    Science.gov (United States)

    Hanley, James A

    2017-04-01

    In previous articles in the American Journal of Epidemiology (Am J Epidemiol. 2013;177(5):431-442) and American Journal of Public Health (Am J Public Health. 2013;103(10):1895-1901), Masters et al. reported age-specific hazard ratios for the contrasts in mortality rates between obesity categories. They corrected the observed hazard ratios for selection bias caused by what they postulated was the nonrepresentativeness of the participants in the National Health Interview Study that increased with age, obesity, and ill health. However, it is possible that their regression approach to remove the alleged bias has not produced, and in general cannot produce, sensible hazard ratio estimates. First, we must consider how many nonparticipants there might have been in each category of obesity and of age at entry and how much higher the mortality rates would have to be in nonparticipants than in participants in these same categories. What plausible set of numerical values would convert the ("biased") decreasing-with-age hazard ratios seen in the data into the ("unbiased") increasing-with-age ratios that they computed? Can these values be encapsulated in (and can sensible values be recovered from) one additional internal variable in a regression model? Second, one must examine the age pattern of the hazard ratios that have been adjusted for selection. Without the correction, the hazard ratios are attenuated with increasing age. With it, the hazard ratios at older ages are considerably higher, but those at younger ages are well below one. Third, one must test whether the regression approach suggested by Masters et al. would correct the nonrepresentativeness that increased with age and ill health that I introduced into real and hypothetical data sets. I found that the approach did not recover the hazard ratio patterns present in the unselected data sets: the corrections overshot the target at older ages and undershot it at lower ages.

  20. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that a preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze if the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further

  1. Correcting Biases in a lower resolution global circulation model with data assimilation

    Science.gov (United States)

    Canter, Martin; Barth, Alexander

    2016-04-01

    With this work, we aim at developing a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is directly added inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global and low resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean, and not create unwanted phenomena. To construct those random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields. We also constrain the stream function along the coasts in order not to have

  2. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    Science.gov (United States)

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to address the local-optimum problem of traditional particle swarm optimization when estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to avoid it falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that obtained with the improved entropy minimum algorithm. This algorithm can be applied to the correction of the MR image bias field.
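
    A minimal sketch of particle swarm optimisation with an inertia weight that adapts to a simple premature-convergence indicator (here, the normalised spread of the swarm's fitness values); the indicator, the weight schedule, and the toy objective are illustrative assumptions standing in for the article's Legendre-polynomial bias-field cost.

        import numpy as np

        def objective(x):                       # stand-in for the bias-field fitting cost
            return np.sum((x - 1.5) ** 2, axis=-1)

        rng = np.random.default_rng(8)
        n_particles, dim, iters = 30, 4, 200
        pos = rng.uniform(-5, 5, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), objective(pos)

        for _ in range(iters):
            fit = objective(pos)
            improved = fit < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], fit[improved]
            gbest = pbest[np.argmin(pbest_val)]

            # Premature-convergence indicator: normalised spread of the swarm's fitness.
            diversity = min((fit.max() - fit.min()) / (fit.max() + 1e-12), 1.0)
            w = 0.9 - 0.5 * diversity   # small spread -> larger inertia weight -> more exploration

            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel

        print(np.round(gbest, 3))               # should approach [1.5, 1.5, 1.5, 1.5]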

  3. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation; at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias especially for both low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages especially in the simulation of particularly low and high downscaled precipitation amounts.

  4. Bias correction for rainrate retrievals from satellite passive microwave sensors

    Science.gov (United States)

    Short, David A.

    1990-01-01

    Rainrates retrieved from past and present satellite-borne microwave sensors are affected by a fundamental remote sensing problem. Sensor fields-of-view are typically large enough to encompass substantial rainrate variability, whereas the retrieval algorithms, based on radiative transfer calculations, show a non-linear relationship between rainrate and microwave brightness temperature. Retrieved rainrates are systematically too low. A statistical model of the bias problem shows that bias correction factors depend on the probability distribution of instantaneous rainrate and on the average thickness of the rain layer.

  5. Timing group delay and differential code bias corrections for BeiDou positioning

    Science.gov (United States)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of the TGD and DCB correction models, combining theory with practice. The TGD/DCB correction models have been extended to various scenarios for BDS positioning, and such models have been evaluated with real triple-frequency datasets. To test the effectiveness of broadcast TGDs in the navigation message and of DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analyses show that the unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the estimates of PPP are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency positioning, and dual-frequency combinations are much more sensitive to the differential code biases, particularly the B2B3 combination. For PPP, the uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases could be mitigated over time and comparable positioning accuracy could be achieved after convergence, it is suggested that the differential code biases be handled properly, since this is vital for PPP convergence and integer ambiguity resolution.

  6. A New Online Calibration Method Based on Lord's Bias-Correction.

    Science.gov (United States)

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂s (obtained by maximum likelihood estimation [MLE]) as their true values θs, so the deviation of the estimated θ̂s from the true values might yield inaccurate item calibration when that deviation is non-negligible. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and that MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.

  7. Correction for dynamic bias error in transmission measurements of void fraction

    International Nuclear Information System (INIS)

    Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.

    2012-01-01

    Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is an estimate of the variance of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute for the time-resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved at the expense of marginal decreases in precision.
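
    A simplified, hedged illustration of the first-order idea (not the paper's derivation, which is developed for void-fraction mapping): for a single-beam attenuation measurement, a second-order Taylor expansion of the logarithmic response lets the time-resolved variance of the transmission supply the leading bias term. Names and sample values are illustrative.

        import numpy as np

        def first_order_corrected_attenuation(transmission_samples):
            # Time-resolved transmission samples from high-speed acquisition.
            t_mean = transmission_samples.mean()
            t_var = transmission_samples.var()
            # Naive estimate uses the time-averaged transmission directly.
            naive = -np.log(t_mean)
            # Second-order Taylor term of -ln(T) around the mean transmission
            # (the second derivative of -ln T is 1/T^2) gives the correction.
            return naive + t_var / (2.0 * t_mean ** 2)

        # Hypothetical transmission fluctuating between two void states.
        samples = np.array([0.35, 0.80, 0.42, 0.76, 0.39, 0.81])
        attenuation_corrected = first_order_corrected_attenuation(samples)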

  8. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression.

    Science.gov (United States)

    Hunt, Andrew P; Bach, Aaron J E; Borg, David N; Costello, Joseph T; Stewart, Ian B

    2017-01-01

    An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of -0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C, and limits of agreement (95%) of 0.00-0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) - 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).
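
    As a small illustration of the two corrections described above (the calibration data and variable names are hypothetical; the generalized slope and intercept are the values quoted in the record):

        import numpy as np

        def individualized_correction(sensor_temps, reference_temps):
            # Fit a per-sensor linear correction (slope, intercept) against a
            # certified reference thermometer over several water-bath set points.
            slope, intercept = np.polyfit(sensor_temps, reference_temps, deg=1)
            return slope, intercept

        def generalized_correction(sensor_temp_c):
            # Generalized linear function quoted in the record above.
            return 1.00375 * sensor_temp_c - 0.205549

        # Hypothetical calibration points for one sensor.
        sensor = np.array([35.2, 37.4, 39.5, 41.7, 43.6])
        reference = np.array([35.12, 37.33, 39.48, 41.58, 43.47])
        slope, intercept = individualized_correction(sensor, reference)
        corrected = slope * sensor + intercept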

  9. A simple correction to remove the bias of the gini coefficient due to grouping

    NARCIS (Netherlands)

    T.G.M. van Ourti (Tom); Ph. Clarke (Philip)

    2011-01-01

    textabstractAbstract-We propose a first-order bias correction term for the Gini index to reduce the bias due to grouping. It depends on only the number of individuals in each group and is derived from a measurement error framework. We also provide a formula for the remaining second-order bias. Both

  10. Bias-corrected Pearson estimating functions for Taylor's power law applied to benthic macrofauna data

    DEFF Research Database (Denmark)

    Jørgensen, Bent; Demétrio, Clarice G. B.; Kristensen, Erik

    2011-01-01

    Estimation of Taylor’s power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating...

  11. QIN DAWG Validation of Gradient Nonlinearity Bias Correction Workflow for Quantitative Diffusion-Weighted Imaging in Multicenter Trials.

    Science.gov (United States)

    Malyarenko, Dariya I; Wilmes, Lisa J; Arlinghaus, Lori R; Jacobs, Michael A; Huang, Wei; Helmer, Karl G; Taouli, Bachir; Yankeelov, Thomas E; Newitt, David; Chenevert, Thomas L

    2016-12-01

    Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, -35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI.

  12. Improving RNA-Seq expression estimates by correcting for fragment bias

    Science.gov (United States)

    2011-01-01

    The biochemistry of RNA-Seq library preparation results in cDNA fragments that are not uniformly distributed within the transcripts they represent. This non-uniformity must be accounted for when estimating expression levels, and we show how to perform the needed corrections using a likelihood based approach. We find improvements in expression estimates as measured by correlation with independently performed qRT-PCR and show that correction of bias leads to improved replicability of results across libraries and sequencing technologies. PMID:21410973

  13. Use of bias correction techniques to improve seasonal forecasts for reservoirs - A case-study in northwestern Mediterranean.

    Science.gov (United States)

    Marcos, Raül; Llasat, Ma Carmen; Quintana-Seguí, Pere; Turco, Marco

    2018-01-01

    In this paper, we have compared different bias correction methodologies to assess whether they could be advantageous for improving the performance of a seasonal prediction model for volume anomalies in the Boadella reservoir (northwestern Mediterranean). The bias correction adjustments have been applied to precipitation and temperature from the European Centre for Medium-Range Weather Forecasts (ECMWF) System 4 (S4). We have used three bias correction strategies: two linear (mean bias correction, BC, and linear regression, LR) and one non-linear (Model Output Statistics analogs, MOS-analog). The results have been compared with climatology and persistence. The volume-anomaly model is a previously computed Multiple Linear Regression that ingests precipitation, temperature and in-flow anomaly data to simulate monthly volume anomalies. The potential utility for end-users has been assessed using economic value curve areas. We have studied the S4 hindcast period 1981-2010 for each month of the year and up to seven months ahead considering an ensemble of 15 members. We have shown that the MOS-analog and LR bias corrections can improve the original S4. The application to volume anomalies points towards the possibility of introducing bias correction methods as a tool to improve water resource seasonal forecasts in an end-user context of climate services. In particular, the MOS-analog approach generally gives better results than the other approaches in late autumn and early winter. Copyright © 2017 Elsevier B.V. All rights reserved.
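
    To make the two linear strategies concrete, here is a minimal sketch under simplifying assumptions (paired hindcast/observation series; names are illustrative, and the MOS-analog method is not shown):

        import numpy as np

        def mean_bias_correction(forecast, hindcast, obs):
            # BC: remove the mean hindcast-minus-observation bias.
            return forecast - (hindcast.mean() - obs.mean())

        def linear_regression_correction(forecast, hindcast, obs):
            # LR: regress observations on hindcasts, then apply the fit to
            # new forecasts.
            slope, intercept = np.polyfit(hindcast, obs, deg=1)
            return slope * forecast + intercept

    In a setting like the one above, such adjustments would typically be applied separately for each calendar month and forecast lead time before the corrected fields are passed to the volume-anomaly regression model.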

  14. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    Science.gov (United States)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall as compared to using monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjust the wet-day frequencies, performed better than the methods that did not consider adjustment of wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE values above 0.81 over most parts of India. Hydrological
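
    A minimal sketch of the sliding-window idea under simplifying assumptions (daily observed and model rainfall arranged as (years, 365) arrays; the window half-width and the single multiplicative factor are illustrative, whereas the study applies the window to five different correction methods):

        import numpy as np

        def sliding_window_factors(obs, rcm, half_width=15):
            # One multiplicative correction factor per calendar day, computed
            # from all obs/rcm pairs inside a +/- half_width day window pooled
            # across years. obs and rcm have shape (n_years, 365).
            n_days = obs.shape[1]
            factors = np.ones(n_days)
            for d in range(n_days):
                window = np.arange(d - half_width, d + half_width + 1) % n_days
                rcm_sum = rcm[:, window].sum()
                factors[d] = obs[:, window].sum() / rcm_sum if rcm_sum > 0 else 1.0
            return factors

        # Hypothetical synthetic data; corrected model rainfall per day of year.
        rng = np.random.default_rng(0)
        obs = rng.gamma(0.7, 9.0, size=(30, 365))
        rcm = rng.gamma(0.9, 5.0, size=(30, 365))
        corrected = rcm * sliding_window_factors(obs, rcm)[None, :]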

  15. Performance of bias corrected MPEG rainfall estimate for rainfall-runoff simulation in the upper Blue Nile Basin, Ethiopia

    Science.gov (United States)

    Worqlul, Abeyou W.; Ayana, Essayas K.; Maathuis, Ben H. P.; MacAlister, Charlotte; Philpot, William D.; Osorio Leyton, Javier M.; Steenhuis, Tammo S.

    2018-01-01

    In many developing countries and remote areas of important ecosystems, good quality precipitation data are neither available nor readily accessible. Satellite observations and processing algorithms are being extensively used to produce satellite rainfall products (SREs). Nevertheless, these products are prone to systematic errors and need extensive validation before they can be used for streamflow simulations. In this study, we investigated and corrected the bias of Multi-Sensor Precipitation Estimate-Geostationary (MPEG) data. The corrected MPEG dataset was used as input to a semi-distributed hydrological model, Hydrologiska Byråns Vattenbalansavdelning (HBV), for simulation of discharge of the Gilgel Abay and Gumara watersheds in the Upper Blue Nile basin, Ethiopia. The results indicated that the MPEG satellite rainfall captured 81% and 78% of the gauged rainfall variability while consistently underestimating the gauged rainfall by 60%. A linear bias correction significantly reduced the bias while maintaining the correlation coefficient. The flow simulated with the bias-corrected MPEG SRE was comparable to that obtained with gauge rainfall for both watersheds. The study indicated the potential of MPEG SRE in water budget studies after applying a linear bias correction.

  16. Bias atlases for segmentation-based PET attenuation correction using PET-CT and MR.

    Science.gov (United States)

    Ouyang, Jinsong; Chun, Se Young; Petibon, Yoann; Bonab, Ali A; Alpert, Nathaniel; Fakhri, Georges El

    2013-10-01

    The aim of this study was to obtain voxel-wise PET accuracy and precision when tissue segmentation is used for attenuation correction. We applied multiple thresholds to the CTs of 23 patients to classify tissues. For six of the 23 patients, MR images were also acquired. The MR fat/in-phase ratio images were used for fat segmentation. Segmented tissue classes were used to create attenuation maps, which were used for attenuation correction in PET reconstruction. PET bias images were then computed using the PET reconstructed with the original CT as the reference. We registered the CTs for all the patients and transformed the corresponding bias images accordingly. We then obtained the mean and standard deviation bias atlas using all the registered bias images. Our CT-based study shows that four-class segmentation (air, lungs, fat, other tissues), which is available on most PET-MR scanners, yields 15.1%, 4.1%, 6.6%, and 12.9% RMSE bias in lungs, fat, non-fat soft-tissues, and bones, respectively. An accurate fat identification is achievable using fat/in-phase MR images. Furthermore, we have found that three-class segmentation (air, lungs, other tissues) yields less than 5% standard deviation of bias within the heart, liver, and kidneys. This implies that three-class segmentation can be sufficient to achieve small variation of bias for imaging these three organs. Finally, we have found that inter- and intra-patient lung density variations contribute almost equally to the overall standard deviation of bias within the lungs.

  17. Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System

    Science.gov (United States)

    Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances

    2006-01-01

    In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm which allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias caused degradation of the diurnal amplitude of background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget. In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m-2. At the Rondonia station in Amazonia, the Bowen ratio changes direction, an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.

  18. Reduction of determinate errors in mass bias-corrected isotope ratios measured using a multi-collector plasma mass spectrometer

    International Nuclear Information System (INIS)

    Doherty, W.

    2015-01-01

    A nebulizer-centric instrument response function model of the plasma mass spectrometer was combined with a signal drift model, and the result was used to identify the causes of the non-spectroscopic determinate errors remaining in mass bias-corrected Pb isotope ratios (Tl as internal standard) measured using a multi-collector plasma mass spectrometer. Model calculations, confirmed by measurement, show that the detectable time-dependent errors are a result of the combined effect of signal drift and differences in the coordinates of the Pb and Tl response function maxima (horizontal offset effect). If there are no horizontal offsets, then the mass bias-corrected isotope ratios are approximately constant in time. In the absence of signal drift, the response surface curvature and horizontal offset effects are responsible for proportional errors in the mass bias-corrected isotope ratios. The proportional errors will be different for different analyte isotope ratios and different at every instrument operating point. Consequently, mass bias coefficients calculated using different isotope ratios are not necessarily equal. The error analysis based on the combined model provides strong justification for recommending a three step correction procedure (mass bias correction, drift correction and a proportional error correction, in that order) for isotope ratio measurements using a multi-collector plasma mass spectrometer

  19. [Retrospective analysis of Mexican National Addictions Survey, 2008. Bias identification and correction].

    Science.gov (United States)

    Romero-Martínez, Martín; Téllez-Rojo Solís, Martha María; Sandoval-Zárate, América Andrea; Zurita-Luna, Juan Manuel; Gutiérrez-Reyes, Juan Pablo

    2013-01-01

    To determine the presence of bias on the estimation of the consumption sometime in life of alcohol, tobacco or illegal drugs and inhalable substances, and to propose a correction for this in the case it is present. Mexican National Addictions Surveys (NAS) 2002, 2008, and 2011 were analyzed to compare population estimations of consumption sometime in life of tobacco, alcohol or illegal drugs and inhalable substances. A couple of alternative approaches for bias correction were developed. Estimated national prevalences of consumption sometime in life of alcohol and tobacco in the NAS 2008 are not plausible. There was no evidence of bias on the consumption sometime in life of illegal drugs and inhalable substances. New estimations for tobacco and alcohol consumption sometime in life were made, which resulted in plausible values when compared to other data available. Future analyses regarding tobacco and alcohol using NAS 2008 data will have to rely on these newly generated data weights, that are able to reproduce the new (plausible) estimations.

  20. Approximate Bias Correction in Econometrics

    OpenAIRE

    James G. MacKinnon; Anthony A. Smith Jr.

    1995-01-01

    This paper discusses ways to reduce the bias of consistent estimators that are biased in finite samples. It is necessary that the bias function, which relates parameter values to bias, should be estimable by computer simulation or by some other method. If so, bias can be reduced or, in some cases that may not be unrealistic, even eliminated. In general, several evaluations of the bias function will be required to do this. Unfortunately, reducing bias may increase the variance, or even the mean squared error.

  1. Silicon photomultiplier's gain stabilization by bias correction for compensation of the temperature fluctuations

    International Nuclear Information System (INIS)

    Dorosz, P.; Baszczyk, M.; Glab, S.; Kucewicz, W.; Mik, L.; Sapor, M.

    2013-01-01

    The gain of a silicon photomultiplier is strongly dependent on the bias voltage and the temperature. This paper proposes a method for gain stabilization in which temperature fluctuations are compensated by correcting the bias voltage. It has been confirmed that this approach gives good results and that the gain can be kept very stable.
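
    A minimal sketch of the idea (the temperature coefficient, reference temperature and function name are assumptions chosen for illustration, not values from the record):

        def compensated_bias_voltage(v_nominal, t_current, t_ref=25.0, dv_dt=0.056):
            # Shift the SiPM bias voltage with temperature so that the
            # overvoltage above breakdown, and hence the gain, stays constant.
            # dv_dt is an assumed breakdown-voltage temperature coefficient in
            # V/degC for a typical device; it is not taken from the paper.
            return v_nominal + dv_dt * (t_current - t_ref)

        # Example: nominal 28.0 V set at 25 degC, temperature sensor now reads 31 degC.
        v_set = compensated_bias_voltage(28.0, 31.0)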

  2. Performance evaluation and bias correction of DBS measurements for a 1290-MHz boundary layer profiler.

    Science.gov (United States)

    Liu, Zhao; Zheng, Chaorong; Wu, Yue

    2018-02-01

    Recently, the government installed a boundary layer profiler (BLP), operated in the Doppler beam swinging mode, in a coastal area of China to acquire useful wind field information in the atmospheric boundary layer for several purposes, and its performance is evaluated here under strong wind conditions. It is found that, even though the quality-controlled BLP data show good agreement with the balloon observations, a systematic bias remains in the BLP data. At low wind velocities the BLP data tend to overestimate the atmospheric wind, whereas with increasing wind velocity they show a tendency towards underestimation. In order to remove the effect of poor-quality data on the bias correction, the probability distribution of the differences between the two instruments is discussed, and the t location-scale distribution is found to be the most suitable probability model among those compared. After outliers with a large discrepancy, lying outside the 95% confidence interval of the t location-scale distribution, are discarded, the systematic bias can be successfully corrected using a first-order polynomial correction function. The bias correction methodology used in the study can not only serve as a reference for correcting other wind profiling radars, but also lays a solid basis for further analysis of the wind profiles.
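
    A hedged sketch of the procedure as described above (the paired BLP and balloon wind speeds are hypothetical inputs; SciPy's t distribution is used as the location-scale model):

        import numpy as np
        from scipy import stats

        def correct_blp_bias(blp_wind, balloon_wind):
            # Fit a t location-scale distribution to the BLP-minus-balloon
            # differences, discard pairs outside its 95% interval, then fit a
            # first-order polynomial mapping BLP wind to balloon wind.
            diff = blp_wind - balloon_wind
            df, loc, scale = stats.t.fit(diff)
            lo, hi = stats.t.interval(0.95, df, loc=loc, scale=scale)
            keep = (diff >= lo) & (diff <= hi)
            slope, intercept = np.polyfit(blp_wind[keep], balloon_wind[keep], deg=1)
            return slope, intercept

        # Hypothetical paired samples (m/s) from the profiler and balloon soundings.
        rng = np.random.default_rng(0)
        balloon = rng.uniform(5.0, 30.0, size=200)
        blp = 1.08 * balloon - 1.5 + rng.standard_t(df=5, size=200)
        slope, intercept = correct_blp_bias(blp, balloon)
        blp_corrected = slope * blp + intercept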

  4. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at the 0.11° (approximately 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations that have been downscaled from global climate models (GCMs) of the CMIP5 archive using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibration with the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high-performing parameter sets with each other. Once HBV is calibrated, we then perform a quantitative comparison of the influence of biases inherited from the climate model simulations with the biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009 to characterize the simulation realism under the current climate and ii) 2070-2099 to identify the magnitude of the projected change of

  5. Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction

    Energy Technology Data Exchange (ETDEWEB)

    Aaltonen, T.; /Helsinki Inst. of Phys.; Adelman, J.; /Chicago U., EFI; Alvarez Gonzalez, B.; /Cantabria Inst. of Phys.; Amerio, S.; /INFN, Padua; Amidei, D.; /Michigan U.; Anastassov, A.; /Northwestern U.; Annovi, A.; /Frascati; Antos, J.; /Comenius U.; Apollinari, G.; /Fermilab; Appel, J.; /Fermilab; Apresyan, A.; /Purdue U. /Waseda U.

    2010-04-01

    The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper they present an analytic method for bias correction without using simulation, thereby removing any uncertainty between data and simulation. This method is presented in the form of a measurement of the lifetime of the B⁻ using the mode B⁻ → D⁰π⁻. The B⁻ lifetime is measured as τ(B⁻) = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.

  6. Measurement of the B- lifetime using a simulation free approach for trigger bias correction

    International Nuclear Information System (INIS)

    2010-01-01

    The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper they present an analytic method for bias correction without using simulation, thereby removing any uncertainty between data and simulation. This method is presented in the form of a measurement of the lifetime of the B⁻ using the mode B⁻ → D⁰π⁻. The B⁻ lifetime is measured as τ(B⁻) = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.

  7. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    Science.gov (United States)

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…

  8. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression

    Directory of Open Access Journals (Sweden)

    Andrew P. Hunt

    2017-04-01

    Full Text Available An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of −0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C, and limits of agreement (95%) of 0.00–0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) − 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).

  9. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    Science.gov (United States)

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

    Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on the detection and correction of publication bias in meta-analysis focuses mainly on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem, and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare it with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
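
    As a rough, hedged sketch of the fixed-effect idea only (not the authors' estimators; the cutpoint, starting values and data are illustrative), the underlying mean and standard deviation can be estimated by maximizing a left-truncated normal likelihood:

        import numpy as np
        from scipy import optimize, stats

        def fit_left_truncated_normal(effects, cutpoint):
            # Assume only effects above `cutpoint` (e.g. the significance
            # threshold) were published; maximize the truncated likelihood.
            def negative_log_likelihood(params):
                mu, log_sigma = params
                sigma = np.exp(log_sigma)
                log_density = stats.norm.logpdf(effects, mu, sigma)
                log_tail = stats.norm.logsf(cutpoint, mu, sigma)
                return -(log_density - log_tail).sum()

            start = np.array([effects.mean(), np.log(effects.std())])
            result = optimize.minimize(negative_log_likelihood, start,
                                       method="Nelder-Mead")
            mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
            return mu_hat, sigma_hat

        # Hypothetical published effects, all above a truncation point of 0.2.
        effects = np.array([0.25, 0.31, 0.22, 0.44, 0.28, 0.36, 0.51, 0.27])
        mu_hat, sigma_hat = fit_left_truncated_normal(effects, cutpoint=0.2)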

  10. Copula-based assimilation of radar and gauge information to derive bias-corrected precipitation fields

    Directory of Open Access Journals (Sweden)

    S. Vogl

    2012-07-01

    Full Text Available This study addresses the problem of combining radar information and gauge measurements. Gauge measurements are the best available source of absolute rainfall intensity albeit their spatial availability is limited. Precipitation information obtained by radar mimics well the spatial patterns but is biased for their absolute values.

    In this study copula models are used to describe the dependence structure between gauge observations and rainfall derived from radar reflectivity at the corresponding grid cells. After appropriate time series transformation to generate "iid" variates, only the positive pairs (radar > 0, gauge > 0) of the residuals are considered. As not every grid cell can be assigned to one gauge, the integration of point information, i.e. gauge rainfall intensities, is achieved by considering the structure and the strength of dependence between the radar pixels and all the gauges within the radar image. Two different approaches, namely Maximum Theta and Multiple Theta, are presented. They finally allow for generating precipitation fields that mimic the spatial patterns of the radar fields and correct them for biases in their absolute rainfall intensities. The performance of the approach, which can be seen as a bias-correction for radar fields, is demonstrated for the Bavarian Alps. The bias-corrected rainfall fields are compared to a field of interpolated gauge values (ordinary kriging) and are validated with available gauge measurements. The simulated precipitation fields are compared to an operationally corrected radar precipitation field (RADOLAN). The copula-based approach performs similarly well as indicated by different validation measures and successfully corrects for errors in the radar precipitation.

  11. Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale

    Science.gov (United States)

    Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru

    2013-04-01

    Basin-scale climate change impact studies mainly rely on general circulation models (GCMs) run under the relevant emission scenarios. Realistic and reliable data from GCMs are crucial for national- or basin-scale impact and vulnerability assessments aimed at building a safe society under climate change. However, GCMs fail to simulate regional climate features due to imprecise parameterization schemes in atmospheric physics and their coarse resolution. This study describes how to exclude GCMs that are unsatisfactory with respect to the basin of interest, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection uses the regional seasonal evolution as a benchmark and mainly depends on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis (JRA-25) are used as references for assessing the spatial pattern and error of each GCM. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: low-intensity drizzle with too few dry days, underestimation of heavy rainfall, and misrepresentation of the inter-annual variability of the local climate. Biases in heavy rainfall are corrected by generalized Pareto distribution (GPD) fitting over a peak-over-threshold series. Errors in rain-day frequency are fixed by rank-order statistics, and the seasonal variation problem is solved by fitting a gamma distribution in each month to in-situ stations and the corresponding GCM grid cells. By applying the proposed bias-correction technique to all in-situ stations and their respective GCM grid cells, an easy and effective downscaling process for impact studies at the basin scale is accomplished. The applicability of the proposed method has been examined for several basins in various climate
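
    A minimal, hedged sketch of one piece of such a scheme, monthly gamma-distribution quantile mapping of wet-day amounts (the GPD treatment of the heavy tail and the rank-order frequency fix described above are omitted; data and names are illustrative):

        import numpy as np
        from scipy import stats

        def monthly_gamma_quantile_map(gcm_wet, obs_wet):
            # Fit gamma distributions to GCM and observed wet-day amounts for
            # one month, then map GCM values through the GCM CDF and the
            # inverse observed CDF.
            a_gcm, _, scale_gcm = stats.gamma.fit(gcm_wet, floc=0)
            a_obs, _, scale_obs = stats.gamma.fit(obs_wet, floc=0)
            probs = stats.gamma.cdf(gcm_wet, a_gcm, scale=scale_gcm)
            return stats.gamma.ppf(probs, a_obs, scale=scale_obs)

        # Hypothetical wet-day amounts (mm/day) for one station-month.
        rng = np.random.default_rng(0)
        obs_wet = rng.gamma(0.8, 12.0, size=300)
        gcm_wet = rng.gamma(1.4, 4.0, size=300)
        corrected = monthly_gamma_quantile_map(gcm_wet, obs_wet)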

  12. Autocalibration method for non-stationary CT bias correction.

    Science.gov (United States)

    Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José

    2018-02-01

    Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of radiation exposure inherent in CT imaging require the development of image reconstruction methods which can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise, 2) a robust and accurate method to estimate the local variance, 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Diagnostic Bias and Conduct Disorder: Improving Culturally Sensitive Diagnosis

    Science.gov (United States)

    Mizock, Lauren; Harkins, Debra

    2011-01-01

    Disproportionately high rates of Conduct Disorder are diagnosed in African American and Latino youth of color. Diagnostic bias contributes to overdiagnosis of Conduct Disorder in these adolescents of color. Following a diagnosis of Conduct Disorder, adolescents of color face poorer outcomes than their White counterparts. These negative outcomes…

  14. Correcting for particle counting bias error in turbulent flow

    Science.gov (United States)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    An ideal seeding device would generate particles that exactly follow the flow, but even such particles are still a major source of error, i.e., a particle counting bias wherein the probability of measuring a velocity is a function of that velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is not universal agreement as to the acceptability of any one method. In particular it is sometimes difficult to know if the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.

  15. Retrospective analysis of Mexican National Addictions Survey, 2008. Bias identification and correction.

    Directory of Open Access Journals (Sweden)

    Martín Romero-Martínez

    2013-05-01

    Full Text Available Objective. To determine the presence of bias on the estimation of the consumption sometime in life of alcohol, tobacco or illegal drugs and inhalable substances, and to propose a correction for this in the case it is present. Materials and methods. Mexican National Addictions Surveys (NAS) 2002, 2008, and 2011 were analyzed to compare population estimations of consumption sometime in life of tobacco, alcohol or illegal drugs and inhalable substances. A couple of alternative approaches for bias correction were developed. Results. Estimated national prevalences of consumption sometime in life of alcohol and tobacco in the NAS 2008 are not plausible. There was no evidence of bias on the consumption sometime in life of illegal drugs and inhalable substances. New estimations for tobacco and alcohol consumption sometime in life were made, which resulted in plausible values when compared to other data available. Conclusion. Future analyses regarding tobacco and alcohol using NAS 2008 data will have to rely on these newly generated data weights, that are able to reproduce the new (plausible) estimations.

  16. Non-iterative relative bias correction for 3D reconstruction of in utero fetal brain MR imaging.

    Science.gov (United States)

    Kim, Kio; Habas, Piotr; Rajagopalan, Vidya; Scott, Julia; Corbett-Detig, James; Rousseau, Francois; Glenn, Orit; Barkovich, James; Studholme, Colin

    2010-01-01

    The slice intersection motion correction (SIMC) method is a powerful tool to compensate for motion that occurs during in utero acquisition of the multislice magnetic resonance (MR) images of the human fetal brain. The SIMC method makes use of the slice intersection intensity profiles of orthogonally planned slice pairs to simultaneously correct for the relative motion occurring between all the acquired slices. This approach is based on the assumption that the bias field is consistent between slices. However, for some clinical studies where there is a strong bias field combined with significant fetal motion relative to the coils, this assumption is broken and the resulting motion estimate and the reconstruction to a 3D volume can both contain errors. In this work, we propose a method to correct for the relative differences in bias field between all slice pairs. For this, we define the energy function as the mean square difference of the intersection profiles, which is then minimized with respect to the bias field parameters of the slices. A non-iterative method that considers the relative bias between all slice pairs simultaneously is used to efficiently remove inconsistencies. The method, when tested on synthetic simulations and actual clinical imaging studies where bias was an issue, brought a significant improvement to the final reconstructed image.

  17. Effect of Bias Correction of Satellite-Rainfall Estimates on Runoff Simulations at the Source of the Upper Blue Nile

    Directory of Open Access Journals (Sweden)

    Emad Habib

    2014-07-01

    Full Text Available Results of numerous evaluation studies indicated that satellite-rainfall products are contaminated with significant systematic and random errors. Therefore, such products may require refinement and correction before being used for hydrologic applications. In the present study, we explore a rainfall-runoff modeling application using the Climate Prediction Center-MORPHing (CMORPH satellite rainfall product. The study area is the Gilgel Abbay catchment situated at the source basin of the Upper Blue Nile basin in Ethiopia, Eastern Africa. Rain gauge networks in such area are typically sparse. We examine different bias correction schemes applied locally to the CMORPH product. These schemes vary in the degree to which spatial and temporal variability in the CMORPH bias fields are accounted for. Three schemes are tested: space and time-invariant, time-variant and spatially invariant, and space and time variant. Bias-corrected CMORPH products were used to calibrate and drive the Hydrologiska Byråns Vattenbalansavdelning (HBV rainfall-runoff model. Applying the space and time-fixed bias correction scheme resulted in slight improvement of the CMORPH-driven runoff simulations, but in some instances caused deterioration. Accounting for temporal variation in the bias reduced the rainfall bias by up to 50%. Additional improvements were observed when both the spatial and temporal variability in the bias was accounted for. The rainfall bias was found to have a pronounced effect on model calibration. The calibrated model parameters changed significantly when using rainfall input from gauges alone, uncorrected, and bias-corrected CMORPH estimates. Changes of up to 81% were obtained for model parameters controlling the stream flow volume.
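
    To make the "time-variant, spatially invariant" scheme concrete, here is a small, hedged sketch (a spatially lumped illustration with hypothetical daily series; the study's actual schemes operate on CMORPH grids and also include fully space-time variant factors):

        import numpy as np

        def time_variant_bias_factors(gauge, cmorph, window=7):
            # One multiplicative bias factor per time step, computed from a
            # moving accumulation window ending at that step.
            n = len(gauge)
            factors = np.ones(n)
            for t in range(n):
                sel = slice(max(0, t - window + 1), t + 1)
                sat_sum = cmorph[sel].sum()
                factors[t] = gauge[sel].sum() / sat_sum if sat_sum > 0 else 1.0
            return factors

        # Hypothetical daily series; apply each day's factor to that day's value.
        rng = np.random.default_rng(1)
        gauge = rng.gamma(0.7, 8.0, size=90)
        cmorph = 0.6 * gauge + rng.normal(0.0, 1.0, size=90).clip(min=0.0)
        corrected = cmorph * time_variant_bias_factors(gauge, cmorph)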

  18. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with intensity inhomogeneous problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well solved. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Bias Correction of Satellite Precipitation Products (SPPs) using a User-friendly Tool: A Step in Enhancing Technical Capacity

    Science.gov (United States)

    Rushi, B. R.; Ellenburg, W. L.; Adams, E. C.; Flores, A.; Limaye, A. S.; Valdés-Pineda, R.; Roy, T.; Valdés, J. B.; Mithieu, F.; Omondi, S.

    2017-12-01

    SERVIR, a joint NASA-USAID initiative, works to build capacity in Earth observation technologies in developing countries for improved environmental decision making in the arenas of weather and climate, water and disasters, food security, and land use/land cover. SERVIR partners with leading regional organizations in Eastern and Southern Africa, Hindu Kush-Himalaya, the Mekong region, and West Africa to achieve its objectives. SERVIR develops hydrological applications to address specific needs articulated by key stakeholders, and daily rainfall estimates are a vital input for these applications. Satellite-derived rainfall is subject to systematic biases, which need to be corrected before it can be used for any hydrologic application such as real-time or seasonal forecasting. SERVIR and the SWAAT team at the University of Arizona have co-developed an open-source, user-friendly tool implementing rainfall bias correction approaches for SPPs. The bias correction routines are based on Linear Scaling and Quantile Mapping techniques. A set of SPPs, such as PERSIANN-CCS, TMPA-RT, and CMORPH, are bias corrected using Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) data, which incorporate ground-based precipitation observations. The tool also contains a component to improve the monthly mean of CHIRPS using precipitation products from the Global Surface Summary of the Day (GSOD) database developed by the National Climatic Data Center (NCDC). The tool takes its input from the command line, which makes it easy to use on any operating platform without prior programming skills. This presentation will focus on this bias-correction tool for SPPs, including application scenarios.
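
    A hypothetical command-line sketch in the spirit of such a tool (the option names, file formats and the simple linear-scaling step are assumptions for illustration, not the tool's actual interface):

        import argparse

        import numpy as np

        def linear_scaling(satellite, reference):
            # Scale the satellite series so its mean matches the reference mean.
            sat_mean = satellite.mean()
            factor = reference.mean() / sat_mean if sat_mean > 0 else 1.0
            return satellite * factor

        if __name__ == "__main__":
            parser = argparse.ArgumentParser(
                description="Bias-correct a daily satellite rainfall series")
            parser.add_argument("satellite_csv")
            parser.add_argument("reference_csv")
            parser.add_argument("--out", default="corrected.csv")
            args = parser.parse_args()

            satellite = np.loadtxt(args.satellite_csv, delimiter=",")
            reference = np.loadtxt(args.reference_csv, delimiter=",")
            np.savetxt(args.out, linear_scaling(satellite, reference), delimiter=",")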

  20. A Variational Approach to Simultaneous Image Segmentation and Bias Correction.

    Science.gov (United States)

    Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong

    2015-08-01

    This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.

  1. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In t...

  2. Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru

    Science.gov (United States)

    Manzanas, R.; Gutiérrez, J. M.

    2018-05-01

    This work assesses the suitability of a first simple attempt for process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on the northwestern part of Peru and bias correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation which may help to discriminate between precipitation affected by different processes, we introduce here an empirical quantile-quantile mapping method which runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet, further research on the suitability of the application of similar approaches to the one considered here for other regions, seasons and/or variables is needed.

  3. Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina.

    Science.gov (United States)

    Braaf, Boy; Donner, Sabine; Nam, Ahhyun S; Bouma, Brett E; Vakoc, Benjamin J

    2018-02-01

    Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented in an experimental 1 μm wavelength OCT system for retinal imaging that used an eye-tracking scanning laser ophthalmoscope at 815 nm to compensate for lateral eye motion. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High-quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented.

  4. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    Science.gov (United States)

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with the state-of-art approaches and the well-known brain software tools, our model is fast, accurate, and robust with initializations.

  5. Statistical methods to correct for verification bias in diagnostic studies are inadequate when there are few false negatives: a simulation study

    Directory of Open Access Journals (Sweden)

    Vickers Andrew J

    2008-11-01

    Full Text Available Abstract Background: A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives. Methods: A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the area-under-the-curve (AUC) corrected for verification bias varying both the rate and mechanism of verification. Results: In a single simulated data set, varying false negatives from 0 to 4 led to verification-bias-corrected AUCs ranging from 0.550 to 0.852. Excess variation associated with low numbers of false negatives was confirmed in simulation studies and by analyses of published studies that incorporated verification bias correction. The 2.5th–97.5th centile range constituted as much as 60% of the possible range of AUCs for some simulations. Conclusion: Screening programs are designed such that there are few false negatives. Standard statistical methods for verification bias correction are inadequate in this circumstance.
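
    One widely used correction in this setting weights the verified patients by the inverse of their probability of being verified. The minimal simulation below, a sketch under assumed prevalence, accuracy and verification rates (all numbers illustrative), shows why the corrected estimate becomes unstable when false negatives are rare: the few verified diseased, test-negative patients carry very large weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def ipw_corrected_sensitivity(n=5000, prevalence=0.05, sens=0.99, spec=0.90,
                              p_verify_pos=0.9, p_verify_neg=0.1):
    """Simulate a screening study in which verification depends on the test result,
    then estimate sensitivity using inverse-probability-of-verification weights."""
    disease = rng.random(n) < prevalence
    test_pos = np.where(disease, rng.random(n) < sens, rng.random(n) < (1 - spec))
    p_verify = np.where(test_pos, p_verify_pos, p_verify_neg)
    verified = rng.random(n) < p_verify
    w = 1.0 / p_verify
    num = np.sum(w * (verified & disease & test_pos))   # weighted verified true positives
    den = np.sum(w * (verified & disease))              # weighted verified diseased
    return num / den

estimates = [ipw_corrected_sensitivity() for _ in range(200)]
print(np.percentile(estimates, [2.5, 97.5]))  # wide interval: few false negatives dominate
```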

  6. p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results.

    Science.gov (United States)

    Simonsohn, Uri; Nelson, Leif D; Simmons, Joseph P

    2014-11-01

    Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the "choice overload" literature. © The Author(s) 2014.
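
    The core of the approach can be sketched for the simple case where every published study reports a two-sided z test with a known per-group sample size: pick the effect size whose implied distribution of significant p values best matches the observed one. This is an illustrative sketch of the idea, not the authors' published implementation; the KS-based loss and the search bounds are assumptions.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def pcurve_effect(z_obs, n_per_group, alpha=0.05):
    """Estimate the standardized effect size d whose implied p-curve best matches
    the observed significant z statistics (two-sample z-test case)."""
    z_obs = np.asarray(z_obs, dtype=float)
    z_crit = stats.norm.ppf(1 - alpha / 2)

    def loss(d):
        ncp = d * np.sqrt(np.asarray(n_per_group, dtype=float) / 2.0)  # noncentrality of z
        # Probability of a result at least as extreme, conditional on significance;
        # under the true d these values should be uniform on (0, 1)
        pp = stats.norm.sf(z_obs - ncp) / stats.norm.sf(z_crit - ncp)
        return stats.kstest(pp, "uniform").statistic

    return minimize_scalar(loss, bounds=(0.0, 2.0), method="bounded").x
```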

  7. Correction of Gradient Nonlinearity Bias in Quantitative Diffusion Parameters of Renal Tissue with Intra Voxel Incoherent Motion.

    Science.gov (United States)

    Malyarenko, Dariya I; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K; Ross, Brian D; Chenevert, Thomas L

    2015-12-01

    Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from magnet isocenter. Our previously-described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted both in shift and broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements for an isotropic gel-flood phantom. The ADC bias in off-center measurements of both the phantom and renal tissue was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for IVIM perfusion fraction.

  8. Stochastic bias-correction of daily rainfall scenarios for hydrological applications

    Directory of Open Access Journals (Sweden)

    I. Portoghese

    2011-09-01

    Full Text Available The accuracy of rainfall predictions provided by climate models is crucial for the assessment of climate change impacts on hydrological processes. In fact, the presence of bias in downscaled precipitation may produce large bias in the assessment of soil moisture dynamics, river flows and groundwater recharge.

    In this study, a comparison between statistical properties of rainfall observations and model control simulations from a Regional Climate Model (RCM) was performed through a robust and meaningful representation of the precipitation process. The output of the adopted RCM was analysed and re-scaled exploiting the structure of a stochastic model of the point rainfall process. In particular, the stochastic model is able to adequately reproduce the rainfall intermittency at the synoptic scale, which is one of the crucial aspects for the Mediterranean environments. Possible alteration in the local rainfall regime was investigated by means of the historical daily time-series from a dense rain-gauge network, which were also used for the analysis of the RCM bias in terms of dry and wet periods and storm intensity. The result is a stochastic scheme for bias-correction at the RCM-cell scale, which produces a realistic representation of the daily rainfall intermittency and precipitation depths, though a residual bias in the storm intensity of longer storm events persists.

  9. Bias correction of satellite precipitation products for flood forecasting application at the Upper Mahanadi River Basin in Eastern India

    Science.gov (United States)

    Beria, H.; Nanda, T., Sr.; Chatterjee, C.

    2015-12-01

    High resolution satellite precipitation products such as Tropical Rainfall Measuring Mission (TRMM), Climate Forecast System Reanalysis (CFSR), European Centre for Medium-Range Weather Forecasts (ECMWF), etc., offer a promising alternative for flood forecasting in data scarce regions. At the current state of the art, these products cannot be used in the raw form for flood forecasting, even at smaller lead times. In the current study, these precipitation products are bias corrected using statistical techniques, such as additive and multiplicative bias corrections, and wavelet multi-resolution analysis (MRA) with the India Meteorological Department (IMD) gridded precipitation product, obtained from gauge-based rainfall estimates. Neural network based rainfall-runoff modeling using these bias corrected products provides encouraging results for flood forecasting up to a 48-hour lead time. We will present various statistical and graphical interpretations of catchment response to high rainfall events using both the raw and bias corrected precipitation products at different lead times.
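
    The two simplest corrections mentioned, additive and multiplicative adjustment toward the IMD reference, amount to a few lines each. The sketch below assumes time-aligned satellite and gauge series for one grid cell and omits the wavelet multi-resolution step.

```python
import numpy as np

def additive_bias_correction(sat, gauge):
    """Shift the satellite series by the mean difference from the gauge-based reference."""
    return sat + (np.nanmean(gauge) - np.nanmean(sat))

def multiplicative_bias_correction(sat, gauge, eps=1e-6):
    """Scale the satellite series so its mean matches the gauge-based reference,
    a more natural choice for a non-negative variable such as precipitation."""
    return sat * (np.nanmean(gauge) / max(np.nanmean(sat), eps))
```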

  10. Nonlinear bias analysis and correction of microwave temperature sounder observations for FY-3C meteorological satellite

    Science.gov (United States)

    Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin

    2018-01-01

    The nonlinear bias analysis and correction of the receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology for the assimilation of satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors will be calculated from calibration data and further used to correct the nonlinear bias contributions of the microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of the MWTS, this paper mainly focuses on the nonlinear bias analysis and correction methods for improving the calibration accuracy of this important instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.
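
    As a rough illustration of how a nonlinearity parameter enters the calibration, the sketch below adds a quadratic term, vanishing at the two calibration points, to a standard two-point calibration of radiometer counts. The functional form and symbol names are assumptions for illustration and are not taken from the MWTS documentation.

```python
def calibrate_brightness_temperature(c_scene, c_cold, c_hot, t_cold, t_hot, mu=0.0):
    """Two-point calibration of radiometer counts with a quadratic nonlinearity term.
    c_*: raw counts, t_*: calibration target temperatures (K), mu: nonlinearity
    parameter estimated from thermal-vacuum calibration (illustrative form only)."""
    slope = (t_hot - t_cold) / (c_hot - c_cold)
    t_linear = t_cold + slope * (c_scene - c_cold)
    # Quadratic correction is zero at both calibration points and largest mid-scale
    x = (c_scene - c_cold) / (c_hot - c_cold)
    return t_linear + mu * (t_hot - t_cold) * x * (1.0 - x)
```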

  11. Correcting estimators of theta and Tajima's D for ascertainment biases caused by the single-nucleotide polymorphism discovery process

    DEFF Research Database (Denmark)

    Ramírez-Soriano, Anna; Nielsen, Rasmus

    2009-01-01

    Most single-nucleotide polymorphism (SNP) data suffer from an ascertainment bias caused by the process of SNP discovery followed by SNP genotyping. The final genotyped data are biased toward an excess of common alleles compared to directly sequenced data, making standard genetic methods of analysis… the variances and covariances of these estimators and provide a corrected version of Tajima's D statistic. We reanalyze a human genomewide SNP data set and find substantial differences in the results with or without ascertainment bias correction.

  12. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    Science.gov (United States)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreements among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.

  13. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    Science.gov (United States)

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.

  14. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
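
    The resampling idea behind the inverse-probability approaches can be sketched as drawing training cases with replacement with weights proportional to the inverse of their phase-two sampling probabilities, so that the resample resembles the source population in expectation. This is a generic sketch, not the implementation in the sambia package; the function and variable names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def inverse_probability_oversample(X, y, sampling_prob, n_samples=None, seed=0):
    """Resample a biased two-phase sample with weights 1/pi_i so that, in
    expectation, it resembles the source population."""
    rng = np.random.default_rng(seed)
    w = 1.0 / np.asarray(sampling_prob, dtype=float)
    idx = rng.choice(len(y), size=n_samples or len(y), replace=True, p=w / w.sum())
    return X[idx], y[idx]

# Usage sketch: X_rs, y_rs = inverse_probability_oversample(X, y, pi)
#               clf = RandomForestClassifier(n_estimators=500).fit(X_rs, y_rs)
```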

  15. A model-based correction for outcome reporting bias in meta-analysis.

    Science.gov (United States)

    Copas, John; Dwan, Kerry; Kirkham, Jamie; Williamson, Paula

    2014-04-01

    It is often suspected (or known) that outcomes published in medical trials are selectively reported. A systematic review for a particular outcome of interest can only include studies where that outcome was reported and so may omit, for example, a study that has considered several outcome measures but only reports those giving significant results. Using the methodology of the Outcome Reporting Bias (ORB) in Trials study (Kirkham and others, 2010, "The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews", British Medical Journal 340, c365), we suggest a likelihood-based model for estimating the effect of ORB on confidence intervals and p-values in meta-analysis. Correcting for bias has the effect of moving estimated treatment effects toward the null and hence leads to more cautious assessments of significance. The bias can be very substantial, sometimes sufficient to completely overturn previous claims of significance. We re-analyze two contrasting examples, and derive a simple fixed effects approximation that can be used to give an initial estimate of the effect of ORB in practice.

  16. Validation of the AMSU-B Bias Corrections Based on Satellite Measurements from SSM/T-2

    Science.gov (United States)

    Kolodner, Marc A.

    1999-01-01

    The NOAA-15 Advanced Microwave Sounding Unit-B (AMSU-B) was designed in the same spirit as the Special Sensor Microwave Water Vapor Profiler (SSM/T-2) on board the DMSP F11-14 satellites, to perform remote sensing of spatial and temporal variations in mid and upper troposphere humidity. While the SSM/T-2 instruments have a 48 km spatial resolution at nadir and 28 beam positions per scan, AMSU-B provides an improvement with a 16 km spatial resolution at nadir and 90 beam positions per scan. The AMSU-B instrument, though, has been experiencing radio frequency interference (RFI) contamination from the NOAA-15 transmitters whose effect is dependent upon channel, geographic location, and current spacecraft antenna configuration. This has led to large cross-track biases reaching as high as 100 Kelvin for channel 17 (150 GHz) and 50 Kelvin for channel 19 (183 +/-3 GHz). NOAA-NESDIS has recently provided a series of bias corrections for AMSU-B data starting from March 1999. These corrections are available for each of the five channels, for every third field of view, and for three cycles within an eight second period. There is also a quality indicator in each data record to indicate whether or not the bias corrections should be applied. As a precursor to performing retrievals of mid and upper troposphere humidity, a validation study is performed by statistically analyzing the differences between the F14 SSM/T-2 and the bias corrected AMSU-B brightness temperatures for three months in the spring of 1999.

  17. Silicon photomultiplier's gain stabilization by bias correction for compensation of the temperature fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Dorosz, P., E-mail: pdorosz@agh.edu.pl [AGH University of Science and Technology, Faculty of Electrical Engineering, Automatics, Computer Science and Electronics, Department of Electronics, 30-059 Krakow (Poland); Baszczyk, M.; Glab, S. [AGH University of Science and Technology, Faculty of Electrical Engineering, Automatics, Computer Science and Electronics, Department of Electronics, 30-059 Krakow (Poland); Kucewicz, W., E-mail: kucewicz@agh.edu.pl [AGH University of Science and Technology, Faculty of Electrical Engineering, Automatics, Computer Science and Electronics, Department of Electronics, 30-059 Krakow (Poland); Mik, L.; Sapor, M. [AGH University of Science and Technology, Faculty of Electrical Engineering, Automatics, Computer Science and Electronics, Department of Electronics, 30-059 Krakow (Poland)

    2013-08-01

    The gain of a silicon photomultiplier is strongly dependent on the value of the bias voltage and on temperature. This paper proposes a method for gain stabilization that compensates for temperature fluctuations by correcting the bias voltage. It has been confirmed that this approach gives good results and that the gain can be kept very stable.
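
    In practice this amounts to tracking the breakdown voltage with temperature so that the overvoltage, and hence the gain, stays constant. A minimal sketch is given below; all numerical values (breakdown voltage, temperature coefficient, overvoltage) are illustrative and would come from the device datasheet or a calibration run.

```python
def compensated_bias_voltage(temperature_c, v_breakdown_ref=24.5, t_ref_c=25.0,
                             dv_dt=0.021, overvoltage=3.0):
    """Bias voltage that keeps the SiPM overvoltage (and hence gain) constant
    as the temperature drifts.  Coefficients are illustrative placeholders."""
    v_breakdown = v_breakdown_ref + dv_dt * (temperature_c - t_ref_c)
    return v_breakdown + overvoltage
```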

  18. Potential of bias correction for downscaling passive microwave and soil moisture data

    Science.gov (United States)

    Passive microwave satellites such as SMOS (Soil Moisture and Ocean Salinity) or SMAP (Soil Moisture Active Passive) observe brightness temperature (TB) and retrieve soil moisture at a spatial resolution coarser than the scale of most hydrological processes. Bias correction is proposed as a simple method to disag...

  19. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

    Full Text Available Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and guided image filter (GIF). Firstly, a preprocessing including stain normalization and wavelet denoising is performed for Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light for high-frequency part in image and correct the intensity inhomogeneity and detail discontinuity of image. Next, HDR pathological image is generated based on least square method using low dynamic range (LDR) image, H and E channel images. Finally, the fine enhanced image is acquired after the detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of our proposed method as compared with related work.

  20. Bias correction method for climate change impact assessment at a basin scale

    Science.gov (United States)

    Nyunt, C.; Jaranilla-sanchez, P. A.; Yamamoto, A.; Nemoto, T.; Kitsuregawa, M.; Koike, T.

    2012-12-01

    Climate change impact studies are mainly based on general circulation models (GCMs), and such studies play an important role in defining suitable adaptation strategies for a resilient environment in basin-scale management. For this purpose, this study summarizes how to select appropriate GCMs so as to reduce the uncertainty in the analysis. This was applied to the Pampanga, Angat and Kaliwa rivers in Luzon Island, the main island of the Philippines; these three river basins play important roles in irrigation water supply and as municipal water sources for Metro Manila. According to the GCM scores for both the seasonal evolution of the Asian summer monsoon and the spatial correlation and root mean squared error of atmospheric variables over the region, six GCMs were finally chosen. Next, we develop a complete, efficient and comprehensive statistical bias correction scheme covering extreme events, normal rainfall and the frequency of dry periods. Due to the coarse resolution and parameterization schemes of GCMs, underestimation of extreme rainfall, too many rain days with low intensity and poor representation of local seasonality are known GCM biases. Extreme rainfall has unusual characteristics and should be treated specifically, since estimated maximum extreme rainfall is crucial for the planning and design of infrastructure in a river basin. Developing countries have limited technical, financial and management resources for implementing adaptation measures and need detailed information on drought and flood for the near future. Traditionally, the analysis of extremes has been based on the annual maximum series (AMS) fitted to a Gumbel or Lognormal distribution; the drawback is the loss of the second, third, etc., largest rainfall events. Another approach is the partial duration series (PDS), constructed from the values above a selected threshold, which permits more than one event per year. The generalized Pareto distribution (GPD) has been used to model the PDS, i.e. the series of excesses over a threshold
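
    The peaks-over-threshold step described above can be sketched in a few lines: collect the excesses of daily rainfall over a chosen threshold and fit a generalized Pareto distribution to them. The threshold choice and the return-level formula below are standard textbook forms given only as an illustration, not as the study's exact procedure.

```python
import numpy as np
from scipy.stats import genpareto

def fit_pot(daily_rain, threshold):
    """Fit a generalized Pareto distribution to the excesses over 'threshold'
    (the partial duration series)."""
    excess = daily_rain[daily_rain > threshold] - threshold
    shape, _, scale = genpareto.fit(excess, floc=0.0)
    return shape, scale, excess.size

def return_level(shape, scale, threshold, n_exceed_per_year, return_period_years):
    """Approximate T-year return level implied by the fitted GPD."""
    m = return_period_years * n_exceed_per_year   # expected exceedances in T years
    if abs(shape) > 1e-6:
        return threshold + (scale / shape) * (m ** shape - 1.0)
    return threshold + scale * np.log(m)
```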

  1. WATER AVAILABILITY IN SOUTHERN PORTUGAL FOR DIFFERENT CLIMATE CHANGE SCENARIOS SUBJECTED TO BIAS CORRECTION

    Directory of Open Access Journals (Sweden)

    Sandra Mourato

    2014-01-01

    Full Text Available Regional climate models provided precipitation and temperature time series for control (1961–1990) and scenario (2071–2100) periods. At southern Portugal, the climate models in the control period systematically present higher temperatures and lower precipitation than the observations. Therefore, the direct input of climate model data into hydrological models might result in more severe scenarios for future water availability. Three bias correction methods (Delta Change, Direct Forcing and Hybrid) are analysed and their performances in water availability impact studies are assessed. The Delta Change method assumes that the observed series variability is maintained in the scenario period and is corrected by the evolution predicted by the climate models. The Direct Forcing method maintains the scenario series variability, which is corrected by the bias found in the control period, and the Hybrid method maintains the control model series variability, which is corrected by the bias found in the control period and by the evolution predicted by the climate models. To assess the climate impacts in the water resources expected for the scenario period, a physically based spatially distributed hydrological model, SHETRAN, is used for runoff projections in a southern Portugal basin. The annual and seasonal runoff shows a runoff decrease in the scenario period, increasing the water shortage that is already experienced. The overall annual reduction varies between –80% and –35%. In general, the results show that the runoff reductions obtained with climate models corrected with the Delta Change method are highest but with a narrow range that varies between –80% and –52%.
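
    For a single climate variable the Delta Change and Direct Forcing corrections reduce to one line each. The sketch below uses the multiplicative form commonly applied to precipitation (an additive form is usual for temperature); it illustrates the two definitions given above, not the study's full implementation, and the Hybrid method is not shown.

```python
import numpy as np

def delta_change(obs, ctrl_model, scen_model):
    """Perturb the observed series by the model-projected change (multiplicative form)."""
    return obs * (np.mean(scen_model) / np.mean(ctrl_model))

def direct_forcing(scen_model, ctrl_model, obs):
    """Keep the scenario series but remove the bias diagnosed in the control period."""
    return scen_model * (np.mean(obs) / np.mean(ctrl_model))
```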

  2. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection

    OpenAIRE

    Kwan, Johnny S. H.; Kung, Annie W. C.; Sham, Pak C.

    2011-01-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias. © The Author(s) 2011.

  3. Correction of misclassification bias induced by the residential mobility in studies examining the link between socioeconomic environment and cancer incidence.

    Science.gov (United States)

    Bryere, Josephine; Pornet, Carole; Dejardin, Olivier; Launay, Ludivine; Guittet, Lydia; Launoy, Guy

    2015-04-01

    Many international ecological studies that examine the link between social environment and cancer incidence use a deprivation index based on the subjects' address at the time of diagnosis to evaluate socioeconomic status. Thus, past social circumstances are ignored, which leads to misclassification bias in the estimations. The objectives of this study were to include the latency delay in such estimations and to observe the effects. We adapted a previous methodology to correct estimates of the influence of socioeconomic environment on cancer incidence considering the latency delay in measuring socioeconomic status. We implemented this method using French data. We evaluated the misclassification due to social mobility with census data and corrected the relative risks. Inclusion of misclassification affected the values of relative risks, and the corrected values showed a greater departure from the value 1 than the uncorrected ones. For cancers of the lung, colon-rectum, lip-mouth-pharynx, kidney and esophagus in men, the over-incidence in the deprived categories was augmented by the correction. By not taking into account the latency period in measuring socioeconomic status, the burden of cancer associated with social inequality may be underestimated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Addressing the mischaracterization of extreme rainfall in regional climate model simulations - A synoptic pattern based bias correction approach

    Science.gov (United States)

    Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona

    2018-01-01

    Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match statistics of the overall simulated rainfall with observations. This assumes that change in the mix of different types of extreme rainfall events (i.e. convective and non-convective) in a warmer climate is of little relevance in the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach to account for the potential change of alternate rainfall types, characterized here by synoptic weather patterns (SPs) using self-organizing maps classification. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information of SPs makes a significant difference to the bias correction. Using over 600,000 synthetic cases, statistically significant differences are found to be present in 46% of cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in the extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.

  5. Bias field inconsistency correction of motion-scattered multislice MRI for improved 3D image reconstruction.

    Science.gov (United States)

    Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2011-09-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.

  6. A bias-corrected CMIP5 dataset for Africa using the CDF-t method - a contribution to agricultural impact studies

    Science.gov (United States)

    Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas

    2018-03-01

    The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099 combining Historical runs and climate change scenarios for six variables: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed, which are critical variables for agricultural purposes. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders. It includes simulated yield using a crop model simulating maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs using WATCH Forcing Data as the reference dataset. The impact of the WFD, WFDEI, and EWEMBI reference datasets has also been examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the high inter-GCM scattering. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact in terms of simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However, all these projections show a similar relative decreasing trend over the 21st century.

  7. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction.

    Science.gov (United States)

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-02-01

    Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and applications. The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for temperature
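
    The exponential model implies a simple standardization of catches to a common reference temperature, sketched below. The reference temperature is arbitrary and the default r is the mean estimate for maximum temperature quoted above; both should be treated as illustrative.

```python
import numpy as np

def temperature_corrected_catch(catch, temperature, r=0.0863, t_ref=20.0):
    """Standardize pitfall catches to a reference temperature, assuming the catch
    rate changes by a constant proportion r per degree C (exponential model)."""
    catch = np.asarray(catch, dtype=float)
    temperature = np.asarray(temperature, dtype=float)
    return catch * np.exp(-r * (temperature - t_ref))
```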

  8. Bias Correction Methods Explain Much of the Variation Seen in Breast Cancer Risks of BRCA1/2 Mutation Carriers.

    Science.gov (United States)

    Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H

    2015-08-10

    Recommendations for treating patients who carry a BRCA1/2 gene are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs) or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic for whom a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic. © 2015 by American Society of Clinical Oncology.

  9. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    Science.gov (United States)

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin layer tissues is limited by bias due to the influence of geometry on measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin layer gelatin-agar phantoms, and compared the result with finite element method and Lamb wave model simulation. The result indicated that the Young's modulus measured by SWE decreased continuously when the sample thickness decreased, and this effect was more significant for smaller thickness. We proposed a new empirical formula which can conveniently correct the bias without the need of using complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.

  10. Correct acceptance weighs more than correct rejection: a decision bias induced by question framing.

    Science.gov (United States)

    Kareev, Yaakov; Trope, Yaacov

    2011-02-01

    We propose that in attempting to detect whether an effect exists or not, people set their decision criterion so as to increase the number of hits and decrease the number of misses, at the cost of increasing false alarms and decreasing correct rejections. As a result, we argue, if one of two complementary events is framed as the positive response to a question and the other as the negative response, people will tend to predict the former more often than the latter. Performance in a prediction task with symmetric payoffs and equal base rates supported our proposal. Positive responses were indeed more prevalent than negative responses, irrespective of the phrasing of the question. The bias, slight but consistent and significant, was evident from early in a session and then remained unchanged to the end. A regression analysis revealed that, in addition, individuals' decision criteria reflected their learning experiences, with the weight of hits being greater than that of correct rejections.

  11. Illustrating, Quantifying, and Correcting for Bias in Post-hoc Analysis of Gene-Based Rare Variant Tests of Association

    Science.gov (United States)

    Grinde, Kelsey E.; Arbet, Jaron; Green, Alden; O'Connell, Michael; Valcarcel, Alessandra; Westra, Jason; Tintle, Nathan

    2017-01-01

    To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p […]) and can improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures. PMID:28959274

  12. Correction of technical bias in clinical microarray data improves concordance with known biological information

    DEFF Research Database (Denmark)

    Eklund, Aron Charles; Szallasi, Zoltan Imre

    2008-01-01

    The performance of gene expression microarrays has been well characterized using controlled reference samples, but the performance on clinical samples remains less clear. We identified sources of technical bias affecting many genes in concert, thus causing spurious correlations in clinical data sets and false associations between genes and clinical variables. We developed a method to correct for technical bias in clinical microarray data, which increased concordance with known biological relationships in multiple data sets.

  13. A Novel Bias Correction Method for Soil Moisture and Ocean Salinity (SMOS) Soil Moisture: Retrieval Ensembles

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2015-12-01

    Full Text Available Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present the probabilistic presentation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to the limitations of relying on the imperfect reference data. From the validation at two semi-arid sites, Benin (moderately wet and vegetated area) and Niger (dry and sandy bare soils), it was shown that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by ensemble approaches. In Benin, the Root Mean Square Errors (RMSEs) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSEs decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.

  14. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    Science.gov (United States)

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool, for automated registration, bias correction, and tissue classification, has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) use of an extended set of tissue definitions, and (4) use of multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated by a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, through a flexible interface. In this paper, we describe enhancements to a joint registration, bias correction, and tissue classification framework that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  15. Non-stationary Bias Correction of Monthly CMIP5 Temperature Projections over China using a Residual-based Bagging Tree Model

    Science.gov (United States)

    Yang, T.; Lee, C.

    2017-12-01

    The biases in the Global Circulation Models (GCMs) are crucial for understanding future climate changes. Currently, most bias correction methodologies suffer from the assumption that model bias is stationary. This paper provides a non-stationary bias correction model, termed Residual-based Bagging Tree (RBT) model, to reduce simulation biases and to quantify the contributions of single models. Specifically, the proposed model estimates the residuals between individual models and observations, and takes the differences between observations and the ensemble mean into consideration during the model training process. A case study is conducted for 10 major river basins in Mainland China during different seasons. Results show that the proposed model is capable of providing accurate and stable predictions while including the non-stationarities into the modeling framework. Significant reductions in both bias and root mean squared error are achieved with the proposed RBT model, especially for the central and western parts of China. The proposed RBT model has consistently better performance in reducing biases when compared to the raw ensemble mean, the ensemble mean with simple additive bias correction, and the single best model for different seasons. Furthermore, the contribution of each single GCM in reducing the overall bias is quantified. The single model importance varies between 3.1% and 7.2%. For different future scenarios (RCP 2.6, RCP 4.5, and RCP 8.5), the results from the RBT model suggest temperature increases of 1.44 °C, 2.59 °C, and 4.71 °C by the end of the century, respectively, when compared to the average temperature during 1970-1999.
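
    A rough sketch of the residual idea, assuming monthly series from several GCMs and matching observations, is to learn the residual between observations and the ensemble mean from the individual model values with a bagged regression tree, then add the predicted residual back to the ensemble mean. The predictor choice and tree settings below are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

def fit_residual_bagging_tree(gcm_matrix, obs, random_state=0):
    """gcm_matrix: (n_months, n_models) simulated values; obs: (n_months,) observations.
    Learn obs minus ensemble mean from the individual model values."""
    residual = obs - gcm_matrix.mean(axis=1)
    model = BaggingRegressor(DecisionTreeRegressor(max_depth=4),
                             n_estimators=200, random_state=random_state)
    return model.fit(gcm_matrix, residual)

def bias_corrected(model, gcm_matrix_future):
    """Corrected projection = ensemble mean + predicted residual."""
    return gcm_matrix_future.mean(axis=1) + model.predict(gcm_matrix_future)
```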

  16. A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.

    Science.gov (United States)

    Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping

    2017-03-01

    Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods that require bias corrected MRI, we present a high-order and L0 regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, which is constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2 weighted acquisition configurations comprising a total of 50 rodent brain volumes, which were acquired at field strengths of 4.7, 9.4 and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN based on a number of metrics. With the high accuracy and efficiency, our proposed method can facilitate automatic processing of large-scale brain studies.

  17. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators are biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the de facto standard approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the de facto standard estimators.

  18. Correction Technique for Raman Water Vapor Lidar Signal-Dependent Bias and Suitability for Water Vapor Trend Monitoring in the Upper Troposphere

    Science.gov (United States)

    Whiteman, D. N.; Cadirola, M.; Venable, D.; Calhoun, M.; Miloshevich, L; Vermeesch, K.; Twigg, L.; Dirisu, A.; Hurst, D.; Hall, E.

    2012-01-01

    The MOHAVE-2009 campaign brought together diverse instrumentation for measuring atmospheric water vapor. We report on the participation of the ALVICE (Atmospheric Laboratory for Validation, Interagency Collaboration and Education) mobile laboratory in the MOHAVE-2009 campaign. In appendices we also report on the performance of the corrected Vaisala RS92 radiosonde measurements during the campaign, on a new radiosonde based calibration algorithm that reduces the influence of atmospheric variability on the derived calibration constant, and on other results of the ALVICE deployment. The MOHAVE-2009 campaign permitted the participating Raman lidar systems to discover and address measurement biases in the upper troposphere and lower stratosphere. The ALVICE lidar system was found to possess a wet bias which was attributed to fluorescence of insect material that was deposited on the telescope early in the mission. Other sources of wet biases are discussed and data from other Raman lidar systems are investigated, revealing that wet biases in upper tropospheric (UT) and lower stratospheric (LS) water vapor measurements appear to be quite common in Raman lidar systems. Lower stratospheric climatology of water vapor is investigated both as a means to check for the existence of these wet biases in Raman lidar data and as a source of correction for the bias. A correction technique is derived and applied to the ALVICE lidar water vapor profiles. Good agreement is found between corrected ALVICE lidar measurements and those of RS92, frost point hygrometer and total column water. The correction is offered as a general method to both quality control Raman water vapor lidar data and to correct those data that have signal-dependent bias. The influence of the correction is shown to be small at regions in the upper troposphere where recent work indicates detection of trends in atmospheric water vapor may be most robust. The correction shown here holds promise for permitting useful upper

  19. Bias correction for estimated QTL effects using the penalized maximum likelihood method.

    Science.gov (United States)

    Zhang, J; Yue, C; Zhang, Y-M

    2012-04-01

    A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25% to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60% to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.

  20. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    Science.gov (United States)

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.

  1. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    Science.gov (United States)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin
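
    The core of the N-dimensional pdf transfer can be sketched as repeated random orthogonal rotation followed by univariate quantile mapping of each rotated coordinate. The sketch below is a simplified illustration only: it uses a fixed iteration count and does not preserve projected climate changes, which the published MBCn algorithm does.

```python
import numpy as np

def quantile_map(x_model, x_obs, x_apply):
    """Univariate empirical quantile mapping."""
    q = np.linspace(0.001, 0.999, 999)
    return np.interp(x_apply, np.quantile(x_model, q), np.quantile(x_obs, q))

def npdf_transfer(model_hist, obs_hist, n_iter=30, seed=0):
    """Simplified N-dimensional pdf transfer: rotate both datasets by a random
    orthogonal matrix, quantile-map each rotated margin of the model onto the
    observations, rotate back, and repeat."""
    rng = np.random.default_rng(seed)
    x = np.array(model_hist, dtype=float)
    y = np.array(obs_hist, dtype=float)
    k = x.shape[1]
    for _ in range(n_iter):
        rot, _ = np.linalg.qr(rng.standard_normal((k, k)))  # random orthogonal matrix
        xr, yr = x @ rot, y @ rot
        for j in range(k):
            xr[:, j] = quantile_map(xr[:, j], yr[:, j], xr[:, j])
        x = xr @ rot.T
    return x
```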

  2. Correction of Spatial Bias in Oligonucleotide Array Data

    Directory of Open Access Journals (Sweden)

    Philippe Serhal

    2013-01-01

    Full Text Available Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target’s true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users’ current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias.

  3. Correction of Spatial Bias in Oligonucleotide Array Data

    Science.gov (United States)

    Lemieux, Sébastien

    2013-01-01

    Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target's true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users' current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias. PMID:23573083

  4. Implementation of Coupled Skin Temperature Analysis and Bias Correction in a Global Atmospheric Data Assimilation System

    Science.gov (United States)

    Radakovich, Jon; Bosilovich, M.; Chern, Jiun-dar; daSilva, Arlindo

    2004-01-01

    The NASA/NCAR Finite Volume GCM (fvGCM) with the NCAR CLM (Community Land Model) version 2.0 was integrated into the NASA/GMAO Finite Volume Data Assimilation System (fvDAS). A new method was developed for coupled skin temperature assimilation and bias correction where the analysis increment and bias correction term are passed into the CLM2 and considered a forcing term in the solution to the energy balance. For our purposes, the fvDAS CLM2 was run at 1 deg. x 1.25 deg. horizontal resolution with 55 vertical levels. We assimilate the ISCCP-DX (30 km resolution) surface temperature product. The atmospheric analysis was performed 6-hourly, while the skin temperature analysis was performed 3-hourly. The bias correction term, which was updated at the analysis times, was added to the skin temperature tendency equation at every timestep. In this presentation, we focus on the validation of the surface energy budget at the in situ reference sites for the Coordinated Enhanced Observation Period (CEOP). We will concentrate on sites that include independent skin temperature measurements and complete energy budget observations for the month of July 2001. In addition, MODIS skin temperature will be used for validation. Several assimilations were conducted and preliminary results will be presented.

  5. No man's land: gender bias and social constructivism in the diagnosis of borderline personality disorder.

    Science.gov (United States)

    Bjorklund, Pamela

    2006-01-01

    The literature on borderline personality disorder (BPD), including its epidemiology, biology, phenomenology, causes, correlates, consequences, costs, treatments, and outcomes is vast. Thousands of articles and books have been published. Because the true prevalence of BPD by sex (gender) in the general population is still unknown, the important question of why women, rather than men, are more frequently diagnosed with BPD remains largely unanswered-despite current evidence for the origin of personality disorder in genetics and neurobiology, and despite recent suggestions that biased sampling is the most likely explanation for gender bias in the diagnosis of BPD. This paper reviews selected literature on (a) the epidemiology of BPD, (b) gender bias in the diagnosis of BPD, and (c) the social construction of diagnosis, particularly the diagnostic entity labeled "Borderline Personality Disorder." It attempts a synthesis of diverse, multidisciplinary literature to address the question of why women outnumber men by a ratio of 3:1 in the diagnosis of BPD. It rests on assumptions that (a) to varying degrees sociocultural factors inevitably play a role in the expression of disease conditions, and that (b) personality disorders, including BPD, have cultural histories. It also rests on the belief, for which there is considerable scholarly support, that the phenomenon called BPD has multiple, complex, interactive, biological, psychological, and constructed sociocultural determinants. Nurses must understand the phenomenon at this level of complexity to provide appropriate care.

  6. Bias correction for selecting the minimal-error classifier from many machine learning models.

    Science.gov (United States)

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for deciding whether more samples should be recruited to improve the classifier's accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
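
    The learning-curve idea behind a correction of this kind can be sketched as follows: cross-validation error rates estimated at several training-set sizes are fitted with an inverse power law, and the fitted curve is used to obtain a less optimistic error estimate and to extrapolate to larger cohorts. This is a simplified sketch with made-up error values, not the MLbias implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_power_law(n, a, b, c):
    """Learning curve model: error(n) = a * n**(-b) + c."""
    return a * n ** (-b) + c

# Hypothetical cross-validation error rates measured at increasing sample sizes.
sample_sizes = np.array([20, 30, 40, 50, 60], dtype=float)
cv_errors = np.array([0.38, 0.33, 0.30, 0.28, 0.27])

# Fit the three parameters of the inverse power law.
params, _ = curve_fit(inverse_power_law, sample_sizes, cv_errors,
                      p0=[1.0, 0.5, 0.2], maxfev=10000)
a, b, c = params

# Read off the expected error at the current and at larger cohort sizes.
for n in (60, 100, 200):
    print(f"n = {n:3d}: predicted error = {inverse_power_law(n, a, b, c):.3f}")
```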

  7. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    Science.gov (United States)

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

    The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After data collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples were assessed using variables from administrative databases available for the whole sample, not subject to differential measurement errors. Corrected prevalences by reweighting technique were estimated by first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents with a protocol which aims to maximize the response rates, is useful. The contribution of sociodemographic variables in correcting for nonresponse bias is important whereas the additional contribution of paradata in

  8. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on an observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  9. bcROCsurface: an R package for correcting verification bias in estimation of the ROC surface and its volume for continuous diagnostic tests.

    Science.gov (United States)

    To Duc, Khanh

    2017-11-18

    Receiver operating characteristic (ROC) surface analysis is usually employed to assess the accuracy of a medical diagnostic test when there are three ordered disease statuses (e.g., non-diseased, intermediate, diseased). In practice, verification bias can occur due to missingness of the true disease status and can lead to a distorted conclusion on diagnostic accuracy. In such situations, bias-corrected inference tools are required. This paper introduces an R package, named bcROCsurface, which provides utility functions for verification bias-corrected ROC surface analysis. A Shiny web application for verification bias-corrected estimation of the ROC surface has also been developed. bcROCsurface may become an important tool for the statistical evaluation of three-class diagnostic markers in the presence of verification bias. The R package, readme and example data are available on CRAN. The web interface enables users less familiar with R to evaluate the accuracy of diagnostic tests, and can be found at http://khanhtoduc.shinyapps.io/bcROCsurface_shiny/.

  10. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    Science.gov (United States)

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field on a large area. The FCM-based algorithm can correct the bias field on a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then the generated bias field is smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to the N3 and FCM alone corrected images and another method, coherent local intensity clustering (CLIC), corrected images. The segmentation quality based on different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) ranking in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3

  11. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    NARCIS (Netherlands)

    Pelt, van S.C.; Kabat, P.; Maat, ter H.W.; Hurk, van den B.J.J.M.; Weerts, A.H.

    2009-01-01

    Studies have demonstrated that precipitation on Northern Hemisphere mid-latitudes has increased in the last decades and that it is likely that this trend will continue. This will have an influence on discharge of the river Meuse. The use of bias correction methods is important when the effect of

  12. Experimenter Confirmation Bias and the Correction of Science Misconceptions

    Science.gov (United States)

    Allen, Michael; Coole, Hilary

    2012-06-01

    This paper describes a randomised educational experiment ( n = 47) that examined two different teaching methods and compared their effectiveness at correcting one science misconception using a sample of trainee primary school teachers. The treatment was designed to promote engagement with the scientific concept by eliciting emotional responses from learners that were triggered by their own confirmation biases. The treatment group showed superior learning gains to control at post-test immediately after the lesson, although benefits had dissipated after 6 weeks. Findings are discussed with reference to the conceptual change paradigm and to the importance of feeling emotion during a learning experience, having implications for the teaching of pedagogies to adults that have been previously shown to be successful with children.

  13. Length bias correction in one-day cross-sectional assessments - The nutritionDay study.

    Science.gov (United States)

    Frantal, Sophie; Pernicka, Elisabeth; Hiesmayr, Michael; Schindler, Karin; Bauer, Peter

    2016-04-01

    A major problem occurring in cross-sectional studies is sampling bias. Length of hospital stay (LOS) differs strongly between patients and causes a length bias as patients with longer LOS are more likely to be included and are therefore overrepresented in this type of study. To adjust for the length bias higher weights are allocated to patients with shorter LOS. We determined the effect of length-bias adjustment in two independent populations. Length-bias correction is applied to the data of the nutritionDay project, a one-day multinational cross-sectional audit capturing data on disease and nutrition of patients admitted to hospital wards with right-censoring after 30 days follow-up. We applied the weighting method for estimating the distribution function of patient baseline variables based on the method of non-parametric maximum likelihood. Results are validated using data from all patients admitted to the General Hospital of Vienna between 2005 and 2009, where the distribution of LOS can be assumed to be known. Additionally, a simplified calculation scheme for estimating the adjusted distribution function of LOS is demonstrated on a small patient example. The crude median (lower quartile; upper quartile) LOS in the cross-sectional sample was 14 (8; 24) and decreased to 7 (4; 12) when adjusted. Hence, adjustment for length bias in cross-sectional studies is essential to get appropriate estimates. Copyright © 2015 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
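
    The reweighting idea can be sketched directly: under one-day cross-sectional sampling, the chance that a patient is "caught" in hospital is roughly proportional to LOS, so each sampled patient is weighted by 1/LOS before estimating the LOS distribution. The sketch below is a simplified illustration of that mechanic and ignores the right-censoring at 30 days handled by the non-parametric maximum likelihood method in the study.

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Quantile of a weighted sample (weights normalised internally)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / np.sum(w)
    return np.interp(q, cum, v)

rng = np.random.default_rng(9)

# Hypothetical LOS (days) for all admissions over a long period.
population_los = rng.gamma(2.0, 6.0, 20000).round().clip(1)

# One-day cross-section: probability of being in hospital on the survey day ~ LOS.
p_incl = population_los / population_los.sum()
sampled = rng.choice(population_los, size=2000, p=p_incl)

weights = 1.0 / sampled                     # inverse of the inclusion probability
print(f"population median: {np.median(population_los):.0f} days, "
      f"crude cross-sectional median: {np.median(sampled):.0f}, "
      f"length-bias adjusted: {weighted_quantile(sampled, weights, 0.5):.0f}")
```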

  14. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    Science.gov (United States)

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  15. Bias and equivalence of the Strengths Use and Deficit Correction Questionnaire

    Directory of Open Access Journals (Sweden)

    Crizelle Els

    2016-11-01

    Full Text Available Orientation: For optimal outcomes, it is suggested that employees receive support from their organisation to use their strengths and improve their deficits. Employees also engage in proactive behaviour to use their strengths and improve their deficits. Following this conversation, the Strengths Use and Deficit Correction Questionnaire (SUDCO) was developed. However, the cultural suitability of the SUDCO has not been confirmed. Research purpose: The purpose of this study was to examine the bias and structural equivalence of the SUDCO. Motivation for the study: In a diverse cultural context such as South Africa, it is important to establish that a similar score on a psychological test has the same psychological meaning across ethnic groups. Research design, approach and method: A cross-sectional survey design was followed to collect data among a convenience sample of 858 employees from various occupational sectors in South Africa. Main findings: Confirmatory multigroup analysis was used to test for item and construct bias. None of the items showed bias, either uniform or non-uniform. The most restrictive model accounted for similarities in weights, intercepts and means; only residuals were different. Practical/managerial implications: The results suggest that the SUDCO is suitable for use among the major ethnic groups included in this study. These results increase the probability that future studies with the SUDCO among other ethnic groups will be unbiased and equivalent. Contribution: This study contributed to existing literature because no previous research has assessed the bias and equivalence of the SUDCO among ethnic groups in South Africa.

  16. Assessing climate change impacts on the rape stem weevil, Ceutorhynchus napi Gyll., based on bias- and non-bias-corrected regional climate change projections

    Science.gov (United States)

    Junk, J.; Ulber, B.; Vidal, S.; Eickermann, M.

    2015-11-01

    Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of the agricultural production is the infestation level by pests. Based on long-term field surveys and meteorological observations, a calibration of an existing model describing the migration of the pest insect Ceutorhynchus napi was possible. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the bias-corrected output of the regional climate models yielded satisfactory results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.

  17. Assessing climate change impacts on the rape stem weevil, Ceutorhynchus napi Gyll., based on bias- and non-bias-corrected regional climate change projections.

    Science.gov (United States)

    Junk, J; Ulber, B; Vidal, S; Eickermann, M

    2015-11-01

    Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of the agricultural production is the infestation level by pests. Based on long-term field surveys and meteorological observations, a calibration of an existing model describing the migration of the pest insect Ceutorhynchus napi was possible. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the bias-corrected output of the regional climate models yielded satisfactory results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.
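
    The two correction methods named above are simple to illustrate. Below is a minimal sketch of the additive ("linear") temperature correction, with synthetic numbers standing in for station and model data; the quantile-mapping step for precipitation would follow the same pattern as the quantile-mapping sketch given earlier in this listing. The exact form of the linear correction used in the study (e.g. whether the variance is also rescaled) is not specified in the abstract, so the version below is an assumption.

```python
import numpy as np

def linear_bias_correction(t_model_hist, t_obs, t_model_future):
    """Additive correction for air temperature: shift the model output by the
    mean difference between observations and the model over the historical
    (calibration) period."""
    offset = np.mean(t_obs) - np.mean(t_model_hist)
    return t_model_future + offset

# Hypothetical daily-mean temperatures (deg C) for a calibration and a projection period.
rng = np.random.default_rng(1)
t_obs = rng.normal(10.0, 6.0, 3650)           # observations
t_model_hist = rng.normal(8.5, 6.5, 3650)     # model, historical run (cold bias)
t_model_future = rng.normal(10.5, 6.5, 3650)  # model, scenario run

t_corrected = linear_bias_correction(t_model_hist, t_obs, t_model_future)
print(f"offset applied: {np.mean(t_corrected) - np.mean(t_model_future):.2f} C")
```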

  18. Implementing a generic method for bias correction in statistical models using random effects, with spatial and population dynamics examples

    DEFF Research Database (Denmark)

    Thorson, James T.; Kristensen, Kasper

    2016-01-01

    Statistical models play an important role in fisheries science when reconciling ecological theory with available data for wild populations or experimental studies. Ecological models increasingly include both fixed and random effects, and are often estimated using maximum likelihood techniques...... configurations of an age-structured population dynamics model. This simulation experiment shows that the epsilon-method and the existing bias-correction method perform equally well in data-rich contexts, but the epsilon-method is slightly less biased in data-poor contexts. We then apply the epsilon......-method to a spatial regression model when estimating an index of population abundance, and compare results with an alternative bias-correction algorithm that involves Markov-chain Monte Carlo sampling. This example shows that the epsilon-method leads to a biologically significant difference in estimates of average...

  19. Bias correction of bounded location errors in presence-only data

    Science.gov (United States)

    Hefley, Trevor J.; Brost, Brian M.; Hooten, Mevin B.

    2017-01-01

    Location error occurs when the true location is different than the reported location. Because habitat characteristics at the true location may be different than those at the reported location, ignoring location error may lead to unreliable inference concerning species–habitat relationships. We explain how a transformation known in the spatial statistics literature as a change of support (COS) can be used to correct for location errors when the true locations are points with unknown coordinates contained within arbitrarily shaped polygons. We illustrate the flexibility of the COS by modelling the resource selection of Whooping Cranes (Grus americana) using citizen-contributed records with locations that were reported with error. We also illustrate the COS with a simulation experiment. In our analysis of Whooping Crane resource selection, we found that location error can result in up to a five-fold change in coefficient estimates. Our simulation study shows that location error can result in coefficient estimates that have the wrong sign, but a COS can efficiently correct for the bias.

  20. [Inverse probability weighting (IPW) for evaluating and "correcting" selection bias].

    Science.gov (United States)

    Narduzzi, Silvia; Golini, Martina Nicole; Porta, Daniela; Stafoggia, Massimo; Forastiere, Francesco

    2014-01-01

    Inverse probability weighting (IPW) is a methodology developed to account for missingness and selection bias caused by non-random selection of observations, or by the non-random lack of some information in a subgroup of the population. The aim is to provide an overview of the IPW methodology and an application in a cohort study of the association between exposure to traffic air pollution (nitrogen dioxide, NO₂) and children's IQ at 7 years of age. The methodology corrects the analysis by weighting the observations by the probability of being selected. IPW is based on the assumption that individual information that can predict the probability of inclusion (non-missingness) is available for the entire study population, so that, after taking account of it, we can make inferences about the entire target population starting from the non-missing observations alone. The procedure is as follows: first, we consider the entire study population and calculate the probability of non-missing information using a logistic regression model, where the response is the non-missingness indicator and the covariates are its possible predictors. The weight of each subject is given by the inverse of the predicted probability. The analysis is then performed only on the non-missing observations using a weighted model. IPW is a technique that embeds the selection process in the analysis of the estimates, but its effectiveness in "correcting" the selection bias depends on the availability of enough information, for the entire population, to predict the non-missingness probability. In the example presented, the IPW application showed that the effect of exposure to NO₂ on children's verbal IQ is stronger than the effect obtained from an analysis performed without regard to the selection process.
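
    The reweighting procedure described above is straightforward to sketch. A minimal illustration, assuming a pandas DataFrame with an observation indicator `observed`, fully observed predictors `age` and `ses`, and an outcome `iq` measured only for respondents (all column names are hypothetical); a weighted regression would be used in the same way as the weighted mean shown here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_weights(df, predictors, observed_col="observed"):
    """Fit P(observed | predictors) on the full sample and return
    inverse-probability weights together with the observed subset."""
    X = df[predictors].to_numpy()
    r = df[observed_col].to_numpy()
    model = LogisticRegression().fit(X, r)      # response = non-missingness
    p_obs = model.predict_proba(X)[:, 1]        # predicted inclusion probability
    w = 1.0 / p_obs                             # weight = inverse probability
    return w[r == 1], df[r == 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    age = rng.normal(40, 10, n)
    ses = rng.normal(0, 1, n)
    iq = 100 + 2 * ses + rng.normal(0, 10, n)
    p = 1 / (1 + np.exp(-(-0.5 + 0.8 * ses)))   # selection depends on ses
    observed = rng.binomial(1, p)
    df = pd.DataFrame({"age": age, "ses": ses, "iq": iq, "observed": observed})

    w, resp = ipw_weights(df, ["age", "ses"])
    naive = resp["iq"].mean()
    weighted = np.average(resp["iq"], weights=w)
    print(f"naive mean: {naive:.2f}, IPW-corrected mean: {weighted:.2f}")
```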

  1. A propensity score approach to correction for bias due to population stratification using genetic and non-genetic factors.

    Science.gov (United States)

    Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita

    2009-12-01

    Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean=-0.0044, standard error=0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. 2009 Wiley-Liss, Inc.

  2. Illustrating, Quantifying, and Correcting for Bias in Post-hoc Analysis of Gene-Based Rare Variant Tests of Association

    Directory of Open Access Journals (Sweden)

    Kelsey E. Grinde

    2017-09-01

    Full Text Available To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10−6 and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures.
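
    The bootstrap correction described above can be sketched generically. The sketch below assumes user-supplied callables for the gene-based test and the single-variant estimator (both placeholders here); the bias of the naive estimate, conditional on the gene-based test being significant, is approximated by resampling and then subtracted. This is a simplified illustration of the idea, not the authors' exact procedure.

```python
import numpy as np

def winners_curse_corrected_beta(genotypes, phenotype, variant_index,
                                 gene_test, single_variant_beta,
                                 alpha=0.05, n_boot=500, seed=0):
    """Bootstrap correction of a single-variant effect estimate that is only
    reported when a gene-based test of the same region is significant.

    gene_test(G, y) -> p-value of the gene-based test (placeholder callable)
    single_variant_beta(G, y, j) -> effect estimate for variant j (placeholder callable)
    """
    rng = np.random.default_rng(seed)
    n = len(phenotype)
    beta_full = single_variant_beta(genotypes, phenotype, variant_index)

    boot_betas = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample individuals
        G_b, y_b = genotypes[idx], phenotype[idx]
        if gene_test(G_b, y_b) < alpha:              # condition on selection,
            boot_betas.append(single_variant_beta(G_b, y_b, variant_index))
    if not boot_betas:                               # selection never triggered
        return beta_full

    # Conditional bias: how much the estimate is inflated in resamples where
    # the gene-based test came out significant.
    bias = np.mean(boot_betas) - beta_full
    return beta_full - bias
```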

  3. A Hybrid Framework to Bias Correct and Empirically Downscale Daily Temperature and Precipitation from Regional Climate Models

    Science.gov (United States)

    Tan, P.; Abraham, Z.; Winkler, J. A.; Perdinan, P.; Zhong, S. S.; Liszewska, M.

    2013-12-01

    Bias correction and statistical downscaling are widely used approaches for postprocessing climate simulations generated by global and/or regional climate models. The skills of these approaches are typically assessed in terms of their ability to reproduce historical climate conditions as well as the plausibility and consistency of the derived statistical indicators needed by end users. Current bias correction and downscaling approaches often do not adequately satisfy the two criteria of accurate prediction and unbiased estimation. To overcome this limitation, a hybrid regression framework was developed to both minimize prediction errors and preserve the distributional characteristics of climate observations. Specifically, the framework couples the loss functions of standard (linear or nonlinear) regression methods with a regularization term that penalizes for discrepancies between the predicted and observed distributions. The proposed framework can also be extended to generate physically-consistent outputs across multiple response variables, and to incorporate both reanalysis-driven and GCM-driven RCM outputs into a unified learning framework. The effectiveness of the framework is demonstrated using daily temperature and precipitation simulations from the North American Regional Climate Change Program (NARCCAP) . The accuracy of the framework is comparable to standard regression methods, but, unlike the standard regression methods, the proposed framework is able to preserve many of the distribution properties of the response variables, akin to bias correction approaches such as quantile mapping and bivariate geometric quantile mapping.
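
    The coupling of a prediction loss with a distributional penalty can be illustrated with a toy linear downscaling model. The penalty below compares sorted predictions with sorted observations as a crude stand-in for "discrepancies between the predicted and observed distributions"; the variable names and the specific form of the penalty are assumptions for illustration, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_objective(w, X, y, lam=1.0):
    """Squared prediction error plus a penalty on the mismatch between the
    empirical distributions of predictions and observations."""
    pred = X @ w
    mse = np.mean((y - pred) ** 2)
    dist_penalty = np.mean((np.sort(pred) - np.sort(y)) ** 2)  # quantile mismatch
    return mse + lam * dist_penalty

# Toy data: relate a coarse predictor to a station series.
rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(0, 1, n)])
y = 2.0 + 3.0 * X[:, 1] + rng.normal(0, 2.0, n)

w0 = np.zeros(X.shape[1])
fit = minimize(hybrid_objective, w0, args=(X, y, 1.0), method="Nelder-Mead")
print("fitted coefficients:", fit.x)
```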

  4. Atlas-based analysis of cardiac shape and function: correction of regional shape bias due to imaging protocol for population studies.

    Science.gov (United States)

    Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A

    2013-09-13

    Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.

  5. Prospective motion correction with volumetric navigators (vNavs) reduces the bias and variance in brain morphometry induced by subject motion.

    Science.gov (United States)

    Tisdall, M Dylan; Reuter, Martin; Qureshi, Abid; Buckner, Randy L; Fischl, Bruce; van der Kouwe, André J W

    2016-02-15

    Recent work has demonstrated that subject motion produces systematic biases in the metrics computed by widely used morphometry software packages, even when the motion is too small to produce noticeable image artifacts. In the common situation where the control population exhibits different behaviors in the scanner when compared to the experimental population, these systematic measurement biases may produce significant confounds for between-group analyses, leading to erroneous conclusions about group differences. While previous work has shown that prospective motion correction can improve perceived image quality, here we demonstrate that, in healthy subjects performing a variety of directed motions, the use of the volumetric navigator (vNav) prospective motion correction system significantly reduces the motion-induced bias and variance in morphometry. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Change in bias in self-reported body mass index in Australia between 1995 and 2008 and the evaluation of correction equations.

    Science.gov (United States)

    Hayes, Alison J; Clarke, Philip M; Lung, Tom Wc

    2011-09-25

    Many studies have documented the bias in body mass index (BMI) determined from self-reported data on height and weight, but few have examined the change in bias over time. Using data from large, nationally-representative population health surveys, we examined change in bias in height and weight reporting among Australian adults between 1995 and 2008. Our study dataset included 9,635 men and women in 1995 and 9,141 in 2007-2008. We investigated the determinants of the bias and derived correction equations using 2007-2008 data, which can be applied when only self-reported anthropometric data are available. In 1995, self-reported BMI (derived from height and weight) was 1.2 units (men) and 1.4 units (women) lower than measured BMI. In 2007-2008, there was still underreporting, but the amount had declined to 0.6 units (men) and 0.7 units (women) below measured BMI. The major determinants of reporting error in 2007-2008 were age, sex, measured BMI, and education of the respondent. Correction equations for height and weight derived from 2007-2008 data and applied to self-reported data were able to adjust for the bias and were accurate across all age and sex strata. The diminishing reporting bias in BMI in Australia means that correction equations derived from 2007-2008 data may not be transferable to earlier self-reported data. Second, predictions of future overweight and obesity in Australia based on trends in self-reported information are likely to be inaccurate, as the change in reporting bias will affect the apparent increase in self-reported obesity prevalence.
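
    Correction equations of this kind are typically regressions of measured values on self-reported values and respondent characteristics, fitted in a survey where both are available and then applied to surveys with self-report only. Below is a minimal sketch with made-up variable names (`sr_height`, `sr_weight`, `age`, `female`) and synthetic data, not the published Australian equations.

```python
import numpy as np

def fit_correction(X, measured):
    """Least-squares fit of measured values on self-report and covariates."""
    coefs, *_ = np.linalg.lstsq(X, measured, rcond=None)
    return coefs

def design(sr_value, age, female):
    """Design matrix: intercept, self-reported value, age, sex."""
    return np.column_stack([np.ones_like(sr_value), sr_value, age, female])

# --- calibration survey: both self-reported and measured anthropometry ---
rng = np.random.default_rng(3)
n = 2000
age = rng.uniform(18, 75, n)
female = rng.integers(0, 2, n).astype(float)
height = rng.normal(170 - 8 * female, 7, n)                 # cm, measured
weight = rng.normal(80 - 10 * female + 0.1 * age, 12, n)    # kg, measured
sr_height = height + rng.normal(1.0, 1.5, n)                # over-reported height
sr_weight = weight - rng.normal(1.5, 2.0, n)                # under-reported weight

bh = fit_correction(design(sr_height, age, female), height)
bw = fit_correction(design(sr_weight, age, female), weight)

# --- apply the fitted equations to self-reported values, then compute BMI ---
corr_height = design(sr_height, age, female) @ bh
corr_weight = design(sr_weight, age, female) @ bw
bmi_corrected = corr_weight / (corr_height / 100.0) ** 2
bmi_selfreport = sr_weight / (sr_height / 100.0) ** 2
print(f"mean self-report BMI {bmi_selfreport.mean():.2f} vs corrected {bmi_corrected.mean():.2f}")
```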

  7. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations.

    Science.gov (United States)

    Lubow, Bruce C; Ransom, Jason I

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.

  8. An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations

    KAUST Repository

    Xu, Zhongfeng; Yang, Zong-Liang

    2012-01-01

    An improved dynamical downscaling method (IDD) with general circulation model (GCM) bias corrections is developed and assessed over North America. A set of regional climate simulations is performed with the Weather Research and Forecasting Model

  9. Evaluating anemometer drift: A statistical approach to correct biases in wind speed measurement

    Science.gov (United States)

    Azorin-Molina, Cesar; Asin, Jesus; McVicar, Tim R.; Minola, Lorenzo; Lopez-Moreno, Juan I.; Vicente-Serrano, Sergio M.; Chen, Deliang

    2018-05-01

    Recent studies on observed wind variability have revealed a decline (termed "stilling") of near-surface wind speed during the last 30-50 years over many mid-latitude terrestrial regions, particularly in the Northern Hemisphere. The well-known impact of cup anemometer drift (i.e., wear on the bearings) on the observed weakening of wind speed has been mentioned as a potential contributor to the declining trend. However, to date, no research has quantified its contribution to stilling based on measurements, which is most likely due to lack of quantification of the ageing effect. In this study, a 3-year field experiment (2014-2016) with 10-minute paired wind speed measurements from one new and one malfunctioning (i.e., old bearings) SEAC SV5 cup anemometer, which has been used by the Spanish Meteorological Agency in automatic weather stations since the mid-1980s, was carried out to assess, for the first time, the role of anemometer drift in wind speed measurement. The results showed a statistically significant impact of anemometer drift on wind speed measurements, with the old anemometer measuring lower wind speeds than the new one. Biases show a marked temporal pattern and clear dependency on wind speed, with both weak and strong winds causing significant biases. This pioneering quantification of biases has allowed us to define two regression models that correct up to 37% of the artificial bias in wind speed due to measurement with an old anemometer.
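
    A correction of this type can be sketched as a regression of the reference (new) anemometer readings on the worn (old) anemometer readings over the paired period, which is then applied to records measured only with the old instrument. The numbers below are synthetic, and a single global linear model is used instead of the paper's two regime-dependent models.

```python
import numpy as np

# Paired 10-minute wind speeds (m/s) from a new and an old (worn) anemometer.
rng = np.random.default_rng(4)
true_wind = rng.gamma(shape=2.0, scale=2.0, size=5000)
new_anemo = true_wind + rng.normal(0, 0.15, true_wind.size)
old_anemo = 0.92 * true_wind - 0.05 + rng.normal(0, 0.20, true_wind.size)  # drifted

# Fit new = a + b * old over the overlap period.
b, a = np.polyfit(old_anemo, new_anemo, deg=1)
print(f"correction: corrected = {a:.3f} + {b:.3f} * old")

# Apply to an archive measured only with the old instrument.
archive_old = 0.92 * rng.gamma(2.0, 2.0, 1000) - 0.05
archive_corrected = a + b * archive_old
print(f"archive mean before {archive_old.mean():.2f} m/s, after {archive_corrected.mean():.2f} m/s")
```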

  10. Non-linear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    International Nuclear Information System (INIS)

    Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y Y

    2008-01-01

    We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency

  11. Non-linear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    Science.gov (United States)

    Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y. Y.

    2008-07-01

    We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency.
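
    Schematically, the two corrections relate the galaxy power spectrum to the underlying matter power spectrum as follows; the exact conventions (for example, whether A is held fixed or fitted in the Q model, and the shape of the non-linear spectrum in the P model) vary between analyses and should be checked against the original papers.

```latex
P_{\mathrm{g}}(k) \;=\; b^{2}\,\frac{1 + Q\,k^{2}}{1 + A\,k}\;P_{\mathrm{lin}}(k)
\qquad \text{(Q model)},
\qquad
P_{\mathrm{g}}(k) \;=\; b^{2}\,P_{\mathrm{nl}}(k) + P_{\mathrm{s}}
\qquad \text{(P model, with free shot-noise term } P_{\mathrm{s}}\text{)} .
```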

  12. Bias correction of nutritional status estimates when reported age is used for calculating WHO indicators in children under five years of age.

    Science.gov (United States)

    Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia

    2016-06-01

    To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.

  13. An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations

    KAUST Repository

    Xu, Zhongfeng

    2012-09-01

    An improved dynamical downscaling method (IDD) with general circulation model (GCM) bias corrections is developed and assessed over North America. A set of regional climate simulations is performed with the Weather Research and Forecasting Model (WRF) version 3.3 embedded in the National Center for Atmospheric Research's (NCAR's) Community Atmosphere Model (CAM). The GCM climatological means and the amplitudes of interannual variations are adjusted based on the National Centers for Environmental Prediction (NCEP)-NCAR global reanalysis products (NNRP) before using them to drive WRF. In this study, the WRF downscaling experiments are identical except for the initial and lateral boundary conditions derived from the NNRP, original GCM output, and bias-corrected GCM output, respectively. The analysis finds that the IDD greatly improves the downscaled climate in both climatological means and extreme events relative to the traditional dynamical downscaling approach (TDD). The errors of downscaled climatological mean air temperature, geopotential height, wind vector, moisture, and precipitation are greatly reduced when the GCM bias corrections are applied. In the meantime, IDD also improves the downscaled extreme events characterized by the reduced errors in 2-yr return levels of surface air temperature and precipitation. In comparison with TDD, IDD is also able to produce a more realistic probability distribution in summer daily maximum temperature over the central U.S.-Canada region as well as in summer and winter daily precipitation over the middle and eastern United States. © 2012 American Meteorological Society.
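
    The GCM adjustment described here (correcting the climatological mean and the amplitude of interannual variability against reanalysis before driving the RCM) can be sketched on a single gridpoint series. The following is a simplified illustration with synthetic monthly values; the operational implementation works on full three-dimensional fields and is considerably more involved.

```python
import numpy as np

def correct_gcm(gcm_hist, reanalysis_hist, gcm_future):
    """Replace the GCM climatological mean with the reanalysis mean and rescale
    the anomalies so their standard deviation matches the reanalysis."""
    gcm_mean, rea_mean = np.mean(gcm_hist), np.mean(reanalysis_hist)
    scale = np.std(reanalysis_hist) / np.std(gcm_hist)
    return rea_mean + scale * (gcm_future - gcm_mean)

# Synthetic monthly-mean temperatures (K) at one gridpoint.
rng = np.random.default_rng(5)
reanalysis_hist = 285.0 + rng.normal(0, 1.0, 360)   # 30 years of reanalysis
gcm_hist = 283.5 + rng.normal(0, 1.6, 360)          # biased mean and variability
gcm_future = 284.5 + rng.normal(0, 1.6, 360)        # scenario run

corrected = correct_gcm(gcm_hist, reanalysis_hist, gcm_future)
print(f"mean shift applied: {np.mean(corrected) - np.mean(gcm_future):.2f} K")
```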

  14. Is the Pearson r² Biased, and if So, What Is the Best Correction Formula?

    Science.gov (United States)

    Wang, Zhongmiao; Thompson, Bruce

    2007-01-01

    In this study the authors investigated the use of 5 (i.e., Claudy, Ezekiel, Olkin-Pratt, Pratt, and Smith) R² correction formulas with the Pearson r². The authors estimated adjustment bias and precision under 6 x 3 x 6 conditions (i.e., population ρ values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9; population shapes normal, skewness…
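
    As one example of the family of formulas compared in this record, the Ezekiel (Wherry) adjustment shrinks the sample R² toward zero according to the sample size n and the number of predictors p; the other corrections (Claudy, Olkin-Pratt, Pratt, Smith) differ mainly in the form of the shrinkage factor.

```latex
\hat{R}^{2}_{\mathrm{adj}} \;=\; 1 - \bigl(1 - R^{2}\bigr)\,\frac{n - 1}{\,n - p - 1\,}
```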

  15. On the photon energy moments and their 'bias' corrections in B->Xs+γ

    International Nuclear Information System (INIS)

    Benson, D.; Bigi, I.I.; Uraltsev, N.

    2005-01-01

    Photon energy moments in B->X_s+γ and the impact of experimental cuts are analyzed, including the biases exponential in the effective hardness missed in the conventional OPE. We incorporate the perturbative corrections fully implementing the Wilsonian momentum separation ab initio. This renders perturbative effects numerically suppressed while leaving heavy quark parameters and the corresponding light-cone distribution function well defined and preserving their physical properties. The moments of the distribution function are given by the heavy quark expectation values, of which many have been extracted from the B->X_c ℓν̄ decays. The quantitative estimates for the biases in the heavy quark parameters determined from the photon moments show they cannot be neglected for E_cut ≳ 1.85 GeV, and grow out of theory control for E_cut above 2.1 GeV. Implications for the moments in the B->X_c ℓν̄ decays at high cuts are briefly addressed.

  16. TRMM-3B43 Bias Correction over the High Elevations of the Contiguous United States

    Science.gov (United States)

    Hashemi, H.; Nordin, K. M.; Lakshmi, V.; Knight, R. J.

    2016-12-01

    Precipitation can be quantified using a rain gauge network, or a remotely sensed precipitation product. Ultimately, the choice of dataset depends on the particular application, the catchment size, climate and the time period of study. In a region with a long record and a dense rain gauge network, the elevation-modified ground-based precipitation product, PRISM, has been found to work well. However, in poorly gauged regions the use of remotely sensed precipitation products is an absolute necessity. The Tropical Rainfall Measuring Mission (TRMM) has provided valuable precipitation datasets for hydrometeorological studies over the past two decades (1998-2015). One concern regarding the usage of TRMM data is the accuracy of the precipitation estimates, when compared to those obtained using PRISM. The reason for this concern is that TRMM and PRISM do not always agree and, typically, TRMM underestimates PRISM over the mountainous regions of the United States. In this study, we develop a correction function to improve the accuracy of the TRMM monthly product (TRMM-3B43) by estimating and removing the bias in the satellite data using the ground-based precipitation product, PRISM. We observe a strong relationship between the bias and land surface elevation; TRMM-3B43 tends to underestimate the PRISM product at altitudes greater than 1500 m above mean sea level (m.amsl) in the contiguous United States. A relationship is developed between TRMM-PRISM bias and elevation. The correction function is used to adjust the TRMM monthly precipitation using PRISM and elevation data. The model is calibrated using 25% of the available time period and the remaining 75% of the time period is used for validation. The corrected TRMM-3B43 product is verified for the high elevations over the contiguous United States and two local regions in the mountainous areas of the western United States. The results show a significant improvement in the accuracy of the TRMM product in the high elevations of
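
    The elevation-dependent correction can be sketched as a regression of the TRMM−PRISM bias on gridpoint elevation over a calibration subset, then removing the fitted bias above the elevation threshold. The values below are synthetic and the fitted form is a plain linear function; the published correction function is more detailed.

```python
import numpy as np

# Gridded monthly precipitation (mm) and gridpoint elevation (m above mean sea level).
rng = np.random.default_rng(6)
elev = rng.uniform(0, 3500, 4000)
prism = 80 + 0.02 * elev + rng.normal(0, 10, elev.size)
trmm = prism - np.where(elev > 1500, 0.015 * (elev - 1500), 0) + rng.normal(0, 8, elev.size)

# Calibrate on 25% of the gridpoints: bias = TRMM - PRISM as a function of elevation.
cal = rng.random(elev.size) < 0.25
high_cal = cal & (elev > 1500)
slope, intercept = np.polyfit(elev[high_cal], (trmm - prism)[high_cal], deg=1)

# Correct TRMM at high elevations on the remaining 75% (validation subset).
val = ~cal & (elev > 1500)
trmm_corrected = trmm[val] - (intercept + slope * elev[val])
print(f"mean high-elevation bias before: {(trmm - prism)[val].mean():.1f} mm, "
      f"after: {(trmm_corrected - prism[val]).mean():.1f} mm")
```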

  17. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    Science.gov (United States)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates
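
    The bias-correction step amounts to fitting smooth correction surfaces to the RPC residuals observed at ground control points (image-space differences between RPC-projected and measured positions) and then evaluating those surfaces at arbitrary pixels. Below is a minimal sketch using SciPy's thin-plate radial basis functions and made-up residuals; the paper's formulation includes a smoothing regularization that is not shown here.

```python
import numpy as np
from scipy.interpolate import Rbf

# Image coordinates of GCPs and the observed RPC residuals there (pixels).
rng = np.random.default_rng(7)
col = rng.uniform(0, 30000, 21)
row = rng.uniform(0, 30000, 21)
res_col = 2.0 + 1e-4 * col - 5e-5 * row + rng.normal(0, 0.1, col.size)
res_row = -1.5 + 8e-5 * row + rng.normal(0, 0.1, col.size)

# Thin-plate spline correction surfaces for the column and row residuals.
corr_col = Rbf(col, row, res_col, function="thin_plate")
corr_row = Rbf(col, row, res_row, function="thin_plate")

# Apply: subtract the predicted bias from RPC-projected image coordinates.
proj_col, proj_row = 12000.0, 8000.0   # hypothetical RPC projection of a ground point
corrected = (proj_col - corr_col(proj_col, proj_row),
             proj_row - corr_row(proj_col, proj_row))
print("bias-corrected image coordinates:", corrected)
```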

  18. Endoscopic techniques for diagnosis and correction of complications after retroperitoneal pancreas transplantation

    Directory of Open Access Journals (Sweden)

    A. V. Pinchuk

    2016-01-01

    Full Text Available Relevance. Timely diagnosis and treatment of postoperative complications after pancreas transplantation is a pressing problem in modern clinical transplantation. Purpose. To assess the potential of endoscopic techniques for the diagnosis and correction of postoperative complications after pancreas transplantation. Materials and methods. Since October 2011, simultaneous retroperitoneal pancreas-kidney transplantation has been performed in 27 patients. In 8 cases, the use of endoscopic techniques allowed timely identification and treatment of the complications that occurred. Conclusions. Endoscopic techniques proved to be highly efficient in the diagnosis and treatment of surgical complications and immunological impairments after retroperitoneal pancreas transplantation.

  19. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is the multiple-biomarker trial, which aims to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it infeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Estimation and correction of surface wind-stress bias in the Tropical Pacific with the Ensemble Kalman Filter

    NARCIS (Netherlands)

    Leeuwenburgh, O.

    2008-01-01

    The assimilation of high-quality in situ data into ocean models is known to lead to imbalanced analyses and spurious circulations when the model dynamics or the forcing contain systematic errors. Use of a bias estimation and correction scheme has been shown to significantly improve the balance of

  1. SU-F-I-80: Correction for Bias in a Channelized Hotelling Model Observer Caused by Temporally Variable Non-Stationary Noise

    International Nuclear Information System (INIS)

    Favazza, C; Fetterly, K

    2016-01-01

    Purpose: Application of a channelized Hotelling model observer (CHO) over a wide range of x-ray angiography detector target dose (DTD) levels demonstrated substantial bias for conditions yielding low detectability indices (d’), including low DTD and small test objects. The purpose of this work was to develop theory and methods to correct this bias. Methods: A hypothesis was developed wherein the measured detectability index (d’b) for a known test object is positively biased by temporally variable non-stationary noise in the images. Hotelling’s T2 test statistic provided the foundation for a mathematical theory which accounts for independent contributions to the measured d’b value from both the test object (d’o) and non-stationary noise (d’ns). Experimental methods were developed to directly estimate d’o by determining d’ns and subtracting it from d’b, in accordance with the theory. Specifically, d’ns was determined from two sets of images from which the traditional test object was withheld. This method was applied to angiography images with DTD levels in the range 0 to 240 nGy and for disk-shaped iodine-based contrast targets with diameters 0.5 to 4.0 mm. Results: Bias in d’ was evidenced by d’b values which exceeded values expected from a quantum limited imaging system and decreasing object size and DTD. d’ns increased with decreasing DTD, reaching a maximum of 2.6 for DTD = 0. Bias-corrected d’o estimates demonstrated sub-quantum limited performance of the x-ray angiography for low DTD. Findings demonstrated that the source of non-stationary noise was detector electronic readout noise. Conclusion: Theory and methods to estimate and correct bias in CHO measurements from temporally variable non-stationary noise were presented. The temporal non-stationary noise was shown to be due to electronic readout noise. This method facilitates accurate estimates of d’ values over a large range of object size and detector target dose.
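
    The estimate-and-subtract procedure described in this abstract can be sketched compactly. The code below is an illustrative outline only: d'b is computed from channelized responses with and without the test object, d'ns is computed from two object-free sets, and the additive split stated in the abstract is used to recover d'o. Data shapes, channel definitions, and the finite-sampling corrections mentioned elsewhere in this work are assumptions or are omitted.

```python
import numpy as np

def cho_dprime(resp_a, resp_b):
    """Hotelling detectability between two sets of channelized responses.

    resp_a, resp_b : (n_images, n_channels) channel outputs for two conditions.
    """
    mean_diff = resp_a.mean(axis=0) - resp_b.mean(axis=0)
    pooled_cov = 0.5 * (np.cov(resp_a, rowvar=False) + np.cov(resp_b, rowvar=False))
    return float(np.sqrt(mean_diff @ np.linalg.solve(pooled_cov, mean_diff)))

def corrected_dprime(obj_resp, bkg_resp, bkg_resp_1, bkg_resp_2):
    """Estimate d'_o by subtracting the non-stationary-noise contribution d'_ns.

    d'_b  : object-present vs. object-absent responses (biased estimate)
    d'_ns : two object-absent sets, so any nonzero d' reflects temporally
            variable non-stationary noise rather than the test object
    """
    d_b = cho_dprime(obj_resp, bkg_resp)
    d_ns = cho_dprime(bkg_resp_1, bkg_resp_2)
    return max(d_b - d_ns, 0.0)              # additive split stated in the abstract
```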

  2. SU-F-I-80: Correction for Bias in a Channelized Hotelling Model Observer Caused by Temporally Variable Non-Stationary Noise

    Energy Technology Data Exchange (ETDEWEB)

    Favazza, C; Fetterly, K [Mayo Clinic, Rochester, MN (United States)

    2016-06-15

    Purpose: Application of a channelized Hotelling model observer (CHO) over a wide range of x-ray angiography detector target dose (DTD) levels demonstrated substantial bias for conditions yielding low detectability indices (d’), including low DTD and small test objects. The purpose of this work was to develop theory and methods to correct this bias. Methods: A hypothesis was developed wherein the measured detectability index (d’b) for a known test object is positively biased by temporally variable non-stationary noise in the images. Hotelling’s T2 test statistic provided the foundation for a mathematical theory which accounts for independent contributions to the measured d’b value from both the test object (d’o) and non-stationary noise (d’ns). Experimental methods were developed to directly estimate d’o by determining d’ns and subtracting it from d’b, in accordance with the theory. Specifically, d’ns was determined from two sets of images from which the traditional test object was withheld. This method was applied to angiography images with DTD levels in the range 0 to 240 nGy and for disk-shaped iodine-based contrast targets with diameters 0.5 to 4.0 mm. Results: Bias in d’ was evidenced by d’b values which exceeded values expected from a quantum limited imaging system and decreasing object size and DTD. d’ns increased with decreasing DTD, reaching a maximum of 2.6 for DTD = 0. Bias-corrected d’o estimates demonstrated sub-quantum limited performance of the x-ray angiography for low DTD. Findings demonstrated that the source of non-stationary noise was detector electronic readout noise. Conclusion: Theory and methods to estimate and correct bias in CHO measurements from temporally variable non-stationary noise were presented. The temporal non-stationary noise was shown to be due to electronic readout noise. This method facilitates accurate estimates of d’ values over a large range of object size and detector target dose.

  3. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies, which produce data 'missing by design', may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration method, and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation of regression calibration was, however, substantially improved with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
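
    For readers unfamiliar with regression calibration, the sketch below shows the standard form of the method for a continuous exposure with two independent replicates under the classical error model; it is a generic illustration with made-up variable names, not the implementation compared in this study.

```python
import numpy as np

def regression_calibration(y, w1, w2):
    """Correct exposure-effect attenuation using two replicate measurements.

    Classical error model: w = x + u, with independent replicates w1, w2.
    """
    w_bar = 0.5 * (w1 + w2)
    var_u = 0.5 * np.var(w1 - w2, ddof=1)               # error variance from replicates
    var_wbar = np.var(w_bar, ddof=1)
    reliability = (var_wbar - var_u / 2.0) / var_wbar   # Var(X) / Var(mean of replicates)
    x_hat = w_bar.mean() + reliability * (w_bar - w_bar.mean())  # E[X | W], the calibrated exposure
    slope = np.polyfit(x_hat, y, 1)[0]                  # regress outcome on the calibrated exposure
    return slope, reliability
```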

  4. Bias correction of nutritional status estimates when reported age is used for calculating WHO indicators in children under five years of age

    Directory of Open Access Journals (Sweden)

    Amado D Quezada

    2016-05-01

    Objective. To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. Materials and methods. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. Results. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. Conclusions. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.

  5. Diagnostic Reasoning and Cognitive Biases of Nurse Practitioners.

    Science.gov (United States)

    Lawson, Thomas N

    2018-04-01

    Diagnostic reasoning is often used colloquially to describe the process by which nurse practitioners and physicians come to the correct diagnosis, but a rich definition and description of this process has been lacking in the nursing literature. A literature review was conducted with theoretical sampling seeking conceptual insight into diagnostic reasoning. Four common themes emerged: Cognitive Biases and Debiasing Strategies, the Dual Process Theory, Diagnostic Error, and Patient Harm. Relevant cognitive biases are discussed, followed by debiasing strategies and application of the dual process theory to reduce diagnostic error and harm. The accuracy of diagnostic reasoning of nurse practitioners may be improved by incorporating these items into nurse practitioner education and practice. [J Nurs Educ. 2018;57(4):203-208.]. Copyright 2018, SLACK Incorporated.

  6. The response of future projections of the North American monsoon when combining dynamical downscaling and bias correction of CCSM4 output

    Science.gov (United States)

    Meyer, Jonathan D. D.; Jin, Jiming

    2017-07-01

    A 20-km regional climate model (RCM) dynamically downscaled the Community Climate System Model version 4 (CCSM4) to compare 32-year historical and future "end-of-the-century" climatologies of the North American Monsoon (NAM). CCSM4 and other phase 5 Coupled Model Intercomparison Project models have indicated a delayed NAM and overall general drying trend. Here, we test the suggested mechanism for this drier NAM, whereby increasing atmospheric static stability and reduced early-season evapotranspiration under global warming will limit early-season convection and compress the mature season of the NAM. Through our higher-resolution RCM, we found that the role of accelerated evaporation under a warmer climate is likely understated in coarse-resolution models such as CCSM4. Improving the representation of mesoscale interactions associated with the Gulf of California (GoC) and surrounding topography produced additional surface evaporation, which overwhelmed the convection-suppressing effects of a warmer troposphere. Furthermore, the improved land-sea temperature gradient helped drive stronger southerly winds and greater moisture transport. Finally, we addressed limitations from inherent CCSM4 biases through a form of mean bias correction, which resulted in a more accurate seasonality of the atmospheric thermodynamic profile. After bias correction, greater surface evaporation from average peak GoC SSTs of 32 °C compared to 29 °C from the original CCSM4 led to roughly 50% larger changes to low-level moist static energy compared to that produced by the downscaled original CCSM4. The increasing destabilization of the NAM environment produced onset dates that were one to two weeks earlier in the core of the NAM and northern extent, respectively. Furthermore, a significantly more vigorous NAM signal was produced after bias correction, with >50 mm month⁻¹ increases to the June-September precipitation found along the east and west coasts of Mexico and into parts of Texas. A shift towards more

  7. Bias Correction and Random Error Characterization for the Assimilation of HRDI Line-of-Sight Wind Measurements

    Science.gov (United States)

    Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. This method is carried out on measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures doppler shifts in the two different O2 absorption bands (alpha and B) and the retrieved products are tangent point Line-of-Sight wind component (level 2 retrieval) and UV winds (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line-of-sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme carried out on the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.

  8. Assessing the implementation of bias correction in the climate prediction

    Science.gov (United States)

    Nadrah Aqilah Tukimat, Nurul

    2018-04-01

    Climate change has become an increasingly pressing and irregular issue. The continuing increase of greenhouse gas (GHG) emissions into the atmosphere has a large impact on weather variability and global warming, making long-term analysis of climate parameters important. However, the accuracy of climate simulations is always questioned, as it controls the reliability of the projection results. Thus, Linear Scaling (LS) was applied as a bias correction (BC) method to treat the gaps between observed and simulated results. Two rainfall stations in Pahang state were selected: Station Lubuk Paku and Station Temerloh. The Statistical Downscaling Model (SDSM) was used to model the relationship between local weather and atmospheric parameters when projecting the long-term rainfall trend. The results revealed that LS successfully reduced the error by up to 3% and produced better simulated climate results.
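
    Linear Scaling itself is simple to state: monthly correction factors are the ratios of observed to simulated monthly means over a calibration period, applied multiplicatively to the simulated series. The sketch below assumes pandas Series of precipitation indexed by date; it illustrates the generic LS formulation rather than the authors' SDSM workflow.

```python
import pandas as pd

def linear_scaling_precip(obs, sim_hist, sim_future=None):
    """Multiplicative linear-scaling bias correction for precipitation.

    obs, sim_hist : observed and simulated precipitation Series over a common
                    calibration period, indexed by datetime.
    sim_future    : optional Series to which the same monthly factors are applied.
    """
    # Monthly correction factors: observed mean / simulated mean, month by month.
    factors = (obs.groupby(obs.index.month).mean()
               / sim_hist.groupby(sim_hist.index.month).mean())
    target = sim_hist if sim_future is None else sim_future
    scale = target.index.month.map(factors).to_numpy()
    return target * scale
```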

  9. Retrospective correction of bias in diffusion tensor imaging arising from coil combination mode.

    Science.gov (United States)

    Sakaie, Ken; Lowe, Mark

    2017-04-01

    To quantify and retrospectively correct for systematic differences in diffusion tensor imaging (DTI) measurements due to differences in coil combination mode. Multi-channel coils are now standard among MRI systems. There are several options for combining signal from multiple coils during image reconstruction, including sum-of-squares (SOS) and adaptive combine (AC). This contribution examines the bias between SOS- and AC-derived measures of tissue microstructure and a strategy for limiting that bias. Five healthy subjects were scanned under an institutional review board-approved protocol. Each set of raw image data was reconstructed twice: once with SOS and once with AC. The diffusion tensor was calculated from SOS- and AC-derived data by two algorithms: standard log-linear least squares and an approach that accounts for the impact of coil combination on signal statistics. Systematic differences between SOS and AC in terms of tissue microstructure (axial diffusivity, radial diffusivity, mean diffusivity and fractional anisotropy) were evaluated on a voxel-by-voxel basis. SOS-based tissue microstructure values are systematically lower than AC-based measures throughout the brain in each subject when using the standard tensor calculation method. The difference between SOS and AC can be virtually eliminated by taking into account the signal statistics associated with coil combination. The impact of coil combination mode on diffusion tensor-based measures of tissue microstructure is statistically significant but can be corrected retrospectively. The ability to do so is expected to facilitate pooling of data among imaging protocols. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    Science.gov (United States)

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimation one or a combination of the following strategies: adaptive correction for bias proposed by Bock and Mislevy (1982), adaptive a priori estimate, and adaptive integration interval.

  11. Joint deformable liver registration and bias field correction for MR-guided HDR brachytherapy.

    Science.gov (United States)

    Rak, Marko; König, Tim; Tönnies, Klaus D; Walke, Mathias; Ricke, Jens; Wybranski, Christian

    2017-12-01

    In interstitial high-dose rate brachytherapy, liver cancer is treated by internal radiation, requiring percutaneous placement of applicators within or close to the tumor. To maximize utility, the optimal applicator configuration is pre-planned on magnetic resonance images. The pre-planned configuration is then implemented via a magnetic resonance-guided intervention. Mapping the pre-planning information onto interventional data would reduce the radiologist's cognitive load during the intervention and could possibly minimize discrepancies between optimally pre-planned and actually placed applicators. We propose a fast and robust two-step registration framework suitable for interventional settings: first, we utilize a multi-resolution rigid registration to correct for differences in patient positioning (rotation and translation). Second, we employ a novel iterative approach alternating between bias field correction and Markov random field deformable registration in a multi-resolution framework to compensate for non-rigid movements of the liver, the tumors and the organs at risk. In contrast to existing pre-correction methods, our multi-resolution scheme can recover bias field artifacts of different extents at marginal computational costs. We compared our approach to deformable registration via B-splines, demons and the SyN method on 22 registration tasks from eleven patients. Results showed that our approach is more accurate than the contenders for liver as well as for tumor tissues. We yield average liver volume overlaps of 94.0 ± 2.7% and average surface-to-surface distances of 2.02 ± 0.87 mm and 3.55 ± 2.19 mm for liver and tumor tissue, respectively. The reported distances are close to (or even below) the slice spacing (2.5 - 3.0 mm) of our data. Our approach is also the fastest, taking 35.8 ± 12.8 s per task. The presented approach is sufficiently accurate to map information available from brachytherapy pre-planning onto interventional data. It

  12. "Racial bias in mock juror decision-making: A meta-analytic review of defendant treatment": Correction to Mitchell et al. (2005).

    Science.gov (United States)

    2017-06-01

    Reports an error in "Racial Bias in Mock Juror Decision-Making: A Meta-Analytic Review of Defendant Treatment" by Tara L. Mitchell, Ryann M. Haw, Jeffrey E. Pfeifer and Christian A. Meissner ( Law and Human Behavior , 2005[Dec], Vol 29[6], 621-637). In the article, all of the numbers in Appendix A were correct, but the signs were reversed for z' in a number of studies, which are listed. Also, in Appendix B, some values were incorrect, some signs were reversed, and some values were missing. The corrected appendix is included. (The following abstract of the original article appeared in record 2006-00971-001.) Common wisdom seems to suggest that racial bias, defined as disparate treatment of minority defendants, exists in jury decision-making, with Black defendants being treated more harshly by jurors than White defendants. The empirical research, however, is inconsistent--some studies show racial bias while others do not. Two previous meta-analyses have found conflicting results regarding the existence of racial bias in juror decision-making (Mazzella & Feingold, 1994, Journal of Applied Social Psychology, 24, 1315-1344; Sweeney & Haney, 1992, Behavioral Sciences and the Law, 10, 179-195). This research takes a meta-analytic approach to further investigate the inconsistencies within the empirical literature on racial bias in juror decision-making by defining racial bias as disparate treatment of racial out-groups (rather than focusing upon the minority group alone). Our results suggest that a small, yet significant, effect of racial bias in decision-making is present across studies, but that the effect becomes more pronounced when certain moderators are considered. The state of the research will be discussed in light of these findings. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. A quantitative approach to diagnosis and correction of organizational and programmatic issues

    International Nuclear Information System (INIS)

    Chiu, C.; Johnson, K.

    1997-01-01

    An integrated approach to diagnosis and correction of critical Organizational and Programmatic (O and P) issues is summarized, together with the quantitative special evaluations that are used to confirm the O and P issues identified by the periodic common cause analysis and integrated safety assessments.

  14. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    Science.gov (United States)

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
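
    The mechanics of SIMEX can be illustrated on a simple linear model (the paper itself targets Cox hazard ratios): noise with known variance is repeatedly added to the error-prone variable at increasing multiples lambda, the naive estimate is refit each time, and a quadratic trend in lambda is extrapolated back to lambda = -1, the no-error case. The sketch below is a generic illustration under those assumptions, not the authors' extension to mismeasured event times.

```python
import numpy as np

def simex_slope(y, w, error_var, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
    """SIMEX-corrected slope for a predictor w measured with known error variance."""
    rng = np.random.default_rng(seed)
    lam_grid = [0.0] + list(lambdas)
    naive = []
    for lam in lam_grid:
        fits = []
        for _ in range(n_sim if lam > 0 else 1):
            # Simulation step: inflate the measurement error by a factor (1 + lam).
            w_star = w + np.sqrt(lam * error_var) * rng.standard_normal(len(w))
            fits.append(np.polyfit(w_star, y, 1)[0])
        naive.append(np.mean(fits))
    # Extrapolation step: fit the bias trend in lam and evaluate at lam = -1.
    coeffs = np.polyfit(lam_grid, naive, 2)
    return np.polyval(coeffs, -1.0)
```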

  16. A method for additive bias correction in cross-cultural surveys

    DEFF Research Database (Denmark)

    Scholderer, Joachim; Grunert, Klaus G.; Brunsø, Karen

    2001-01-01

    Measurement bias in cross-cultural surveys can seriously threaten the validity of hypothesis tests. Direct comparisons of means depend on the assumption that differences in observed variables reflect differences in the underlying constructs, and not an additive bias that may be caused by cultural differences in the understanding of item wording or response category labels. However, experience suggests that additive bias can be found more often than not. Based on the concept of partial measurement invariance (Byrne, Shavelson and Muthén, 1989), the present paper develops a procedure for eliminating additive bias from cross-cultural data. The procedure involves four steps: (1) embed a potentially biased item in a factor-analytic measurement model, (2) test for the existence of additive bias between populations, (3) use the factor-analytic model to estimate the magnitude of the bias, and (4) replace ...

  17. A retrieval-based approach to eliminating hindsight bias.

    Science.gov (United States)

    Van Boekel, Martin; Varma, Keisha; Varma, Sashank

    2017-03-01

    Individuals exhibit hindsight bias when they are unable to recall their original responses to novel questions after correct answers are provided to them. Prior studies have eliminated hindsight bias by modifying the conditions under which original judgments or correct answers are encoded. Here, we explored whether hindsight bias can be eliminated by manipulating the conditions that hold at retrieval. Our retrieval-based approach predicts that if the conditions at retrieval enable sufficient discrimination of memory representations of original judgments from memory representations of correct answers, then hindsight bias will be reduced or eliminated. Experiment 1 used the standard memory design to replicate the hindsight bias effect in middle-school students. Experiments 2 and 3 modified the retrieval phase of this design, instructing participants beforehand that they would be recalling both their original judgments and the correct answers. As predicted, this enabled participants to form compound retrieval cues that discriminated original judgment traces from correct answer traces, and eliminated hindsight bias. Experiment 4 found that when participants were not instructed beforehand that they would be making both recalls, they did not form discriminating retrieval cues, and hindsight bias returned. These experiments delineate the retrieval conditions that produce, and fail to produce, hindsight bias.

  18. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer.

    Science.gov (United States)

    Fetterly, Kenneth A; Favazza, Christopher P

    2016-08-07

    Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d’) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in d’ estimates which were as much as 2.9× greater than expected of a quantum limited system. Over-estimation of d’ was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased index d’b is the sum of the detectability indices associated with the test object (d’o) and non-stationary noise (d’ns). Given the nature of the imaging system and the experimental methods, d’o cannot be directly determined independent of d’ns. However, methods to estimate d’ns independent of d’o were developed. In accordance with the theory, d’ns was subtracted from experimental estimates of d’b, providing an unbiased estimate of d’o. Estimates of d’o exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide d’ estimates which are accurate and precise for [Formula: see text]. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the

  19. Hydrological Modeling in Northern Tunisia with Regional Climate Model Outputs: Performance Evaluation and Bias-Correction in Present Climate Conditions

    Directory of Open Access Journals (Sweden)

    Asma Foughali

    2015-07-01

    This work aims to evaluate the performance of a hydrological balance model in a watershed located in northern Tunisia (wadi Sejnane, 378 km2) in present climate conditions using input variables provided by four regional climate models. A modified version (MBBH) of the lumped and single-layer surface model BBH (Bucket with Bottom Hole model), in which pedo-transfer parameters estimated using watershed physiographic characteristics are introduced, is adopted to simulate the water balance components. Only two parameters, representing respectively the water retention capacity of the soil and the vegetation resistance to evapotranspiration, are calibrated using rainfall-runoff data. The evaluation criteria for the MBBH model calibration are: relative bias, mean square error and the ratio of mean actual evapotranspiration to mean potential evapotranspiration. Daily air temperature, rainfall and runoff observations are available from 1960 to 1984. The period 1960–1971 is selected for calibration while the period 1972–1984 is chosen for validation. Air temperature and precipitation series are provided by four regional climate models (DMI, ARP, SMH and ICT) from the European program ENSEMBLES, forced by two global climate models (GCM): ECHAM and ARPEGE. The regional climate model outputs (precipitation and air temperature) are compared to the observations in terms of statistical distribution. The analysis was performed at the seasonal scale for precipitation. We found that RCM precipitation must be corrected before being introduced as MBBH inputs. Thus, a non-parametric quantile-quantile bias correction method together with a dry day correction is employed. Finally, simulated runoff generated using corrected precipitation from the regional climate model SMH is found the most acceptable, by comparison with runoff simulated using observed precipitation data, to reproduce the temporal variability of mean monthly runoff. The SMH model is the most accurate to
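
    The non-parametric quantile-quantile correction with a dry-day adjustment can be sketched as follows; the wet-day threshold and the implementation details are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np

def quantile_map_precip(sim_cal, obs_cal, sim_target, wet_threshold=0.1):
    """Empirical quantile-quantile bias correction for daily precipitation.

    A dry-day correction first zeroes the smallest simulated amounts so that the
    simulated wet-day frequency matches the observed one over the calibration period.
    """
    obs_wet_frac = np.mean(obs_cal > wet_threshold)
    dry_cut = np.quantile(sim_cal, 1.0 - obs_wet_frac)   # simulated dry/wet cutoff
    sim_wet = np.sort(sim_cal[sim_cal > dry_cut])
    obs_wet = np.sort(obs_cal[obs_cal > wet_threshold])

    corrected = np.zeros_like(sim_target, dtype=float)
    wet = sim_target > dry_cut
    # Map each wet simulated value through the empirical CDFs: F_obs^-1(F_sim(x)).
    probs = np.searchsorted(sim_wet, sim_target[wet]) / len(sim_wet)
    corrected[wet] = np.quantile(obs_wet, np.clip(probs, 0.0, 1.0))
    return corrected
```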

  20. Bias correction by use of errors-in-variables regression models in studies with K-X-ray fluorescence bone lead measurements.

    Science.gov (United States)

    Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard

    2011-01-01

    In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimates can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on the knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
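
    The core of the proposed correction is a reliability coefficient built from the instrument-reported uncertainties, which is then used to disattenuate the naive regression slope. Below is a minimal sketch with illustrative array names and a simple single-predictor OLS standing in for the full EIV machinery.

```python
import numpy as np

def reliability_corrected_slope(y, bone_pb, kxrf_uncertainty):
    """Correct an OLS slope for measurement error using instrument uncertainties.

    bone_pb          : KXRF bone-lead measurements (one per subject)
    kxrf_uncertainty : instrument-reported standard error of each measurement
    """
    error_var = np.mean(kxrf_uncertainty ** 2)             # mean measurement-error variance
    total_var = np.var(bone_pb, ddof=1)
    reliability = (total_var - error_var) / total_var      # Var(true) / Var(observed)
    naive_slope = np.polyfit(bone_pb, y, 1)[0]
    return naive_slope / reliability                       # disattenuated effect estimate
```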

  1. A forward bias method for lag correction of an a-Si flat panel detector

    International Nuclear Information System (INIS)

    Starman, Jared; Tognina, Carlo; Partain, Larry; Fahrig, Rebecca

    2012-01-01

    Purpose: Digital a-Si flat panel (FP) x-ray detectors can exhibit detector lag, or residual signal, of several percent that can cause ghosting in projection images or severe shading artifacts, known as the radar artifact, in cone-beam computed tomography (CBCT) reconstructions. A major contributor to detector lag is believed to be defect states, or traps, in the a-Si layer of the FP. Software methods to characterize and correct for the detector lag exist, but they may make assumptions such as system linearity and time invariance, which may not be true. The purpose of this work is to investigate a new hardware based method to reduce lag in an a-Si FP and to evaluate its effectiveness at removing shading artifacts in CBCT reconstructions. The feasibility of a novel, partially hardware based solution is also examined. Methods: The proposed hardware solution for lag reduction requires only a minor change to the FP. For pulsed irradiation, the proposed method inserts a new operation step between the readout and data collection stages. During this new stage the photodiode is operated in a forward bias mode, which fills the defect states with charge. A Varian 4030CB panel was modified to allow for operation in the forward bias mode. The contrast of residual lag ghosts was measured for lag frames 2 and 100 after irradiation ceased for standard and forward bias modes. Detector step response, lag, SNR, modulation transfer function (MTF), and detective quantum efficiency (DQE) measurements were made with standard and forward bias firmware. CBCT data of pelvic and head phantoms were also collected. Results: Overall, the 2nd and 100th detector lag frame residual signals were reduced 70%-88% using the new method. SNR, MTF, and DQE measurements show a small decrease in collected signal and a small increase in noise. The forward bias hardware successfully reduced the radar artifact in the CBCT reconstruction of the pelvic and head phantoms by 48%-81%. Conclusions: Overall, the

  2. "The impact of uncertain threat on affective bias: Individual differences in response to ambiguity": Correction.

    Science.gov (United States)

    2018-04-01

    Reports an error in "The impact of uncertain threat on affective bias: Individual differences in response to ambiguity" by Maital Neta, Julie Cantelon, Zachary Haga, Caroline R. Mahoney, Holly A. Taylor and F. Caroline Davis ( Emotion , 2017[Dec], Vol 17[8], 1137-1143). In this article, the copyright attribution was incorrectly listed under the Creative Commons CC-BY license due to production-related error. The correct copyright should be "In the public domain." The online version of this article has been corrected. (The following abstract of the original article appeared in record 2017-40275-001.) Individuals who operate under highly stressful conditions (e.g., military personnel and first responders) are often faced with the challenge of quickly interpreting ambiguous information in uncertain and threatening environments. When faced with ambiguity, it is likely adaptive to view potentially dangerous stimuli as threatening until contextual information proves otherwise. One laboratory-based paradigm that can be used to simulate uncertain threat is known as threat of shock (TOS), in which participants are told that they might receive mild but unpredictable electric shocks while performing an unrelated task. The uncertainty associated with this potential threat induces a state of emotional arousal that is not overwhelmingly stressful, but has widespread-both adaptive and maladaptive-effects on cognitive and affective function. For example, TOS is thought to enhance aversive processing and abolish positivity bias. Importantly, in certain situations (e.g., when walking home alone at night), this anxiety can promote an adaptive state of heightened vigilance and defense mobilization. In the present study, we used TOS to examine the effects of uncertain threat on valence bias, or the tendency to interpret ambiguous social cues as positive or negative. As predicted, we found that heightened emotional arousal elicited by TOS was associated with an increased tendency to

  3. Can bias correction and statistical downscaling methods improve the skill of seasonal precipitation forecasts?

    Science.gov (United States)

    Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.

    2018-02-01

    Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.

  4. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    Science.gov (United States)

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Removing Malmquist bias from linear regressions

    Science.gov (United States)

    Verter, Frances

    1993-01-01

    Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Those sources which can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but distorts the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. A demonstration of the corrected linear regression that is computed in four steps is presented.

  6. Potassium-based algorithm allows correction for the hematocrit bias in quantitative analysis of caffeine and its major metabolite in dried blood spots.

    Science.gov (United States)

    De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P

    2014-10-01

    Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K+) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K+ concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K+ concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K+-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
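
    The published algorithm and its coefficients come from the authors' reference subset, so the sketch below uses purely hypothetical numbers to show the three-step structure only: predict Hct from the K+ concentration, translate the predicted Hct into an expected relative bias, and divide that bias out of the measured DBS concentration.

```python
def correct_dbs_for_hematocrit(dbs_conc, k_mmol_l,
                               k_slope=0.11, k_intercept=-0.05,   # hypothetical K+ -> Hct fit
                               bias_slope=-1.2, ref_hct=0.36):    # hypothetical Hct -> bias fit
    """Correct a DBS caffeine concentration for the hematocrit bias via K+.

    All four coefficients are illustrative placeholders; in the study they are
    derived from a reference subset of paired whole-blood/DBS samples.
    """
    predicted_hct = k_slope * k_mmol_l + k_intercept          # step 1: predict Hct from K+
    relative_bias = bias_slope * (predicted_hct - ref_hct)    # step 2: expected relative bias
    return dbs_conc / (1.0 + relative_bias)                   # step 3: remove the bias
```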

  7. Moisture Forecast Bias Correction in GEOS DAS

    Science.gov (United States)

    Dee, D.

    1999-01-01

    Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated in an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
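
    The sequential idea can be reduced to a toy scalar sketch: track the systematic part of the observed-minus-forecast moisture difference with an exponentially weighted update, and subtract the current bias estimate from the forecast before the analysis step. This is a simplification for illustration, not the GEOS DAS implementation, and the gain value is an arbitrary assumption.

```python
def update_moisture_bias(bias_prev, forecast, observation, gain=0.05):
    """Sequentially update a forecast-bias estimate from one obs-minus-forecast pair.

    The bias is defined as forecast minus truth, so a persistent negative
    obs-minus-forecast difference pushes the bias estimate upward.
    """
    omf = observation - forecast
    return bias_prev - gain * (omf + bias_prev)

def debiased_forecast(forecast, bias_estimate):
    """Remove the current bias estimate from the forecast before the analysis."""
    return forecast - bias_estimate
```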

  8. Correcting biases in psychiatric diagnostic practice in Northwest Russia: Comparing the impact of a general educational program and a specific diagnostic training program

    Directory of Open Access Journals (Sweden)

    Rezvyy Grigory

    2008-04-01

    Background. A general education in psychiatry does not necessarily lead to good diagnostic skills. Specific training programs in diagnostic coding are established to facilitate implementation of ICD-10 coding practices. However, studies comparing the impact of these two different educational approaches on diagnostic skills are lacking. The aim of the current study was to find out whether a specific training program in diagnostic coding improves diagnostic skills better than a general education program, and whether a national bias in diagnostic patterns can be minimised by specific training in diagnostic coding. Methods. A pre-post design study with two groups was carried out in Arkhangelsk county, Russia. The control group (39 psychiatrists) took the required course (general educational program), while the intervention group (45 psychiatrists) were given specific training in diagnostic coding. Their diagnostic skills before and after education were assessed using 12 written case-vignettes selected from the entire spectrum of psychiatric disorders. Results. There was a significant improvement in diagnostic skills in both the intervention group and the control group. However, the intervention group improved significantly more than did the control group. The national bias was partly corrected in the intervention group but not to the same degree in the control group. When analyzing both groups together, among the background factors only the current working place impacted the outcome of the intervention. Conclusion. Establishing an internationally accepted diagnosis seems to be a special skill that requires specific training and needs to be an explicit part of the professional educational activities of psychiatrists. It does not appear that that skill is honed without specific training. The issue of national diagnostic biases should be taken into account in comparative cross-cultural studies of almost any character. The mechanisms of such biases are

  9. CPI Bias in Korea

    Directory of Open Access Journals (Sweden)

    Chul Chung

    2007-12-01

    We estimate the CPI bias in Korea by employing the approach of Engel’s Law as suggested by Hamilton (2001). This paper is the first attempt to estimate the bias using Korean panel data, the Korean Labor and Income Panel Study (KLIPS). Following Hamilton’s model with a nonlinear specification correction, our estimation result shows that the cumulative CPI bias over the sample period (2000-2005) was 0.7 percent annually. This CPI bias implies that about 21 percent of the inflation rate during the period can be attributed to the bias. In light of purchasing power parity, we provide an interpretation of the estimated bias.

  10. Reproducibility and day time bias correction of optoelectronic leg volumetry: a prospective cohort study.

    Science.gov (United States)

    Engelberger, Rolf P; Blazek, Claudia; Amsler, Felix; Keo, Hong H; Baumann, Frédéric; Blättler, Werner; Baumgartner, Iris; Willenberg, Torsten

    2011-10-05

    Leg edema is a common manifestation of various underlying pathologies. Reliable measurement tools are required to quantify edema and monitor therapeutic interventions. The aim of the present work was to investigate the reproducibility of optoelectronic leg volumetry over a 3-week period and to eliminate daytime-related within-individual variability. Optoelectronic leg volumetry was performed in 63 hairdressers (mean age 45 ± 16 years, 85.7% female) in standing position twice within a minute for each leg and repeated after 3 weeks. Both lower leg (legBD) and whole limb (limbBF) volumetry were analysed. Reproducibility was expressed as analytical and within-individual coefficients of variance (CVA, CVW), and as intra-class correlation coefficients (ICC). A total of 492 leg volume measurements were analysed. Both legBD and limbBF volumetry were highly reproducible with CVA of 0.5% and 0.7%, respectively. Within-individual reproducibility of legBD and limbBF volumetry over a three-week period was high (CVW 1.3% for both; ICC 0.99 for both). At both visits, the second measurement revealed a significantly higher volume compared to the first measurement, with a mean increase of 7.3 ± 14.1 ml (0.33% ± 0.58%) for legBD and 30.1 ± 48.5 ml (0.52% ± 0.79%) for limbBF volume. A significant linear correlation between absolute and relative leg volume differences and the difference in exact time of day of measurement between the two study visits was found; a time-correction formula permitted further improvement of CVW. Leg volume changes can be reliably assessed by optoelectronic leg volumetry at a single time point and over a 3-week period. However, volumetry results are biased by orthostatic and daytime-related volume changes. The bias from daytime-related volume changes can be minimized by a time-correction formula.

  11. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    Science.gov (United States)

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

    Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to the intensity inhomogeneity, which is also commonly known as bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them deal with the bias field in a separate pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even for images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of the noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstructed the energy function to be convex and solved it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate the bias of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

    Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
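
    Sample selection models of this kind are commonly estimated with the Heckman two-step procedure: a probit selection equation, followed by an outcome regression augmented with the inverse Mills ratio. The sketch below shows that generic estimator with statsmodels and SciPy; the variable layout is illustrative and the authors' exact specification is not reproduced.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(selected, Z, y, X):
    """Two-step sample-selection correction (probit + inverse Mills ratio).

    selected : 1 if the household's gas demand is observed, else 0 (1-D array)
    Z        : selection-equation covariates for all households (2-D array)
    y, X     : demand and demand-equation covariates, same row order as Z
    """
    Zc = sm.add_constant(Z)
    probit = sm.Probit(selected, Zc).fit(disp=0)        # step 1: selection equation
    xb = Zc @ probit.params
    imr = norm.pdf(xb) / norm.cdf(xb)                   # inverse Mills ratio

    sel = np.asarray(selected, dtype=bool)
    Xc = sm.add_constant(np.column_stack([X[sel], imr[sel]]))
    return sm.OLS(y[sel], Xc).fit()                     # step 2: demand equation + IMR term
```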

  13. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  14. Ensemble Kalman filter assimilation of temperature and altimeter data with bias correction and application to seasonal prediction

    Directory of Open Access Journals (Sweden)

    C. L. Keppenne

    2005-01-01

    Full Text Available To compensate for a poorly known geoid, satellite altimeter data is usually analyzed in terms of anomalies from the time mean record. When such anomalies are assimilated into an ocean model, the bias between the climatologies of the model and data is problematic. An ensemble Kalman filter (EnKF) is modified to account for the presence of a forecast-model bias and applied to the assimilation of TOPEX/Poseidon (T/P) altimeter data. The online bias correction (OBC) algorithm uses the same ensemble of model state vectors to estimate biased-error and unbiased-error covariance matrices. Covariance localization is used, but the bias covariances have different localization scales from the unbiased-error covariances, thereby accounting for the fact that the bias in a global ocean model could have much larger spatial scales than the random error. The method is applied to a 27-layer version of the Poseidon global ocean general circulation model with about 30 million state variables. Experiments in which T/P altimeter anomalies are assimilated show that the OBC reduces the RMS observation minus forecast difference for sea-surface height (SSH) over a similar EnKF run in which OBC is not used. Independent in situ temperature observations show that the temperature field is also improved. When the T/P data and in situ temperature data are assimilated in the same run and the configuration of the ensemble at the end of the run is used to initialize the ocean component of the GMAO coupled forecast model, seasonal SSH hindcasts made with the coupled model are generally better than those initialized with optimal interpolation of temperature observations without altimeter data. The analysis of the corresponding sea-surface temperature hindcasts is not as conclusive.
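
    The sketch below illustrates the general flavour of a bias-aware ensemble update in Python: a perturbed-observation EnKF analysis in which an online forecast-bias estimate, updated from the ensemble-mean innovation, is removed before the observations are used. It is a simplified stand-in with small synthetic dimensions; it does not reproduce the paper's separate biased-/unbiased-error covariances or covariance localization.

      import numpy as np

      rng = np.random.default_rng(42)
      n_state, n_obs, n_ens = 50, 10, 20

      # Observation operator: observe every fifth state variable.
      H = np.zeros((n_obs, n_state))
      H[np.arange(n_obs), np.arange(0, n_state, 5)] = 1.0
      R = 0.1 * np.eye(n_obs)  # observation-error covariance

      def enkf_update(X_f, y, bias, gamma=0.1):
          """Perturbed-observation EnKF analysis with a simple online bias estimate."""
          # Remove the current bias estimate from the forecast in observation space.
          Hx = H @ X_f - bias[:, None]
          A = X_f - X_f.mean(axis=1, keepdims=True)   # state anomalies
          HA = Hx - Hx.mean(axis=1, keepdims=True)    # observation-space anomalies

          K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(HA @ HA.T / (n_ens - 1) + R)

          # One perturbed-observation set per ensemble member.
          Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
          X_a = X_f + K @ (Y - Hx)

          # Relax the bias estimate towards (forecast mean minus observation).
          innovation = y - (H @ X_f).mean(axis=1)
          bias = (1.0 - gamma) * bias - gamma * innovation
          return X_a, bias

      # Example usage with a forecast that carries a constant +0.5 bias.
      X_f = rng.normal(size=(n_state, n_ens)) + 0.5
      y = rng.normal(size=n_obs)
      X_a, bias = enkf_update(X_f, y, bias=np.zeros(n_obs))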

  15. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    Science.gov (United States)

    Arumugam, S.; Libera, D.

    2017-12-01

    Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, making calibration and validation of mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast U.S., based on split-sample validation. The approach uses a dimension-reduction technique, canonical correlation analysis (CCA), which regresses the observed multivariate attributes on the SWAT model simulated values. The common approach is a regression-based technique that uses ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation while simultaneously reducing individual biases. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of observed streamflow and loadings. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
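
    A small Python sketch of the core idea, assuming paired monthly model and observed values of streamflow and TN load: scikit-learn's CCA relates the simulated and observed matrices, and new simulations are then mapped into the observed space. The data, the two-component choice, and the column layout are illustrative, not taken from the study.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(1)

      # Columns: [streamflow, TN load]; rows: months in the calibration period.
      sim_cal = np.exp(rng.normal([2.0, 0.5], 0.4, size=(120, 2)))
      obs_cal = 0.8 * sim_cal * np.exp(rng.normal(0.0, 0.1, size=(120, 2)))

      # Fit the canonical relationship between simulated and observed attributes.
      cca = CCA(n_components=2)
      cca.fit(sim_cal, obs_cal)

      # Bias-correct a new block of simulations by predicting observed-space values,
      # which adjusts both variables jointly rather than one at a time.
      sim_new = np.exp(rng.normal([2.0, 0.5], 0.4, size=(24, 2)))
      corrected = cca.predict(sim_new)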

  16. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    Science.gov (United States)

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington, in 2007 we observe a correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method, which can also be extended for alternate models.
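
    The model-based (parametric) bootstrap correction can be illustrated generically in Python: simulate replicate datasets from the fitted model, estimate the bias of the point estimator from the replicates, subtract it, and build the interval from the same replicates. The toy exponential-rate estimator below merely stands in for the HIV incidence model, which is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(7)

      # Toy data and estimator: the MLE of an exponential rate is biased upward.
      data = rng.exponential(scale=1 / 0.3, size=25)
      theta_hat = 1.0 / data.mean()

      # Model-based bootstrap: refit the estimator on data simulated from the fitted model.
      B = 2000
      theta_star = np.array([1.0 / rng.exponential(scale=1 / theta_hat, size=data.size).mean()
                             for _ in range(B)])

      bias = theta_star.mean() - theta_hat
      theta_bc = theta_hat - bias                    # bias-corrected point estimate
      lo, hi = np.percentile(theta_star - bias, [2.5, 97.5])
      print(f"raw {theta_hat:.3f}, corrected {theta_bc:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")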

  17. Assessing the extent of non-stationary biases in GCMs

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-06-01

    General circulation models (GCMs) are the main tools for estimating changes in the climate for the future. The imperfect representation of climate in these models introduces biases in the simulations that need to be corrected prior to their use for impact assessments. Bias correction methods generally assume that the bias calculated over the historical period does not change and can be applied to the future. This study investigates this assumption by considering the extent and nature of bias non-stationarity using 20th century precipitation and temperature simulations from six CMIP5 GCMs across Australia. Four statistics (mean, standard deviation, 10th and 90th quantiles) of the monthly and seasonal biases are obtained for three different time window lengths (10, 25 and 33 years) to examine the properties of the bias over time. This approach is repeated for the two phases of the Interdecadal Pacific Oscillation (IPO), which is known to have a strong influence on the Australian climate. It is found that bias non-stationarity at decadal timescales is indeed an issue over some of Australia for some GCMs. When considering interdecadal variability, there are significant differences in the bias between positive and negative phases of the IPO. Regional analyses confirmed these findings, with the largest differences seen on the east coast of Australia, where IPO impacts tend to be the strongest. The nature of the bias non-stationarity found in this study suggests that it will be difficult to modify existing bias correction approaches to account for non-stationary biases. A more practical approach for impact assessments that use bias correction may be to use a selection of GCMs for which the assumption of bias stationarity holds.
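
    The kind of bookkeeping described above can be sketched in a few lines of Python: given co-located monthly model and observed series, the bias is grouped into non-overlapping windows of the three lengths used in the study and summarized with the same four statistics. The synthetic series and single-grid-cell setup are placeholders.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(3)

      # Monthly series for one grid cell over 1900-1999 (synthetic stand-ins).
      idx = pd.date_range("1900-01", periods=1200, freq="MS")
      obs = pd.Series(50 + 20 * rng.standard_normal(1200), index=idx)
      gcm = obs + 5 + 3 * np.sin(np.arange(1200) / 120)   # a slowly drifting bias
      bias = gcm - obs

      def window_stats(series, years):
          """Mean, std and 10th/90th quantiles of the bias in non-overlapping windows."""
          grouped = series.groupby(series.index.year // years * years)
          return pd.DataFrame({
              "mean": grouped.mean(),
              "std": grouped.std(),
              "q10": grouped.quantile(0.10),
              "q90": grouped.quantile(0.90),
          })

      for length in (10, 25, 33):
          print(f"--- {length}-year windows ---")
          print(window_stats(bias, length))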

  18. Short Tree, Long Tree, Right Tree, Wrong Tree: New Acquisition Bias Corrections for Inferring SNP Phylogenies.

    Science.gov (United States)

    Leaché, Adam D; Banbury, Barbara L; Felsenstein, Joseph; de Oca, Adrián Nieto-Montes; Stamatakis, Alexandros

    2015-11-01

    Single nucleotide polymorphisms (SNPs) are useful markers for phylogenetic studies owing in part to their ubiquity throughout the genome and ease of collection. Restriction site associated DNA sequencing (RADseq) methods are becoming increasingly popular for SNP data collection, but an assessment of the best practices for using these data in phylogenetics is lacking. We use computer simulations, and new double digest RADseq (ddRADseq) data for the lizard family Phrynosomatidae, to investigate the accuracy of RAD loci for phylogenetic inference. We compare the two primary ways RAD loci are used during phylogenetic analysis: the analysis of full sequences (i.e., SNPs together with invariant sites), and the analysis of SNPs on their own after excluding invariant sites. We find that using full sequences rather than just SNPs is preferable from the perspectives of branch length and topological accuracy, but not of computational time. We introduce two new acquisition bias corrections for dealing with alignments composed exclusively of SNPs, a conditional likelihood method and a reconstituted DNA approach. The conditional likelihood method conditions on the presence of variable characters only (the number of invariant sites that are unsampled but known to exist is not considered), while the reconstituted DNA approach requires the user to specify the exact number of unsampled invariant sites prior to the analysis. Under simulation, branch length biases increase with the amount of missing data for both acquisition bias correction methods, but branch length accuracy is much improved in the reconstituted DNA approach compared to the conditional likelihood approach. Phylogenetic analyses of the empirical data using concatenation or a coalescent-based species tree approach provide strong support for many of the accepted relationships among phrynosomatid lizards, suggesting that RAD loci contain useful phylogenetic signal across a range of divergence times despite the

  19. Bias correction of daily precipitation projected by the CORDEX-Africa ensemble for a sparsely gauged region in West Africa with regionalized distribution parameters

    Science.gov (United States)

    Lorenz, Manuel; Bliefernicht, Jan; Laux, Patrick; Kunstmann, Harald

    2017-04-01

    Reliable estimates of future climatic conditions are indispensable for the sustainable planning of agricultural activities in West Africa. Precipitation time series of regional climate models (RCMs) typically exhibit a bias in the distribution of both rainfall intensities and wet-day frequencies. Furthermore, the annual and monthly sums of precipitation may differ remarkably from the observations in this region. As West Africa experiences a distinct rainy season, sowing dates are oftentimes planned based on the beginning of this rainfall period. A biased representation of the annual cycle of precipitation in the uncorrected RCMs can therefore lead to crop failure. The precipitation ensemble, obtained from the Coordinated Downscaling Experiment CORDEX-Africa, was bias-corrected for the study region in West Africa (extending approximately 343,358 km²), which covers large parts of Burkina Faso, Ghana and Benin. In order to debias the RCM precipitation simulations, a Quantile-Mapping method was applied to the historical period 1950-2005. For the RCM future projections (2006-2100), the Double-Quantile-Mapping procedure was chosen. This method makes use of the shift in the distribution function of the future precipitation values, which allows the climate change signal of the RCM projections to be incorporated into the bias correction. As large areas of the study region are ungauged, assigning the information from the nearest station to an ungauged location would lead to sharp changes in the estimated statistics from one location to another. Thus, the distribution parameters needed for the Quantile-Mapping were estimated by kriging the distribution parameters of the available measurement stations. In this way it is possible to obtain reasonable estimates of the expected distribution of precipitation at ungauged locations. The presentation will illustrate some aspects and trade-offs of the distribution parameter interpolation as well as an analysis of the uncertainties of the
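
    A minimal Python sketch of gamma-based quantile mapping for wet-day precipitation: gamma distributions are fitted to observed and simulated intensities over a calibration period, and simulated amounts are mapped through the two CDFs. The wet-day threshold and data are assumed for illustration, and the kriging of distribution parameters to ungauged locations is omitted.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      WET = 0.1   # assumed wet-day threshold in mm/day

      # Synthetic daily precipitation for the calibration period.
      obs = rng.gamma(shape=0.8, scale=10.0, size=5000)
      mod = rng.gamma(shape=0.6, scale=15.0, size=5000)

      # Fit gamma distributions to wet-day intensities only (location fixed at 0).
      a_o, _, s_o = stats.gamma.fit(obs[obs > WET], floc=0)
      a_m, _, s_m = stats.gamma.fit(mod[mod > WET], floc=0)

      def quantile_map(x):
          """Map simulated wet-day amounts onto the observed distribution."""
          out = np.zeros_like(x)
          wet = x > WET
          u = stats.gamma.cdf(x[wet], a_m, scale=s_m)     # non-exceedance in model space
          out[wet] = stats.gamma.ppf(u, a_o, scale=s_o)   # same quantile in observed space
          return out

      corrected = quantile_map(rng.gamma(shape=0.6, scale=15.0, size=365))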

  20. Variations in the Reported Age of a Patient: A Source of Bias in the Diagnosis of Depression and Dementia.

    Science.gov (United States)

    Perlick, Deborah; Atkins, Alvin

    1984-01-01

    Varied the reported age of patients (N=36) presented to clinicians for diagnosis. Results indicated the presence of a bias, with a greater attribution of organic symptoms reflective of senile dementia and fewer judgments of depression when a patient is described as elderly as opposed to middle-aged. (LLL)

  1. Correct diagnosis of vascular encasement and longitudinal extension of hilar cholangiocarcinoma by four-channel multidetector-row computed tomography

    International Nuclear Information System (INIS)

    Okumoto, Tadayuki; Yamada, Takayuki; Sato, Akihiro

    2009-01-01

    Accurate diagnosis of local invasion of hilar cholangiocarcinomas is challenging due to their small size and the anatomic complexity of the hepatic hilar region. On the other hand, the correct diagnosis of local invasion is essential for assessing the possibility of curative surgery. The purpose of this study was to evaluate the feasibility of four-channel multidetector-row computed tomography (MDCT) in the assessment of vascular and bile duct involvement, by which useful information for the surgical management of hilar cholangiocarcinoma could be obtained. The subjects were 18 patients for whom the extent of tumor invasion was surgically and pathologically confirmed. All patients underwent preoperative multiphasic CT scanning by MDCT. Arterial and portal dominant phases were acquired using a detector configuration of 1.25 mm × 4 mm, and both axial and multiplanar reconstructed images were interpreted. Longitudinal extension was evaluated up to second-order branches. Vascular invasion was assessed as the degree of tumor contiguity with the hepatic arteries and portal vein and was graded on CT. The longitudinal extension was correctly diagnosed in 14 patients (77.8%). Hepatic artery invasion was correctly diagnosed in 17 patients, with a sensitivity of 100% and a specificity of 90%. Portal vein invasion was correctly diagnosed in 47 of 51 branches, with a sensitivity and specificity of 92.3% and 90.2%, respectively. Multiplanar reconstructed images contributed to the correct diagnosis of both vascular encasement and longitudinal tumor extension. In conclusion, MDCT is useful in the preoperative evaluation of hilar cholangiocarcinoma, especially when combined with multiplanar reconstructed images. (author)

  2. Bias-field equalizer for bubble memories

    Science.gov (United States)

    Keefe, G. E.

    1977-01-01

    A magnetoresistive Permalloy sensor monitors the bias field required to maintain the bubble memory. The sensor provides an error signal that, in turn, corrects the magnitude of the bias field. The error signal can be used to control the bias-field magnitude either through an auxiliary set of bias-field coils around the permanent magnet, or through the current in small coils used to remagnetize the permanent magnet with an infrequent, short, high-current pulse or a short sequence of pulses.

  3. Internal correction of spectral interferences and mass bias for selenium metabolism studies using enriched stable isotopes in combination with multiple linear regression.

    Science.gov (United States)

    Lunøe, Kristoffer; Martínez-Sierra, Justo Giner; Gammelgaard, Bente; Alonso, J Ignacio García

    2012-03-01

    The analytical methodology for the in vivo study of selenium metabolism using two enriched selenium isotopes has been modified, allowing for the internal correction of spectral interferences and mass bias both for total selenium and speciation analysis. The method is based on the combination of an already described dual-isotope procedure with a new data treatment strategy based on multiple linear regression. A metabolic enriched isotope ((77)Se) is given orally to the test subject and a second isotope ((74)Se) is employed for quantification. In our approach, all possible polyatomic interferences occurring in the measurement of the isotope composition of selenium by collision cell quadrupole ICP-MS are taken into account and their relative contribution calculated by multiple linear regression after minimisation of the residuals. As a result, all spectral interferences and mass bias are corrected internally allowing the fast and independent quantification of natural abundance selenium ((nat)Se) and enriched (77)Se. In this sense, the calculation of the tracer/tracee ratio in each sample is straightforward. The method has been applied to study the time-related tissue incorporation of (77)Se in male Wistar rats while maintaining the (nat)Se steady-state conditions. Additionally, metabolically relevant information such as selenoprotein synthesis and selenium elimination in urine could be studied using the proposed methodology. In this case, serum proteins were separated by affinity chromatography while reverse phase was employed for urine metabolites. In both cases, (74)Se was used as a post-column isotope dilution spike. The application of multiple linear regression to the whole chromatogram allowed us to calculate the contribution of bromine hydride, selenium hydride, argon polyatomics and mass bias on the observed selenium isotope patterns. By minimising the square sum of residuals for the whole chromatogram, internal correction of spectral interferences and mass
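
    The multiple-linear-regression step can be pictured with a small Python sketch: the measured intensities at the monitored Se masses are expressed as a linear combination of the natural-abundance pattern, the spike patterns and candidate interference profiles, and the contributions are obtained by least squares. All numbers in the pattern matrix and the measurement vector are placeholders; real abundance and interference profiles would have to be supplied.

      import numpy as np

      # Measured ICP-MS intensities at the monitored masses (placeholder values).
      masses = [74, 76, 77, 78, 80, 82]
      measured = np.array([1.0e4, 9.5e4, 8.3e4, 2.4e5, 5.1e5, 9.0e4])

      # Columns: contribution patterns at the same masses, in the same order.
      # Each column would hold the known relative isotope abundances of natSe,
      # the enriched 77Se and 74Se spikes, and modelled interference profiles
      # (e.g. argon polyatomics, hydrides); the numbers here are placeholders.
      patterns = np.array([
          #  natSe   77Se   74Se   interference
          [0.009,  0.001, 0.990, 0.00],
          [0.094,  0.002, 0.005, 0.30],
          [0.076,  0.990, 0.001, 0.05],
          [0.238,  0.004, 0.002, 0.40],
          [0.496,  0.002, 0.001, 0.20],
          [0.087,  0.001, 0.001, 0.05],
      ])

      # Least-squares estimate of each component's contribution; the residuals can
      # be inspected to judge whether the interference model is adequate.
      coef, residuals, rank, _ = np.linalg.lstsq(patterns, measured, rcond=None)
      for name, c in zip(["natSe", "77Se spike", "74Se spike", "interference"], coef):
          print(f"{name:>14s}: {c:10.1f} counts")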

  4. The Bias of the Gini Coefficient due to Grouping

    NARCIS (Netherlands)

    T.G.M. van Ourti (Tom); Ph. Clarke (Philip)

    2008-01-01

    We propose a first order bias correction term for the Gini index to reduce the bias due to grouping. The first order correction term is obtained from studying the estimator of the Gini index within a measurement error framework. In addition, it reveals an intuitive formula for the

  5. Climate projections and extremes in dynamically downscaled CMIP5 model outputs over the Bengal delta: a quartile based bias-correction approach with new gridded data

    Science.gov (United States)

    Hasan, M. Alfi; Islam, A. K. M. Saiful; Akanda, Ali Shafqat

    2017-11-01

    In the era of global warming, insight into the future climate and its changing extremes is critical for climate-vulnerable regions of the world. In this study, we have conducted a robust assessment of Regional Climate Model (RCM) results in a monsoon-dominated region within the new Coupled Model Intercomparison Project Phase 5 (CMIP5) and the latest Representative Concentration Pathways (RCP) scenarios. We have applied an advanced bias correction approach to five RCM simulations in order to project future climate and associated extremes over Bangladesh, a critically climate-vulnerable country with a complex monsoon system. We have also generated a new gridded product that performed better in capturing observed climatic extremes than existing products. The bias-correction approach provided a notable improvement in capturing precipitation extremes as well as the mean climate. The majority of the projected multi-model RCMs indicate an increase in rainfall, while one model shows contrary results during the 2080s (2071-2100). The multi-model mean shows that nighttime temperatures will increase much faster than daytime temperatures and that average annual temperatures are projected to be as hot as present-day summer temperatures. The expected increases in precipitation and temperature over the hilly areas are higher compared to other parts of the country. Overall, the projected extremes of future rainfall are more variable than those of temperature. According to the majority of the models, the number of heavy rainy days will increase in future years. The severity of summer-day temperatures will be alarming, especially over hilly regions, where winters are relatively warm. The projected rise of both precipitation and temperature extremes over the intense rainfall-prone northeastern region of the country creates a possibility of devastating flash floods with harmful impacts on agriculture. Moreover, the effect of bias-correction, as presented in probable changes of both bias-corrected

  6. Correction of Stock Beta Bias on the Indonesia Stock Exchange, 2009-2012 Period

    Directory of Open Access Journals (Sweden)

    Indah Saptorini

    2016-04-01

    Full Text Available This study aims to determine whether the betas of shares listed on the Indonesia Stock Exchange (BEI) are biased due to nonsynchronous trading. The sample comprises 310 companies listed on the exchange over the 2009-2012 period. The bias needs to be corrected, and three correction methods are employed: Scholes and Williams (1977), Dimson (1979), and Fowler and Rorke (1983). The analysis concludes that shares on the exchange have biased betas caused by securities not trading for some time, which biases the calculation of the IHSG index for period t because it uses closing prices from period t-1. In this study the Scholes and Williams (1977) correction, with both one lag and one lead and with two lags and two leads, performs better than the Dimson (1979) and Fowler and Rorke (1983) corrections because the corrected Scholes and Williams betas are close to one. Keywords: nonsynchronous trading, thin trading, bias
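
    A short Python sketch of the Scholes and Williams (1977) one-lag, one-lead correction, which the study found to work best: three OLS slopes on the lagged, contemporaneous and leading market return are combined and scaled by the market return's first-order autocorrelation. The return series below are synthetic placeholders for the stock and the IHSG index.

      import numpy as np

      rng = np.random.default_rng(11)

      # Synthetic daily returns; in practice use the stock and the IHSG index.
      rm = rng.normal(0, 0.01, 500)                    # market return
      ri = 0.8 * rm + rng.normal(0, 0.01, 500)         # stock return

      def ols_slope(y, x):
          x = x - x.mean()
          return np.dot(x, y - y.mean()) / np.dot(x, x)

      # Slopes on the lagged, contemporaneous and leading market return.
      b_lag  = ols_slope(ri[1:],  rm[:-1])
      b_0    = ols_slope(ri,      rm)
      b_lead = ols_slope(ri[:-1], rm[1:])

      # First-order autocorrelation of the market return.
      rho1 = np.corrcoef(rm[:-1], rm[1:])[0, 1]

      beta_sw = (b_lag + b_0 + b_lead) / (1 + 2 * rho1)
      print(f"Scholes-Williams beta: {beta_sw:.3f}")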

  7. Bias Correction for Assimilation of Retrieved AIRS Profiles of Temperature and Humidity

    Science.gov (United States)

    Blakenship, Clay; Zavodsky, Bradley; Blackwell, William

    2014-01-01

    The Atmospheric Infrared Sounder (AIRS) is a hyperspectral radiometer aboard NASA's Aqua satellite designed to measure atmospheric profiles of temperature and humidity. AIRS retrievals are assimilated into the Weather Research and Forecasting (WRF) model over the North Pacific for some cases involving "atmospheric rivers". These events bring a large flux of water vapor to the west coast of North America and often lead to extreme precipitation in the coastal mountain ranges. An advantage of assimilating retrievals rather than radiances is that information in partly cloudy fields of view can be used. Two different Level 2 AIRS retrieval products are compared: the Version 6 AIRS Science Team standard retrievals and a neural net retrieval from MIT. Before assimilation, a bias correction is applied to adjust each layer of retrieved temperature and humidity so that the layer mean values agree with a short-term model climatology. WRF runs assimilating each of the products are compared against each other and against a control run with no assimilation. Forecasts are evaluated against ERA reanalyses.
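
    The layer-wise adjustment described above amounts to a simple shift of each retrieved level so that its sample mean matches the model climatology, as in this Python sketch; the array shapes and climatology values are illustrative only.

      import numpy as np

      rng = np.random.default_rng(2)

      n_profiles, n_layers = 200, 30
      retrieved = 250 + rng.normal(0, 5, size=(n_profiles, n_layers))   # retrieved temperature, K
      model_clim = 250 + np.linspace(-2, 2, n_layers)                   # layer-mean climatology, K

      # Layer-by-layer bias: mean retrieval minus climatology.
      layer_bias = retrieved.mean(axis=0) - model_clim

      # Apply the same additive correction to every profile before assimilation.
      corrected = retrieved - layer_bias
      assert np.allclose(corrected.mean(axis=0), model_clim)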

  8. Uncertainty estimation with bias-correction for flow series based on rating curve

    Science.gov (United States)

    Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta

    2014-03-01

    Streamflow discharge constitutes one of the fundamental data required to perform water balance studies and develop hydrological models. A rating curve, designed on the basis of a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available, even though it has a significant impact on hydrological modelling. In this paper, we identify discrepancies in the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction, which, like the traditional rating curve, takes the form of a power function. In order to obtain the uncertainty estimation, we propose a further Box-Cox transformation applied to both sides of the rating relationship to bring the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to the gauging stations on the Flinders and Gilbert rivers in north-west Queensland, Australia.
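
    A condensed Python sketch of the general workflow, under simplifying assumptions (no cease-to-flow offset, a single residual standard deviation): a power-law rating curve is fitted in log space, discharge is Box-Cox transformed on both sides of the relationship to stabilize the residuals, and an ensemble of discharge series is generated by resampling transformed residuals. It is not the authors' exact formulation.

      import numpy as np
      from scipy import stats, special

      rng = np.random.default_rng(9)

      # Concurrent stage (m) and discharge (m3/s) gaugings (synthetic placeholders).
      stage = rng.uniform(0.5, 4.0, 80)
      discharge = 5.0 * stage ** 1.8 * np.exp(rng.normal(0, 0.1, 80))

      # Power-law rating curve Q = a * h^b, fitted as a straight line in log space.
      b, log_a = np.polyfit(np.log(stage), np.log(discharge), 1)
      a = np.exp(log_a)

      # Box-Cox transform discharge on both sides of the regression so the
      # residuals are closer to Gaussian, then characterize their spread.
      q_obs_bc, lam = stats.boxcox(discharge)
      q_fit_bc = special.boxcox(a * stage ** b, lam)
      sigma = (q_obs_bc - q_fit_bc).std(ddof=1)

      # Ensemble of plausible discharges for a new stage record.
      new_stage = rng.uniform(0.5, 4.0, 365)
      fitted_bc = special.boxcox(a * new_stage ** b, lam)
      ensemble = special.inv_boxcox(
          fitted_bc + rng.normal(0, sigma, size=(100, 365)), lam
      )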

  9. Effect of Malmquist bias on correlation studies with IRAS data base

    Science.gov (United States)

    Verter, Frances

    1993-01-01

    The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. Linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B), as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method invented here, in which each galaxy observation is weighted by its sampling volume. The correlation and regression results for the sample change significantly in the anticipated sense: the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias removes the nonlinear rise in luminosity that has led some authors to hypothesize additional components of FIR emission.
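
    A brief Python sketch of the inverse-volume weighting used for the correlation correction above: each galaxy enters the correlation with weight inversely proportional to the volume over which it could have been detected in the flux-limited sample. The luminosities and Vmax values are synthetic placeholders.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic log-luminosities and the maximum volume within which each galaxy
      # would still exceed the survey flux limit (placeholder values).
      log_L_fir = rng.normal(10.0, 0.5, 300)
      log_L_b   = 0.7 * log_L_fir + rng.normal(0, 0.3, 300)
      v_max     = 10 ** rng.uniform(2.0, 4.0, 300)       # Mpc^3

      def weighted_corr(x, y, w):
          """Pearson correlation with per-object weights."""
          w = w / w.sum()
          mx, my = np.sum(w * x), np.sum(w * y)
          cov = np.sum(w * (x - mx) * (y - my))
          return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))

      r_raw = np.corrcoef(log_L_fir, log_L_b)[0, 1]
      r_corrected = weighted_corr(log_L_fir, log_L_b, 1.0 / v_max)
      print(f"unweighted r = {r_raw:.3f}, 1/Vmax-weighted r = {r_corrected:.3f}")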

  10. Simulating publication bias

    DEFF Research Database (Denmark)

    Paldam, Martin

    Economic research typically runs J regressions for each result selected for publication – the published result is often selected as the 'best' of the J regressions. The paper examines five possible meanings of the word 'best': SR0 is ideal selection with no bias; SR1 is polishing: selection by statistical fit; SR2 is censoring: selection by the size of the estimate; SR3 selects the optimal combination of fit and size; and SR4 selects the first satisficing result. The last four SRs are steered by priors and result in bias. The MST and the FAT-PET have been developed for detection and correction of such bias. The simulations are made by data variation, while the model is the same. It appears that SR0 generates narrow funnels much at odds with observed funnels, while the other four funnels look more realistic. SR1 to SR4 give the mean a substantial bias that confirms the prior causing the bias. The FAT-PET MRA works well...

  11. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    Science.gov (United States)

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis technique, particularly due to the presence of intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It integrates judiciously the concept of rough sets and the merit of a novel probability distribution, called stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of brain MR image is modeled as a mixture of finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  12. The relative contributions of scatter and attenuation corrections toward improved brain SPECT quantification

    International Nuclear Information System (INIS)

    Stodilka, Robert Z.; Msaki, Peter; Prato, Frank S.; Nicholson, Richard L.; Kemp, B.J.

    1998-01-01

    Mounting evidence indicates that scatter and attenuation are major confounds to objective diagnosis of brain disease by quantitative SPECT. There is considerable debate, however, as to the relative importance of scatter correction (SC) and attenuation correction (AC), and how they should be implemented. The efficacy of SC and AC for 99mTc brain SPECT was evaluated using a two-compartment fully tissue-equivalent anthropomorphic head phantom. Four correction schemes were implemented: uniform broad-beam AC, non-uniform broad-beam AC, uniform SC+AC, and non-uniform SC+AC. SC was based on non-stationary deconvolution scatter subtraction, modified to incorporate a priori knowledge of either the head contour (uniform SC) or the transmission map (non-uniform SC). The quantitative accuracy of the correction schemes was evaluated in terms of contrast recovery, relative quantification (cortical:cerebellar activity), uniformity ((coefficient of variation of 230 macro-voxels) × 100%), and bias (relative to a calibration scan). Our results were: uniform broad-beam (μ = 0.12 cm⁻¹) AC (the most popular correction): 71% contrast recovery, 112% relative quantification, 7.0% uniformity, +23% bias. Non-uniform broad-beam (soft tissue μ = 0.12 cm⁻¹) AC: 73%, 114%, 6.0%, +21%, respectively. Uniform SC+AC: 90%, 99%, 4.9%, +12%, respectively. Non-uniform SC+AC: 93%, 101%, 4.0%, +10%, respectively. SC and AC together achieved the best quantification; however, non-uniform corrections produce only small improvements over their uniform counterparts. SC+AC was found to be superior to AC alone; this advantage is distinct and consistent across all four quantification indices. (author)

  13. Variability in Depressive Symptoms of Cognitive Deficit and Cognitive Bias During the First 2 Years After Diagnosis in Australian Men With Prostate Cancer.

    Science.gov (United States)

    Sharpley, Christopher F; Bitsika, Vicki; Christie, David R H

    2016-01-01

    The incidence and contribution to total depression of the depressive symptoms of cognitive deficit and cognitive bias in prostate cancer (PCa) patients were compared from cohorts sampled during the first 2 years after diagnosis. Survey data were collected from 394 patients with PCa, including background information, treatments, and disease status, plus total scores of depression and scores for subscales of the depressive symptoms of cognitive bias and cognitive deficit via the Zung Self-Rating Depression Scale. The sample was divided into eight 3-monthly time-since-diagnosis cohorts and according to depression severity. Mean scores for the depressive symptoms of cognitive deficit were significantly higher than those for cognitive bias for the whole sample, but the contribution of cognitive bias to total depression was stronger than that for cognitive deficit. When divided according to overall depression severity, patients with clinically significant depression showed reversed patterns of association between the two subsets of cognitive symptoms of depression and total depression compared with those patients who reported less severe depression. Differences in the incidence and contribution of these two different aspects of the cognitive symptoms of depression for patients with more severe depression argue for consideration of them when assessing and diagnosing depression in patients with PCa. Treatment requirements are also different between the two types of cognitive symptoms of depression, and several suggestions for matching treatment to illness via a personalized medicine approach are discussed. © The Author(s) 2014.

  14. Evaluation of correct diagnosis of referral patients to skin clinic by family physicians: A needs assessment for UME, CME

    Directory of Open Access Journals (Sweden)

    A Ramezanpour

    2009-08-01

    Full Text Available Background and purpose: The level of welfare and development of nations is judged by the progress and achievements of their health service networks. The specialization of therapeutic approaches is one practical and effective way to accomplish this goal, and health-system experts regard the family physician program as the savior of the health sector. This study aimed to evaluate the accuracy of the diagnoses of dermatologic diseases in patients referred by family physicians to the Zanjan Valiasr hospital in 2008. Methods: This descriptive, cross-sectional study was performed on 173 patients referred by village family physicians to the dermatology clinic. After definitive diagnosis by a dermatologist, data including age, sex, the family physician's diagnosis and the dermatologist's diagnosis were recorded on data forms and analyzed with the chi-square test. Results: Of the 173 referred patients, 76 (43.9%) were male, 49 (28.3%) were under 15 years old, 73 (42.2%) were between 15 and 30 years old, and the rest were older than 30. Twenty-eight cases (16.1%) had been referred with a correct diagnosis. Conclusion: The rate of accurate diagnosis by family physicians was low, which may be due to unfamiliarity with common local skin diseases and a lack of adequate instruction and education before the start of the family physician project. We recommend that specialist workshops be provided for family physicians before such a project is launched. Key words: Family Physician, Skin Diseases, Needs Assessment

  15. A weighted least-squares lump correction algorithm for transmission-corrected gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.

    1993-01-01

    With transmission-corrected gamma-ray nondestructive assay instruments such as the Segmented Gamma Scanner (SGS) and the Tomographic Gamma Scanner (TGS), which is currently under development at Los Alamos National Laboratory, the amount of gamma-ray emitting material can be underestimated for samples in which the emitting material consists of particles or lumps of highly attenuating material. This problem is encountered in the assay of uranium- and plutonium-bearing samples. To correct for this source of bias, we have developed a least-squares algorithm that uses transmission-corrected assay results for several emitted energies and a weighting function to account for statistical uncertainties in the assay results. The variation of effective lump size in the fitted model is parameterized; this allows the correction to be performed for a wide range of lump-size distributions. It may be possible to use the reduced chi-squared value obtained in the fit to identify samples in which assay assumptions have been violated. We found that the algorithm significantly reduced bias in simulated assays and improved SGS assay results for plutonium-bearing samples. Further testing will be conducted with the TGS, which is expected to be less susceptible than the SGS to systematic sources of bias.

  16. Length bias correction in gene ontology enrichment analysis using logistic regression.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Emerson, Sarah; Cumbie, Jason S; Chang, Jeff H

    2012-01-01

    When assessing differential gene expression from RNA sequencing data, commonly used statistical tests tend to have greater power to detect differential expression of genes encoding longer transcripts. This phenomenon, called "length bias", will influence subsequent analyses such as Gene Ontology enrichment analysis. In the presence of length bias, Gene Ontology categories that include longer genes are more likely to be identified as enriched. These categories, however, are not necessarily biologically more relevant. We show that one can effectively adjust for length bias in Gene Ontology analysis by including transcript length as a covariate in a logistic regression model. The logistic regression model makes the statistical issue underlying length bias more transparent: transcript length becomes a confounding factor when it correlates with both the Gene Ontology membership and the significance of the differential expression test. The inclusion of the transcript length as a covariate allows one to investigate the direct correlation between the Gene Ontology membership and the significance of testing differential expression, conditional on the transcript length. We present both real and simulated data examples to show that the logistic regression approach is simple, effective, and flexible.
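
    A minimal Python sketch of the adjustment, assuming one GO category and a per-gene differential-expression score: category membership is modelled by logistic regression with log transcript length as a covariate, so the membership/DE association is assessed conditional on length. Column names and the simulated data are illustrative only.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      n_genes = 2000

      genes = pd.DataFrame({
          "log_length": np.log(rng.integers(300, 10000, n_genes)),
          "de_score": rng.normal(0, 1, n_genes),          # e.g. a z-score from the DE test
      })
      # Hypothetical GO category whose membership correlates with transcript length.
      p_member = 1 / (1 + np.exp(-(-4 + 0.4 * genes["log_length"])))
      genes["in_category"] = rng.binomial(1, p_member)

      # Logistic regression of category membership on the DE score, adjusting for length.
      X = sm.add_constant(genes[["de_score", "log_length"]])
      fit = sm.Logit(genes["in_category"], X).fit(disp=0)
      print(fit.summary())  # the 'de_score' coefficient is the length-adjusted enrichment signal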

  17. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization

    DEFF Research Database (Denmark)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-01-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum ... scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Results: Comparing MAP-TR and the new TXTV with gold-standard CT-based attenuation correction, we found that TXTV has less bias compared to MAP-TR. We also compared images acquired at the HRRT

  18. Gastric stromal tumor presenting as a right upper quadrant abdominal mass. Importance of a correct radiological differential diagnosis

    International Nuclear Information System (INIS)

    Tudela, X.; Garcia-Vila, J. H.; Jornet, J.

    2001-01-01

    Stromal tumors of the gastrointestinal tract encompass a group of neoplasms representing 1% to 3% of all digestive system tumors. When located in the stomach, their tendency to exhibit an exophytic growth pattern makes it necessary to establish the differential diagnosis with respect to other gastric tumors (lymphoma, exophytic adenocarcinoma) and nongastrointestinal masses. We present a case that illustrates the difficulties associated with the imaging diagnosis of these lesions and the importance of modern radiological techniques (helical computed tomography and magnetic resonance) and of correct interpretation on the part of radiologists in orienting pathologists and clinicians toward the diagnosis and proper treatment. (Author) 10 refs

  19. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    Science.gov (United States)

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.

  20. Contamination effects on fixed-bias Langmuir probes

    Energy Technology Data Exchange (ETDEWEB)

    Steigies, C. T. [Institut fuer Experimentelle und Angewandte Physik, Christian-Albrechts-Universitaet zu Kiel, 24098 Kiel (Germany); Barjatya, A. [Department of Physical Sciences, Embry-Riddle Aeronautical University, Daytona Beach, Florida 32114 (United States)

    2012-11-15

    Langmuir probes are standard instruments for plasma density measurements on many sounding rockets. These probes can be operated in swept-bias as well as in fixed-bias modes. In swept-bias Langmuir probes, contamination effects are frequently visible as a hysteresis between consecutive up and down voltage ramps. This hysteresis, if not corrected, leads to poorly determined plasma densities and temperatures. With a properly chosen sweep function, the contamination parameters can be determined from the measurements and correct plasma parameters can then be determined. In this paper, we study the contamination effects on fixed-bias Langmuir probes, where no hysteresis type effect is seen in the data. Even though the contamination is not evident from the measurements, it does affect the plasma density fluctuation spectrum as measured by the fixed-bias Langmuir probe. We model the contamination as a simple resistor-capacitor circuit between the probe surface and the plasma. We find that measurements of small scale plasma fluctuations (meter to sub-meter scale) along a rocket trajectory are not affected, but the measured amplitude of large scale plasma density variation (tens of meters or larger) is attenuated. From the model calculations, we determine amplitude and cross-over frequency of the contamination effect on fixed-bias probes for different contamination parameters. The model results also show that a fixed bias probe operating in the ion-saturation region is affected less by contamination as compared to a fixed bias probe operating in the electron saturation region.

  1. Bias in Examination Test Banks that Accompany Cost Accounting Texts.

    Science.gov (United States)

    Clute, Ronald C.; McGrail, George R.

    1989-01-01

    Eight test banks that accompany cost accounting textbooks were evaluated for the presence of bias in the distribution of correct responses. All but one were found to have considerable bias, and three of the eight were found to have significant choice bias. (SK)

  2. Bias aware Kalman filters

    DEFF Research Database (Denmark)

    Drecourt, J.-P.; Madsen, H.; Rosbjerg, Dan

    2006-01-01

    This paper reviews two different approaches that have been proposed to tackle the problems of model bias with the Kalman filter: the use of a colored noise model and the implementation of a separate bias filter. Both filters are implemented with and without feedback of the bias into the model state. The colored noise filter formulation is extended to correct both time correlated and uncorrelated model error components. A more stable version of the separate filter without feedback is presented. The filters are implemented in an ensemble framework using Latin hypercube sampling. The techniques are illustrated on a simple one-dimensional groundwater problem. The results show that the presented filters outperform the standard Kalman filter and that the implementations with bias feedback work in more general conditions than the implementations without feedback. © 2005 Elsevier Ltd. All rights reserved.

  3. Productivity changes in OECD healthcare systems: bias-corrected Malmquist productivity approach.

    Science.gov (United States)

    Kim, Younhee; Oh, Dong-Hyun; Kang, Minah

    2016-10-01

    This study evaluates productivity changes in the healthcare systems of 30 Organization for Economic Co-operation and Development (OECD) countries over the 2002-2012 period. The bootstrapped Malmquist approach is used to estimate bias-corrected indices of healthcare performance in productivity, efficiency and technology by modifying the original distance functions. Two inputs (health expenditure and school life expectancy) and two outputs (life expectancy at birth and infant mortality rate) are used to calculate productivity growth. There are no perceptible trends in productivity changes over the 2002-2012 period, but positive productivity improvement has been observed for most OECD countries. The results also reveal considerable variation in annual productivity scores across countries. Average annual productivity growth is evenly driven by efficiency and technical changes, but the two components evolve somewhat differently across the years. The results of this study assert that policy reforms in OECD countries have improved productivity growth in healthcare systems over the past decade. Countries that lag behind in productivity growth should benchmark peer countries' practices to increase performance by prioritizing an achievable trajectory based on socioeconomic conditions. For example, the relatively inefficient countries in this study exhibit higher income inequality, consistent with studies of inequality and health outcomes. Although income inequality and globalization are not direct measures used to estimate healthcare productivity in this study, these issues could be latent factors explaining cross-country healthcare productivity in future research. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Biased lineup instructions and face identification from video images.

    Science.gov (United States)

    Thompson, W Burt; Johnson, Jaime

    2008-01-01

    Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video.

  5. Development and Application of Tools for MRI Analysis - A Study on the Effects of Exercise in Patients with Alzheimer's Disease and Generative Models for Bias Field Correction in MR Brain Imaging

    DEFF Research Database (Denmark)

    Larsen, Christian Thode

    in several cognitive performance measures, including mental speed, attention and verbal fluency. MRI suffers from an image artifact often referred to as the "bias field". This effect complicates automated analysis of the images. For this reason, bias field correction is typically an early preprocessing step ... as a "histogram sharpening" method, actually employs an underlying generative model, and that the bias field is estimated using an algorithm that is identical to generalized expectation maximization, but relies on heuristic parameter updates. The thesis progresses to present a new generative model

  6. Modeling Temporal Bias of Uplift Events in Recommender Systems

    KAUST Repository

    Altaf, Basmah

    2013-05-08

    Today, commercial industry spends a huge amount of resources on advertisement campaigns, new marketing strategies, and promotional deals to introduce products to the public and attract a large number of customers. These massive investments are worthwhile because marketing tactics greatly influence consumer behavior. At the same time, such advertising campaigns have a discernible impact on recommendation systems, which tend to promote popular items by ranking them at the top, resulting in biased and unfair decision making and a loss of customers' trust. The biasing impact of item popularity on recommendations, however, is not fixed and varies with time. Therefore, it is important to build a bias-aware recommendation system that can rank or predict items based on their true merit in a given time frame. This thesis proposes a framework that can model the temporal bias of individual items defined by their characteristic contents, and provides a simple process for bias correction. Bias correction is done either by removing the bias from the historical training data used to build the predictive model, or by discounting the estimated bias from the predictions of a standard predictor. Evaluated on two real-world datasets, NetFlix and MovieLens, our framework is shown to be able to estimate and remove the bias resulting from adopted marketing techniques from the predicted popularity of items at a given time.

  7. Resting State fMRI in the moving fetus: a robust framework for motion, bias field and spin history correction.

    Science.gov (United States)

    Ferrazzi, Giulio; Kuklisova Murgasova, Maria; Arichi, Tomoki; Malamateniou, Christina; Fox, Matthew J; Makropoulos, Antonios; Allsop, Joanna; Rutherford, Mary; Malik, Shaihan; Aljabar, Paul; Hajnal, Joseph V

    2014-11-01

    There is growing interest in exploring fetal functional brain development, particularly with Resting State fMRI. However, during a typical fMRI acquisition, the womb moves due to maternal respiration and the fetus may perform large-scale and unpredictable movements. Conventional fMRI processing pipelines, which assume that brain movements are infrequent or at least small, are not suitable. Previous published studies have tackled this problem by adopting conventional methods and discarding as much as 40% or more of the acquired data. In this work, we developed and tested a processing framework for fetal Resting State fMRI, capable of correcting gross motion. The method comprises bias field and spin history corrections in the scanner frame of reference, combined with slice to volume registration and scattered data interpolation to place all data into a consistent anatomical space. The aim is to recover an ordered set of samples suitable for further analysis using standard tools such as Group Independent Component Analysis (Group ICA). We have tested the approach using simulations and in vivo data acquired at 1.5 T. After full motion correction, Group ICA performed on a population of 8 fetuses extracted 20 networks, 6 of which were identified as matching those previously observed in preterm babies. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Causes of model dry and warm bias over central U.S. and impact on climate projections.

    Science.gov (United States)

    Lin, Yanluan; Dong, Wenhao; Zhang, Minghua; Xie, Yuanyu; Xue, Wei; Huang, Jianbin; Luo, Yong

    2017-10-12

    Climate models show a conspicuous summer warm and dry bias over the central United States. Using results from 19 climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5), we report a persistent dependence of the warm bias on the dry bias, with the precipitation deficit leading the warm bias over this region. The precipitation deficit is associated with the widespread failure of models to capture strong summer rainfall events over the central U.S. A robust linear relationship between the projected warming and the present-day warm bias enables us to empirically correct future temperature projections. By the end of the 21st century under the RCP8.5 scenario, the corrections substantially narrow the intermodel spread of the projections and reduce the projected temperature by 2.5 K, resulting mainly from the removal of the warm bias. Instead of a sharp decrease, after this correction the projected precipitation is nearly neutral for all scenarios. Climate models repeatedly show a warm and dry bias over the central United States, but the origin of this bias remains unclear. Here the authors associate this bias with precipitation deficits in the models and, after applying a correction, projected precipitation in this region shows no significant changes.
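
    The empirical correction described above can be sketched in a few lines of Python: across models, the projected warming is regressed on the present-day warm bias and the bias-related component is removed from each model's projection, which both lowers the multi-model mean and narrows the spread. The model values here are synthetic placeholders, not CMIP5 output.

      import numpy as np

      rng = np.random.default_rng(6)
      n_models = 19

      # Present-day summer warm bias (model minus observations, K) and projected
      # end-of-century warming (K) for each model; synthetic but positively related.
      bias = rng.normal(2.0, 1.5, n_models)
      projection = 4.0 + 0.8 * bias + rng.normal(0, 0.5, n_models)

      # Linear relationship between projection and present-day bias across models.
      slope, intercept = np.polyfit(bias, projection, 1)

      # Corrected projection: remove the component of each model's projection
      # explained by its warm bias, i.e. evaluate the fitted line at zero bias.
      corrected = projection - slope * bias
      print(f"multi-model mean: raw {projection.mean():.2f} K, corrected {corrected.mean():.2f} K")
      print(f"spread (std): raw {projection.std(ddof=1):.2f} K, corrected {corrected.std(ddof=1):.2f} K")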

  9. Reproducibility and day time bias correction of optoelectronic leg volumetry: a prospective cohort study

    Directory of Open Access Journals (Sweden)

    Baumgartner Iris

    2011-10-01

    Full Text Available Background: Leg edema is a common manifestation of various underlying pathologies. Reliable measurement tools are required to quantify edema and monitor therapeutic interventions. The aim of the present work was to investigate the reproducibility of optoelectronic leg volumetry over a 3-week period and to eliminate daytime-related within-individual variability. Methods: Optoelectronic leg volumetry was performed in 63 hairdressers (mean age 45 ± 16 years, 85.7% female) in standing position, twice within a minute for each leg, and repeated after 3 weeks. Both lower leg (legBD) and whole limb (limbBF) volumetry were analysed. Reproducibility was expressed as analytical and within-individual coefficients of variance (CVA, CVW) and as intra-class correlation coefficients (ICC). Results: A total of 492 leg volume measurements were analysed. Both legBD and limbBF volumetry were highly reproducible, with CVA of 0.5% and 0.7%, respectively. Within-individual reproducibility of legBD and limbBF volumetry over the three-week period was high (CVW 1.3% for both; ICC 0.99 for both). At both visits, the second measurement revealed a significantly higher volume compared to the first measurement, with a mean increase of 7.3 ml ± 14.1 (0.33% ± 0.58%) for legBD and 30.1 ml ± 48.5 ml (0.52% ± 0.79%) for limbBF volume. A significant linear correlation was found between absolute and relative leg volume differences and the difference in exact time of day of measurement between the two study visits. Conclusions: Leg volume changes can be reliably assessed by optoelectronic leg volumetry at a single time point and over a 3-week period. However, volumetry results are biased by orthostatic and daytime-related volume changes. The bias for daytime-related volume changes can be minimized by a time-correction formula.

  10. Tax Evasion, Information Reporting, and the Regressive Bias Hypothesis

    DEFF Research Database (Denmark)

    Boserup, Simon Halphen; Pinje, Jori Veng

    A robust prediction from the tax evasion literature is that optimal auditing induces a regressive bias in effective tax rates compared to statutory rates. If correct, this will have important distributional consequences. Nevertheless, the regressive bias hypothesis has never been tested empirically...

  11. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  12. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    Science.gov (United States)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed
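
    A minimal numeric sketch of the dependence problem described above (this is not the JBC method evaluated in the study): marginal corrections such as quantile mapping are monotone, so they leave the simulated P-T rank correlation untouched; one simple remedy is to re-pair the corrected temperature values using the rank structure of the observations. The array names (corr_p, corr_t, obs_p, obs_t) are hypothetical placeholders for equal-length daily series whose marginals have already been corrected.

```python
import numpy as np

def impose_observed_rank_dependence(corr_p, corr_t, obs_p, obs_t):
    """Re-pair corrected T with corrected P so their rank pairing mimics the observations."""
    rank = lambda x: np.argsort(np.argsort(x))            # ranks 0 .. n-1
    # rank of observed T at the time step holding each rank of observed P
    t_rank_for_p_rank = np.empty(len(obs_p), dtype=int)
    t_rank_for_p_rank[rank(obs_p)] = rank(obs_t)
    # permute the corrected T values (marginal unchanged) to follow that pairing
    return np.sort(corr_t)[t_rank_for_p_rank[rank(corr_p)]]
```

    The returned series is a permutation of corr_t, so its marginal distribution is unchanged, while its Spearman correlation with corr_p now matches that of the observations; the JBC method in the study instead corrects the joint distribution as a whole.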

  13. Correction of gene expression data

    DEFF Research Database (Denmark)

    Darbani Shirvanehdeh, Behrooz; Stewart, C. Neal, Jr.; Noeparvar, Shahin

    2014-01-01

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies....... For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce...

  14. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
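
    The abstract does not spell out the partitioned algorithm itself, so the sketch below only makes the E-step/M-step cycle that is being partitioned concrete, using textbook EM for a two-component 1-D Gaussian mixture in numpy; treat it as background for the record above, not as the authors' extension.

```python
import numpy as np

def em_gmm_1d(x, n_iter=200, tol=1e-8, seed=0):
    """Standard EM for a two-component univariate Gaussian mixture."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, 2, replace=False).astype(float)    # crude initialisation
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        weighted = pi * dens
        resp = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means and variances
        nk = resp.sum(axis=0)
        pi, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        ll = np.log(weighted.sum(axis=1)).sum()           # log-likelihood at previous parameters
        if ll - ll_old < tol:
            break
        ll_old = ll
    return pi, mu, var
```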

  15. Modeling bias and variation in the stochastic processes of small RNA sequencing.

    Science.gov (United States)

    Argyropoulos, Christos; Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-06-20

    The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
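
    As a rough numpy illustration of the linear-quadratic mean-variance relation mentioned above (the study's GAMLSS fits are done in R and are not reproduced here), the coefficients of var ≈ a·mean + b·mean² can be estimated from replicate libraries; the count matrix and the negative-binomial simulation below are hypothetical.

```python
import numpy as np

def fit_linear_quadratic(counts):
    """Fit var ~ a*mu + b*mu^2 across sequences; counts: rows = sequences, cols = replicates."""
    mu = counts.mean(axis=1)
    var = counts.var(axis=1, ddof=1)
    X = np.column_stack([mu, mu ** 2])                    # no intercept
    a, b = np.linalg.lstsq(X, var, rcond=None)[0]
    return a, b

# synthetic negative-binomial counts: true relation is var = mu + mu^2 / 5
rng = np.random.default_rng(1)
true_mu = rng.gamma(2.0, 50.0, size=500)
counts = rng.negative_binomial(n=5, p=5.0 / (5.0 + true_mu[:, None]), size=(500, 6))
print(fit_linear_quadratic(counts))                       # b should land roughly near 0.2
```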

  16. Survey Response-Related Biases in Contingent Valuation: Concepts, Remedies, and Empirical Application to Valuing Aquatic Plant Management

    Science.gov (United States)

    Mark L. Messonnier; John C. Bergstrom; Chrisopher M. Cornwell; R. Jeff Teasley; H. Ken Cordell

    2000-01-01

    Simple nonresponse and selection biases that may occur in survey research such as contingent valuation applications are discussed and tested. Correction mechanisms for these types of biases are demonstrated. Results indicate the importance of testing and correcting for unit and item nonresponse bias in contingent valuation survey data. When sample nonresponse and...

  17. "Unreliability as a threat to understanding psychopathology: The cautionary tale of attentional bias": Correction to Rodebaugh et al. (2016).

    Science.gov (United States)

    2016-10-01

    Reports an error in "Unreliability as a threat to understanding psychopathology: The cautionary tale of attentional bias" by Thomas L. Rodebaugh, Rachel B. Scullin, Julia K. Langer, David J. Dixon, Jonathan D. Huppert, Amit Bernstein, Ariel Zvielli and Eric J. Lenze ( Journal of Abnormal Psychology , 2016[Aug], Vol 125[6], 840-851). There was an error in the Author Note concerning the support of the MacBrain Face Stimulus Set. The correct statement is provided. (The following abstract of the original article appeared in record 2016-30117-001.) The use of unreliable measures constitutes a threat to our understanding of psychopathology, because advancement of science using both behavioral and biologically oriented measures can only be certain if such measurements are reliable. Two pillars of the National Institute of Mental Health's portfolio-the Research Domain Criteria (RDoC) initiative for psychopathology and the target engagement initiative in clinical trials-cannot succeed without measures that possess the high reliability necessary for tests involving mediation and selection based on individual differences. We focus on the historical lack of reliability of attentional bias measures as an illustration of how reliability can pose a threat to our understanding. Our own data replicate previous findings of poor reliability for traditionally used scores, which suggests a serious problem with the ability to test theories regarding attentional bias. This lack of reliability may also suggest problems with the assumption (in both theory and the formula for the scores) that attentional bias is consistent and stable across time. In contrast, measures accounting for attention as a dynamic process in time show good reliability in our data. The field is sorely in need of research reporting findings and reliability for attentional bias scores using multiple methods, including those focusing on dynamic processes over time. We urge researchers to test and report reliability of

  18. Model-based control of observer bias for the analysis of presence-only data in ecology.

    Directory of Open Access Journals (Sweden)

    David I Warton

    Full Text Available Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter "observer bias". In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly--by modelling presence locations as a function of known observer bias variables (such as accessibility variables in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, that implicitly addresses the "pseudo-absence problem" of where to locate pseudo-absences (and how many. The proposed method of bias-correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or "inventory" methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species.
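
    A hedged sketch of the bias-adjustment idea (a penalised logistic surrogate, not the authors' point-process-with-LASSO implementation): fit presence versus background points on environmental plus accessibility covariates, then predict with the accessibility covariates pinned to a common value so that predictions are conditioned on a common level of observer bias. All column names and the penalty strength are hypothetical, and covariates are assumed to be standardised.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

ENV_COLS = ["temp", "rainfall", "slope"]           # environmental covariates
BIAS_COLS = ["dist_to_road", "dist_to_city"]       # observer-accessibility covariates

def fit_and_debias(df: pd.DataFrame, y):
    """y: 1 = presence record, 0 = background point."""
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    model.fit(df[ENV_COLS + BIAS_COLS].to_numpy(), y)
    # predict occurrence with accessibility held at its median ("common bias level")
    Xc = df[ENV_COLS + BIAS_COLS].copy()
    for c in BIAS_COLS:
        Xc[c] = df[c].median()
    return model.predict_proba(Xc.to_numpy())[:, 1]
```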

  19. Bias correction and Bayesian analysis of aggregate counts in SAGE libraries

    Directory of Open Access Journals (Sweden)

    Briggs William M

    2010-02-01

    Full Text Available Abstract Background Tag-based techniques, such as SAGE, are commonly used to sample the mRNA pool of an organism's transcriptome. Incomplete digestion during the tag formation process may allow for multiple tags to be generated from a given mRNA transcript. The probability of forming a tag varies with its relative location. As a result, the observed tag counts represent a biased sample of the actual transcript pool. In SAGE this bias can be avoided by ignoring all but the 3'-most tag, but doing so discards a large fraction of the observed data. Taking this bias into account should allow more of the available data to be used, leading to increased statistical power. Results Three new hierarchical models, which directly embed a model for the variation in tag formation probability, are proposed and their associated Bayesian inference algorithms are developed. These models may be applied to libraries at both the tag and aggregate level. Simulation experiments and analysis of real data are used to contrast the accuracy of the various methods. The consequences of tag formation bias are discussed in the context of testing differential expression. A description is given as to how these algorithms can be applied in that context. Conclusions Several Bayesian inference algorithms that account for tag formation effects are compared with the DPB algorithm providing clear evidence of superior performance. The accuracy of inferences when using a particular non-informative prior is found to depend on the expression level of a given gene. The multivariate nature of the approach easily allows both univariate and joint tests of differential expression. Calculations demonstrate the potential for false positive and negative findings due to variation in tag formation probabilities across samples when testing for differential expression.

  20. Correcting length-frequency distributions for imperfect detection

    Science.gov (United States)

    Breton, André R.; Hawkins, John A.; Winkelman, Dana L.

    2013-01-01

    Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data
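
    The partitioning described above can be pictured as a Horvitz-Thompson-style adjustment: the raw count in each length class is divided by an estimated capture probability for that class. The logistic capture-probability curve below is hypothetical; in the study the probabilities came from the fitted Huggins mark-recapture model and also depended on year, pass and capture history.

```python
import numpy as np

def adjusted_length_frequency(lengths, bin_edges, p_capture):
    """Detection-corrected counts per length class (counts divided by capture probability)."""
    counts, _ = np.histogram(lengths, bins=bin_edges)
    mids = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    return counts / p_capture(mids)

# hypothetical capture probability rising with total length (mm)
p_capture = lambda L: 1.0 / (1.0 + np.exp(-(L - 150.0) / 30.0))

lengths = np.random.default_rng(0).normal(180.0, 40.0, 500)
edges = np.arange(100, 320, 10)
print(adjusted_length_frequency(lengths, edges, p_capture))
```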

  1. Corrected ROC analysis for misclassified binary outcomes.

    Science.gov (United States)

    Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L

    2017-06-15

    Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
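
    To see where the AUC bias comes from, note that under non-differential misclassification each observed case (or control) is a mixture of true cases and controls, so the observed AUC is a weighted average of the true AUC, its complement, and 1/2. The sketch below inverts that mixture given known prevalence and flip rates; it is a simplified stand-in for the paper's adjusted ROC procedure, not the published implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def corrected_auc(auc_obs, prevalence, fpr, fnr):
    """Invert AUC attenuation from non-differential label flips (fpr: control->case, fnr: case->control)."""
    se, sp = 1.0 - fnr, 1.0 - fpr
    # probability that an observed case / observed control is truly a case / control
    alpha = prevalence * se / (prevalence * se + (1 - prevalence) * (1 - sp))
    beta = (1 - prevalence) * sp / ((1 - prevalence) * sp + prevalence * (1 - se))
    num = auc_obs - (1 - alpha) * (1 - beta) - 0.5 * (alpha * (1 - beta) + (1 - alpha) * beta)
    return num / (alpha * beta - (1 - alpha) * (1 - beta))

# quick simulated check: true AUC is about 0.76 for unit-variance normals one SD apart
rng = np.random.default_rng(0)
n, prev, fpr, fnr = 200_000, 0.3, 0.05, 0.10
d = rng.random(n) < prev
score = rng.normal(d.astype(float), 1.0)
d_obs = d ^ (rng.random(n) < np.where(d, fnr, fpr))       # flip labels non-differentially
print(roc_auc_score(d, score), roc_auc_score(d_obs, score),
      corrected_auc(roc_auc_score(d_obs, score), prev, fpr, fnr))
```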

  2. A simulation study to compare three self-controlled case series approaches: correction for violation of assumption and evaluation of bias.

    Science.gov (United States)

    Hua, Wei; Sun, Guoying; Dodd, Caitlin N; Romio, Silvana A; Whitaker, Heather J; Izurieta, Hector S; Black, Steven; Sturkenboom, Miriam C J M; Davis, Robert L; Deceuninck, Genevieve; Andrews, N J

    2013-08-01

    The assumption that the occurrence of outcome event must not alter subsequent exposure probability is critical for preserving the validity of the self-controlled case series (SCCS) method. This assumption is violated in scenarios in which the event constitutes a contraindication for exposure. In this simulation study, we compared the performance of the standard SCCS approach and two alternative approaches when the event-independent exposure assumption was violated. Using the 2009 H1N1 and seasonal influenza vaccines and Guillain-Barré syndrome as a model, we simulated a scenario in which an individual may encounter multiple unordered exposures and each exposure may be contraindicated by the occurrence of outcome event. The degree of contraindication was varied at 0%, 50%, and 100%. The first alternative approach used only cases occurring after exposure with follow-up time starting from exposure. The second used a pseudo-likelihood method. When the event-independent exposure assumption was satisfied, the standard SCCS approach produced nearly unbiased relative incidence estimates. When this assumption was partially or completely violated, two alternative SCCS approaches could be used. While the post-exposure cases only approach could handle only one exposure, the pseudo-likelihood approach was able to correct bias for both exposures. Violation of the event-independent exposure assumption leads to an overestimation of relative incidence which could be corrected by alternative SCCS approaches. In multiple exposure situations, the pseudo-likelihood approach is optimal; the post-exposure cases only approach is limited in handling a second exposure and may introduce additional bias, thus should be used with caution. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Lessons learnt on biases and uncertainties in personal exposure measurement surveys of radiofrequency electromagnetic fields with exposimeters.

    Science.gov (United States)

    Bolte, John F B

    2016-09-01

    Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities or (groups of) people, such that epidemiological studies are better capable of finding potential weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to: mechanical errors; design of hardware and software filters; anisotropy; and influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition a good protocol on how to wear the exposimeter, a sufficiently small sampling interval and sufficiently long measurement duration will minimize biases. Corrections to biases are possible for: non-detects through detection limit, erroneous manufacturer calibration and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, are: linearity in response and resolution. Corrections difficult to perform after measurements are for: modulation/duty cycle sensitivity; out of band response aka cross talk; temperature and humidity sensitivity. Corrections not possible to perform after measurements are for: multiple signals detection in one band; flatness of response within a frequency band; anisotropy to waves of different elevation angle. An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors, overestimated exposure to signals with bursts, such as in uplink signals from mobile phones and Wi

  4. How and how much does RAD-seq bias genetic diversity estimates?

    Science.gov (United States)

    Cariou, Marie; Duret, Laurent; Charlat, Sylvain

    2016-11-08

    RAD-seq is a powerful tool, increasingly used in population genomics. However, earlier studies have raised red flags regarding possible biases associated with this technique. In particular, polymorphism on restriction sites results in preferential sampling of closely related haplotypes, so that RAD data tends to underestimate genetic diversity. Here we (1) clarify the theoretical basis of this bias, highlighting the potential confounding effects of population structure and selection, (2) confront predictions to real data from in silico digestion of full genomes and (3) provide a proof of concept toward an ABC-based correction of the RAD-seq bias. Under a neutral and panmictic model, we confirm the previously established relationship between the true polymorphism and its RAD-based estimation, showing a more pronounced bias when polymorphism is high. Using more elaborate models, we show that selection, resulting in heterogeneous levels of polymorphism along the genome, exacerbates the bias and leads to a more pronounced underestimation. On the contrary, spatial genetic structure tends to reduce the bias. We confront the neutral and panmictic model to "ideal" empirical data (in silico RAD-sequencing) using full genomes from natural populations of the fruit fly Drosophila melanogaster and the fungus Schizophyllum commune, harbouring respectively moderate and high genetic diversity. In D. melanogaster, predictions fit the model, but the small difference between the true and RAD polymorphism makes this comparison insensitive to deviations from the model. In the highly polymorphic fungus, the model captures a large part of the bias but makes inaccurate predictions. Accordingly, ABC corrections based on this model improve the estimations, albeit with some imprecisions. The RAD-seq underestimation of genetic diversity associated with polymorphism in restriction sites becomes more pronounced when polymorphism is high. In practice, this means that in many systems where

  5. Bias correction factors for near-Earth asteroids

    Science.gov (United States)

    Benedix, Gretchen K.; Mcfadden, Lucy Ann; Morrow, Esther M.; Fomenkova, Marina N.

    1992-01-01

    Knowledge of the population size and physical characteristics (albedo, size, and rotation rate) of near-Earth asteroids (NEA's) is biased by observational selection effects which are functions of the population's intrinsic properties and the size of the telescope, detector sensitivity, and search strategy used. The NEA population is modeled in terms of orbital and physical elements: a, e, i, omega, Omega, M, albedo, and diameter, and an asteroid search program is simulated using actual telescope pointings of right ascension, declination, date, and time. The position of each object in the model population is calculated at the date and time of each telescope pointing. The program tests to see if that object is within the field of view (FOV = 8.75 degrees) of the telescope and above the limiting magnitude (V = +1.65) of the film. The effect of the starting population on the outcome of the simulation's discoveries is compared to the actual discoveries in order to define a most probable starting population.

  6. Rater bias in psychological research: when is it a problem and what can we do about it?

    Science.gov (United States)

    Hoyt, W T

    2000-03-01

    Rater bias is a substantial source of error in psychological research. Bias distorts observed effect sizes beyond the expected level of attenuation due to intrarater error, and the impact of bias is not accurately estimated using conventional methods of correction for attenuation. Using a model based on multivariate generalizability theory, this article illustrates how bias affects research results. The model identifies 4 types of bias that may affect findings in research using observer ratings, including the biases traditionally termed leniency and halo errors. The impact of bias depends on which of 4 classes of rating design is used, and formulas are derived for correcting observed effect sizes for attenuation (due to bias variance) and inflation (due to bias covariance) in each of these classes. The rater bias model suggests procedures for researchers seeking to minimize adverse impact of bias on study findings.
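
    Hoyt's design-specific correction formulas are not given in the abstract; the familiar special case they generalize is Spearman's correction for attenuation, sketched below with hypothetical reliabilities.

```python
def disattenuate(r_observed, rel_x, rel_y):
    """Classical correction for attenuation: r_true = r_obs / sqrt(rel_x * rel_y)."""
    return r_observed / (rel_x * rel_y) ** 0.5

print(disattenuate(0.30, 0.60, 0.80))   # an observed r of .30 corrects to roughly .43
```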

  7. Reducing neutron multiplicity counting bias for plutonium warhead authentication

    Energy Technology Data Exchange (ETDEWEB)

    Goettsche, Malte

    2015-06-05

    Confidence in future nuclear arms control agreements could be enhanced by direct verification of warheads. It would include warhead authentication: the assessment, based on measurements, of whether a declaration that a specific item is a nuclear warhead is true. An information barrier can be used to protect sensitive information during measurements. It could for example show whether attributes such as a fissile mass exceeding a threshold are met without indicating detailed measurement results. Neutron multiplicity measurements would be able to assess a plutonium fissile mass attribute if it were possible to show that their bias is low. Plutonium measurements have been conducted with the He-3 based Passive Scrap Multiplicity Counter. The measurement data has been used as a reference to test the capacity of the Monte Carlo code MCNPX-PoliMi to simulate neutron multiplicity measurements. The simulation results with their uncertainties are in agreement with the experimental results. It is essential to use cross-sections which include neutron scattering with the detector's polyethylene molecular structure. Further MCNPX-PoliMi simulations have been conducted in order to study bias that occurs when measuring samples with large plutonium masses such as warheads. Simulation results of solid and hollow metal spheres up to 6000 g show that the masses are underpredicted by as much as 20%. The main source of this bias has been identified as the false assumption that the neutron multiplication does not depend on the position where a spontaneous fission event occurred. The multiplication refers to the total number of neutrons leaking a sample after a primary spontaneous fission event, taking induced fission into consideration. The correction of the analysis has been derived and implemented in a MATLAB code. It depends on four geometry-dependent correction coefficients. When the sample configuration is fully known, these can be exactly determined and remove this type of

  8. Declining Bias and Gender Wage Discrimination? A Meta-Regression Analysis

    Science.gov (United States)

    Jarrell, Stephen B.; Stanley, T. D.

    2004-01-01

    The meta-regression analysis reveals a strong tendency for discrimination estimates to fall and that wage discrimination against women exists. The biasing effects of researchers' gender and of not correcting for selection bias have weakened, and changes in the labor market have made them less important.

  9. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    Science.gov (United States)

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  10. Bias correction in species distribution models: pooling survey and collection data for multiple species.

    Science.gov (United States)

    Fithian, William; Elith, Jane; Hastie, Trevor; Keith, David A

    2015-04-01

    Presence-only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence-absence or count data collected in systematic, planned surveys are more reliable but typically less abundant. We proposed a probabilistic model to allow for joint analysis of presence-only and survey data to exploit their complementary strengths. Our method pools presence-only and presence-absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence-only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence-only data. We evaluate our model's performance on data for 36 eucalypt species in south-eastern Australia. We find that presence-only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data-pooling technique substantially improves the out-of-sample predictive performance of our model when the amount of available presence-absence data for a given species is scarce. If we have only presence-only data and no presence-absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range.

  11. Automation bias: empirical results assessing influencing factors.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2014-05-01

    To investigate the rate of automation bias - the propensity of people to over rely on automated advice and the factors associated with it. Tested factors were attitudinal - trust and confidence, non-attitudinal - decision support experience and clinical experience, and environmental - task difficulty. The paradigm of simulated decision support advice within a prescribing context was used. The study employed within participant before-after design, whereby 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers - advice was incorrect in 6 scenarios. They were asked to prescribe for each case, followed by being shown simulated advice. Participants were then asked whether they wished to change their prescription, and the post-advice prescription was recorded. Rate of overall decision switching was captured. Automation bias was measured by negative consultations - correct to incorrect prescription switching. Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved the decision accuracy in 13.1% of prescribing cases. The rate of automation bias, as measured by decision switches from correct pre-advice, to incorrect post-advice was 5.2% of all cases - a net improvement of 8%. More immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced rate of decision switching. Lower clinical experience was associated with more decision switching. Age, DSS experience and trust in CDSS generally were not significantly associated with decision switching. This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Impact of chlorophyll bias on the tropical Pacific mean climate in an earth system model

    Science.gov (United States)

    Lim, Hyung-Gyu; Park, Jong-Yeon; Kug, Jong-Seong

    2017-12-01

    Climate modeling groups nowadays develop earth system models (ESMs) by incorporating biogeochemical processes in their climate models. The ESMs, however, often show substantial bias in simulated marine biogeochemistry which can potentially introduce an undesirable bias in physical ocean fields through biogeophysical interactions. This study examines how and how much the chlorophyll bias in a state-of-the-art ESM affects the mean and seasonal cycle of tropical Pacific sea-surface temperature (SST). The ESM used in the present study shows a sizeable positive bias in the simulated tropical chlorophyll. We found that the correction of the chlorophyll bias can reduce the ESM's intrinsic cold SST mean bias in the equatorial Pacific. The biologically-induced cold SST bias is strongly affected by seasonally-dependent air-sea coupling strength. In addition, the correction of chlorophyll bias can improve the annual cycle of SST by up to 25%. This result suggests a possible modeling approach in understanding the two-way interactions between physical and chlorophyll biases by biogeophysical effects.

  13. Power and type I error results for a bias-correction approach recently shown to provide accurate odds ratios of genetic variants for the secondary phenotypes associated with primary diseases.

    Science.gov (United States)

    Wang, Jian; Shete, Sanjay

    2011-11-01

    We recently proposed a bias correction approach to evaluate accurate estimation of the odds ratio (OR) of genetic variants associated with a secondary phenotype, in which the secondary phenotype is associated with the primary disease, based on the original case-control data collected for the purpose of studying the primary disease. As reported in this communication, we further investigated the type I error probabilities and powers of the proposed approach, and compared the results to those obtained from logistic regression analysis (with or without adjustment for the primary disease status). We performed a simulation study based on a frequency-matching case-control study with respect to the secondary phenotype of interest. We examined the empirical distribution of the natural logarithm of the corrected OR obtained from the bias correction approach and found it to be normally distributed under the null hypothesis. On the basis of the simulation study results, we found that the logistic regression approaches that adjust or do not adjust for the primary disease status had low power for detecting secondary phenotype associated variants and highly inflated type I error probabilities, whereas our approach was more powerful for identifying the SNP-secondary phenotype associations and had better-controlled type I error probabilities. © 2011 Wiley Periodicals, Inc.

  14. Bias Correction in the Dynamic Panel Data Model with a Nonscalar Disturbance Covariance Matrix

    NARCIS (Netherlands)

    Bun, M.J.G.

    2003-01-01

    Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet [Kiviet, J. F. (1995), on bias, inconsistency, and efficiency of various estimators in dynamic panel data models, J.

  15. A re-examination of the effects of biased lineup instructions in eyewitness identification.

    Science.gov (United States)

    Clark, Steven E

    2005-10-01

    A meta-analytic review of research comparing biased and unbiased instructions in eyewitness identification experiments showed an asymmetry; specifically, that biased instructions led to a large and consistent decrease in accuracy in target-absent lineups, but produced inconsistent results for target-present lineups, with an average effect size near zero (Steblay, 1997). The results for target-present lineups are surprising, and are inconsistent with statistical decision theories (i.e., Green & Swets, 1966). A re-examination of the relevant studies and the meta-analysis of those studies shows clear evidence that correct identification rates do increase with biased lineup instructions, and that biased witnesses make correct identifications at a rate considerably above chance. Implications for theory, as well as police procedure and policy, are discussed.

  16. On biases in precise point positioning with multi-constellation and multi-frequency GNSS data

    International Nuclear Information System (INIS)

    El-Mowafy, A; Deo, M; Rizos, C

    2016-01-01

    Various types of biases in Global Navigation Satellite System (GNSS) data preclude integer ambiguity fixing and degrade solution accuracy when not being corrected during precise point positioning (PPP). In this contribution, these biases are first reviewed, including satellite and receiver hardware biases, differential code biases, differential phase biases, initial fractional phase biases, inter-system receiver time biases, and system time scale offset. PPP models that take account of these biases are presented for two cases using ionosphere-free observations. The first case is when using primary signals that are used to generate precise orbits and clock corrections. The second case applies when using additional signals to the primary ones. In both cases, measurements from single and multiple constellations are addressed. It is suggested that the satellite-related code biases be handled as calibrated quantities that are obtained from multi-GNSS experiment products and the fractional phase cycle biases obtained from a network to allow for integer ambiguity fixing. Some receiver-related biases are removed using between-satellite single differencing, whereas other receiver biases such as inter-system biases are lumped with differential code and phase biases and need to be estimated. The testing results show that the treatment of biases significantly improves solution convergence in the float ambiguity PPP mode, and leads to ambiguity-fixed PPP within a few minutes with a small improvement in solution precision. (paper)
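
    For context on the ionosphere-free observations mentioned above, the standard dual-frequency combination (a textbook relation, independent of this paper's bias models) is sketched below; the pseudorange values in the example are hypothetical.

```python
def ionosphere_free(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """First-order ionosphere-free combination of two pseudoranges (GPS L1/L2 defaults, Hz)."""
    return (f1 ** 2 * p1 - f2 ** 2 * p2) / (f1 ** 2 - f2 ** 2)

print(ionosphere_free(22_345_678.12, 22_345_681.45))      # metres, hypothetical inputs
```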

  17. The diagnosis of COPD in primary care; gender differences and the role of spirometry.

    Science.gov (United States)

    Roberts, N J; Patel, I S; Partridge, M R

    2016-02-01

    Females with exacerbations of Chronic Obstructive Pulmonary Disease now account for one half of all hospital admissions for that condition and rates have been increasing over the last few decades. Differences in presentations of disease between genders have been shown in several conditions and this study explores whether there are inter-gender biases in probable diagnoses in those suspected to have COPD. 445 individuals with a provisional diagnosis by their General Practitioner of "suspected COPD" or "definite COPD" were referred to a community Respiratory Assessment unit (CRAU) for tests including spirometry. Gender, demographics, respiratory symptoms and respiratory medical history were recorded. The provisional diagnoses were compared with the final diagnosis made after spirometry and respiratory specialist nurse review and the provisional diagnosis was either confirmed as correct or refuted as unlikely. Significantly more men (87.5%) had their diagnosis of "definite COPD" confirmed compared to 73.9% of women (p = 0.021). When the GP suggested a provisional diagnosis of "suspected COPD" (n = 265) at referral, this was confirmed in 60.9% of men and only 43.2% of women (p = 0.004). There was a different symptom pattern between genders with women being more likely to report allergies, symptoms starting earlier in life, and being less likely than men to report breathlessness as the main symptom. These results may suggest a difference between genders in some of the clinical features of COPD and a difference in likelihood of a GP's provisional diagnosis of COPD being correct. The study reiterates the absolute importance of spirometry in the diagnosis of COPD. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Recognizing and reducing cognitive bias in clinical and forensic neurology.

    Science.gov (United States)

    Satya-Murti, Saty; Lockhart, Joseph

    2015-10-01

    In medicine, cognitive errors form the basis of bias in clinical practice. Several types of bias are common and pervasive, and may lead to inaccurate diagnosis or treatment. Forensic and clinical neurology, even when aided by current technologies, are still dependent on cognitive interpretations, and therefore prone to bias. This article discusses 4 common biases that can lead the clinician astray. They are confirmation bias (selective gathering of and neglect of contradictory evidence); base rate bias (ignoring or misusing prevailing base rate data); hindsight bias (oversimplification of past causation); and good old days bias (the tendency for patients to misremember and exaggerate their preinjury functioning). We briefly describe strategies adopted from the field of psychology that could minimize bias. While debiasing is not easy, reducing such errors requires awareness and acknowledgment of our susceptibility to these cognitive distortions.

  19. Bias in segmented gamma scans arising from size differences between calibration standards and assay samples

    International Nuclear Information System (INIS)

    Sampson, T.E.

    1991-01-01

    Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts.

  20. Towards improving the reliability of future regional climate projections: A bias-correction method applied to precipitation over the west coast of Norway

    Science.gov (United States)

    Valved, A.; Barstad, I.; Sobolowski, S.

    2012-04-01

    The early winter of 2011/2012 in the city of Bergen, located on the west coast of Norway, was dominated by warm, wet and extreme weather. This might be a glimpse of future average climate conditions under continued atmospheric warming and an enhanced hydrological cycle. The extreme weather events have resulted in drainage/sewage problems, landslides, flooding, property damage and even death. As the Municipality plans for the future, it must contend with a growing population in a geographically complex area in addition to any effects attributable to climate change. While the scientific community is increasingly confident in the projections of large-scale changes over the mid to high latitudes, this confidence does not extend to the local to regional scale, where the magnitude and even direction of change may be highly uncertain. Meanwhile, it is precisely these scales at which Municipalities such as Bergen require information if they are to plan effectively. Thus, there is a need for reliable, local climate projections, which can aid policy makers and planners in decision-making. Current state-of-the-art regional climate models are capable of providing detailed simulations on the order of 1 or 10 km. However, due to the increased computational demands of these simulations, large ensembles, such as those used for GCM experiments, are often not possible. Thus, greater detail, under these circumstances, does not necessarily correspond to greater reliability. One way to deal with this issue is to apply a statistical bias correction method where model results are fitted to observationally derived probability density functions (pdfs). In this way, a full distribution of potential changes may be generated which is constrained by known, observed data. This will result in a shifted model distribution with mean and spread that more closely follow observations. In short, the method temporarily removes the climate signals from the model run, working on the different percentiles, fits the
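
    A minimal empirical quantile-mapping sketch of the percentile-based fitting described above: each model value's percentile in a model reference period is mapped onto the observed distribution. Array names are placeholders, and the study's specific handling of the climate-change signal is not reproduced.

```python
import numpy as np

def empirical_qm(x_model, model_ref, obs_ref, n_quantiles=99):
    """Map model values onto the observed climate, percentile by percentile."""
    probs = np.linspace(0.01, 0.99, n_quantiles)
    mod_q = np.quantile(model_ref, probs)
    obs_q = np.quantile(obs_ref, probs)
    # values outside the reference range are clamped to the outermost quantiles
    return np.interp(x_model, mod_q, obs_q)
```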

  1. Relative equilibrium plot improves graphical analysis and allows bias correction of SUVR in quantitative [11C]PiB PET studies

    Science.gov (United States)

    Zhou, Yun; Sojkova, Jitka; Resnick, Susan M.; Wong, Dean F.

    2012-01-01

    Both the standardized uptake value ratio (SUVR) and the Logan plot result in biased distribution volume ratios (DVR) in ligand-receptor dynamic PET studies. The objective of this study is to use a recently developed relative equilibrium-based graphical plot (RE plot) method to improve and simplify the two commonly used methods for quantification of [11C]PiB PET. Methods The overestimation of DVR in SUVR was analyzed theoretically using the Logan and the RE plots. A bias-corrected SUVR (bcSUVR) was derived from the RE plot. Seventy-eight [11C]PiB dynamic PET scans (66 from controls and 12 from mildly cognitively impaired participants (MCI) from the Baltimore Longitudinal Study of Aging (BLSA)) were acquired over 90 minutes. Regions of interest (ROIs) were defined on coregistered MRIs. Both the ROI and pixelwise time activity curves (TACs) were used to evaluate the estimates of DVR. DVRs obtained using the Logan plot applied to ROI TACs were used as a reference for comparison of DVR estimates. Results Results from the theoretical analysis were confirmed by human studies. ROI estimates from the RE plot and the bcSUVR were nearly identical to those from the Logan plot with ROI TACs. In contrast, ROI estimates from DVR images in frontal, temporal, parietal, cingulate regions, and the striatum were underestimated by the Logan plot (controls 4 – 12%; MCI 9 – 16%) and overestimated by the SUVR (controls 8 – 16%; MCI 16 – 24%). This bias was higher in the MCI group than in controls and was not observed with the RE plot or the bcSUVR. Conclusion The RE plot improves pixel-wise quantification of [11C]PiB dynamic PET compared to the conventional Logan plot. The bcSUVR results in lower bias and higher consistency of DVR estimates compared to SUVR. The RE plot and the bcSUVR are practical quantitative approaches that improve the analysis of [11C]PiB studies. PMID:22414634
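
    For readers unfamiliar with the uncorrected quantity, a conventional SUVR is simply the ratio of mean target to mean reference-region activity over a late uptake window. The sketch below assumes hypothetical frame timing and a 50-70 min window; it does not implement the RE-plot-based correction, whose formula is not given in the abstract.

```python
import numpy as np

def suvr(target_tac, reference_tac, frame_mid_times, window=(50.0, 70.0)):
    """Ratio of mean target to mean reference activity over a late time window (minutes)."""
    sel = (frame_mid_times >= window[0]) & (frame_mid_times <= window[1])
    return target_tac[sel].mean() / reference_tac[sel].mean()
```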

  2. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to the intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is local entropy derived from a grey level distribution of the local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used. Therefore, our model can estimate the bias field more accurately. Finally, minimization of this energy function with a level set regularization term, image segmentation, and bias field estimation can be achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  3. Attentional Bias for Emotional Faces in Children with Generalized Anxiety Disorder

    Science.gov (United States)

    Waters, Allison M.; Mogg, Karin; Bradley, Brendan P.; Pine, Daniel S.

    2008-01-01

    Attentional bias for angry and happy faces in 7- to 12-year-old children with generalized anxiety disorder (GAD) is examined. Results suggest that an attentional bias toward threat faces depends on a certain degree of clinical severity and/or the type of anxiety diagnosis in children.

  4. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    Full Text Available This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  5. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  6. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
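
    The bias-stability and ARW figures quoted above are conventionally read off an Allan-deviation curve (bias instability near the curve's minimum, ARW from the tau = 1 s point of the -1/2-slope region). The sketch below computes a plain non-overlapping Allan deviation for a hypothetical gyro output record; it illustrates the generic characterisation, not the authors' test procedure.

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Non-overlapping Allan deviation of a rate signal sampled at fs Hz."""
    devs = []
    for tau in taus:
        m = max(1, int(round(tau * fs)))                   # samples per cluster
        n = len(rate) // m
        means = rate[: n * m].reshape(n, m).mean(axis=1)
        devs.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(devs)

fs = 100.0                                                 # Hz, hypothetical sampling rate
rate = np.random.default_rng(0).normal(0.0, 0.5, 360_000)  # 1 h of synthetic output, deg/h
print(allan_deviation(rate, fs, np.logspace(-1, 2, 20)))
```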

  7. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Science.gov (United States)

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and essentialness of sampling bias correction within MaxEnt.
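
    The sampling-bias grids described above can be approximated with a "target-group" effort surface: rasterise all georeferenced records of the broader taxon and smooth them. The sketch below is a generic illustration with hypothetical coordinates, grid edges and smoothing width, not the authors' exact preprocessing; MaxEnt would then consume the resulting grid as its bias file.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_grid(lons, lats, lon_edges, lat_edges, smooth_sigma=1.0):
    """Relative survey-effort grid from record density of the target group."""
    H, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
    H = gaussian_filter(H, sigma=smooth_sigma)             # spread effort beyond exact record cells
    return H / H.max() + 1e-6                              # scale to (0, 1]; avoid zero weights
```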

  8. Non-Gaussian bias: insights from discrete density peaks

    CERN Document Server

    Desjacques, Vincent; Riotto, Antonio

    2013-01-01

    Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out tha...

  9. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization.

    Science.gov (United States)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-09-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum and pons in HRRT brain images has been reported. The two main sources of the problem with MAP-TR are poor bone/soft tissue segmentation below the brain and overestimation of bone mass in the skull. We developed the new transmission processing with total variation (TXTV) method that introduces scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Comparing MAP-TR and the new TXTV with gold standard CT-based attenuation correction, we found that TXTV has less bias as compared to MAP-TR. We also compared images acquired at the HRRT scanner using TXTV to the GE Advance scanner images and found high quantitative correspondence. TXTV has been used to reconstruct more than 4000 HRRT scans at seven different sites with no reports of biases. TXTV-based reconstruction is recommended for human brain scans on the HRRT.

  10. A New Source Biasing Approach in ADVANTG

    International Nuclear Information System (INIS)

    Bevill, Aaron M.; Mosher, Scott W.

    2012-01-01

    The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF-format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF-format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under
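
    The weight problem described above can be reproduced with a toy biased, rejection-sampled source: because the rejection step is not reflected in the history weights, the mean history weight w̄ drifts away from 1 and a post-processing correction factor is needed. The following sketch is purely schematic (the geometry and biasing pdf are invented for illustration; it is not ADVANTG or MCNP code).

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_history_weight():
    """Sample a position in the unit-square bounding region with spatial biasing
    (pdf 2x, favouring large x) and reject it unless it lies inside the source
    "cell", here a disc of radius 0.25 centred at (0.7, 0.5)."""
    while True:
        x = np.sqrt(rng.random())        # biased sampling: pdf = 2x on [0, 1]
        y = rng.random()
        w = 1.0 / (2.0 * x)              # weight = unbiased pdf / biased pdf
        if (x - 0.7) ** 2 + (y - 0.5) ** 2 <= 0.25 ** 2:
            return w                     # accepted; weight is NOT renormalised

weights = np.array([sample_history_weight() for _ in range(20_000)])
print(f"mean history weight w_bar = {weights.mean():.3f}")   # differs from 1 in general

# Because the rejection step is not accounted for in the per-history weight,
# tallies must be rescaled by a w_bar correction factor in post-processing
# to keep the game fair.
```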

  11. Non-Gaussian halo assembly bias

    International Nuclear Information System (INIS)

    Reid, Beth A.; Verde, Licia; Dolag, Klaus; Matarrese, Sabino; Moscardini, Lauro

    2010-01-01

    The strong dependence of the large-scale dark matter halo bias on the (local) non-Gaussianity parameter, f_NL, offers a promising avenue towards constraining primordial non-Gaussianity with large-scale structure surveys. In this paper, we present the first detection of the dependence of the non-Gaussian halo bias on halo formation history using N-body simulations. We also present an analytic derivation of the expected signal based on the extended Press-Schechter formalism. In excellent agreement with our analytic prediction, we find that the halo formation history-dependent contribution to the non-Gaussian halo bias (which we call non-Gaussian halo assembly bias) can be factorized in a form approximately independent of redshift and halo mass. The correction to the non-Gaussian halo bias due to the halo formation history can be as large as 100%, with a suppression of the signal for recently formed halos and enhancement for old halos. This could in principle be a problem for realistic galaxy surveys if observational selection effects were to pick galaxies occupying only recently formed halos. Current semi-analytic galaxy formation models, for example, imply an enhancement in the expected signal of ∼ 23% and ∼ 48% for galaxies at z = 1 selected by stellar mass and star formation rate, respectively.

  12. Updated prevalence rates of overweight and obesity in 11- to 17-year-old adolescents in Germany. Results from the telephone-based KiGGS Wave 1 after correction for bias in self-reports.

    Science.gov (United States)

    Brettschneider, Anna-Kristin; Schaffrath Rosario, Angelika; Kuhnert, Ronny; Schmidt, Steffen; Wiegand, Susanna; Ellert, Ute; Kurth, Bärbel-Maria

    2015-11-06

    The nationwide "German Health Interview and Examination Survey for Children and Adolescents" (KiGGS), conducted in 2003-2006, showed an increase in the prevalence rates of overweight and obesity compared to the early 1990s, indicating the need for regularly monitoring. Recently, a follow-up-KiGGS Wave 1 (2009-2012)-was carried out as a telephone-based survey, providing self-reported height and weight. Since self-reports lead to a bias in prevalence rates of weight status, a correction is needed. The aim of the present study is to obtain updated prevalence rates for overweight and obesity for 11- to 17-year olds living in Germany after correction for bias in self-reports. In KiGGS Wave 1, self-reported height and weight were collected from 4948 adolescents during a telephone interview. Participants were also asked about their body perception. From a subsample of KiGGS Wave 1 participants, measurements for height and weight were collected in a physical examination. In order to correct prevalence rates derived from self-reports, weight status categories based on self-reported and measured height and weight were used to estimate a correction formula according to an established procedure under consideration of body perception. The correction procedure was applied and corrected rates were estimated. The corrected prevalence of overweight, including obesity, derived from KiGGS Wave 1, showed that the rate has not further increased compared to the KiGGS baseline survey (18.9 % vs. 18.8 % based on the German reference). The rates of overweight still remain at a high level. The results of KiGGS Wave 1 emphasise the significance of this health issue and the need for prevention of overweight and obesity in children and adolescents.
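
    The published correction formula itself is not reproduced in this record. The general idea, fitting a model of measured weight status on self-reports plus body perception in the examined subsample and applying it to the full telephone sample, can be sketched as follows; all variable names, cut-offs and simulated numbers are hypothetical stand-ins, not the KiGGS procedure (which uses age- and sex-specific reference curves rather than a fixed BMI threshold).

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def simulate(n, with_measured):
    """Toy data: self-reported BMI and body perception; measured status if available."""
    bmi_self = rng.normal(21.0, 3.5, n)
    perception = rng.integers(1, 6, n)        # 1 = "far too thin" ... 5 = "far too fat"
    df = pd.DataFrame({"bmi_self": bmi_self, "perception": perception})
    if with_measured:
        # measured BMI tends to exceed self-reports, more so for higher perception
        bmi_meas = bmi_self + rng.normal(0.8, 1.0, n) + 0.3 * (perception - 3)
        df["overweight_meas"] = (bmi_meas >= 25).astype(int)   # illustrative cut-off
    return df

subsample = simulate(800, with_measured=True)     # physical-examination subsample
full = simulate(4948, with_measured=False)        # telephone survey, self-reports only

# Correction model: P(measured overweight | self-report, body perception)
features = ["bmi_self", "perception"]
model = LogisticRegression().fit(subsample[features], subsample["overweight_meas"])

naive = (full["bmi_self"] >= 25).mean()                        # uncorrected prevalence
corrected = model.predict_proba(full[features])[:, 1].mean()   # bias-corrected prevalence
print(f"self-reported prevalence: {naive:.1%}   corrected: {corrected:.1%}")
```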

  13. EMC Diagnosis and Corrective Actions for Silicon Strip Tracker Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Arteche, F.; /CERN /Imperial Coll., London; Rivetta, C.; /SLAC

    2006-06-06

    The tracker sub-system is one of the five sub-detectors of the Compact Muon Solenoid (CMS) experiment under construction at CERN for the Large Hadron Collider (LHC) accelerator. The tracker subdetector is designed to reconstruct tracks of charged sub-atomic particles generated after collisions. The tracker system processes analogue signals from 10 million channels distributed across 14000 silicon micro-strip detectors. It is designed to process signals of a few nA and digitize them at 40 MHz. The overall sub-detector is embedded in a high particle radiation environment and a magnetic field of 4 Tesla. The evaluation of the electromagnetic immunity of the system is very important to optimize the performance of the tracker sub-detector and the whole CMS experiment. This paper presents the EMC diagnosis of the CMS silicon tracker sub-detector. Immunity tests were performed using the final prototype of the Silicon Tracker End-Caps (TEC) system to estimate the sensitivity of the system to conducted noise, evaluate the weakest areas of the system and take corrective actions before the integration of the overall detector. This paper shows the results of one of those tests: the measurement and analysis of the immunity to common-mode (CM) external conducted noise perturbations.

  14. Biases in cost measurement for economic evaluation studies in health care.

    Science.gov (United States)

    Jacobs, P; Baladi, J F

    1996-01-01

    This paper addresses the issue of biases in the cost measures used in economic evaluation studies. The basic measure of hospital costs used by most investigators is unit cost. Focusing on this measure, a set of criteria was identified that the basic measure must fulfil in order to approximate the marginal cost (MC) of a service for the relevant product at the representative site. Four distinct biases were then identified (a scale bias, a case-mix bias, a methods bias and a site-selection bias), each of which reflects the divergence of the unit cost measure from the desired MC measure. Measures are proposed for several of these biases, and it is suggested how they can be corrected.

  15. Attenuation correction for the large non-human primate brain imaging using microPET

    International Nuclear Information System (INIS)

    Naidoo-Variawa, S; Lehnert, W; Kassiou, M; Banati, R; Meikle, S R

    2010-01-01

    Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (Papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a 57Co transmission point source with a 4% energy window. The optimal energy window for a 68Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for 57Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [18F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass 57Co (4% energy window) or 68Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.

  16. Novel use of gamma correction for precise 99mTc-HDP pinhole bone scan diagnosis and classification of knee occult fractures

    International Nuclear Information System (INIS)

    Bahk, Yong-Whee; Jeon, Ho-Seung; Kim, Jang Min; Park, Jung Mee; Kim, Sung-Hoon; Chung, Soo-Kyo; Chung, Yong-An; Kim, E.E.

    2010-01-01

    The aim of this study was to introduce gamma correction pinhole bone scan (GCPBS) to depict specific signs of knee occult fractures (OF) on 99mTc-hydroxydiphosphonate (HDP) scan. Thirty-six cases of six different types of knee OF in 27 consecutive patients (male = 20, female = 7, and age = 18-86 years) were enrolled. The diagnosis was made on the basis of a history of acute or subacute knee trauma, local pain, tenderness, cutaneous injury, negative conventional radiography, and positive magnetic resonance imaging (MRI). Because of the impracticability of histological verification of individual OF, MRI was utilized as a gold standard of diagnosis and classification. All patients had 99mTc-HDP bone scanning and supplementary GCPBS. GCPBS signs were correlated and compared with those of MRI. The efficacy of gamma correction of ordinary parallel collimator and pinhole collimator scans was collated. Gamma correction pinhole bone scan depicted the signs characteristic of six different types of OF. They were well-defined stuffed globular tracer uptake in geographic I fractures (n = 9), block-like uptake in geographic II fractures (n = 7), simple or branching linear uptake in linear cancellous fractures (n = 4), compression in impacted fractures (n = 2), stippled-serpentine uptake in reticular fractures (n = 11), and irregular subcortical uptake in osteochondral fractures (n = 3). All fractures were equally well or more distinctly depicted on GCPBS than on MRI except geographic II fracture, the details of which were not appreciated on GCPBS. Parallel collimator scan also yielded to gamma correction, but the results were inferior to those of the pinhole scan. Gamma correction pinhole bone scan can depict the specific diagnostic signs in six different types of knee occult fractures. The specific diagnostic capability along with the lower cost and wider global availability of bone scanning would make GCPBS an effective alternative. (orig.)
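
    Gamma correction itself is a simple intensity remapping of the count image. The sketch below shows the generic operation on a synthetic array; it is illustrative only and not the authors' processing chain or parameter choice.

```python
import numpy as np

def gamma_correct(image, gamma):
    """Apply gamma correction to a 2-D count image.

    Counts are first normalised to [0, 1]; gamma < 1 brightens faint uptake,
    gamma > 1 suppresses it, changing how subtle tracer patterns stand out.
    """
    img = image.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return img ** gamma

# Example with a synthetic pinhole image containing a faint linear uptake
rng = np.random.default_rng(0)
scan = rng.poisson(5.0, size=(128, 128)).astype(float)
scan[60:62, 30:100] += 4.0                     # faint "fracture line"
enhanced = gamma_correct(scan, gamma=0.5)      # brighten low-count structure
print(enhanced[61, 60] - enhanced[10, 10])     # the faint line stands out more
```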

  17. Rational Learning and Information Sampling: On the "Naivety" Assumption in Sampling Explanations of Judgment Biases

    Science.gov (United States)

    Le Mens, Gael; Denrell, Jerker

    2011-01-01

    Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them.…

  18. Biases in GNSS-Data Processing

    Science.gov (United States)

    Schaer, S. C.; Dach, R.; Lutz, S.; Meindl, M.; Beutler, G.

    2010-12-01

    Within the Global Positioning System (GPS) traditionally different types of pseudo-range measurements (P-code, C/A-code) are available on the first frequency that are tracked by the receivers with different technologies. For that reason, P1-C1 and P1-P2 Differential Code Biases (DCB) need to be considered in a GPS data processing with a mix of different receiver types. Since the Block IIR-M series of GPS satellites also provide C/A-code on the second frequency, P2-C2 DCB need to be added to the list of biases for maintenance. Potential quarter-cycle biases between different phase observables (specifically L2P and L2C) are another issue. When combining GNSS (currently GPS and GLONASS), careful consideration of inter-system biases (ISB) is indispensable, in particular when an adequate combination of individual GLONASS clock correction results from different sources (using, e.g., different software packages) is intended. Facing the GPS and GLONASS modernization programs and the upcoming GNSS, like the European Galileo and the Chinese Compass, an increasing number of types of biases is expected. The Center for Orbit Determination in Europe (CODE) is monitoring these GPS and GLONASS related biases for a long time based on RINEX files of the tracking network of the International GNSS Service (IGS) and in the frame of the data processing as one of the global analysis centers of the IGS. Within the presentation we give an overview on the stability of the biases based on the monitoring. Biases derived from different sources are compared. Finally, we give an outlook on the potential handling of such biases with the big variety of signals and systems expected in the future.

  19. An accurate filter loading correction is essential for assessing personal exposure to black carbon using an Aethalometer.

    Science.gov (United States)

    Good, Nicholas; Mölter, Anna; Peel, Jennifer L; Volckens, John

    2017-07-01

    The AE51 micro-Aethalometer (microAeth) is a popular and useful tool for assessing personal exposure to particulate black carbon (BC). However, few users of the AE51 are aware that its measurements are biased low (by up to 70%) due to the accumulation of BC on the filter substrate over time; previous studies of personal black carbon exposure are likely to have suffered from this bias. Although methods to correct for bias in micro-Aethalometer measurements of particulate black carbon have been proposed, these methods have not been verified in the context of personal exposure assessment. Here, five Aethalometer loading correction equations based on published methods were evaluated. Laboratory-generated aerosols of varying black carbon content (ammonium sulfate, Aquadag and NIST diesel particulate matter) were used to assess the performance of these methods. Filters from a personal exposure assessment study were also analyzed to determine how the correction methods performed for real-world samples. Standard correction equations produced correction factors with root mean square errors of 0.10 to 0.13 and mean bias within ±0.10. An optimized correction equation is also presented, along with sampling recommendations for minimizing bias when assessing personal exposure to BC using the AE51 micro-Aethalometer.
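
    The optimised equation of the paper is not reproduced here, but a widely used generic form of the filter-loading correction (a Virkkula-type linear function of the filter attenuation ATN) can be sketched as follows; the loading parameter k is an assumed illustrative value that would normally be fitted per study.

```python
import numpy as np

def correct_loading(bc_raw, atn, k=0.004):
    """Virkkula-type filter-loading correction for micro-Aethalometer data.

    bc_raw : raw BC concentrations reported by the instrument (ng/m3)
    atn    : filter attenuation (ATN) logged alongside each reading
    k      : loading parameter; 0.004 is an assumed illustrative value
    """
    return bc_raw * (1.0 + k * np.asarray(atn))

# Example: the same true concentration read at increasing filter loading
atn = np.linspace(0, 100, 6)
bc_raw = 5000.0 / (1.0 + 0.004 * atn)     # simulated low-biased raw readings
print(correct_loading(bc_raw, atn))       # recovers ~5000 ng/m3 at all loadings
```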

  20. Experimenter Confirmation Bias and the Correction of Science Misconceptions

    Science.gov (United States)

    Allen, Michael; Coole, Hilary

    2012-01-01

    This paper describes a randomised educational experiment (n = 47) that examined two different teaching methods and compared their effectiveness at correcting one science misconception using a sample of trainee primary school teachers. The treatment was designed to promote engagement with the scientific concept by eliciting emotional responses from…

  1. Lactose intolerance: from diagnosis to correct management.

    Science.gov (United States)

    Di Rienzo, T; D'Angelo, G; D'Aversa, F; Campanale, M C; Cesario, V; Montalto, M; Gasbarrini, A; Ojetti, V

    2013-01-01

    This review discusses one of the most relevant problems in gastrointestinal clinical practice: lactose intolerance. It covers the role of lactase-persistence alleles, the diagnosis of lactose malabsorption, the development of lactose intolerance symptoms, and their management. Most people are born with the ability to digest lactose, the major carbohydrate in milk and the main source of nutrition until weaning. Approximately 75% of the world's population loses this ability at some point, while others can digest lactose into adulthood. Symptoms of lactose intolerance include abdominal pain, bloating, flatulence and diarrhea, with considerable intraindividual and interindividual variability in severity. Diagnosis is most commonly performed by the non-invasive lactose hydrogen breath test. Management of lactose intolerance consists of two possible, not mutually exclusive, clinical choices: dietary restriction and drug therapy.

  2. On the Upward Bias of the Dissimilarity Index and Its Corrections

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-01-01

    The dissimilarity index of Duncan and Duncan is widely used in a broad range of contexts to assess the overall extent of segregation in the allocation of two groups in two or more units. Its sensitivity to random allocation implies an upward bias with respect to the unknown amount of systematic segregation. In this article, following a multinomial…
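
    For reference, the Duncan and Duncan dissimilarity index over units i is D = 0.5 * Σ|a_i/A - b_i/B|, where a_i and b_i are the counts of the two groups in unit i and A, B their totals. The short sketch below computes D and illustrates the upward bias under purely random allocation that the article's corrections address (synthetic data, not the article's examples).

```python
import numpy as np

def dissimilarity_index(group_a, group_b):
    """Duncan & Duncan dissimilarity index for two groups across units."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    return 0.5 * np.abs(a / a.sum() - b / b.sum()).sum()

# Even under purely random allocation, D is positive in expectation,
# which is the upward bias the corrections discussed in the article target.
rng = np.random.default_rng(0)
units = 50
a = rng.multinomial(200, np.ones(units) / units)
b = rng.multinomial(200, np.ones(units) / units)
print(dissimilarity_index(a, b))   # > 0 despite no systematic segregation
```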

  3. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub
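
    The core of the correction described above is easy to state: the time-mean analysis increment over a training period, divided by the 6-hr assimilation window, estimates the systematic tendency error and can be added back as a forcing term. The sketch below is schematic (random numbers stand in for archived increments; it is not GFS code).

```python
import numpy as np

# analysis_increments: archived (analysis - 6 hr forecast) fields,
# shape = (n_cycles, nlat, nlon); random numbers used here as a stand-in.
rng = np.random.default_rng(0)
analysis_increments = rng.normal(0.0, 0.5, size=(1200, 90, 180))

dt_assim = 6 * 3600.0                                          # assimilation window [s]
bias_tendency = analysis_increments.mean(axis=0) / dt_assim    # estimated error per second

def corrected_tendency(model_tendency):
    """Add the estimated bias tendency as a forcing term (online correction)."""
    return model_tendency + bias_tendency

# In practice the correction would be estimated separately per season and per
# hour of day to capture the diurnal and semidiurnal components mentioned above.
print(bias_tendency.shape)
```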

  4. Phonon-induced renormalization of the electron spectrum of biased bilayer graphene

    Science.gov (United States)

    Kryuchkov, S. V.; Kukhar, E. I.

    2018-05-01

    The effect of the electron-phonon interaction on the electron subsystem of bilayer graphene has been investigated for the case when there is a potential bias between the graphene layers. The electron-phonon interaction is shown to increase the curvature of the lower dispersion branch of the conduction band of the bigraphene in the vicinity of the Dirac point, which corresponds to a decrease in the absolute value of the electron effective mass. The corresponding correction to the effective mass has been calculated, and its dependence on the bias has been investigated. The influence of this effect on the bigraphene conductivity is discussed.

  5. Impacts of altimeter corrections on local linear sea level trends around Taiwan

    DEFF Research Database (Denmark)

    Cheng, Yongcun; Andersen, Ole Baltazar

    2013-01-01

    .e. the inverted barometer correction, wet tropospheric correction, and sea state bias correction, have significant impacts on the determination of local LSLT. The trend of default corrections contribute more than 1.4 mm year-1 along the coastline of China mainland and 2.1 mm year-1 to local LSLT in the Taiwan...

  6. Social biases determine spatiotemporal sparseness of ciliate mating heuristics.

    Science.gov (United States)

    Clark, Kevin B

    2012-01-01

    Ciliates become highly social, even displaying animal-like qualities, in the joint presence of aroused conspecifics and nonself mating pheromones. Pheromone detection putatively helps trigger instinctual and learned courtship and dominance displays from which social judgments are made about the availability, compatibility, and fitness representativeness or likelihood of prospective mates and rivals. In earlier studies, I demonstrated the heterotrich Spirostomum ambiguum improves mating competence by effecting preconjugal strategies and inferences in mock social trials via behavioral heuristics built from Hebbian-like associative learning. Heuristics embody serial patterns of socially relevant action that evolve into ordered, topologically invariant computational networks supporting intra- and intermate selection. S. ambiguum employs heuristics to acquire, store, plan, compare, modify, select, and execute sets of mating propaganda. One major adaptive constraint over formation and use of heuristics involves a ciliate's initial subjective bias, responsiveness, or preparedness, as defined by Stevens' Law of subjective stimulus intensity, for perceiving the meaningfulness of mechanical pressures accompanying cell-cell contacts and additional perimating events. This bias controls durations and valences of nonassociative learning, search rates for appropriate mating strategies, potential net reproductive payoffs, levels of social honesty and deception, successful error diagnosis and correction of mating signals, use of insight or analysis to solve mating dilemmas, bioenergetics expenditures, and governance of mating decisions by classical or quantum statistical mechanics. I now report this same social bias also differentially affects the spatiotemporal sparseness, as measured with metric entropy, of ciliate heuristics. Sparseness plays an important role in neural systems through optimizing the specificity, efficiency, and capacity of memory representations. The present

  7. Social biases determine spatiotemporal sparseness of ciliate mating heuristics

    Science.gov (United States)

    2012-01-01

    Ciliates become highly social, even displaying animal-like qualities, in the joint presence of aroused conspecifics and nonself mating pheromones. Pheromone detection putatively helps trigger instinctual and learned courtship and dominance displays from which social judgments are made about the availability, compatibility, and fitness representativeness or likelihood of prospective mates and rivals. In earlier studies, I demonstrated the heterotrich Spirostomum ambiguum improves mating competence by effecting preconjugal strategies and inferences in mock social trials via behavioral heuristics built from Hebbian-like associative learning. Heuristics embody serial patterns of socially relevant action that evolve into ordered, topologically invariant computational networks supporting intra- and intermate selection. S. ambiguum employs heuristics to acquire, store, plan, compare, modify, select, and execute sets of mating propaganda. One major adaptive constraint over formation and use of heuristics involves a ciliate’s initial subjective bias, responsiveness, or preparedness, as defined by Stevens’ Law of subjective stimulus intensity, for perceiving the meaningfulness of mechanical pressures accompanying cell-cell contacts and additional perimating events. This bias controls durations and valences of nonassociative learning, search rates for appropriate mating strategies, potential net reproductive payoffs, levels of social honesty and deception, successful error diagnosis and correction of mating signals, use of insight or analysis to solve mating dilemmas, bioenergetics expenditures, and governance of mating decisions by classical or quantum statistical mechanics. I now report this same social bias also differentially affects the spatiotemporal sparseness, as measured with metric entropy, of ciliate heuristics. Sparseness plays an important role in neural systems through optimizing the specificity, efficiency, and capacity of memory representations. The

  8. Novel use of gamma correction for precise {sup 99m}Tc-HDP pinhole bone scan diagnosis and classification of knee occult fractures

    Energy Technology Data Exchange (ETDEWEB)

    Bahk, Yong-Whee [Sung Ae General Hospital, Department of Nuclear Medicine, Seoul (Korea); Jeon, Ho-Seung [Sung Ae General Hospital, Department of Orthopedic Surgery, Seoul (Korea); Kim, Jang Min [Sung Ae General Hospital, Department of Radiology, Seoul (Korea); Park, Jung Mee; Kim, Sung-Hoon; Chung, Soo-Kyo [The Catholic University of Korea, Department of Radiology, College of Medicine, Seoul (Korea); Chung, Yong-An [The Catholic University of Korea, Department of Radiology, College of Medicine, Seoul (Korea); Incheon St. Mary's Hospital, Institute of Catholic Integrative Medicine (ICIM), Incheon (Korea); Incheon St. Mary's Hospital, The Catholic University of Korea, Department of Radiology, Incheon (Korea); Kim, E.E. [University of Texas MD Anderson Cancer Center, Department of Radiology and Nuclear Medicine, Houston, TX (United States)

    2010-08-15

    The aim of this study was to introduce gamma correction pinhole bone scan (GCPBS) to depict specific signs of knee occult fractures (OF) on {sup 99m}Tc-hydroxydiphosphonate (HDP) scan. Thirty-six cases of six different types of knee OF in 27 consecutive patients (male = 20, female = 7, and age = 18-86 years) were enrolled. The diagnosis was made on the basis of a history of acute or subacute knee trauma, local pain, tenderness, cutaneous injury, negative conventional radiography, and positive magnetic resonance imaging (MRI). Because of the impracticability of histological verification of individual OF, MRI was utilized as a gold standard of diagnosis and classification. All patients had {sup 99m}Tc-HDP bone scanning and supplementary GCPBS. GCPBS signs were correlated and compared with those of MRI. The efficacy of gamma correction of ordinary parallel collimator and pinhole collimator scans were collated. Gamma correction pinhole bone scan depicted the signs characteristic of six different types of OF. They were well defined stuffed globular tracer uptake in geographic I fractures (n = 9), block-like uptake in geographic II fractures (n = 7), simple or branching linear uptake in linear cancellous fractures (n = 4), compression in impacted fractures (n = 2), stippled-serpentine uptake in reticular fractures (n = 11), and irregular subcortical uptake in osteochondral fractures (n = 3). All fractures were equally well or more distinctly depicted on GCPBS than on MRI except geographic II fracture, the details of which were not appreciated on GCPBS. Parallel collimator scan also yielded to gamma correction, but the results were inferior to those of the pinhole scan. Gamma correction pinhole bone scan can depict the specific diagnostic signs in six different types of knee occult fractures. The specific diagnostic capability along with the lower cost and wider global availability of bone scanning would make GCPBS an effective alternative. (orig.)

  9. Relative equilibrium plot improves graphical analysis and allows bias correction of standardized uptake value ratio in quantitative 11C-PiB PET studies.

    Science.gov (United States)

    Zhou, Yun; Sojkova, Jitka; Resnick, Susan M; Wong, Dean F

    2012-04-01

    Both the standardized uptake value ratio (SUVR) and the Logan plot result in biased distribution volume ratios (DVRs) in ligand-receptor dynamic PET studies. The objective of this study was to use a recently developed relative equilibrium-based graphical (RE) plot method to improve and simplify the 2 commonly used methods for quantification of 11C-Pittsburgh compound B (11C-PiB) PET. The overestimation of DVR in SUVR was analyzed theoretically using the Logan and the RE plots. A bias-corrected SUVR (bcSUVR) was derived from the RE plot. Seventy-eight 11C-PiB dynamic PET scans (66 from controls and 12 from participants with mild cognitive impairment [MCI] from the Baltimore Longitudinal Study of Aging) were acquired over 90 min. Regions of interest (ROIs) were defined on coregistered MR images. Both the ROI and the pixelwise time-activity curves were used to evaluate the estimates of DVR. DVRs obtained using the Logan plot applied to ROI time-activity curves were used as a reference for comparison of DVR estimates. Results from the theoretic analysis were confirmed by human studies. ROI estimates from the RE plot and the bcSUVR were nearly identical to those from the Logan plot with ROI time-activity curves. In contrast, ROI estimates from DVR images in frontal, temporal, parietal, and cingulate regions and the striatum were underestimated by the Logan plot (controls, 4%-12%; MCI, 9%-16%) and overestimated by the SUVR (controls, 8%-16%; MCI, 16%-24%). This bias was higher in the MCI group than in controls. The bcSUVR provided lower bias and higher consistency of DVR estimates than the SUVR. The RE plot and the bcSUVR are practical quantitative approaches that improve the analysis of 11C-PiB studies.
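
    For orientation, the uncorrected SUVR is simply the ratio of late-window uptake in a target region to that in a reference region, and the reference-tissue Logan DVR is the slope of a graphical plot built from integrated time-activity curves. The sketch below shows these two standard quantities only; the bias-corrected SUVR derived from the RE plot in the paper is not reproduced, and the time window and t* value are assumptions.

```python
import numpy as np

def suvr(target_tac, ref_tac, frame_mid_times, window=(70.0, 90.0)):
    """Standardized uptake value ratio over a late time window (minutes)."""
    sel = (frame_mid_times >= window[0]) & (frame_mid_times <= window[1])
    return target_tac[sel].mean() / ref_tac[sel].mean()

def logan_dvr(target_tac, ref_tac, frame_mid_times, t_star=35.0):
    """Reference-tissue Logan plot slope as a DVR estimate.

    Uses the simplified formulation that neglects the k2' term; points after
    t_star (minutes) are assumed to lie in the linear regime.
    """
    int_target = np.cumsum(target_tac * np.gradient(frame_mid_times))
    int_ref = np.cumsum(ref_tac * np.gradient(frame_mid_times))
    x = int_ref / target_tac
    y = int_target / target_tac
    sel = frame_mid_times >= t_star
    slope, _ = np.polyfit(x[sel], y[sel], 1)
    return slope

# Toy example: proportional curves with a true DVR of about 1.4
t = np.linspace(1.0, 90.0, 30)                 # frame mid-times [min]
ref = np.exp(-t / 40.0)                        # toy reference-region TAC
target = 1.4 * np.exp(-t / 40.0)               # toy target-region TAC
print(suvr(target, ref, t), logan_dvr(target, ref, t))
```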

  10. Motion, identity and the bias toward agency

    Directory of Open Access Journals (Sweden)

    Chris eFields

    2014-08-01

    Full Text Available The well-documented human bias toward agency as a cause and therefore an explanation of observed events is typically attributed to evolutionary selection for a social brain. Based on a review of developmental and adult behavioral and neurocognitive data, it is argued that the bias toward agency is a result of the default human solution, developed during infancy, to the computational requirements of object re-identification over gaps in observation of more than a few seconds. If this model is correct, overriding the bias toward agency to construct mechanistic explanations of observed events requires structure-mapping inferences, implemented by the pre-motor action planning system, that replace agents with mechanisms as causes of unobserved changes in contextual or featural properties of objects. Experiments that would test this model are discussed.

  11. A Correction Method for UAV Helicopter Airborne Temperature and Humidity Sensor

    Directory of Open Access Journals (Sweden)

    Longqing Fan

    2017-01-01

    Full Text Available This paper presents a correction method for UAV helicopter airborne temperature and humidity measurements, comprising an error correction scheme and a bias-calibration scheme. Because the rotor downwash flow inevitably introduces measurement error into helicopter airborne sensors, the error correction scheme constructs a model relating the rotor-induced velocity to temperature and humidity by building the heat balance equation for the platinum resistor temperature sensor and the pressure correction term for the humidity sensor. The induced velocity at a spatial point below the rotor disc plane can be calculated as the sum of the induced velocities excited by the center line vortex, rotor disk vortex, and skew cylinder vortex, based on generalized vortex theory. In order to minimize the systematic biases, the bias-calibration scheme adopts a multiple linear regression to achieve a systematically consistent result with the tethered balloon profiles. Two temperature and humidity sensors were mounted on a "Z-5" UAV helicopter in the field experiment. Overall, the result of applying the calibration method shows that the temperature and relative humidity obtained by the UAV helicopter closely align with the tethered balloon profiles in providing measurements of the temperature and humidity profiles within marine atmospheric boundary layers.

  12. Diagnostic uncertainty and recall bias in chronic low back pain.

    Science.gov (United States)

    Serbic, Danijela; Pincus, Tamar

    2014-08-01

    Patients' beliefs about the origin of their pain and their cognitive processing of pain-related information have both been shown to be associated with poorer prognosis in low back pain (LBP), but the relationship between specific beliefs and specific cognitive processes is not known. The aim of this study was to examine the relationship between diagnostic uncertainty and recall bias in 2 groups of chronic LBP patients, those who were certain about their diagnosis and those who believed that their pain was due to an undiagnosed problem. Patients (N=68) endorsed and subsequently recalled pain, illness, depression, and neutral stimuli. They also provided measures of pain, diagnostic status, mood, and disability. Both groups exhibited a recall bias for pain stimuli, but only the group with diagnostic uncertainty also displayed a recall bias for illness-related stimuli. This bias remained after controlling for depression and disability. Sensitivity analyses using grouping by diagnosis/explanation received supported these findings. Higher levels of depression and disability were found in the group with diagnostic uncertainty, but levels of pain intensity did not differ between the groups. Although the methodology does not provide information on causality, the results provide evidence for a relationship between diagnostic uncertainty and recall bias for negative health-related stimuli in chronic LBP patients. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  13. Generation of Unbiased Ionospheric Corrections in Brazilian Region for GNSS positioning based on SSR concept

    Science.gov (United States)

    Monico, J. F. G.; De Oliveira, P. S., Jr.; Morel, L.; Fund, F.; Durand, S.; Durand, F.

    2017-12-01

    Mitigation of ionospheric effects on GNSS (Global Navigation Satellite System) signals is very challenging, especially for GNSS positioning applications based on the SSR (State Space Representation) concept, which requires knowledge of spatially correlated errors at a considerable accuracy level (centimeters). The presence of satellite and receiver hardware biases in GNSS measurements hampers the proper estimation of ionospheric corrections, reducing their physical meaning. This problem can lead to ionospheric corrections biased by several meters and often presenting negative values, which is physically not possible. In this contribution, we discuss a strategy to obtain SSR ionospheric corrections based on GNSS measurements from CORS (Continuous Operation Reference Stations) networks with minimal presence of hardware biases and, consequently, with physical meaning. Preliminary results are presented on the generation and application of such corrections for simulated users located in the Brazilian region under a high level of ionospheric activity.

  14. EVOLUTION OF THE MERGER-INDUCED HYDROSTATIC MASS BIAS IN GALAXY CLUSTERS

    International Nuclear Information System (INIS)

    Nelson, Kaylea; Nagai, Daisuke; Rudd, Douglas H.; Shaw, Laurie

    2012-01-01

    In this work, we examine the effects of mergers on the hydrostatic mass estimate of galaxy clusters using high-resolution Eulerian cosmological simulations. We utilize merger trees to isolate the last merger for each cluster in our sample and follow the time evolution of the hydrostatic mass bias as the systems relax. We find that during a merger, a shock propagates outward from the parent cluster, resulting in an overestimate in the hydrostatic mass bias. After the merger, as a cluster relaxes, the bias in hydrostatic mass estimate decreases but remains at a level of -5% to -10% with 15%-20% scatter within r500. We also investigate the post-merger evolution of the pressure support from bulk motions, a dominant cause of this residual mass bias. At r500, the contribution from random motions peaks at 30% of the total pressure during the merger and quickly decays to ∼10%-15% as a cluster relaxes. Additionally, we use a measure of the random motion pressure to correct the hydrostatic mass estimate. We discover that 4 Gyr after mergers, the direct effects of the merger event on the hydrostatic mass bias have become negligible. Thereafter, the mass bias is primarily due to residual bulk motions in the gas which are not accounted for in the hydrostatic equilibrium equation. We present a hydrostatic mass bias correction method that can recover the unbiased cluster mass for relaxed clusters with 9% scatter at r500 and 11% scatter in the outskirts, within r200.

  15. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. The results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
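
    The recommended permutation test can be sketched generically: the cross-validated accuracy obtained with the true labels is compared against a null distribution built by repeating the identical cross-validation with shuffled labels. The example below uses scikit-learn on random data and is not the authors' code; the classifier choice, fold count and number of permutations are arbitrary.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))                 # random features: no real signal
y = np.array([0, 1] * 20)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = SVC(kernel="linear")
observed = cross_val_score(clf, X, y, cv=cv).mean()

# Null distribution: same cross-validation scheme, labels permuted
n_perm = 500
null = np.array([
    cross_val_score(clf, X, rng.permutation(y), cv=cv).mean()
    for _ in range(n_perm)
])
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"observed accuracy = {observed:.2f}, permutation p = {p_value:.3f}")

# The permutation p-value reflects the dependence induced by cross-validation;
# a binomial test on the pooled predictions assumes independent trials and
# tends to overstate significance, as discussed in the abstract above.
```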

  16. Propensity score matching and persistence correction to reduce bias in comparative effectiveness: the effect of cinacalcet use on all-cause mortality.

    Science.gov (United States)

    Gillespie, Iain A; Floege, Jürgen; Gioni, Ioanna; Drüeke, Tilman B; de Francisco, Angel L; Anker, Stefan D; Kubo, Yumi; Wheeler, David C; Froissart, Marc

    2015-07-01

    The generalisability of randomised controlled trials (RCTs) may be limited by restrictive entry criteria or by their experimental nature. Observational research can provide complementary findings but is prone to bias. Employing propensity score matching, to reduce such bias, we compared the real-life effect of cinacalcet use on all-cause mortality (ACM) with findings from the Evaluation of Cinacalcet Therapy to Lower Cardiovascular Events (EVOLVE) RCT in chronic haemodialysis patients. Incident adult haemodialysis patients receiving cinacalcet, recruited in a prospective observational cohort from 2007-2009 (AROii; n = 10,488), were matched to non-exposed patients regardless of future exposure status. The effect of treatment crossover was investigated with inverse probability of censoring weighted and lag-censored analyses. EVOLVE ACM data were analysed largely as described for the primary composite endpoint. AROii patients receiving cinacalcet (n = 532) were matched to 1790 non-exposed patients. The treatment effect of cinacalcet on ACM in the main AROii analysis (hazard ratio 1.03 [95% confidence interval (CI) 0.78-1.35]) was closer to the null than for the Intention to Treat (ITT) analysis of EVOLVE (0.94 [95%CI 0.85-1.04]). Adjusting for non-persistence by 0- and 6-month lag-censoring and by inverse probability of censoring weight, the hazard ratios in AROii (0.76 [95%CI 0.51-1.15], 0.84 [95%CI 0.60-1.18] and 0.79 [95%CI 0.56-1.11], respectively) were comparable with those of EVOLVE (0.82 [95%CI 0.67-1.01], 0.83 [95%CI 0.73-0.96] and 0.87 [95%CI 0.71-1.06], respectively). Correcting for treatment crossover, we observed results in the 'real-life' setting of the AROii observational cohort that closely mirrored the results of the EVOLVE RCT. Persistence-corrected analyses revealed a trend towards reduced ACM in haemodialysis patients receiving cinacalcet therapy. Copyright © 2015 John Wiley & Sons, Ltd.
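
    Independently of the clinical question, the matching step typically amounts to fitting a propensity model for exposure and pairing each exposed patient with nearest-neighbour unexposed patients on the estimated score. The sketch below is generic; the covariates, exposure mechanism and 1:3 matching ratio are invented for illustration and are not the study's protocol.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(62, 12, n),
    "pth": rng.lognormal(6.0, 0.6, n),          # hypothetical baseline covariates
    "diabetes": rng.integers(0, 2, n),
})
# Hypothetical exposure mechanism (treated patients tend to have higher PTH)
logit = -6.0 + 0.6 * np.log(df["pth"]) + 0.2 * df["diabetes"]
df["exposed"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# 1. Propensity score: P(exposure | covariates)
features = ["age", "pth", "diabetes"]
ps_model = LogisticRegression(max_iter=1000).fit(df[features], df["exposed"])
df["ps"] = ps_model.predict_proba(df[features])[:, 1]

# 2. Match each exposed patient to its 3 nearest unexposed neighbours on the score
exposed, controls = df[df["exposed"]], df[~df["exposed"]]
nn = NearestNeighbors(n_neighbors=3).fit(controls[["ps"]])
_, idx = nn.kneighbors(exposed[["ps"]])
matched_controls = controls.iloc[idx.ravel()]
print(len(exposed), "exposed matched to", len(matched_controls), "controls")
```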

  17. Weighted divergence correction scheme and its fast implementation

    Science.gov (United States)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2017-05-01

    Forcing the experimental volumetric velocity fields to satisfy mass conservation principles has proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. The paper proposes a variant of the existing DCS by adding weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS or WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from the TPIV measurement of a turbulent boundary layer. This shows that WDCS achieves a better performance than DCS in improving some flow statistics.

  18. Significant biases affecting abundance determinations

    Science.gov (United States)

    Wesson, Roger

    2015-08-01

    I have developed two highly efficient codes to automate analyses of emission line nebulae. The tools place particular emphasis on the propagation of uncertainties. The first tool, ALFA, uses a genetic algorithm to rapidly optimise the parameters of gaussian fits to line profiles. It can fit emission line spectra of arbitrary resolution, wavelength range and depth, with no user input at all. It is well suited to highly multiplexed spectroscopy such as that now being carried out with instruments such as MUSE at the VLT. The second tool, NEAT, carries out a full analysis of emission line fluxes, robustly propagating uncertainties using a Monte Carlo technique.Using these tools, I have found that considerable biases can be introduced into abundance determinations if the uncertainty distribution of emission lines is not well characterised. For weak lines, normally distributed uncertainties are generally assumed, though it is incorrect to do so, and significant biases can result. I discuss observational evidence of these biases. The two new codes contain routines to correctly characterise the probability distributions, giving more reliable results in analyses of emission line nebulae.

  19. Reducing Bias in Citizens’ Perception of Crime Rates: Evidence From a Field Experiment on Burglary Prevalence

    DEFF Research Database (Denmark)

    Larsen, Martin Vinæs; Olsen, Asmus Leth

    2018-01-01

    Citizens are on average too pessimistic when assessing the trajectory of current crime trends. In this study, we examine whether we can correct this perceptual bias with respect to burglaries. Using a field experiment coupled with a large panel survey (n=4,895), we explore whether a public...... information campaign can reduce misperceptions about the prevalence of burglaries. Embedding the correct information about burglary rates in a direct mail campaign, we find that it is possible to substantially reduce citizens’ misperceptions. The effects are not short lived – they are detectable several weeks...... after the mailer was sent, but they are temporary. Eventually the perceptual bias re-emerges. Our results suggest that if citizens were continually supplied with correct information about crime rates they would be less pessimistic. Reducing bias in citizens’ perception of crime rates might therefore...

  20. Effects of sertraline, duloxetine, vortioxetine, and idazoxan in the rat affective bias test

    DEFF Research Database (Denmark)

    Refsgaard, Louise Konradsen; Haubro, Kia; Pickering, Darryl S

    2016-01-01

    Rationale Affective biases seemingly play a crucial role for the onset and development of depression. Acute treatment with monoamine-based antidepressants positively influence emotional processing, and an early correction of biases likely results in repeated positive experiences that ultimately...... lead to improved mood. Objectives Using two conventional antidepressants, sertraline and duloxetine, we aimed to forward the characterization of a newly developed affective bias test (ABT) for rats. Further, we examined the effect of vortioxetine, a recently approved antidepressant, and the α2...... adrenoceptor antagonist idazoxan on affective biases....

  1. Nonlinear vs. linear biasing in Trp-cage folding simulations

    Energy Technology Data Exchange (ETDEWEB)

    Spiwok, Vojtěch, E-mail: spiwokv@vscht.cz; Oborský, Pavel; Králová, Blanka [Department of Biochemistry and Microbiology, University of Chemistry and Technology, Prague, Technická 3, Prague 6 166 28 (Czech Republic); Pazúriková, Jana [Institute of Computer Science, Masaryk University, Botanická 554/68a, 602 00 Brno (Czech Republic); Křenek, Aleš [Institute of Computer Science, Masaryk University, Botanická 554/68a, 602 00 Brno (Czech Republic); Center CERIT-SC, Masaryk Univerzity, Šumavská 416/15, 602 00 Brno (Czech Republic)

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  2. Downscaling RCP8.5 daily temperatures and precipitation in Ontario using localized ensemble optimal interpolation (EnOI) and bias correction

    Science.gov (United States)

    Deng, Ziwang; Liu, Jinliang; Qiu, Xin; Zhou, Xiaolan; Zhu, Huaiping

    2017-10-01

    A novel method for daily temperature and precipitation downscaling is proposed in this study, which combines the Ensemble Optimal Interpolation (EnOI) and bias correction techniques. For downscaling temperature, the day-to-day seasonal cycle of high-resolution temperature from the NCEP Climate Forecast System Reanalysis (CFSR) is used as the background state. An enlarged ensemble of daily temperature anomalies relative to this seasonal cycle and information from global climate models (GCMs) are used to construct a gain matrix for each calendar day. Consequently, the relationship between large- and local-scale processes represented by the gain matrix will change accordingly. The gain matrix contains information on the realistic spatial correlation of temperature between different CFSR grid points, between CFSR grid points and GCM grid points, and between different GCM grid points. Therefore, this downscaling method keeps spatial consistency and reflects the interaction between local geographic and atmospheric conditions. Maximum and minimum temperatures are downscaled using the same method. For precipitation, because of the non-Gaussianity issue, a logarithmic transformation is applied to daily total precipitation prior to conducting downscaling. Cross validation and independent data validation are used to evaluate this algorithm. Finally, data from a 29-member ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5) GCMs are downscaled to CFSR grid points in Ontario for the period from 1981 to 2100. The results show that this method is capable of generating high resolution details without changing large scale characteristics. It results in much lower absolute errors in local scale details at most grid points than simple spatial downscaling methods. Biases in the downscaled data inherited from GCMs are corrected with a linear method for temperatures and distribution mapping for precipitation. The downscaled ensemble projects significant warming with amplitudes of 3
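
    The two bias-correction steps mentioned at the end (a linear correction for temperature, distribution mapping for precipitation) can be written down generically; the empirical quantile-mapping version of distribution mapping is sketched below with synthetic data, as an illustration rather than the authors' implementation.

```python
import numpy as np

def linear_bias_correction(model_hist, model_fut, obs_hist):
    """Shift/scale correction for temperature-like variables."""
    scale = obs_hist.std() / model_hist.std()
    return (model_fut - model_hist.mean()) * scale + obs_hist.mean()

def quantile_mapping(model_hist, model_fut, obs_hist, n_quantiles=100):
    """Empirical distribution mapping for precipitation-like variables."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    # map each value through the model CDF onto the observed quantile function
    return np.interp(model_fut, model_q, obs_q)

# Toy example: a model that is too dry and too cold relative to "observations"
rng = np.random.default_rng(0)
obs_p, mod_p = rng.gamma(2.0, 4.0, 5000), rng.gamma(2.0, 3.0, 5000)
obs_t, mod_t = rng.normal(12.0, 5.0, 5000), rng.normal(10.0, 4.0, 5000)
print(np.mean(quantile_mapping(mod_p, mod_p, obs_p)), np.mean(obs_p))
print(np.mean(linear_bias_correction(mod_t, mod_t, obs_t)), np.mean(obs_t))
```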

  3. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To minimize...... ). To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions... for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing......

  4. Bias against research on gender bias.

    Science.gov (United States)

    Cislak, Aleksandra; Formanowicz, Magdalena; Saguy, Tamar

    2018-01-01

    The bias against women in academia is a documented phenomenon that has had detrimental consequences, not only for women, but also for the quality of science. First, gender bias in academia affects female scientists, resulting in their underrepresentation in academic institutions, particularly in higher ranks. The second type of gender bias in science relates to some findings applying only to male participants, which produces biased knowledge. Here, we identify a third potentially powerful source of gender bias in academia: the bias against research on gender bias. In a bibliometric investigation covering a broad range of social sciences, we analyzed published articles on gender bias and race bias and established that articles on gender bias are funded less often and published in journals with a lower Impact Factor than articles on comparable instances of social discrimination. This result suggests the possibility of an underappreciation of the phenomenon of gender bias and related research within the academic community. Addressing this meta-bias is crucial for the further examination of gender inequality, which severely affects many women across the world.

  5. Imputation across genotyping arrays for genome-wide association studies: assessment of bias and a correction strategy.

    Science.gov (United States)

    Johnson, Eric O; Hancock, Dana B; Levy, Joshua L; Gaddis, Nathan C; Saccone, Nancy L; Bierut, Laura J; Page, Grier P

    2013-05-01

    A great promise of publicly sharing genome-wide association data is the potential to create composite sets of controls. However, studies often use different genotyping arrays, and imputation to a common set of SNPs has shown substantial bias: a problem which has no broadly applicable solution. Based on the idea that using differing genotyped SNP sets as inputs creates differential imputation errors and thus bias in the composite set of controls, we examined the degree to which each of the following occurs: (1) imputation based on the union of genotyped SNPs (i.e., SNPs available on one or more arrays) results in bias, as evidenced by spurious associations (type 1 error) between imputed genotypes and arbitrarily assigned case/control status; (2) imputation based on the intersection of genotyped SNPs (i.e., SNPs available on all arrays) does not evidence such bias; and (3) imputation quality varies by the size of the intersection of genotyped SNP sets. Imputations were conducted in European Americans and African Americans with reference to HapMap phase II and III data. Imputation based on the union of genotyped SNPs across the Illumina 1M and 550v3 arrays showed spurious associations for 0.2 % of SNPs: ~2,000 false positives per million SNPs imputed. Biases remained problematic for very similar arrays (550v1 vs. 550v3) and were substantial for dissimilar arrays (Illumina 1M vs. Affymetrix 6.0). In all instances, imputing based on the intersection of genotyped SNPs (as few as 30 % of the total SNPs genotyped) eliminated such bias while still achieving good imputation quality.

  6. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  7. Effects of diurnal adjustment on biases and trends derived from inter-sensor calibrated AMSU-A data

    Science.gov (United States)

    Chen, H.; Zou, X.; Qin, Z.

    2018-03-01

    Measurements of brightness temperatures from Advanced Microwave Sounding Unit-A (AMSU-A) temperature sounding instruments onboard NOAA Polar-orbiting Operational Environmental Satellites (POES) have been extensively used for studying atmospheric temperature trends over the past several decades. Inter-sensor biases, orbital drifts and diurnal variations of atmospheric and surface temperatures must be considered before using a merged long-term time series of AMSU-A measurements from NOAA-15, -18, -19 and MetOp-A. We study the impacts of the orbital drift and orbital differences of local equator crossing times (LECTs) on temperature trends derivable from AMSU-A using near-nadir observations from NOAA-15, NOAA-18, NOAA-19, and MetOp-A during 1998-2014 over the Amazon rainforest. The double difference method is first applied to estimate the inter-sensor biases between any two satellites during their overlapping time period. The inter-calibrated observations are then used to generate a monthly mean diurnal cycle of brightness temperature for each AMSU-A channel. A diurnal correction is finally applied to each channel to obtain AMSU-A data valid at the same local time. Impacts of the inter-sensor bias correction and the diurnal correction on the AMSU-A-derived long-term atmospheric temperature trends are separately quantified and compared with those derived from the original data. It is shown that the orbital drift and the differences of LECT among different POES satellites induce a large uncertainty in AMSU-A-derived long-term warming/cooling trends. After applying an inter-sensor bias correction and a diurnal correction, the warming trends at different local times, which are approximately the same, are roughly half the size of the trends derived without applying these corrections.
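
    The double-difference step lends itself to a compact numerical illustration. The sketch below is a simplified, hypothetical version: monthly-mean brightness temperatures for one channel from two overlapping satellites are differenced against a common reference series, and the mean double difference estimates the inter-sensor bias. All variable names and values are synthetic.

    import numpy as np

    # Hedged sketch of a double-difference inter-sensor bias estimate over an
    # overlap period. `tb_a`, `tb_b` are monthly-mean brightness temperatures for
    # one channel; `ref_a`, `ref_b` are a common reference (e.g., model-simulated
    # Tb) sampled at each satellite's observation times.
    rng = np.random.default_rng(0)
    months = 24
    truth = 250 + 2.0 * np.sin(2 * np.pi * np.arange(months) / 12)   # seasonal cycle
    ref_a = truth + rng.normal(0, 0.1, months)
    ref_b = truth + rng.normal(0, 0.1, months)
    tb_a = truth + 0.5 + rng.normal(0, 0.1, months)   # satellite A: +0.5 K bias
    tb_b = truth - 0.2 + rng.normal(0, 0.1, months)   # satellite B: -0.2 K bias

    # Single differences remove the common geophysical signal seen by each
    # satellite; the double difference isolates the inter-sensor bias A minus B.
    single_a = tb_a - ref_a
    single_b = tb_b - ref_b
    double_diff = np.mean(single_a - single_b)
    print(f"estimated inter-sensor bias (A - B): {double_diff:.2f} K")   # ~0.7 K

    # Subtracting this bias from satellite A (or adding it to B) ties the two
    # records together before the diurnal correction described in the abstract.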

  8. Implicit Bias and Mental Health Professionals: Priorities and Directions for Research.

    Science.gov (United States)

    Merino, Yesenia; Adams, Leslie; Hall, William J

    2018-06-01

    This Open Forum explores the role of implicit bias along the mental health care continuum, which may contribute to mental health disparities among vulnerable populations. Emerging research shows that implicit bias is prevalent among service providers. These negative or stigmatizing attitudes toward population groups are held at a subconscious level and are automatically activated during practitioner-client encounters. The authors provide examples of how implicit bias may impede access to care, clinical screening and diagnosis, treatment processes, and crisis response. They also discuss how implicit attitudes may manifest at the intersection between mental health and criminal justice institutions. Finally, they discuss the need for more research on the impact of implicit bias on health practices throughout the mental health system, including the development of interventions to address implicit bias among mental health professionals.

  9. Scale dependence of halo and galaxy bias: Effects in real space

    International Nuclear Information System (INIS)

    Smith, Robert E.; Scoccimarro, Roman; Sheth, Ravi K.

    2007-01-01

    We examine the scale dependence of dark matter halo and galaxy clustering on very large scales (0.01 < k [h Mpc^-1] < 0.15). High-mass haloes show only amplification on smaller scales, whereas low-mass haloes show strong, ∼5%-10%, suppression over the range 0.05 < k [h Mpc^-1] < 0.15. These results were primarily established through the use of the cross-power spectrum of dark matter and haloes, which circumvents the thorny issue of shot-noise correction. The halo-halo power spectrum, however, is highly sensitive to the shot-noise correction; we show that halo exclusion effects make this sub-Poissonian and a new correction is presented. Our results have special relevance for studies of the baryon acoustic oscillation features in the halo power spectra. Nonlinear mode-mode coupling: (i) damps these features on progressively larger scales as halo mass increases; (ii) produces small shifts in the positions of the peaks and troughs which depend on halo mass. We show that these effects on halo clustering are important over the redshift range relevant to such studies (0 < z < 2), and so will need to be accounted for when extracting information from precision measurements of galaxy clustering. Our analytic model is described in the language of the "halo model." The halo-halo clustering term is propagated into the nonlinear regime using "1-loop" perturbation theory and a nonlinear halo bias model. Galaxies are then inserted into haloes through the halo occupation distribution. We show that, with nonlinear bias parameters derived from simulations, this model produces predictions that are qualitatively in agreement with our numerical results. We then use it to show that the power spectra of red and blue galaxies depend differently on scale, thus underscoring the fact that proper modeling of nonlinear bias parameters will be crucial to derive reliable cosmological constraints. In addition to showing that the bias on very large scales is not simply linear, the model also shows that the halo-halo and halo

  10. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
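
    For intuition, the classical normality-based correction (which the article's proposal generalizes away from) can be written in a few lines. In the binormal model the attenuation depends only on the ratio of the measurement-error variance to the biomarker variance; the sketch below simulates this and applies the closed-form correction. It is an illustration of the standard approach, not the article's method.

    import numpy as np
    from scipy.stats import norm
    from sklearn.metrics import roc_auc_score

    # Binormal illustration: true biomarker is N(0, sigma2) in controls and
    # N(delta, sigma2) in cases; the observed value adds N(0, sigma_e2) error
    # (sigma_e2 assumed known, e.g., from replicate measurements).
    rng = np.random.default_rng(1)
    n, delta, sigma2, sigma_e2 = 5000, 1.0, 1.0, 0.5

    x_controls = rng.normal(0.0, np.sqrt(sigma2), n)
    x_cases = rng.normal(delta, np.sqrt(sigma2), n)
    w_controls = x_controls + rng.normal(0, np.sqrt(sigma_e2), n)   # observed with error
    w_cases = x_cases + rng.normal(0, np.sqrt(sigma_e2), n)

    y = np.r_[np.zeros(n), np.ones(n)]
    auc_true = roc_auc_score(y, np.r_[x_controls, x_cases])
    auc_obs = roc_auc_score(y, np.r_[w_controls, w_cases])          # attenuated toward 0.5

    # Correction: Phi^{-1}(AUC) scales like 1/sqrt(variance), so multiply by
    # sqrt(1 + sigma_e2/sigma2) and map back through the normal CDF.
    auc_corrected = norm.cdf(norm.ppf(auc_obs) * np.sqrt(1 + sigma_e2 / sigma2))
    print(f"true {auc_true:.3f}, observed {auc_obs:.3f}, corrected {auc_corrected:.3f}")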

  11. A Comparison of Methods for a Priori Bias Correction in Soil Moisture Data Assimilation

    Science.gov (United States)

    Kumar, Sujay V.; Reichle, Rolf H.; Harrison, Kenneth W.; Peters-Lidard, Christa D.; Yatheendradas, Soni; Santanello, Joseph A.

    2011-01-01

    Data assimilation is being increasingly used to merge remotely sensed land surface variables such as soil moisture, snow and skin temperature with estimates from land models. Its success, however, depends on unbiased model predictions and unbiased observations. Here, a suite of continental-scale, synthetic soil moisture assimilation experiments is used to compare two approaches that address typical biases in soil moisture prior to data assimilation: (i) parameter estimation to calibrate the land model to the climatology of the soil moisture observations, and (ii) scaling of the observations to the model's soil moisture climatology. To enable this research, an optimization infrastructure was added to the NASA Land Information System (LIS) that includes gradient-based optimization methods and global, heuristic search algorithms. The land model calibration eliminates the bias but does not necessarily result in more realistic model parameters. Nevertheless, the experiments confirm that model calibration yields assimilation estimates of surface and root zone soil moisture that are as skillful as those obtained through scaling of the observations to the model's climatology. Analysis of innovation diagnostics underlines the importance of addressing bias in soil moisture assimilation and confirms that both approaches adequately address the issue.
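
    A minimal sketch of the second approach (rescaling the observations to the model's climatology) is given below, using simple mean/variance matching; operational systems typically match the full CDF, but the idea is the same. The numbers are synthetic and purely illustrative.

    import numpy as np

    # Rescale satellite soil moisture retrievals to the land model's climatology
    # before assimilation, so the filter does not have to absorb a systematic bias.
    rng = np.random.default_rng(2)
    model_clim = 0.20 + 0.05 * rng.standard_normal(3000)   # model climatology (m3/m3)
    obs_clim = 0.30 + 0.08 * rng.standard_normal(3000)     # retrieval climatology
    new_obs = 0.30 + 0.08 * rng.standard_normal(5)         # incoming retrievals

    scaled_obs = model_clim.mean() + (new_obs - obs_clim.mean()) * (
        model_clim.std() / obs_clim.std()
    )
    print(scaled_obs)   # now unbiased with respect to the model's climatology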

  12. Hepatitis B virus infection in US correctional facilities: a review of diagnosis, management, and public health implications.

    Science.gov (United States)

    Gupta, Shaili; Altice, Frederick L

    2009-03-01

    Among the blood-borne chronic viral infections, hepatitis B virus (HBV) infection is one that is not only treatable but also preventable by provision of vaccination. Despite the availability of HBV vaccine for the last 15 years, more than 1.25 million individuals in the USA have chronic HBV infection, and about 5,000 die each year from HBV-related complications. From a societal perspective, access to treatment of chronic viral infections, like HIV and viral hepatitis, is highly cost-effective and has lasting benefits by reducing risk behaviors, morbidity, mortality, as well as disease transmission in the community. Individuals in correctional facilities are specially predisposed to such chronic viral infections because of their high-risk behaviors. The explosion of incarceration in the USA over the last few decades and the disproportionate burden of morbidity and mortality from chronic infections among the incarcerated have put incredible strains on an overcrowded system that was not originally designed to provide comprehensive medical care for chronic illnesses. Recently, there has been a call to address medical care for individuals with chronic medical conditions in correctional settings, including those with infectious diseases. The economic and public health burden of chronic hepatitis B and its sequelae, including cirrhosis and hepatocellular carcinoma, is felt most prominently in managed care settings with limited budgets, like correctional facilities. Prevalence of HBV infection among the incarcerated in the USA is fivefold that of the general population. We present a review of diagnosis, prevention, and the recently streamlined treatment guidelines for management of HBV infection in correctional settings, and discuss the implications and public health impact of these measures.

  13. Reduction of CMIP5 models bias using Cumulative Distribution Function transform and impact on crops yields simulations across West Africa.

    Science.gov (United States)

    Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu

    2017-04-01

    Different CMIP exercises show that simulations of current and future temperature and precipitation are complex and carry a high degree of uncertainty. For example, the African monsoon system is not correctly simulated and most of the CMIP5 models underestimate precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require bias correction before their output can be used in impact studies. Several bias-correction methods have been developed over the years, relying on increasingly complex statistical techniques. The aim of this work is to show the interest of the CDFt (Cumulative Distribution Function transform; Michelangeli et al., 2009) method to reduce the bias in data from 29 CMIP5 GCMs over Africa and to assess the impact of bias-corrected data on crop yield prediction by the end of the 21st century. We apply the CDFt to daily data covering the period from 1950 to 2099 (Historical and RCP8.5) and correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and are compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDFt corrects the mean state of the variables well and preserves their trends, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared to the outputs obtained with the method used in ISIMIP and show that the quality of the correction is strongly related to the reference data. Secondly, we validate the bias-correction method with the agronomic simulations (SARRA-H model (Kouressy
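
    As a rough illustration of this family of corrections, the sketch below applies plain empirical quantile mapping: a transfer function learned by matching model and reference quantiles over a calibration period is applied to the projection period. This is a simplified relative of CDF-t (which additionally accounts for the change in the model CDF between the two periods), and all data are synthetic.

    import numpy as np

    rng = np.random.default_rng(3)
    ref_calib = rng.gamma(2.0, 3.0, 5000)         # reference (e.g., WATCH-like) series
    gcm_calib = rng.gamma(2.0, 2.0, 5000) + 1.0   # biased model, calibration period
    gcm_future = rng.gamma(2.0, 2.5, 5000) + 1.0  # biased model, projection period

    def quantile_map(x, model_calib, reference_calib):
        """Replace each model value by the reference value at the same quantile."""
        q = np.searchsorted(np.sort(model_calib), x) / len(model_calib)
        return np.quantile(reference_calib, np.clip(q, 0.0, 1.0))

    corrected_future = quantile_map(gcm_future, gcm_calib, ref_calib)
    print(f"raw future mean {gcm_future.mean():.2f}  "
          f"corrected {corrected_future.mean():.2f}  reference {ref_calib.mean():.2f}")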

  14. Correcting for non-response bias in contingent valuation surveys concerning environmental non-market goods

    DEFF Research Database (Denmark)

    Bonnichsen, Ole; Olsen, Søren Bøye

    2016-01-01

    Data collection for economic valuation by using Internet surveys and pre-recruited Internet panels can be associated with severe disadvantages. Problems concerning sample coverage and sample representativeness can be expected. Representation errors may occur since people can choose whether....... This paper analyses a sample used for an Internet contingent valuation method survey eliciting preferences for improvements in water quality of a river. We find that some variables that affect the survey participation decision also affect willingness-to-pay, consequently biasing our welfare estimates. We...... show how adjusting willingness-to-pay for this bias can be accomplished by using a grouped data model incorporating a correlation parameter to account for selection....

  15. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
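
    The contrast between the two tests is easy to reproduce. The sketch below runs cross-validation on pure-noise data and compares a binomial test on the pooled cross-validated accuracy with a label-permutation test that rebuilds the null distribution under the same cross-validation scheme; the classifier and fold counts are arbitrary choices.

    import numpy as np
    from scipy.stats import binomtest
    from sklearn.svm import SVC
    from sklearn.model_selection import (
        StratifiedKFold, cross_val_score, permutation_test_score)

    rng = np.random.default_rng(4)
    X = rng.standard_normal((40, 10))                 # pure noise features
    y = np.r_[np.zeros(20, int), np.ones(20, int)]    # arbitrary labels
    cv = StratifiedKFold(n_splits=5)
    clf = SVC(kernel="linear")

    acc = cross_val_score(clf, X, y, cv=cv).mean()

    # Binomial test treats the 40 cross-validated predictions as independent trials.
    n_correct = int(round(acc * len(y)))
    p_binom = binomtest(n_correct, len(y), p=0.5, alternative="greater").pvalue

    # Permutation test re-runs the same cross-validation on shuffled labels.
    _, _, p_perm = permutation_test_score(
        clf, X, y, cv=cv, n_permutations=200, random_state=0)

    print(f"accuracy {acc:.2f}  binomial p {p_binom:.3f}  permutation p {p_perm:.3f}")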

  16. Correcting for color crosstalk and chromatic aberration in multicolor particle shadow velocimetry

    International Nuclear Information System (INIS)

    McPhail, M J; Fontaine, A A; Krane, M H; Goss, L; Crafton, J

    2015-01-01

    Color crosstalk and chromatic aberration can bias estimates of fluid velocity measured by color particle shadow velocimetry (CPSV), using multicolor illumination and a color camera. This article describes corrections to remove these bias errors, and their evaluation. Color crosstalk removal is demonstrated with linear unmixing. It is also shown that chromatic aberrations may be removed using either scale calibration, or by processing an image illuminated by all colors simultaneously. CPSV measurements of a fully developed turbulent pipe flow of glycerin were conducted. Corrected velocity statistics from these measurements were compared to both single-color PSV and LDV measurements and showed excellent agreement to fourth-order, to well into the viscous sublayer. Recommendations for practical assessment and correction of color aberration and color crosstalk are discussed. (paper)
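
    The crosstalk part of the correction is, at its core, a per-pixel linear inverse problem. The sketch below shows the idea with an illustrative 3x3 mixing matrix; in practice the coefficients would come from single-color calibration images, as in the paper's linear unmixing.

    import numpy as np

    # Recorded RGB intensities are modeled as a mixing matrix applied to the
    # "pure" per-illumination-color signals; inverting the calibrated matrix
    # recovers them. Coefficients below are illustrative, not from the paper.
    M = np.array([          # column j = camera response to illumination color j
        [0.90, 0.08, 0.02],
        [0.10, 0.85, 0.07],
        [0.03, 0.05, 0.92],
    ])

    recorded = np.array([0.45, 0.30, 0.20])      # one pixel's RGB values with crosstalk
    pure = np.linalg.solve(M, recorded)          # crosstalk-free per-color signals
    print(pure)
    # Applied per pixel, this yields separate images for each illumination color.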

  17. Bias in estimating food consumption of fish from stomach-content analysis

    DEFF Research Database (Denmark)

    Rindorf, Anna; Lewy, Peter

    2004-01-01

    This study presents an analysis of the bias introduced by using simplified methods to calculate food intake of fish from stomach contents. Three sources of bias were considered: (1) the effect of estimating consumption based on a limited number of stomach samples, (2) the effect of using average......, a serious positive bias was introduced by estimating food intake from the contents of pooled stomach samples. An expression is given that can be used to correct analytically for this bias. A new method, which takes into account the distribution and evacuation of individual prey types as well as the effect...... of other food in the stomach on evacuation, is suggested for estimating the intake of separate prey types. Simplifying the estimation by ignoring these factors biased estimates of consumption of individual prey types by up to 150% in a data example....

  18. Eliminating bias in rainfall estimates from microwave links due to antenna wetting

    Science.gov (United States)

    Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch

    2014-05-01

    Commercial microwave links (MWLs) are point-to-point radio systems which are widely used in telecommunication systems. They operate at frequencies where the transmitted power is mainly disturbed by precipitation. Thus, signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising, since MWLs cover about 20 % of surface area. Unfortunately, MWL rainfall estimates are often positively biased due to additional attenuation caused by antenna wetting. To correct MWL observations a posteriori to reduce the wet antenna effect (WAE), both empirically and physically based models have been suggested. However, it is challenging to calibrate these models, because the wet antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding etc.) and different climatic factors (temperature, dew point, wind velocity and direction, etc.). Instead, it seems straightforward to keep antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding to model-based corrections to reduce the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85-km-long commercial dual-polarization microwave link at 38 GHz and 5 optical disdrometers. The MWL was operated without shielding in the period from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, which can be computed as the difference between the measured and theoretical attenuation. The theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded periods, the total bias caused by WAE was 0.74 dB, which was reduced by shielding to 0.39 dB for the horizontal polarization (vertical: reduction from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 2013) was more effective because it reduced
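
    Whichever way the wet-antenna offset is obtained (shielding, a constant subtraction, or a model such as the one cited), the retrieval chain can be sketched as follows. The power-law coefficients are illustrative placeholders, and the 0.74 dB offset merely echoes the unshielded figure quoted above; it stands in for whatever correction is actually applied.

    # Subtract the dry-weather baseline and a wet-antenna offset from the measured
    # path loss, then invert a power law A = a * R**b between specific attenuation
    # (dB/km) and rain rate (mm/h). Coefficients are illustrative only.
    def rain_rate(total_loss_db, baseline_db, link_km=1.85, waa_db=0.74, a=0.35, b=1.0):
        rain_induced = max(total_loss_db - baseline_db - waa_db, 0.0)   # dB along the path
        specific_attenuation = rain_induced / link_km                   # dB/km
        return (specific_attenuation / a) ** (1.0 / b)                  # mm/h

    print(rain_rate(total_loss_db=4.1, baseline_db=1.5))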

  19. Correction of sampling bias in a cross-sectional study of post-surgical complications.

    Science.gov (United States)

    Fluss, Ronen; Mandel, Micha; Freedman, Laurence S; Weiss, Inbal Salz; Zohar, Anat Ekka; Haklai, Ziona; Gordon, Ethel-Sherry; Simchen, Elisheva

    2013-06-30

    Cross-sectional designs are often used to monitor the proportion of infections and other post-surgical complications acquired in hospitals. However, conventional methods for estimating incidence proportions when applied to cross-sectional data may provide estimators that are highly biased, as cross-sectional designs tend to include a high proportion of patients with prolonged hospitalization. One common solution is to use sampling weights in the analysis, which adjust for the sampling bias inherent in a cross-sectional design. The current paper describes in detail a method to build weights for a national survey of post-surgical complications conducted in Israel. We use the weights to estimate the probability of surgical site infections following colon resection, and validate the results of the weighted analysis by comparing them with those obtained from a parallel study with a historically prospective design. Copyright © 2012 John Wiley & Sons, Ltd.
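
    The intuition behind such weights can be shown with a toy simulation: a one-day cross-sectional census samples patients with probability proportional to their length of stay, so weighting each sampled patient by the inverse of that length removes most of the resulting bias. This is only a schematic stand-in for the design-based weights constructed in the paper.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 20000
    infected = rng.random(n) < 0.05                   # true incidence proportion: 5%
    stay = rng.gamma(2.0, 3.0, n) + 5 * infected      # infections prolong the stay

    sampled = rng.random(n) < stay / stay.max()       # length-biased inclusion
    naive = infected[sampled].mean()                  # overestimates the incidence
    weights = 1.0 / stay[sampled]                     # inverse length-of-stay weights
    weighted = np.average(infected[sampled], weights=weights)
    print(f"naive {naive:.3f}  weighted {weighted:.3f}  (truth 0.05)")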

  20. [Study on correction of data bias caused by different missing mechanisms in survey of medical expenditure among students enrolling in Urban Resident Basic Medical Insurance].

    Science.gov (United States)

    Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong

    2015-05-01

    A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) scheme in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing mechanism, this study suggests a two-stage method that deals with the two missing mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values using the complete data; a sample selection model was then used to correct for NMAR in the imputed data, and a multi-factor analysis model was established. Based on 1 000 resamplings, the best scheme for filling the randomly missing values at this missing proportion was the predictive mean matching (PMM) method. With this optimal scheme, the two-stage analysis was conducted. Finally, it was found that the factors influencing annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for certain reasons, self-medication and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively deal with non-response bias and selection bias in the dependent variable of survey data.

  1. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    Science.gov (United States)

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
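
    A minimal sketch of the stacking idea is shown below: several base inference methods each score every candidate connection, and a logistic regression learns how to weight them against ground truth (available here because the networks are simulated). The scores are synthetic stand-ins for the paper's actual inference algorithms.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)
    n_pairs = 5000
    connected = rng.random(n_pairs) < 0.1                        # ground-truth synapses

    def noisy_score(strength):
        # Each base method is partly informative about the true connectivity.
        return strength * connected + rng.normal(0, 1, n_pairs)

    scores = np.column_stack([noisy_score(s) for s in (1.5, 1.0, 0.7)])   # three base methods

    # The stacker is a linear combination of the base scores fit to ground truth.
    stacker = LogisticRegression().fit(scores, connected)
    ensemble_score = stacker.predict_proba(scores)[:, 1]
    print("learned weights per base method:", np.round(stacker.coef_[0], 2))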

  2. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.

  3. X-ray-based attenuation correction for positron emission tomography/computed tomography scanners.

    Science.gov (United States)

    Kinahan, Paul E; Hasegawa, Bruce H; Beyer, Thomas

    2003-07-01

    A synergy of positron emission tomography (PET)/computed tomography (CT) scanners is the use of the CT data for x-ray-based attenuation correction of the PET emission data. Current methods of measuring transmission use positron sources, gamma-ray sources, or x-ray sources. Each of the types of transmission scans involves different trade-offs of noise versus bias, with positron transmission scans having the highest noise but lowest bias, whereas x-ray scans have negligible noise but the potential for increased quantitative errors. The use of x-ray-based attenuation correction, however, has other advantages, including a lack of bias introduced from post-injection transmission scanning, which is an important practical consideration for clinical scanners, as well as reduced scan times. The sensitivity of x-ray-based attenuation correction to artifacts and quantitative errors depends on the method of translating the CT image from the effective x-ray energy of approximately 70 keV to attenuation coefficients at the PET energy of 511 keV. These translation methods are usually based on segmentation and/or scaling techniques. Errors in the PET emission image arise from positional mismatches caused by patient motion or respiration differences between the PET and CT scans; incorrect calculation of attenuation coefficients for CT contrast agents or metallic implants; or keeping the patient's arms in the field of view, which leads to truncation and/or beam-hardening (or x-ray scatter) artifacts. Proper interpretation of PET emission images corrected for attenuation by using the CT image relies on an understanding of the potential artifacts. In cases where an artifact or bias is suspected, careful inspection of all three available images (CT and PET emission with and without attenuation correction) is recommended. Copyright 2003 Elsevier Inc. All rights reserved.
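
    The segmentation/scaling translation mentioned above is often implemented as a piecewise-linear ("bilinear") mapping from CT numbers to 511 keV attenuation coefficients. The sketch below shows the shape of such a mapping with illustrative coefficients; real scanners use kVp-specific calibrations rather than these numbers.

    import numpy as np

    MU_WATER_511 = 0.096   # cm^-1, approximate linear attenuation of water at 511 keV

    def hu_to_mu511(hu):
        """Illustrative bilinear conversion of CT numbers (HU) to mu at 511 keV."""
        hu = np.asarray(hu, dtype=float)
        soft = MU_WATER_511 * (1.0 + hu / 1000.0)   # air/water/soft-tissue mixtures
        bone = MU_WATER_511 + hu * 5.0e-5           # reduced slope for bone-like voxels
        return np.where(hu <= 0.0, soft, bone)

    print(hu_to_mu511([-1000, -100, 0, 400, 1000]))   # air ... soft tissue ... bone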

  4. A system for biasing a differential amplifier

    International Nuclear Information System (INIS)

    Barbier, Daniel; Ittel, J.M.; Poujois, Robert

    1975-01-01

    This invention concerns a system for biasing a differential amplifier. It applies in particular to integrated differential amplifiers built with MOS field-effect transistors. Variations in the technological parameters may cause the amplifying transistors to operate outside their usual operating region, in other words outside the linear part of the transfer characteristic. To ensure that these transistors function correctly, the output voltage difference must also be null. To achieve this and to centre the amplifier on the so-called 'rest' point of its transfer characteristic, the condition is imposed that the output potentials of the two amplifier transistors be zero or have a constant sum. With this in view, the bias of the current source (generally a transistor driven by its gate bias voltage) supplying the two amplifying transistors connected in parallel is permanently adjusted in a suitable manner [fr]

  5. A systematic review of context bias in invasion biology.

    Directory of Open Access Journals (Sweden)

    Robert J Warren

    Full Text Available The language that scientists use to frame biological invasions may reveal inherent bias-including how data are interpreted. A frequent critique of invasion biology is the use of value-laden language that may indicate context bias. Here we use a systematic study of language and interpretation in papers drawn from invasion biology to evaluate whether there is a link between the framing of papers and the interpretation of results. We also examine any trends in context bias in biological invasion research. We examined 651 peer-reviewed invasive species competition studies and implemented a rigorous systematic review to examine bias in the presentation and interpretation of native and invasive competition in invasion biology. We predicted that bias in the presentation of invasive species is increasing, as suggested by several authors, and that bias against invasive species would result in misinterpreting their competitive dominance in correlational observational studies compared to causative experimental studies. We indeed found evidence of bias in the presentation and interpretation of invasive species research; authors often introduced research with invasive species in a negative context and study results were interpreted against invasive species more in correlational studies. However, we also found a distinct decrease in those biases since the mid-2000s. Given that there have been several waves of criticism from scientists both inside and outside invasion biology, our evidence suggests that the subdiscipline has somewhat self-corrected apparent biases.

  6. A systematic review of context bias in invasion biology.

    Science.gov (United States)

    Warren, Robert J; King, Joshua R; Tarsa, Charlene; Haas, Brian; Henderson, Jeremy

    2017-01-01

    The language that scientists use to frame biological invasions may reveal inherent bias-including how data are interpreted. A frequent critique of invasion biology is the use of value-laden language that may indicate context bias. Here we use a systematic study of language and interpretation in papers drawn from invasion biology to evaluate whether there is a link between the framing of papers and the interpretation of results. We also examine any trends in context bias in biological invasion research. We examined 651 peer-reviewed invasive species competition studies and implemented a rigorous systematic review to examine bias in the presentation and interpretation of native and invasive competition in invasion biology. We predicted that bias in the presentation of invasive species is increasing, as suggested by several authors, and that bias against invasive species would result in misinterpreting their competitive dominance in correlational observational studies compared to causative experimental studies. We indeed found evidence of bias in the presentation and interpretation of invasive species research; authors often introduced research with invasive species in a negative context and study results were interpreted against invasive species more in correlational studies. However, we also found a distinct decrease in those biases since the mid-2000s. Given that there have been several waves of criticism from scientists both inside and outside invasion biology, our evidence suggests that the subdiscipline has somewhat self-corrected apparent biases.

  7. Correction and transformation of normative neurophysiological data: is there added value in the diagnosis of distal symmetrical peripheral neuropathy?

    LENUS (Irish Health Repository)

    Mchugh, John C

    2012-02-01

    INTRODUCTION: Despite theoretical advantages, the practical impact of mathematical correction of normative electrodiagnostic data is poorly quantified. METHODS: One hundred five healthy volunteers had clinical and neurophysiological assessment. The effects of age, height, gender, weight, and body mass index were explored using stepwise regression modeling. Reference values were derived from raw and adjusted data, which were transformed to allow appropriate use of parametric statistics. The diagnostic accuracy of derived limits was tested in patients at risk of distal symmetric peripheral neuropathy (DSPN) from chemotherapy. RESULTS: The variability of our normative data was reduced by up to 69% through the use of regression modeling, but the overall benefits of mathematical correction were marginal. The most accurate reference limits were established using the 2.5th and 97.5th percentiles of the raw data. CONCLUSIONS: Stepwise statistical regression and mathematical transformation improve the distribution of normative data, but their practical impact for diagnosis of distal symmetrical polyneuropathy is small.

  8. Bias in phylogenetic reconstruction of vertebrate rhodopsin sequences.

    Science.gov (United States)

    Chang, B S; Campbell, D L

    2000-08-01

    Two spurious nodes were found in phylogenetic analyses of vertebrate rhodopsin sequences in comparison with well-established vertebrate relationships. These spurious reconstructions were well supported in bootstrap analyses and occurred independently of the method of phylogenetic analysis used (parsimony, distance, or likelihood). Use of this data set of vertebrate rhodopsin sequences allowed us to exploit established vertebrate relationships, as well as the considerable amount known about the molecular evolution of this gene, in order to identify important factors contributing to the spurious reconstructions. Simulation studies using parametric bootstrapping indicate that it is unlikely that the spurious nodes in the parsimony analyses are due to long branches or other topological effects. Rather, they appear to be due to base compositional bias at third positions, codon bias, and convergent evolution at nucleotide positions encoding the hydrophobic residues isoleucine, leucine, and valine. LogDet distance methods, as well as maximum-likelihood methods which allow for nonstationary changes in base composition, reduce but do not entirely eliminate support for the spurious resolutions. Inclusion of five additional rhodopsin sequences in the phylogenetic analyses largely corrected one of the spurious reconstructions while leaving the other unaffected. The additional sequences not only were more proximal to the corrected node, but were also found to have intermediate levels of base composition and codon bias as compared with neighboring sequences on the tree. This study shows that the spurious reconstructions can be corrected either by excluding third positions, as well as those encoding the amino acids Ile, Val, and Leu (which may not be ideal, as these sites can contain useful phylogenetic signal for other parts of the tree), or by the addition of sequences that reduce problems associated with convergent evolution.

  9. Implicit and Explicit Memory Bias in Opiate Dependent, Abstinent and Normal Individuals

    Directory of Open Access Journals (Sweden)

    Jafar Hasani

    2013-07-01

    Full Text Available Objective: The aim of the current research was to assess implicit and explicit memory bias toward drug-related stimuli in opiate-dependent, abstinent and normal individuals. Method: Three groups of opiate-dependent, abstinent and normal individuals (n=25) were selected by convenience sampling. After matching on the basis of age, education level and type of substance use, all participants were assessed with a recognition task (explicit memory bias) and a stem completion task (implicit memory bias). Results: The analysis of the data showed that the opiate-dependent and abstinent groups, in comparison with normal individuals, had an implicit memory bias, whereas in explicit memory only opiate-dependent individuals showed a bias. Conclusion: The identification of the explicit and implicit memory processes governing addiction may have practical implications for the diagnosis, treatment and prevention of substance abuse.

  10. Usefulness and limit of CT diagnosis on appendicitis

    International Nuclear Information System (INIS)

    Kuchiki, Megumi; Takanashi, Toshiyasu; Yamaguchi, Koichi.

    1997-01-01

    CT was performed in 104 patients with abdominal pain and suspected appendicitis. CT clearly showed positive findings (abnormal appendix, appendicolith, pericecal inflammatory change, fluid collection, lymph node swelling, abscess) and complications of appendicitis. CT diagnosis showed higher accuracy than clinical diagnosis, and CT proved its usefulness especially in cases where only CT imaging yielded the correct diagnosis. On the other hand, diverticulitis and terminal ileitis, common differential diagnoses of appendicitis, showed similar clinical presentations and CT images, which made correct diagnosis difficult. In cases with images similar to appendicitis or with atypical images, CT also showed the limits of CT diagnosis of appendicitis. (author)

  11. Differential diagnosis of Jakob-Creutzfeldt disease

    OpenAIRE

    Paterson, RW; Torres-Chae, CC; Kuo, AL; Ando, T; Nguyen, EA; Wong, K; DeArmond, SJ; Haman, A; Garcia, P; Johnson, DY; Miller, BL; Geschwind, MD

    2012-01-01

    Objectives: To identify the misdiagnoses of patients with sporadic Jakob-Creutzfeldt disease (sCJD) during the course of their disease and determine which medical specialties saw patients with sCJD prior to the correct diagnosis being made and at what point in the disease course a correct diagnosis was made. Design: Retrospective medical record review. Setting: A specialty referral center of a tertiary academic medical center. Participants: One hundred sixty-three serial patients over a 5.5-y...

  12. Generation of future potential scenarios in an Alpine Catchment by applying bias-correction techniques, delta-change approaches and stochastic Weather Generators at different spatial scale. Analysis of their influence on basic and drought statistics.

    Science.gov (United States)

    Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio

    2017-04-01

    Assessing the impacts of potential future climate change scenarios in precipitation and temperature is essential to design adaptive strategies in water resources systems. The objective of this work is to analyze the possibilities of different statistical downscaling methods to generate future potential scenarios in an Alpine catchment from historical data and the available climate model simulations performed in the frame of the CORDEX EU project. The initial information employed to define these downscaling approaches is the historical climatic data (taken from the Spain02 project for the period 1971-2000 with a spatial resolution of 12.5 km) and the future series provided by climate models for the horizon period 2071-2100. We have used information coming from nine climate model simulations (obtained from five different Regional Climate Models (RCMs) nested within four different Global Climate Models (GCMs)) from the European CORDEX project. In our application we have focused on the Representative Concentration Pathway (RCP) 8.5 emissions scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) by the Intergovernmental Panel on Climate Change (IPCC). For each RCM we have generated future climate series for the period 2071-2100 by applying two different approaches, bias correction and delta change, and five different transformation techniques (first moment correction, first and second moment correction, regression functions, quantile mapping using a distribution-derived transformation and quantile mapping using empirical quantiles) for both of them. Ensembles of the obtained series were proposed to obtain more representative potential future climate scenarios to be employed to study potential impacts. In this work we propose an unequally weighted combination of the future series, giving more weight to those coming from models (delta change approaches) or combinations of models and techniques that provide a better approximation to the basic
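
    The two families of approaches named above differ mainly in what is perturbed. A minimal sketch with a first-moment (mean) adjustment for temperature is given below; the series are synthetic, and the other transformation techniques (variance adjustment, regression, quantile mapping) refine the same idea.

    import numpy as np

    rng = np.random.default_rng(7)
    obs_hist = 8.0 + 6.0 * rng.standard_normal(1000)   # observed historical series
    gcm_hist = 6.5 + 5.0 * rng.standard_normal(1000)   # model, historical period
    gcm_fut = 9.5 + 5.0 * rng.standard_normal(1000)    # model, 2071-2100

    # Delta change: perturb the observations with the model's simulated change.
    delta_change = obs_hist + (gcm_fut.mean() - gcm_hist.mean())

    # Bias correction: remove the model's historical bias from its future series.
    bias_corrected = gcm_fut - (gcm_hist.mean() - obs_hist.mean())

    print(f"delta-change mean {delta_change.mean():.2f}  "
          f"bias-corrected mean {bias_corrected.mean():.2f}")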

  13. Occupational noise exposure and age correction: the problem of selection bias.

    Science.gov (United States)

    Dobie, Robert A

    2009-12-01

    Selection bias often invalidates conclusions about populations based on clinical convenience samples. A recent paper in this journal makes two surprising assertions about noise-induced permanent threshold shift (NIPTS): first, that there is more NIPTS at 2 kHz than at higher frequencies; second, that NIPTS declines with advancing age. Neither assertion can be supported with the data presented, which were obtained from a clinical sample; both are consistent with the hypothesis that people who choose to attend an audiology clinic have worse hearing, especially at 2 kHz, than people of the same age and gender who choose not to attend.

  14. On the estimation of bias in post-closure performance assessment of underground radioactive waste disposal

    International Nuclear Information System (INIS)

    Thompson, B.G.J.; Gralewski, Z.A.; Grindrod, P.

    1995-01-01

    This paper proposes a systematic method for recording and evaluating bias in performance assessments for underground radioactive waste disposal facilities. The bias estimation approach comprises three principal components: (1) creation of a relational database containing historical assumptions and decisions made during the assessment, (2) investigation of the impact of some identified sources of internal bias through alternative assessment calculations, and (3) investigation of the impact of some identified sources of external bias by estimating degrees of belief probability. Bias corrections may help avoid unnecessary concerns by explaining and scoping the impacts of principal differences without the need to undertake additional site investigation, research, and performance analysis

  15. Information bias and lifetime mortality risks of radiation-induced cancer: Low LET radiation

    International Nuclear Information System (INIS)

    Peterson, L.E.; Schull, W.J.; Davis, B.R.; Buffler, P.A.

    1994-04-01

    Additive and multiplicative models of relative risk were used to measure the effect of cancer misclassification and DS86 random errors on lifetime risk projections in the Life Span Study (LSS) of Hiroshima and Nagasaki atomic bomb survivors. The true number of cancer deaths in each stratum of the cancer mortality cross-classification was estimated using sufficient statistics from the EM algorithm. Average survivor doses in the strata were corrected for DS86 random error (σ=0.45) by use of reduction factors. Poisson regression was used to model the corrected and uncorrected mortality rates with risks in RERF Report 11 (Part 2) and the BEIR-V Report. Bias due to DS86 random error typically ranged from -15% to -30% for both sexes, and all sites and models. The total bias, including diagnostic misclassification, of excess risk of nonleukemia for exposure to 1 Sv from age 18 to 65 under the non-constant relative projection model was -37.1% for males and -23.3% for females. Total excess risks of leukemia under the relative projection model were biased -27.1% for males and -43.4% for females. Thus, nonleukemia risks for 1 Sv from ages 18 to 65 (DRREF=2) increased from 1.91%/Sv to 2.68%/Sv among males and from 3.23%/Sv to 4.92%/Sv among females. Leukemia excess risk increased from 0.87%/Sv to 1.10%/Sv among males and from 0.73%/Sv to 1.04%/Sv among females. Bias was dependent on the gender, site, correction method, exposure profile and projection model considered. Future studies that use LSS data for US nuclear workers may be downwardly biased if lifetime risk projections are not adjusted for random and systematic errors

  16. Information bias and lifetime mortality risks of radiation-induced cancer: Low LET radiation

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, L.E.; Schull, W.J.; Davis, B.R. [Texas Univ., Houston, TX (United States). Health Science Center; Buffler, P.A. [California Univ., Berkeley, CA (United States). School of Public Health

    1994-04-01

    Additive and multiplicative models of relative risk were used to measure the effect of cancer misclassification and DS86 random errors on lifetime risk projections in the Life Span Study (LSS) of Hiroshima and Nagasaki atomic bomb survivors. The true number of cancer deaths in each stratum of the cancer mortality cross-classification was estimated using sufficient statistics from the EM algorithm. Average survivor doses in the strata were corrected for DS86 random error (σ=0.45) by use of reduction factors. Poisson regression was used to model the corrected and uncorrected mortality rates with risks in RERF Report 11 (Part 2) and the BEIR-V Report. Bias due to DS86 random error typically ranged from -15% to -30% for both sexes, and all sites and models. The total bias, including diagnostic misclassification, of excess risk of nonleukemia for exposure to 1 Sv from age 18 to 65 under the non-constant relative projection model was -37.1% for males and -23.3% for females. Total excess risks of leukemia under the relative projection model were biased -27.1% for males and -43.4% for females. Thus, nonleukemia risks for 1 Sv from ages 18 to 65 (DRREF=2) increased from 1.91%/Sv to 2.68%/Sv among males and from 3.23%/Sv to 4.92%/Sv among females. Leukemia excess risk increased from 0.87%/Sv to 1.10%/Sv among males and from 0.73%/Sv to 1.04%/Sv among females. Bias was dependent on the gender, site, correction method, exposure profile and projection model considered. Future studies that use LSS data for US nuclear workers may be downwardly biased if lifetime risk projections are not adjusted for random and systematic errors.

  17. The effects of rehabilitative voir dire on juror bias and decision making.

    Science.gov (United States)

    Crocker, Caroline B; Kovera, Margaret Bull

    2010-06-01

    During voir dire, judges frequently attempt to "rehabilitate" venirepersons who express an inability to be impartial. Venirepersons who agree to ignore their biases and base their verdict on the evidence and the law are eligible for jury service. In Experiment 1, biased and unbiased mock jurors participated in either a standard or rehabilitative voir dire conducted by a judge and watched a trial video. Rehabilitation influenced insanity defense attitudes and perceptions of the defendant's mental state, and decreased scaled guilt judgments compared to standard questioning. Although rehabilitation is intended to correct for partiality among biased jurors, rehabilitation similarly influenced biased and unbiased jurors. Experiment 2 found that watching rehabilitation did not influence jurors' perceptions of the judge's personal beliefs about the case.

  18. Should total landings be used to correct estimated catch in numbers or mean-weight-at-age?

    DEFF Research Database (Denmark)

    Lewy, Peter; Lassen, H.

    1997-01-01

    Many ICES fish stock assessment working groups have practised Sum Of Products, SOP, correction. This correction stems from a comparison of total weights of the known landings and the SOP over age of catch in number and mean weight-at-age, which ideally should be identical. In case of SOP...... discrepancies some countries correct catch in numbers while others correct mean weight-at-age by a common factor, the ratio between landing and SOP. The paper shows that for three sampling schemes the SOP corrections are statistically incorrect and should not be made since the SOP is an unbiased estimate...... of the total landings. Calculation of the bias of estimated catch in numbers and mean weight-at-age shows that SOP corrections of either of these estimates may increase the bias. Furthermore, for five demersal and one pelagic North Sea species it is shown that SOP discrepancies greater than 2% from...
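
    The SOP check itself is a one-line computation, sketched below with illustrative numbers; the two conventional rescalings differ only in which factor absorbs the landings/SOP ratio.

    import numpy as np

    catch_numbers = np.array([1200., 800., 450., 200.])   # thousands at ages 1-4 (illustrative)
    mean_weight = np.array([0.15, 0.35, 0.60, 0.90])      # kg at age (illustrative)
    landings = 950.0                                       # known total landings (tonnes)

    sop = np.sum(catch_numbers * mean_weight)
    factor = landings / sop                                # SOP discrepancy factor

    numbers_corrected = catch_numbers * factor             # practice in some countries
    weights_corrected = mean_weight * factor               # practice in others

    print(f"SOP {sop:.1f}, factor {factor:.3f}")
    # The paper's point: if the SOP is an unbiased estimate of the landings,
    # applying either rescaling can increase rather than reduce the bias.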

  19. A fault detection and diagnosis in a PWR steam generator

    International Nuclear Information System (INIS)

    Park, Seung Yub

    1991-01-01

    The purpose of this study is to develop a fault detection and diagnosis scheme that can monitor process faults and instrument faults of a steam generator. The suggested scheme consists of a Kalman filter and two bias estimators. The method for detecting process and instrument faults in a steam generator applies a mean test to the residual sequence of a Kalman filter designed for the unfailed system in order to make a fault decision. Once a fault is detected, the two bias estimators are driven to estimate the fault and to discriminate between process and instrument faults. For process faults, the diagnosis of the outlet temperature, the feed-water heater and the main steam control valve is considered; for instrument faults, the diagnosis of the steam generator's three instruments is considered. Computer simulation tests show that prompt on-line fault detection and diagnosis can be performed very successfully. (Author)
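
    The detection part of such a scheme can be illustrated with a toy scalar model: a Kalman filter tuned to the unfailed system produces normalized innovations that should be zero-mean white noise, and a mean (z) test over a sliding window flags a fault. The dynamics and injected fault below are hypothetical stand-ins for the steam generator model.

    import numpy as np

    rng = np.random.default_rng(9)
    n, q, r = 400, 0.01, 0.25          # steps, process and measurement noise variances
    x = 0.0                            # true state
    x_est, p = 0.0, 1.0                # filter state and covariance
    innovations = []

    for k in range(n):
        x = 0.95 * x + rng.normal(0, np.sqrt(q))                          # true dynamics
        z = x + rng.normal(0, np.sqrt(r)) + (0.8 if k >= 250 else 0.0)    # sensor bias after k=250

        # Kalman filter designed for the unfailed system
        x_pred, p_pred = 0.95 * x_est, 0.95**2 * p + q
        s = p_pred + r
        innov = z - x_pred
        gain = p_pred / s
        x_est, p = x_pred + gain * innov, (1 - gain) * p_pred
        innovations.append(innov / np.sqrt(s))      # normalized innovation, ~N(0,1) if no fault

    window = np.array(innovations[-50:])
    z_stat = window.mean() * np.sqrt(len(window))   # mean test statistic, N(0,1) under no fault
    print(f"z statistic over last 50 samples: {z_stat:.1f}")   # far from 0 -> declare fault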

  20. The relation between respiratory motion artifact correction and lung standardized uptake value

    International Nuclear Information System (INIS)

    Yin Lijie; Liu Xiaojian; Liu Jie; Xu Rui; Yan Jue

    2014-01-01

    PET/CT plays an important role in disease diagnosis and therapeutic evaluation, but respiratory motion artifacts can hamper both diagnosis and therapy. Among the many methods for correcting respiratory motion artifacts, respiratory-gated PET/CT is the most widely applied. Using respiratory-gated PET/CT to correct respiratory motion artifacts markedly increases the maximum standardized uptake value of lung lesions, thereby improving image quality and diagnostic accuracy. (authors)

  1. New use of prescription drugs prior to a cancer diagnosis

    DEFF Research Database (Denmark)

    Pottegård, Anton; Hallas, Jesper

    2017-01-01

    PURPOSE: Cancers often have considerable induction periods. This confers a risk of reverse causation bias in studies of cancer risk associated with drug use, as early symptoms of a yet undiagnosed cancer might lead to drug treatment in the period leading up to the diagnosis. This bias can be alle...

  2. An empirical study on memory bias situations and correction strategies in ERP effort estimation

    NARCIS (Netherlands)

    Erasmus, I.P.; Daneva, Maia; Amrahamsson, Pekka; Corral, Luis; Olivo, Markku; Russo, Barbara

    2016-01-01

    An Enterprise Resource Planning (ERP) project estimation process often relies on experts of various backgrounds to contribute judgments based on their professional experience. Such expert judgments, however, may not be bias-free. De-biasing techniques have therefore been proposed in the software

  3. Reliability Correction for Functional Connectivity: Theory and Implementation

    Science.gov (United States)

    Mueller, Sophia; Wang, Danhong; Fox, Michael D.; Pan, Ruiqi; Lu, Jie; Li, Kuncheng; Sun, Wei; Buckner, Randy L.; Liu, Hesheng

    2016-01-01

    Network properties can be estimated using functional connectivity MRI (fcMRI). However, regional variation of the fMRI signal causes systematic biases in network estimates including correlation attenuation in regions of low measurement reliability. Here we computed the spatial distribution of fcMRI reliability using longitudinal fcMRI datasets and demonstrated how pre-estimated reliability maps can correct for correlation attenuation. As a test case of reliability-based attenuation correction we estimated properties of the default network, where reliability was significantly lower than average in the medial temporal lobe and higher in the posterior medial cortex, heterogeneity that impacts estimation of the network. Accounting for this bias using attenuation correction revealed that the medial temporal lobe’s contribution to the default network is typically underestimated. To render this approach useful to a greater number of datasets, we demonstrate that test-retest reliability maps derived from repeated runs within a single scanning session can be used as a surrogate for multi-session reliability mapping. Using data segments with different scan lengths between 1 and 30 min, we found that test-retest reliability of connectivity estimates increases with scan length while the spatial distribution of reliability is relatively stable even at short scan lengths. Finally, analyses of tertiary data revealed that reliability distribution is influenced by age, neuropsychiatric status and scanner type, suggesting that reliability correction may be especially important when studying between-group differences. Collectively, these results illustrate that reliability-based attenuation correction is an easily implemented strategy that mitigates certain features of fMRI signal nonuniformity. PMID:26493163
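
    In its simplest form the attenuation correction described above is the classical disattenuation formula, sketched below; the reliability values would come from the pre-estimated reliability maps (here they are just example numbers).

    import numpy as np

    def disattenuate(r_observed, reliability_a, reliability_b, cap=1.0):
        """Correct an observed correlation for unreliability in both regions."""
        r_corrected = r_observed / np.sqrt(reliability_a * reliability_b)
        return np.clip(r_corrected, -cap, cap)   # guard against over-correction

    # e.g., a low-reliability medial temporal seed vs. a high-reliability
    # posterior medial seed (values are illustrative)
    print(disattenuate(r_observed=0.35, reliability_a=0.45, reliability_b=0.80))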

  4. The Use of a Statistical Model of Storm Surge as a Bias Correction for Dynamical Surge Models and its Applicability along the U.S. East Coast

    Directory of Open Access Journals (Sweden)

    Haydee Salmun

    2015-02-01

    Full Text Available The present study extends the applicability of a statistical model for prediction of storm surge originally developed for The Battery, NY in two ways: I. the statistical model is used as a bias correction for operationally produced dynamical surge forecasts, and II. the statistical model is applied to the region of the east coast of the U.S. susceptible to winter extratropical storms. The statistical prediction is based on a regression relation between the “storm maximum” storm surge and the storm composite significant wave height predicted at a nearby location. The use of the statistical surge prediction as an alternative bias correction for the National Oceanic and Atmospheric Administration (NOAA) operational storm surge forecasts is shown here to be statistically equivalent to the existing bias correction technique and potentially applicable for much longer forecast lead times as well as for storm surge climate prediction. Applying the statistical model to locations along the east coast shows that the regression relation can be “trained” with data from tide gauge measurements and near-shore buoys along the coast from North Carolina to Maine, and that it provides accurate estimates of storm surge.
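
    The regression relation itself is a simple univariate fit, sketched below with synthetic storm data: "train" on past storms' maximum surge versus the composite significant wave height, then evaluate the line for a new storm (or add the result to a dynamical forecast as a bias correction).

    import numpy as np

    rng = np.random.default_rng(8)
    wave_height = rng.uniform(2.0, 8.0, 60)                      # Hs per storm (m), synthetic
    surge = 0.12 * wave_height + 0.1 + rng.normal(0, 0.08, 60)   # observed max surge (m)

    slope, intercept = np.polyfit(wave_height, surge, 1)         # "train" the relation
    predicted_surge = slope * 6.5 + intercept                    # new storm with Hs = 6.5 m
    print(f"surge = {slope:.3f} * Hs + {intercept:.3f};  predicted {predicted_surge:.2f} m")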

  5. Evaluation of bias associated with high-multiplex, target-specific pre-amplification

    Directory of Open Access Journals (Sweden)

    Steven T. Okino

    2016-01-01

    Full Text Available We developed a novel PCR-based pre-amplification (PreAmp technology that can increase the abundance of over 350 target genes one million-fold. To assess potential bias introduced by PreAmp we utilized ERCC RNA reference standards, a model system that quantifies measurement error in RNA analysis. We assessed three types of bias: amplification bias, dynamic range bias and fold-change bias. We show that our PreAmp workflow introduces only minimal amplification and fold-change bias under stringent conditions. We do detect dynamic range bias if a target gene is highly abundant and PreAmp occurred for 16 or more PCR cycles; however, this type of bias is easily correctable. To assess PreAmp bias in a gene expression profiling experiment, we analyzed a panel of genes that are regulated during differentiation using the NTera2 stem cell model system. We find that results generated using PreAmp are similar to results obtained using standard qPCR (without the pre-amplification step. Importantly, PreAmp maintains patterns of gene expression changes across samples; the same biological insights would be derived from a PreAmp experiment as with a standard gene expression profiling experiment. We conclude that our PreAmp technology can facilitate analysis of extremely limited samples in gene expression quantification experiments.

  6. Combined scintigraphic and radiographic diagnosis of bone and joint diseases. Including gamma correction interpretation. 4. rev. and enl. ed.

    Energy Technology Data Exchange (ETDEWEB)

    Bahk, Yong-Whee [Sung Ae General Hospital, Seoul (Korea, Republic of). Dept. of Nuclear Medicine and Radiology

    2013-07-01

    In this fourth edition of Combined Scintigraphic and Radiographic Diagnosis of Bone and Joint Diseases, the text has been thoroughly amended, updated, and partially rearranged to reflect the latest advances. In addition to discussing the role of pinhole imaging in the range of disorders previously covered, the new edition pays detailed attention to the novel diagnostic use of gamma correction pinhole bone scan in a broad spectrum of skeletal disorders, including physical, traumatic, and sports injuries, infectious and non-infectious bone diseases, benign and malignant bone tumors, and soft tissue diseases. A large number of state of the art pinhole scans and corroborative CT, MRI, and/or ultrasound images are presented side by side. The book has been enlarged to encompass various new topics, including occult fractures; cervical sprain and whiplash trauma; bone marrow edema; microfractures of trabeculae; evident, gaping, and stress fractures; and differential diagnosis. This new edition will be essential reading for practitioners and researchers in not only nuclear medicine but also radiology, orthopedic surgery, and pathology.

  7. Combined scintigraphic and radiographic diagnosis of bone and joint diseases. Including gamma correction interpretation. 4. rev. and enl. ed.

    International Nuclear Information System (INIS)

    Bahk, Yong-Whee

    2013-01-01

    In this fourth edition of Combined Scintigraphic and Radiographic Diagnosis of Bone and Joint Diseases, the text has been thoroughly amended, updated, and partially rearranged to reflect the latest advances. In addition to discussing the role of pinhole imaging in the range of disorders previously covered, the new edition pays detailed attention to the novel diagnostic use of gamma correction pinhole bone scan in a broad spectrum of skeletal disorders, including physical, traumatic, and sports injuries, infectious and non-infectious bone diseases, benign and malignant bone tumors, and soft tissue diseases. A large number of state of the art pinhole scans and corroborative CT, MRI, and/or ultrasound images are presented side by side. The book has been enlarged to encompass various new topics, including occult fractures; cervical sprain and whiplash trauma; bone marrow edema; microfractures of trabeculae; evident, gaping, and stress fractures; and differential diagnosis. This new edition will be essential reading for practitioners and researchers in not only nuclear medicine but also radiology, orthopedic surgery, and pathology.

  8. Manual Optical Attitude Re-initialization of a Crew Vehicle in Space Using Bias Corrected Gyro Data

    Science.gov (United States)

    Gioia, Christopher J.

    NASA and other space agencies have shown interest in sending humans on missions beyond low Earth orbit. Proposed is an algorithm that estimates the attitude of a manned spacecraft using measured line-of-sight (LOS) vectors to stars and gyroscope measurements. The Manual Optical Attitude Reinitialization (MOAR) algorithm and corresponding device draw inspiration from existing technology from the Gemini, Apollo and Space Shuttle programs. The improvement over these devices is the capability of estimating gyro bias completely independent from re-initializing attitude. It may be applied to the lost-in-space problem, where the spacecraft's attitude is unknown. In this work, a model was constructed that simulated gyro data using the Farrenkopf gyro model, and LOS measurements from a spotting scope were then computed from it. Using these simulated measurements, gyro bias was estimated by comparing measured interior star angles to those derived from a star catalog and then minimizing the difference using an optimization technique. Several optimization techniques were analyzed, and it was determined that the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm performed the best when combined with a grid search technique. Once estimated, the gyro bias was removed and attitude was determined by solving the Wahba Problem via the Singular Value Decomposition (SVD) approach. Several Monte Carlo simulations were performed that looked at different operating conditions for the MOAR algorithm. These included the effects of bias instability, using different constellations for data collection, sampling star measurements in different orders, and varying the time between measurements. A common method of estimating gyro bias and attitude in a Multiplicative Extended Kalman Filter (MEKF) was also explored and disproven for use in the MOAR algorithm. A prototype was also constructed to validate the proposed concepts. It was built using a simple spotting scope, MEMS grade IMU, and a Raspberry
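
    The attitude step described above, solving the Wahba problem by SVD once the estimated gyro bias has been removed, follows a standard recipe (Markley's SVD method). The sketch below implements that standard method on synthetic unit vectors; it is not the MOAR code itself, and the self-check rotation is arbitrary.

        import numpy as np

        def wahba_svd(body_vecs, ref_vecs, weights=None):
            """Solve Wahba's problem: find the rotation R minimizing
            sum_i w_i * || b_i - R @ r_i ||^2  (Markley's SVD method).

            body_vecs : (N, 3) unit LOS vectors measured in the body frame
            ref_vecs  : (N, 3) corresponding unit vectors from the star catalog
            Returns the 3x3 attitude matrix mapping reference to body frame.
            """
            b = np.asarray(body_vecs, float)
            r = np.asarray(ref_vecs, float)
            w = np.ones(len(b)) if weights is None else np.asarray(weights, float)

            # attitude profile matrix B = sum_i w_i * b_i r_i^T
            B = (w[:, None, None] * b[:, :, None] * r[:, None, :]).sum(axis=0)
            U, _, Vt = np.linalg.svd(B)
            # enforce a proper rotation (det = +1)
            d = np.linalg.det(U) * np.linalg.det(Vt)
            return U @ np.diag([1.0, 1.0, d]) @ Vt

        # quick self-check with a known rotation about the z axis
        theta = np.deg2rad(30.0)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0, 0.0, 1.0]])
        ref = np.random.default_rng(0).normal(size=(5, 3))
        ref /= np.linalg.norm(ref, axis=1, keepdims=True)
        body = ref @ R_true.T
        print(np.allclose(wahba_svd(body, ref), R_true))   # True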

  9. Differential diagnosis of Jakob-Creutzfeldt disease.

    Science.gov (United States)

    Paterson, Ross W; Torres-Chae, Charles C; Kuo, Amy L; Ando, Tim; Nguyen, Elizabeth A; Wong, Katherine; DeArmond, Stephen J; Haman, Aissa; Garcia, Paul; Johnson, David Y; Miller, Bruce L; Geschwind, Michael D

    2012-12-01

    To identify the misdiagnoses of patients with sporadic Jakob-Creutzfeldt disease (sCJD) during the course of their disease and determine which medical specialties saw patients with sCJD prior to the correct diagnosis being made and at what point in the disease course a correct diagnosis was made. Retrospective medical record review. A specialty referral center of a tertiary academic medical center. One hundred sixty-three serial patients over a 5.5-year period who ultimately had pathologically proven sCJD. The study used the subset of 97 patients for whom we had adequate medical records. Other diagnoses considered in the differential diagnosis and types of medical specialties assessing patients with sCJD. Ninety-seven subjects' records were used in the final analysis. The most common disease categories of misdiagnosis were neurodegenerative, autoimmune/paraneoplastic, infectious, and toxic/metabolic disorders. The most common individual misdiagnoses were viral encephalitis, paraneoplastic disorder, depression, vertigo, Alzheimer disease, stroke, unspecified dementia, central nervous system vasculitis, peripheral neuropathy, and Hashimoto encephalopathy. The physicians who most commonly made these misdiagnoses were primary care physicians and neurologists; in the 18% of patients who were diagnosed correctly at their first assessment, the diagnosis was almost always by a neurologist. The mean time from onset to diagnosis was 7.9 months, an average of two-thirds of the way through their disease course. Diagnosis of sCJD is quite delayed. When evaluating patients with rapidly progressive dementia with suspected neurodegenerative, autoimmune, infectious, or toxic/metabolic etiology, sCJD should also be included in the differential diagnosis, and appropriate diagnostic tests, such as diffusion brain magnetic resonance imaging, should be considered. Primary care physicians and neurologists need improved training in sCJD diagnosis.

  10. Potential for bias in 21st century semiempirical sea level projections

    DEFF Research Database (Denmark)

    Jevrejeva, S.; Moore, J. C.; Grinsted, A.

    2012-01-01

    by satellite altimetry. Nonradiative forcing contributors, such as long-term adjustment of Greenland and Antarctica ice sheets since Last Glacial Maximum, abyssal ocean warming, and terrestrial water storage, may bias model calibration which, if corrected for, tend to reduce median sea level projections...

  11. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    Science.gov (United States)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

    Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions proposed. But these solutions only allow partial bias corrections. To date, no systematic study of the bias error was conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, the centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ˜7 to values of ≲ 0.02 pix. The computational cost is typically twice of current cross-correlation algorithms.
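
    For concreteness, here is a toy version of two of the peak-finding approaches named above, a thresholded centre of gravity and a 1D parabola fit for sub-pixel refinement, applied to a synthetic sub-aperture spot. The antisymmetric-response correction proposed in the article is not reproduced here, and the spot parameters are invented.

        import numpy as np

        def centre_of_gravity(img, threshold=0.0):
            """Thresholded centre-of-gravity centroid (x, y) in pixel units."""
            im = np.clip(np.asarray(img, float) - threshold, 0.0, None)
            total = im.sum()
            ys, xs = np.indices(im.shape)
            return (xs * im).sum() / total, (ys * im).sum() / total

        def parabola_1d_refine(profile, k):
            """Sub-pixel refinement of an integer peak index k by fitting a
            parabola through the three samples around the peak."""
            y0, y1, y2 = profile[k - 1], profile[k], profile[k + 1]
            denom = y0 - 2.0 * y1 + y2
            return k if denom == 0 else k + 0.5 * (y0 - y2) / denom

        # toy sub-aperture spot: Gaussian centred at (4.3, 5.7) on an 11x11 grid
        yy, xx = np.indices((11, 11))
        spot = np.exp(-((xx - 4.3) ** 2 + (yy - 5.7) ** 2) / (2 * 1.2 ** 2))

        print(centre_of_gravity(spot))                       # close to (4.3, 5.7)
        col = spot.sum(axis=0)                               # 1D x-profile
        print(parabola_1d_refine(col, int(col.argmax())))    # sub-pixel x estimate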

  12. Specificity of interpretation and judgemental biases in social phobia versus depression.

    Science.gov (United States)

    Voncken, M J; Bögels, S M; Peeters, F

    2007-09-01

    A body of studies shows that social phobia is characterized by content-specific interpretation and judgmental biases. That is, patients with social phobia show bias in social situations but not in non-social situations. Comorbid depression, one of the major comorbid disorders in social phobia, might account for these biases in social phobia, since depression is also characterized by cognitive distortions in social situations. This study hypothesized that, despite comorbid depression, patients with social phobia would suffer from content-specific biases. Participants filled out the Interpretation and Judgmental Questionnaire (IJQ) to assess interpretation bias (using open-ended responses and forced interpretations) and judgmental bias in social and non-social situations. Four groups participated: social phobic patients with high (N=38) and low (N=47) depressive symptoms, depressed patients (N=22) and normal controls (N=33). We found both social phobic groups to interpret social situations more negatively and judge social situations as more threatening than non-social situations relative to depressed patients and normal controls. As expected, depressive symptoms related to increased general interpretation and judgmental biases across social and non-social situations. In contrast to expectations, we did not find these patterns for the open-ended measure of interpretation bias. The content-specific biases for social situations distinguished social phobic patients from depressive patients. This speaks for the importance of establishing the primary diagnosis in patients with mixed depression and social anxiety complaints.

  13. An evaluation of fundus photography and fundus autofluorescence in the diagnosis of cuticular drusen.

    Science.gov (United States)

    Høeg, Tracy B; Moldow, Birgitte; Klein, Ronald; La Cour, Morten; Klemp, Kristian; Erngaard, Ditte; Ellervik, Christina; Buch, Helena

    2016-03-01

    To examine non-mydriatic fundus photography (FP) and fundus autofluorescence (FAF) as alternative non-invasive imaging modalities to fluorescein angiography (FA) in the detection of cuticular drusen (CD). Among 2953 adults from the Danish Rural Eye Study (DRES) with gradable FP, three study groups were selected: (1) All those with suspected CD without age-related macular degeneration (AMD) on FP, (2) all those with suspected CD with AMD on FP and (3) a randomly selected group with early AMD. Groups 1, 2 and 3 underwent FA and FAF and group 4 underwent FAF only as part of DRES CD substudy. Main outcome measures included percentage of correct positive and correct negative diagnoses, Cohen's κ and prevalence-adjusted and bias-adjusted κ (PABAK) coefficients of test and grader reliability. CD was correctly identified on FP 88.9% of the time and correctly identified as not being present 83.3% of the time. CD was correctly identified on FAF 62.0% of the time and correctly identified as not being present 100.0% of the time. Compared with FA, FP has a PABAK of 0.75 (0.60 to 1.5) and FAF a PABAK of 0.44 (0.23 to 0.95). FP is a promising, non-invasive substitute for FA in the diagnosis of CD. FAF was less reliable than FP to detect CD. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
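
    The agreement statistics quoted above can be illustrated with the usual definitions of Cohen's κ and of PABAK (PABAK = 2 x observed agreement - 1) on a 2x2 table. The counts below are hypothetical, not the DRES data.

        def kappa_and_pabak(a, b, c, d):
            """Cohen's kappa and PABAK from a 2x2 agreement table:
            a = both readers positive, d = both negative,
            b, c = the two kinds of disagreement."""
            n = a + b + c + d
            po = (a + d) / n                          # observed agreement
            p_pos = ((a + b) / n) * ((a + c) / n)     # chance agreement, positive
            p_neg = ((c + d) / n) * ((b + d) / n)     # chance agreement, negative
            pe = p_pos + p_neg
            kappa = (po - pe) / (1 - pe)
            pabak = 2 * po - 1                        # prevalence/bias adjusted kappa
            return kappa, pabak

        # hypothetical FP-versus-FA agreement counts for cuticular drusen
        print(kappa_and_pabak(a=16, b=2, c=3, d=30))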

  14. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    Science.gov (United States)

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction brain analysis, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on weighted T1 and T2 images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes and is entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring a correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need information from an atlas, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures and makes segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical multiple sclerosis lesions need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
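
    The second method builds on fuzzy c-means clustering with bias-field correction. A minimal sketch of plain fuzzy c-means on a 1-D intensity vector is given below to show the alternating membership/centroid updates; the bias-field term and the authors' improvements are deliberately omitted, and the intensity values are synthetic.

        import numpy as np

        def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
            """Minimal fuzzy c-means on a 1-D intensity vector x.
            Returns (centroids, memberships). Standard FCM only; the bias-field
            correction used in the paper is not included in this sketch."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x, float).ravel()
            u = rng.random((n_clusters, x.size))
            u /= u.sum(axis=0)                              # memberships sum to 1
            for _ in range(n_iter):
                um = u ** m
                centroids = (um @ x) / um.sum(axis=1)
                dist = np.abs(x[None, :] - centroids[:, None]) + 1e-12
                new_u = dist ** (-2.0 / (m - 1.0))
                new_u /= new_u.sum(axis=0)
                if np.max(np.abs(new_u - u)) < tol:
                    u = new_u
                    break
                u = new_u
            return centroids, u

        # toy "brain" intensities drawn from three tissue classes
        rng = np.random.default_rng(1)
        intensities = np.concatenate([rng.normal(30, 5, 500),
                                      rng.normal(80, 8, 500),
                                      rng.normal(150, 10, 500)])
        centroids, memberships = fuzzy_cmeans(intensities, n_clusters=3)
        print(np.sort(centroids))        # roughly [30, 80, 150]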

  15. Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model

    Science.gov (United States)

    Ahlgren, K.; Li, X.

    2017-12-01

    Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model
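
    The survey-level correction described above amounts to comparing each survey's observations with the independent wavelet-combined estimate and applying a single offset to the whole survey. The sketch below shows only that final step with invented numbers; building the reference values from GRACE/GOCE, GRAV-D airborne data and forward-modelled topography at the appropriate wavelet levels is the substantive part of the method and is not shown.

        import numpy as np

        # hypothetical arrays for one historical gravity survey:
        #   g_survey : observed gravity anomalies at the survey points (mGal)
        #   g_ref    : values estimated at the same points from the satellite model,
        #              airborne data and forward-modelled topography
        g_survey = np.array([12.4, 10.9, 15.2, 13.8, 11.5])
        g_ref    = np.array([10.1,  8.8, 13.0, 11.9,  9.4])

        # a single bias estimate for the entire survey (here a robust median offset)
        survey_bias = np.median(g_survey - g_ref)
        g_corrected = g_survey - survey_bias
        print(survey_bias, g_corrected)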

  16. A working memory bias for alcohol-related stimuli depends on drinking score.

    Science.gov (United States)

    Kessler, Klaus; Pajak, Katarzyna Malgorzata; Harkin, Ben; Jones, Barry

    2013-03-01

    We tested 44 participants with respect to their working memory (WM) performance on alcohol-related versus neutral visual stimuli. Previously an alcohol attentional bias (AAB) had been reported using these stimuli, where the attention of frequent drinkers was automatically drawn toward alcohol-related items (e.g., beer bottle). The present study set out to provide evidence for an alcohol memory bias (AMB) that would persist over longer time-scales than the AAB. The WM task we used required memorizing 4 stimuli in their correct locations and a visual interference task was administered during a 4-sec delay interval. A subsequent probe required participants to indicate whether a stimulus was shown in the correct or incorrect location. For each participant we calculated a drinking score based on 3 items derived from the Alcohol Use Questionnaire, and we observed that higher scorers better remembered alcohol-related images compared with lower scorers, particularly when these were presented in their correct locations upon recall. This provides first evidence for an AMB. It is important to highlight that this effect persisted over a 4-sec delay period including a visual interference task that erased iconic memories and diverted attention away from the encoded items, thus the AMB cannot be reduced to the previously reported AAB. Our finding calls for further investigation of alcohol-related cognitive biases in WM, and we propose a preliminary model that may guide future research. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  17. Estimating the price elasticity of beer: meta-analysis of data with heterogeneity, dependence, and publication bias.

    Science.gov (United States)

    Nelson, Jon P

    2014-01-01

    Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, which is less elastic by 50% compared to values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
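
    As an illustration of the publication-bias tools mentioned above (funnel asymmetry via Egger's intercept) and of precision weighting, the sketch below runs the standard Egger regression of the standardized effect on precision using hypothetical elasticity estimates and standard errors; it does not reproduce the paper's 191-estimate sample or its meta-regression specification.

        import numpy as np
        from scipy import stats

        # hypothetical primary-study estimates: beer price elasticity and its SE
        elasticity = np.array([-0.15, -0.25, -0.40, -0.10, -0.35, -0.55, -0.20, -0.30])
        se         = np.array([ 0.05,  0.08,  0.20,  0.04,  0.15,  0.30,  0.06,  0.10])

        # precision-weighted (fixed-effect) mean elasticity
        w = 1.0 / se ** 2
        print("weighted mean:", (w * elasticity).sum() / w.sum())

        # Egger's regression: standardized effect on precision; an intercept
        # significantly different from zero signals funnel asymmetry
        precision = 1.0 / se
        standardized = elasticity / se
        res = stats.linregress(precision, standardized)
        t_int = res.intercept / res.intercept_stderr
        p_int = 2 * stats.t.sf(abs(t_int), df=len(se) - 2)
        print("Egger intercept:", res.intercept, "p =", p_int)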

  18. Ensemble stacking mitigates biases in inference of synaptic connectivity

    Directory of Open Access Journals (Sweden)

    Brendan Chambers

    2018-03-01

    Full Text Available A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches. Mapping the routing of spikes through local circuitry is crucial for understanding neocortical computation. Under appropriate experimental conditions, these maps can be used to infer likely patterns of synaptic recruitment, linking activity to underlying anatomical connections. Such inferences help to reveal the synaptic implementation of population dynamics and computation. We compare a number of standard functional measures to infer underlying connectivity. We find that regularization impacts measures
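
    The ensemble idea above, a learned linear combination of several inference scores evaluated against ground truth, can be sketched as follows. The individual inference measures are stand-ins (random scores sharing a common signal), because their implementations are not given in this record; the stacking step itself uses ordinary logistic regression.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # hypothetical setup: n candidate neuron pairs, k inference methods.
        # scores[i, j] = score assigned to pair i by method j (cross-correlation,
        # signed mutual information, rate-corrected frequency measure, ...)
        n_pairs, k_methods = 2000, 3
        truth = rng.random(n_pairs) < 0.1                 # ground-truth connections
        signal = truth.astype(float)[:, None]
        scores = 0.8 * signal + rng.normal(0.0, 1.0, (n_pairs, k_methods))

        # split into a training network (to learn weights) and a held-out network
        train = np.arange(n_pairs) < n_pairs // 2
        test = ~train

        # ensemble = linear combination of method scores, learned by logistic regression
        stacker = LogisticRegression().fit(scores[train], truth[train])
        ensemble_score = stacker.decision_function(scores[test])

        # compare the best single method against the ensemble with a ranking metric
        def hits_at_k(score, y, k=100):
            return y[np.argsort(score)[::-1][:k]].sum()

        print("best single method:", max(hits_at_k(scores[test][:, j], truth[test])
                                         for j in range(k_methods)))
        print("ensemble:", hits_at_k(ensemble_score, truth[test]))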

  19. Correcting surface solar radiation of two data assimilation systems against FLUXNET observations in North America

    Science.gov (United States)

    Zhao, Lei; Lee, Xuhui; Liu, Shoudong

    2013-09-01

    Solar radiation at the Earth's surface is an important driver of meteorological and ecological processes. The objective of this study is to evaluate the accuracy of the reanalysis solar radiation produced by NARR (North American Regional Reanalysis) and MERRA (Modern-Era Retrospective Analysis for Research and Applications) against the FLUXNET measurements in North America. We found that both assimilation systems systematically overestimated the surface solar radiation flux on the monthly and annual scale, with an average bias error of +37.2 W m-2 for NARR and of +20.2 W m-2 for MERRA. The bias errors were larger under cloudy skies than under clear skies. A postreanalysis algorithm consisting of empirical relationships between model bias, a clearness index, and site elevation was proposed to correct the model errors. Results show that the algorithm can remove the systematic bias errors for both FLUXNET calibration sites (sites used to establish the algorithm) and independent validation sites. After correction, the average annual mean bias errors were reduced to +1.3 W m-2 for NARR and +2.7 W m-2 for MERRA. Applying the correction algorithm to the global domain of MERRA brought the global mean surface incoming shortwave radiation down by 17.3 W m-2 to 175.5 W m-2. Under the constraint of the energy balance, other radiation and energy balance terms at the Earth's surface, estimated from independent global data products, also support the need for a downward adjustment of the MERRA surface solar radiation.
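
    The post-reanalysis correction relates model bias to a clearness index and site elevation. Its exact functional form is not stated in this record, so the sketch assumes a simple linear relationship fitted by least squares to hypothetical calibration values, purely to illustrate how such a correction would be trained and applied.

        import numpy as np

        # hypothetical calibration data, one row per site-month:
        #   bias      = reanalysis minus FLUXNET shortwave flux (W m-2)
        #   clearness = all-sky / clear-sky incoming shortwave (0..1)
        #   elev_km   = site elevation (km)
        bias      = np.array([55., 48., 20., 12., 40., 8., 30., 15.])
        clearness = np.array([0.35, 0.40, 0.75, 0.85, 0.50, 0.90, 0.60, 0.80])
        elev_km   = np.array([0.2, 0.1, 1.5, 2.1, 0.3, 2.4, 0.8, 1.8])

        # fit bias ~ a + b * clearness + c * elevation (assumed form, least squares)
        X = np.column_stack([np.ones_like(bias), clearness, elev_km])
        coef, *_ = np.linalg.lstsq(X, bias, rcond=None)

        def corrected_flux(reanalysis_sw, clearness_index, elevation_km):
            """Subtract the predicted systematic bias from the reanalysis flux."""
            predicted_bias = coef @ np.array([1.0, clearness_index, elevation_km])
            return reanalysis_sw - predicted_bias

        print(corrected_flux(230.0, clearness_index=0.45, elevation_km=0.3))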

  20. Benchmarking by HbA1c in a national diabetes quality register--does measurement bias matter?

    Science.gov (United States)

    Carlsen, Siri; Thue, Geir; Cooper, John Graham; Røraas, Thomas; Gøransson, Lasse Gunnar; Løvaas, Karianne; Sandberg, Sverre

    2015-08-01

    Bias in HbA1c measurement could give a wrong impression of the standard of care when benchmarking diabetes care. The aim of this study was to evaluate how measurement bias in HbA1c results may influence the benchmarking process performed by a national diabetes register. Using data from 2012 from the Norwegian Diabetes Register for Adults, we included HbA1c results from 3584 patients with type 1 diabetes attending 13 hospital clinics, and 1366 patients with type 2 diabetes attending 18 GP offices. Correction factors for HbA1c were obtained by comparing the results of the hospital laboratories'/GP offices' external quality assurance scheme with the target value from a reference method. Compared with the uncorrected yearly median HbA1c values for hospital clinics and GP offices, EQA-corrected HbA1c values were within ±0.2% (2 mmol/mol) for all but one hospital clinic, whose value was reduced by 0.4% (4 mmol/mol). Three hospital clinics reduced the proportion of patients with poor glycemic control, one by 9% and two by 4%. For most participants in our study, correcting for measurement bias had little effect on the yearly median HbA1c value or the percentage of patients achieving glycemic goals. However, at three hospital clinics, correcting for measurement bias had an important effect on HbA1c benchmarking results, especially with regard to percentages of patients achieving glycemic targets. The analytical quality of HbA1c should be taken into account when comparing benchmarking results.
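
    Operationally, this correction amounts to shifting each clinic's HbA1c values by its EQA-derived bias before computing medians and the share of patients above a glycemic threshold. A minimal sketch with made-up numbers (the bias value, threshold and data are all hypothetical):

        import numpy as np

        def corrected_hba1c(hba1c_percent, clinic_bias):
            """Remove a clinic-level measurement bias (from EQA results) before
            benchmarking; clinic_bias is the clinic's deviation from the reference
            method target value, in %-units (e.g. +0.3 means the clinic reads high)."""
            return np.asarray(hba1c_percent, float) - clinic_bias

        # hypothetical clinic: reported values and an EQA-derived bias of +0.3 %
        reported = np.array([7.1, 8.4, 9.2, 6.8, 10.1, 7.6])
        corrected = corrected_hba1c(reported, clinic_bias=0.3)

        target = 9.0   # example threshold for poor glycemic control (HbA1c > 9 %)
        print("uncorrected % above target:", np.mean(reported > target) * 100)
        print("corrected   % above target:", np.mean(corrected > target) * 100)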

  1. Sex Bias in Classifying Borderline and Narcissistic Personality Disorder.

    Science.gov (United States)

    Braamhorst, Wouter; Lobbestael, Jill; Emons, Wilco H M; Arntz, Arnoud; Witteman, Cilia L M; Bekker, Marrie H J

    2015-10-01

    This study investigated sex bias in the classification of borderline and narcissistic personality disorders. A sample of psychologists in training for a post-master degree (N = 180) read brief case histories (male or female version) and made a DSM classification. To differentiate sex bias due to sex stereotyping from sex bias due to base rate variation, we used two types of case histories: (1) non-ambiguous case histories with enough criteria of either borderline or narcissistic personality disorder to meet the threshold for classification, and (2) an ambiguous case with subthreshold features of both borderline and narcissistic personality disorder. Results showed significant differences due to sex of the patient in the ambiguous condition. Thus, when the diagnosis is not straightforward, as in the case of mixed subthreshold features, sex bias is present and is influenced by base-rate variation. These findings emphasize the need for caution in classifying personality disorders, especially borderline or narcissistic traits.

  2. Correction for the Hematocrit Bias in Dried Blood Spot Analysis Using a Nondestructive, Single-Wavelength Reflectance-Based Hematocrit Prediction Method.

    Science.gov (United States)

    Capiau, Sara; Wilk, Leah S; De Kesel, Pieter M M; Aalders, Maurice C G; Stove, Christophe P

    2018-02-06

    bias obtained with Bland and Altman analysis was -0.015 and the limits of agreement were -0.061 and 0.031, indicating that the simplified, noncontact Hct prediction method even outperforms the original method. In addition, using caffeine as a model compound, it was demonstrated that this simplified Hct prediction method can effectively be used to implement a Hct-dependent correction factor to DBS-based results to alleviate the Hct bias.

  3. Methods of Reducing Bias in Combined Thermal/Epithermal Neutron (CTEN) Assays of Heterogeneous Waste

    Energy Technology Data Exchange (ETDEWEB)

    Estep, R.J.; Melton, S.; Miko, D.

    1998-11-17

    We examined the effectiveness of two different methods for correcting CTEN passive and active assays for bias due to variations in the source position in different drum types. Both use the same drum-averaged correction determined from a neural network trained to active flux monitor ratios as a starting point. One method then uses a neural network to obtain a spatial correction factor sensitive to the source location. The other method uses emission tomography. Both methods were found to give significantly improved assay accuracy over the drum-averaged correction, although more study is needed to determine which method works better.

  4. Methods of Reducing Bias in Combined Thermal/Epithermal Neutron (CTEN) Assays of Heterogeneous Waste

    International Nuclear Information System (INIS)

    Estep, R.J.; Melton, S.; Miko, D.

    1998-01-01

    We examined the effectiveness of two different methods for correcting CTEN passive and active assays for bias due to variations in the source position in different drum types. Both use the same drum-averaged correction determined from a neural network trained to active flux monitor ratios as a starting point. One method then uses a neural network to obtain a spatial correction factor sensitive to the source location. The other method uses emission tomography. Both methods were found to give significantly improved assay accuracy over the drum-averaged correction, although more study is needed to determine which method works better

  5. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    Science.gov (United States)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data is confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of Google Earth platform and Remote Sensing imagery; and (c) removing the positive biases of the raised segment in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of four different DEMs and favorable results have been obtained on the corrected DEM.
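
    Step (c) above lowers raised segments so that flow connectivity along the extracted river network is preserved. The authors base the removal on bed slope; the sketch below uses a simpler stand-in rule, enforcing non-increasing elevation downstream with a cumulative minimum, to illustrate the idea on a toy stream profile.

        import numpy as np

        def enforce_downstream_connectivity(profile_elev):
            """Lower any raised segment along a stream profile (ordered from
            upstream to downstream) so elevation never increases downstream."""
            return np.minimum.accumulate(np.asarray(profile_elev, float))

        # toy stream profile extracted from the DEM (m); the 27.0 value is a raised
        # segment (e.g. vegetation or random error) that blocks flow connectivity
        profile = np.array([25.0, 24.2, 23.5, 27.0, 22.8, 22.1, 21.5])
        print(enforce_downstream_connectivity(profile))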

  6. Communication: Electronic and transport properties of molecular junctions under a finite bias: A dual mean field approach

    International Nuclear Information System (INIS)

    Liu, Shuanglong; Feng, Yuan Ping; Zhang, Chun

    2013-01-01

    We show that when a molecular junction is under an external bias, its properties cannot be uniquely determined by the total electron density in the same manner as the density functional theory for ground state properties. In order to correctly incorporate bias-induced nonequilibrium effects, we present a dual mean field (DMF) approach. The key idea is that the total electron density together with the density of current-carrying electrons are sufficient to determine the properties of the system. Two mean fields, one for current-carrying electrons and the other one for equilibrium electrons can then be derived. Calculations for a graphene nanoribbon junction show that compared with the commonly used ab initio transport theory, the DMF approach could significantly reduce the electric current at low biases due to the non-equilibrium corrections to the mean field potential in the scattering region

  7. Social influence bias: a randomized experiment.

    Science.gov (United States)

    Muchnik, Lev; Aral, Sinan; Taylor, Sean J

    2013-08-09

    Our society is increasingly relying on the digitized, aggregated opinions of others to make decisions. We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates distorts decision-making. Prior ratings created significant bias in individual rating behavior, and positive and negative social influences created asymmetric herding effects. Whereas negative social influence inspired users to correct manipulated ratings, positive social influence increased the likelihood of positive ratings by 32% and created accumulating positive herding that increased final ratings by 25% on average. This positive herding was topic-dependent and affected by whether individuals were viewing the opinions of friends or enemies. A mixture of changing opinion and greater turnout under both manipulations together with a natural tendency to up-vote on the site combined to create the herding effects. Such findings will help interpret collective judgment accurately and avoid social influence bias in collective intelligence in the future.

  8. Imputation of Housing Rents for Owners Using Models With Heckman Correction

    Directory of Open Access Journals (Sweden)

    Beat Hulliger

    2012-07-01

    Full Text Available The direct income of owners and tenants of dwellings is not comparable since the owners have a hidden income from the investment in their dwelling. This hidden income is considered a part of the disposable income of owners. It may be predicted with the help of a linear model of the rent. Since such a model must be developed and estimated for tenants with observed market rents, a selection bias may occur. The selection bias can be minimised through a Heckman correction. The paper applies the Heckman correction to data from the Swiss Statistics on Income and Living Conditions. The Heckman method is adapted to the survey context, the modeling process (including the choice of covariates) is explained, and the effect of the prediction using the model is discussed.
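
    A minimal sketch of the two-step Heckman correction in this setting: a probit model for being a tenant (rent observed), an inverse Mills ratio added to the rent regression estimated on tenants, and imputation of owner rents from the corrected coefficients. All variables and data are synthetic, and the SILC survey weights and the covariate choices discussed in the paper are omitted.

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n = 5000

        # hypothetical covariates: dwelling size, region dummy, household income
        size   = rng.normal(100, 30, n)
        region = rng.integers(0, 2, n).astype(float)
        income = rng.normal(60, 20, n)

        # selection equation: whether the dwelling is rented (rent observed)
        u = rng.normal(0, 1, n)                       # selection error
        is_tenant = (0.3 - 0.01 * income + 0.4 * region + u > 0).astype(int)

        # rent equation; its error is correlated with the selection error (30 * u),
        # which is exactly the situation the Heckman correction addresses
        rent = 200 + 8.0 * size + 50 * region + 30.0 * u + rng.normal(0, 40, n)

        # step 1: probit selection model, then the inverse Mills ratio
        Z = sm.add_constant(np.column_stack([income, region]))
        probit = sm.Probit(is_tenant, Z).fit(disp=0)
        xb = Z @ probit.params
        imr = norm.pdf(xb) / norm.cdf(xb)

        # step 2: rent regression on tenants only, augmented with the Mills ratio
        tenants = is_tenant == 1
        X = sm.add_constant(np.column_stack([size, region, imr]))
        ols = sm.OLS(rent[tenants], X[tenants]).fit()

        # impute rents for owners from the selection-corrected coefficients,
        # dropping the Mills-ratio term so the prediction is the unconditional rent
        b = ols.params
        imputed_owner_rent = b[0] + b[1] * size[~tenants] + b[2] * region[~tenants]
        print(imputed_owner_rent[:5])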

  9. A Conceptually Simple Modeling Approach for Jason-1 Sea State Bias Correction Based on 3 Parameters Exclusively Derived from Altimetric Information

    Directory of Open Access Journals (Sweden)

    Nelson Pires

    2016-07-01

    Full Text Available A conceptually simple formulation is proposed for a new empirical sea state bias (SSB) model using information retrieved entirely from altimetric data. Nonparametric regression techniques are used, based on penalized smoothing splines adjusted to each predictor and then combined by a Generalized Additive Model. In addition to the significant wave height (SWH) and wind speed (U10), a mediator parameter, defined as the mean wave period derived from radar altimetry, has proven to improve the model performance in explaining some of the SSB variability, especially in swell ocean regions with medium-high SWH and low U10. A collinear analysis of scaled sea level anomaly (SLA) variance differences shows conformity between the proposed model and the established SSB models. The new formulation aims to be a fast, reliable and flexible SSB model, in line with the well-established SSB corrections, depending exclusively on altimetric information. The suggested method is computationally efficient and capable of generating a stable model with a small training dataset, a useful feature for forthcoming missions.
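
    The model structure described, one penalized smoothing spline per predictor (SWH, U10 and the altimeter-derived mean wave period) combined additively, can be sketched with the pygam library. The choice of library and the synthetic data are assumptions, since this record does not name an implementation, and the synthetic "true" SSB below is invented for illustration only.

        import numpy as np
        from pygam import LinearGAM, s

        rng = np.random.default_rng(0)
        n = 3000

        # hypothetical altimeter-derived predictors
        swh = rng.uniform(0.5, 8.0, n)       # significant wave height (m)
        u10 = rng.uniform(0.0, 18.0, n)      # wind speed (m/s)
        mwp = rng.uniform(3.0, 14.0, n)      # mean wave period (s)

        # synthetic sea state bias "truth" (m): grows with SWH, modulated by wind/period
        ssb = (-0.03 * swh - 0.002 * swh * u10 + 0.004 * (mwp - 8.0)
               + rng.normal(0, 0.01, n))

        X = np.column_stack([swh, u10, mwp])

        # one penalized smoothing spline per predictor, combined additively
        gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, ssb)

        # predict the SSB correction for a new altimeter observation
        print(gam.predict(np.array([[3.0, 7.5, 9.0]])))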

  10. Assessment of radar altimetry correction slopes for marine gravity recovery: A case study of Jason-1 GM data

    Science.gov (United States)

    Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu

    2018-04-01

    Marine gravity anomalies derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slope of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in the current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction and sea state bias. The radiometer measurements are preferred over model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure in the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to the range observations when deriving sea surface slopes, since their inherent errors may produce abnormal slopes; along-track smoothing with uniform weights over a suitable window width is an effective strategy for avoiding the introduction of extra noise. The slopes calculated from radiometer wet tropospheric corrections and from along-track smoothed dual-frequency ionospheric corrections and sea state bias are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on sea surface slopes, although most ocean tide slopes lie within ±3 microradians. Larger ocean tide slopes mostly occur over marginal and island-surrounding seas, and additional tidal models with better precision or extended coverage (e.g. Got-e) are strongly recommended for updating the corrections in geophysical data records.

  11. Morphobiochemical diagnosis of acute trabecular microfractures using gamma correction Tc-99m HDP pinhole bone scan with histopathological verification.

    Science.gov (United States)

    Bahk, Yong-Whee; Hwang, Seok-Ha; Lee, U-Young; Chung, Yong-An; Jung, Joo-Young; Jeong, Hyeonseok S

    2017-11-01

    We prospectively performed gamma correction pinhole bone scan (GCPBS) and a histopathologic verification study to make a simultaneous morphobiochemical diagnosis of trabecular microfractures (TMF) that occur in the femoral head as part of a femoral neck fracture. Materials consisted of surgical specimens of the femoral head in 6 consecutive patients. The specimens were imaged using Tc-99m hydroxymethylene diphosphonate (HDP) pinhole scan and processed by the gamma correction. After cleansing with 10% formalin solution, the injured specimen surface was observed using a surgical microscope to record TMF. Morphological findings shown in the photograph, the naive pinhole bone scan, GCPBS, and the hematoxylin-eosin (H&E) stain of the specimen were reciprocally correlated for histological verification, and the usefulness of suppression and enhancement of Tc-99m HDP uptake in TMF and in edema and hemorrhage was biochemically investigated using gamma correction. On the one hand, GCPBS was able to depict the calcifying calluses in TMF with enhanced Tc-99m HDP uptake. They were pinpointed, speckled, round, ovoid, rod-like, geographic, and crushed in shape. The smallest callus measured was 0.23 mm in this series. On the other hand, GCPBS was biochemically able to discern the calluses with enhanced high Tc-99m HDP uptake from the normal and the edema-dipped and hemorrhage-irritated trabeculae with washed-out uptake. Morphobiochemically, GCPBS can clearly depict microfractures in the femoral head produced by femoral neck fracture. It discerns the microcalluses with enhanced Tc-99m HDP uptake from the intact and the edema-dipped and hemorrhage-irritated trabeculae with suppressed, washed-out Tc-99m HDP uptake. Both conventional pinhole bone scan and gamma correction are useful imaging means to specifically diagnose the microcalluses naturally formed in TMF.

  12. Diagnosis of adrenal tumors

    Energy Technology Data Exchange (ETDEWEB)

    Richter, E.I.; Loesch, H.

    1987-09-01

    Of 155 patients with adrenal disorders, 120 (77%) were correctly diagnosed as negative. There were no correlations between the results of computer tomography and phlebography or between computer tomography and laboratory tests. In 31 patients (20%) a correct diagnosis was obtained and these patients were sent to surgery. Four cases (3%) were shown to be false positive. In these cases (with one exception), both the computer tomography and phlebography results had been overinterpreted. Computer tomography was shown to be a method of high sensitivity and almost as great specificity. Tumors cannot be distinguished by phlebography; only pheochromocytoma shows a characteristic alteration of vessels in arteriograms. In general, an accurate diagnosis requires positive angiography (arterio- or phlebography) results and clear evidence of elevated hormone levels. Only then is surgery indicated.

  13. Diagnosis of adrenal tumors

    International Nuclear Information System (INIS)

    Richter, E.I.; Loesch, H.

    1987-01-01

    Of 155 patients with adrenal disorders, 120 (77%) were correctly diagnosed as negative. There were no correlations between the results of computer tomography and phlebography or between computer tomography and laboratory tests. In 31 patients (20%) a correct diagnosis was obtained and these patients were sent to surgery. Four cases (3%) were shown to be false positive. In these cases (with one exception), both the computer tomography and phlebography results had been overinterpreted. Computer tomography was shown to be a method of high sensitivity and almost as great specificity. Tumors cannot be distinguished by phlebography; only pheochromocytoma shows a characteristic alteration of vessels in arteriograms. In general, an accurate diagnosis requires positive angiography (arterio- or phlebography) results and clear evidence of elevated hormone levels. Only then is surgery indicated. (orig.) [de

  14. Evolution of Indian Ocean biases in the summer monsoon season hindcasts from the Met Office Global Seasonal Forecasting System GloSea5

    Science.gov (United States)

    Chevuturi, A.; Turner, A. G.; Woolnough, S. J.

    2016-12-01

    In this study we investigate the development of biases in the Indian Ocean region in summer hindcasts of the UK Met Office coupled initialised global seasonal forecasting system, GloSea5-GC2. Previous work has demonstrated the rapid evolution of strong monsoon circulation biases over India from seasonal forecasts initialised in early May, together with coupled strong easterly wind biases on the equator. We analyse a set of three springtime start dates for the 20-year hindcast period (1992-2011) and fifteen total ensemble members for each year. We use comparisons with a variety of observations to test the rate of evolving mean-state biases in the Arabian Sea, over India, and over the equatorial Indian Ocean. Biases are all shown to develop rapidly, particularly for the circulation bias over India that is connected to convection. These circulation biases later reach the surface and lead to responses in Arabian Sea SST in accordance with coastal and Ekman upwelling processes. We also assess the evolution of radiation and turbulent heat fluxes at the surface. Meanwhile, at the equator, easterly biases in surface winds are shown to develop rapidly, accompanied by an SST pattern consistent with positive Indian Ocean Dipole mean-state conditions (warm western equatorial Indian Ocean, cold east). This bias develops in a manner consistent with coupled ocean-atmosphere exchanges and the Bjerknes feedback. We hypothesize that lower tropospheric easterly wind biases developing in the equatorial region originate from the surface, and also that signals of the cold bias in the eastern equatorial Indian Ocean propagate to the Bay of Bengal via coastal Kelvin waves. Earlier work has shown the utility of wind-stress corrections in the Indian Ocean for correcting the easterly wind bias there and ultimately improving the evolution of the Indian Ocean Dipole. We identify and test this wind-stress correction technique in case study years from the hindcast period to see their impact on seasonal

  15. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    Science.gov (United States)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  16. Exemplar-based human action pose correction.

    Science.gov (United States)

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of the Xbox Kinect built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments for the facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.

  17. CAUSES: Diagnosis of the Summertime Warm Bias in CMIP5 Climate Models at the ARM Southern Great Plains Site

    Science.gov (United States)

    Zhang, Chengzhu; Xie, Shaocheng; Klein, Stephen A.; Ma, Hsi-yen; Tang, Shuaiqi; Van Weverberg, Kwinten; Morcrette, Cyril J.; Petch, Jon

    2018-03-01

    All the weather and climate models participating in the Clouds Above the United States and Errors at the Surface project show a summertime surface air temperature (T2m) warm bias in the region of the central United States. To understand the warm bias in long-term climate simulations, we assess the Atmospheric Model Intercomparison Project simulations from the Coupled Model Intercomparison Project Phase 5, with long-term observations mainly from the Atmospheric Radiation Measurement program Southern Great Plains site. Quantities related to the surface energy and water budget, and large-scale circulation are analyzed to identify possible factors and plausible links involved in the warm bias. The systematic warm season bias is characterized by an overestimation of T2m and underestimation of surface humidity, precipitation, and precipitable water. Accompanying the warm bias is an overestimation of absorbed solar radiation at the surface, which is due to a combination of insufficient cloud reflection and clear-sky shortwave absorption by water vapor and an underestimation in surface albedo. The bias in cloud is shown to contribute most to the radiation bias. The surface layer soil moisture impacts T2m through its control on evaporative fraction. The error in evaporative fraction is another important contributor to the T2m bias. Similar sources of error are found in hindcasts from other Clouds Above the United States and Errors at the Surface studies. In Atmospheric Model Intercomparison Project simulations, biases in meridional wind velocity associated with the low-level jet and the 500 hPa vertical velocity may also relate to the T2m bias through their control on the surface energy and water budget.

  18. Biased but in Doubt: Conflict and Decision Confidence

    Science.gov (United States)

    De Neys, Wim; Cromheeke, Sofie; Osman, Magda

    2011-01-01

    Human reasoning is often biased by intuitive heuristics. A central question is whether the bias results from a failure to detect that the intuitions conflict with traditional normative considerations or from a failure to discard the tempting intuitions. The present study addressed this unresolved debate by using people's decision confidence as a nonverbal index of conflict detection. Participants were asked to indicate how confident they were after solving classic base-rate (Experiment 1) and conjunction fallacy (Experiment 2) problems in which a cued intuitive response could be inconsistent or consistent with the traditional correct response. Results indicated that reasoners showed a clear confidence decrease when they gave an intuitive response that conflicted with the normative response. Contrary to popular belief, this establishes that people seem to acknowledge that their intuitive answers are not fully warranted. Experiment 3 established that younger reasoners did not yet show the confidence decrease, which points to the role of improved bias awareness in our reasoning development. Implications for the long-standing debate on human rationality are discussed. PMID:21283574

  19. Lineup identification by children: effects of clothing bias.

    Science.gov (United States)

    Freire, Alejo; Lee, Kang; Williamson, Karen S; Stuart, Sarah J E; Lindsay, R C L

    2004-06-01

    This study examined effects of clothing cues on children's identification accuracy from lineups. Four- to 14-year-olds (n = 228) saw 12 video clips of individuals, each wearing a distinctly colored shirt. After watching each clip children were presented with a target-present or target-absent photo lineup. Three clothing conditions were included. In 2 conditions all lineup members wore the same colored shirt; in the third, biased condition, the shirt color of only one individual matched that seen in the preceding clip (the target in target-present trials and the replacement in target-absent trials). Correct identifications of the target in target-present trials were most frequent in the biased condition, whereas in target-absent trials the biased condition led to more false identifications of the target replacement. Older children were more accurate than younger children, both in choosing the target from target-present lineups and rejecting target-absent lineups. These findings suggest that a simple clothing cue such as shirt color can have a significant impact on children's lineup identification accuracy.

  20. Lax decision criteria lead to negativity bias: evidence from the emotional stroop task.

    Science.gov (United States)

    Liu, Guofang; Xin, Ziqiang; Lin, Chongde

    2014-06-01

    Negativity bias means that negative information is usually given more emphasis than comparable positive information. Under signal detection theory, recent research found that people more frequently and incorrectly identify negative task-related words as having been presented originally than positive words, even when they were not presented. That is, people have lax decision criteria for negative words. However, the response biases for task-unrelated negative words and for emotionally important words are still unclear. This study investigated response bias for these two kinds of words. Study 1 examined the response bias for task-unrelated negative words using an emotional Stroop task. Proportions of correct recognition to negative and positive words were assessed by non-parametric signal detection analysis. Participants have lower (i.e., more lax) decision criteria for task-unrelated negative words than for positive words. Study 2 supported and expanded this result by investigating participants' response bias for highly emotional words. Participants have lower decision criteria for highly emotional words than for less emotional words. Finally, possible evolutionary sources of the response bias were discussed.

  1. New method for eliminating the statistical bias in highly turbulent flow measurements

    International Nuclear Information System (INIS)

    Nakao, S.I.; Terao, Y.; Hirata, K.I.; Kitakyushu Industrial Research Institute, Fukuoka, Japan)

    1987-01-01

    A simple method was developed for eliminating statistical bias that can be applied to highly turbulent flows with sparse and nonuniform seeding conditions. Unlike the methods proposed so far, the weighting function was determined based on the idea that the statistical bias could be eliminated if the asymmetric form of the probability density function of the velocity data were corrected. Moreover, data more than three standard deviations away from the mean were discarded to remove the apparent turbulent intensity resulting from noise. The present method was applied to data obtained in the wake of a block, which provided local turbulent intensities up to about 120 percent, and it was found to eliminate the statistical bias with high accuracy. 9 references
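
    The procedure combines outlier rejection (samples more than three standard deviations from the mean are discarded) with a weighting function that corrects the skewed sampling of the velocity probability density. The paper's own weighting is not given in this record, so the sketch below substitutes the familiar inverse-velocity-magnitude weight as a placeholder; everything else is a generic illustration on synthetic samples.

        import numpy as np

        def debiased_mean_velocity(u, n_sigma=3.0):
            """Outlier rejection plus weighted mean for LDV-style velocity samples.

            u : 1-D array of instantaneous velocity samples.
            Samples farther than n_sigma standard deviations from the raw mean are
            discarded (to suppress noise-driven apparent turbulence); the remaining
            samples are averaged with 1/|u| weights, a common velocity-bias weighting
            used here as a placeholder for the paper's PDF-symmetry-based weighting.
            """
            u = np.asarray(u, float)
            keep = np.abs(u - u.mean()) <= n_sigma * u.std()
            u = u[keep]
            w = 1.0 / np.maximum(np.abs(u), 1e-6)
            mean_u = (w * u).sum() / w.sum()
            turb_intensity = np.sqrt((w * (u - mean_u) ** 2).sum() / w.sum()) / abs(mean_u)
            return mean_u, turb_intensity

        # toy highly turbulent sample with a few spurious outliers
        rng = np.random.default_rng(0)
        samples = np.concatenate([rng.normal(1.0, 0.9, 5000), [15.0, -12.0, 18.0]])
        print(debiased_mean_velocity(samples))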

  2. Detectability of the effect of Inflationary non-Gaussianity on halo bias

    CERN Document Server

    Verde, Licia

    2009-01-01

    We consider the description of the clustering of halos for physically motivated types of non-Gaussian initial conditions. In particular, we include non-Gaussianity of the types arising from single-field slow-roll, multi-field and curvaton models (local type), higher-order derivative terms (equilateral type), vacuum-state modifications (enfolded type) and horizon-scale general relativistic (GR) corrections. We show that large-scale halo bias is a very sensitive tool for probing non-Gaussianity, potentially leading, for some planned surveys, to a detection of non-Gaussianity arising from horizon-scale GR corrections.

  3. RCP: a novel probe design bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Niu, Liang; Xu, Zongli; Taylor, Jack A

    2016-09-01

    The Illumina HumanMethylation450 BeadChip has been extensively utilized in epigenome-wide association studies. This array and its successor, the MethylationEPIC array, use two types of probes-Infinium I (type I) and Infinium II (type II)-in order to increase genome coverage but differences in probe chemistries result in different type I and II distributions of methylation values. Ignoring the difference in distributions between the two probe types may bias downstream analysis. Here, we developed a novel method, called Regression on Correlated Probes (RCP), which uses the existing correlation between pairs of nearby type I and II probes to adjust the beta values of all type II probes. We evaluate the effect of this adjustment on reducing probe design type bias, reducing technical variation in duplicate samples, improving accuracy of measurements against known standards, and retention of biological signal. We find that RCP is statistically significantly better than unadjusted data or adjustment with alternative methods including SWAN and BMIQ. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website (https://www.bioconductor.org/packages/release/bioc/html/ENmix.html). niulg@ucmail.uc.edu Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
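
    As a conceptual sketch only (not the published RCP algorithm, whose details are not reproduced here), the idea of using correlated nearby type I / type II probe pairs to rescale type II beta values can be illustrated with a simple linear mapping fitted on the pairs and applied to all type II probes; the arrays below are synthetic stand-ins for BeadChip data:

```python
import numpy as np

# Conceptual sketch, not the published RCP method: fit a linear mapping from the
# type II beta-value scale onto the type I scale using paired nearby probes, then
# apply it to all type II probes. All data are synthetic illustrations.

rng = np.random.default_rng(3)
beta_type1_paired = np.clip(rng.beta(2, 2, 500), 0.01, 0.99)               # type I betas at paired loci
beta_type2_paired = np.clip(0.15 + 0.7 * beta_type1_paired
                            + rng.normal(0, 0.03, 500), 0.01, 0.99)        # correlated type II betas

slope, intercept = np.polyfit(beta_type2_paired, beta_type1_paired, 1)     # map type II -> type I scale

beta_type2_all = np.clip(0.15 + 0.7 * rng.beta(2, 2, 10_000), 0.01, 0.99)  # all type II probes
beta_type2_adjusted = np.clip(intercept + slope * beta_type2_all, 0, 1)
print(beta_type2_adjusted[:5])
```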

  4. [The heuristics of reaching a diagnosis].

    Science.gov (United States)

    Wainstein, Eduardo

    2009-12-01

    Making a diagnosis in medicine is a complex process in which many cognitive and psychological issues are involved. After the first encounter with the patient, an unconscious process ensues in which the presence of a particular disease is suspected. Usually, complementary tests are requested to confirm the clinical suspicion. The interpretation of the requested tests can be biased by the clinical diagnosis that was considered in the first encounter with the patient. Awareness of these sources of error is essential in the interpretation of the findings that will eventually lead to a final diagnosis. This article discusses some aspects of the heuristics involved in the assignment of prior probabilities and provides a brief review of current concepts of the reasoning process.

  5. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    International Nuclear Information System (INIS)

    Jian, Y; Carson, R E; Planeta, B

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. (paper)

  6. Differential diagnosis of rheumatic illnesses. 4. compl. rev. and enl. ed.

    International Nuclear Information System (INIS)

    Zeidler, Henning; Michel, Beat

    2009-01-01

    The number of possible differential diagnoses of rheumatic illnesses is extraordinarily high. This circumstance makes diagnostics a difficult field with numerous pitfalls. A correct and complete diagnosis, however, is a precondition for the correct therapy. This book guides the reader from symptom to diagnosis: a detailed presentation of the fundamentals (anamnesis, investigation findings, laboratory diagnostics and imaging) is followed by a detailed description of all important differential diagnoses. The fourth edition of this standard work has been completely revised and updated. An indispensable guidebook for everyone who treats patients with rheumatic illnesses. [de]

  7. Selection bias in the reported performances of AD classification pipelines

    Directory of Open Access Journals (Sweden)

    Alex F. Mendelson

    2017-01-01

    Full Text Available The last decade has seen a great proliferation of supervised learning pipelines for individual diagnosis and prognosis in Alzheimer's disease. As more pipelines are developed and evaluated in the search for greater performance, only those results that are relatively impressive will be selected for publication. We present an empirical study to evaluate the potential for optimistic bias in classification performance results as a result of this selection. This is achieved using a novel, resampling-based experiment design that effectively simulates the optimisation of pipeline specifications by individuals or collectives of researchers using cross validation with limited data. Our findings indicate that bias can plausibly account for an appreciable fraction (often greater than half) of the apparent performance improvement associated with the pipeline optimisation, particularly in small samples. We discuss the consistency of our findings with patterns observed in the literature and consider strategies for bias reduction and mitigation.

  8. Enzyme-linked immunoassay for plasma-free metanephrines in the biochemical diagnosis of phaeochromocytoma in adults is not ideal.

    LENUS (Irish Health Repository)

    2012-02-01

    Abstract Background: The aim of the study was to define the analytical and diagnostic performance of the Labor Diagnostica Nord (LDN) 2-Met plasma ELISA assay for fractionated plasma metanephrines in the biochemical diagnosis of phaeochromocytoma. Methods: The stated manufacturer's performance characteristics were assessed. Clinical utility was evaluated against liquid chromatography tandem mass spectrometry (LC-MS/MS) using bias, sensitivity and specificity outcomes. Samples (n=73) were collected from patients in whom phaeochromocytoma had been excluded (n=60) based on low probability of disease, repeat negative testing for urinary fractionated catecholamines and metanephrines, lack of radiological and histological evidence of a tumour and from a group (n=13) in whom the tumour had been histologically confirmed. Blood collected into K2EDTA tubes was processed within 30 min. Separated plasma was aliquoted (x2) and frozen at -40 degrees C prior to analyses. One aliquot was analysed for plasma metanephrines using the LDN 2-Met ELISA and the other by LC-MS/MS. Results: The mean bias of -32% for normetanephrine (ELISA) when compared to the reference method (LC-MS/MS) makes under-diagnosis of phaeochromocytoma likely. The sensitivity of the assay (100%) was equal to the reference method, but specificity (88.3%) lower than the reference method (95%), making it less than optimum for the biochemical diagnosis of phaeochromocytoma. Conclusions: Plasma-free metanephrines as measured by the Labor Diagnostica Nord (LDN) 2-Met ELISA do not display test characteristics that would support their introduction or continuation as part of a screening protocol for the biochemical detection of phaeochromocytoma unless the calibration problem identified is corrected and other more accurate and analytically specific methods remain unavailable.

  9. A Comparison of Three Approaches to Correct for Direct and Indirect Range Restrictions: A Simulation Study

    Science.gov (United States)

    Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane

    2016-01-01

    A common methodological problem in the evaluation of the predictive validity of selection methods, e.g. in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
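
    For reference, Thorndike's Case II formula for direct range restriction (the family of corrections the abstract refers to) can be applied as in the minimal sketch below; the numbers are illustrative, not the simulation study's values:

```python
import math

# Minimal sketch of Thorndike's Case II correction for direct range restriction:
# r_restricted is the predictor-criterion correlation in the selected sample and
# u is the ratio of unrestricted to restricted predictor standard deviations.
# Values are illustrative only.

def thorndike_case2(r_restricted: float, u: float) -> float:
    return (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

print(thorndike_case2(0.30, 1.5))   # ~0.43: selection attenuated the observed validity
```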

  10. Assessment and Correction of on-Orbit Radiometric Calibration for FY-3 VIRR Thermal Infrared Channels

    Directory of Open Access Journals (Sweden)

    Na Xu

    2014-03-01

    Full Text Available The FengYun-3 (FY-3) Visible Infrared Radiometer (VIRR), along with its predecessor, the Multispectral Visible Infrared Scanning Radiometer (MVISR) onboard FY-1C&D, has provided continuous global observations for more than 14 years. This data record is valuable for weather prediction, climate monitoring, and environment research. Data quality is vital for satellite data assimilation in Numerical Weather Prediction (NWP) and quantitative remote sensing applications. In this paper, the accuracies of radiometric calibration for VIRR onboard FY-3A and FY-3B, in the thermal infrared (TIR) channels, are evaluated using the Low Earth Orbit (LEO)-LEO simultaneous nadir overpass intercalibration method. Hyperspectral and high-quality observations from the Infrared Atmosphere Sounding Instrument (IASI) onboard METOP-A are used as reference. The biases of VIRR measurements with respect to IASI over one and a half years indicate that the TIR calibration accuracy of FY-3B VIRR is better than that of FY-3A VIRR. The brightness temperature (BT) measured by FY-3A/VIRR is cooler than that measured by IASI, with monthly mean biases ranging from −2 K to −1 K for channel 4 and −1 K to 0.2 K for channel 5. Measurements from FY-3B/VIRR are more consistent with those from IASI, and the annual mean biases are 0.84 ± 0.16 K and −0.66 ± 0.18 K for channels 4 and 5, respectively. The BT biases of FY-3A/VIRR show scene-temperature dependence and seasonal variation, which are not found in the FY-3B/VIRR BT biases. The temperature-dependent biases are shown to be attributable to the nonlinearity of the detectors. New nonlinear correction coefficients for the FY-3A/VIRR TIR channels are reevaluated using various collocation samples. Verification results indicate that use of the new nonlinear correction can greatly reduce the scene temperature-dependent and systematic biases.
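
    As a hedged sketch of the generic idea (fitting a scene-temperature-dependent bias over collocation samples and removing it), the code below fits a quadratic to the instrument-minus-reference BT bias and applies it as a correction. All numbers are synthetic illustrations, not the reevaluated FY-3A/VIRR coefficients:

```python
import numpy as np

# Illustrative sketch of a nonlinear (quadratic) scene-temperature-dependent bias
# correction estimated from collocation samples. Data and coefficients are synthetic.

rng = np.random.default_rng(4)
scene_bt = rng.uniform(210, 300, 400)                       # collocated reference scene BT (K)
bt_bias = (-1.5 + 0.012 * (scene_bt - 250)
           - 1.5e-4 * (scene_bt - 250) ** 2
           + rng.normal(0, 0.1, 400))                       # synthetic instrument-minus-reference bias (K)
virr_bt = scene_bt + bt_bias                                # uncorrected measurement (stand-in)

coeffs = np.polyfit(virr_bt, bt_bias, 2)                    # nonlinear-correction coefficients
virr_bt_corrected = virr_bt - np.polyval(coeffs, virr_bt)   # remove the fitted scene-dependent bias
print(coeffs, np.mean(np.abs(virr_bt_corrected - scene_bt)))
```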

  11. Achalasia-an unnecessary long way to diagnosis.

    Science.gov (United States)

    Niebisch, S; Hadzijusufovic, E; Mehdorn, M; Müller, M; Scheuermann, U; Lyros, O; Schulz, H G; Jansen-Winkeln, B; Lang, H; Gockel, I

    2017-05-01

    Although achalasia presents with typical symptoms such as dysphagia, regurgitation, weight loss, and atypical chest pain, the time until first diagnosis often takes years and is frustrating for patients and nevertheless associated with high costs for the healthcare system. A total of 563 patients were interviewed with confirmed diagnosis of achalasia regarding their symptoms leading to diagnosis along with past clinical examinations and treatments. Included were patients who had undergone their medical investigations in Germany. Overall, 527 study subjects were included (male 46%, female 54%, mean age at time of interview 51 ± 14.8 years). Dysphagia was present in 86.7%, regurgitation in 82.9%, atypical chest pain in 79%, and weight loss in 58% of patients before diagnosis. On average, it took 25 months (Interquartile Range (IQR) 9-65) until confirmation of correct diagnosis of achalasia. Though, diagnosis was confirmed significantly quicker (35 months IQR 9-89 vs. 20 months IQR 8-53; p surgical myotomy. Endoscopic dilatation was realized significantly faster compared to esophageal myotomy (1 month IQR 0-4 vs. 3 months IQR 1-11; p achalasia was significantly faster in the past 15 years, it still takes almost 2 years until the correct diagnosis of achalasia is confirmed. Alarming is the fact that although esophageal manometry is known as the gold standard to differentiate primary motility disorders, only three out of four patients had undergone this diagnostic pathway during their diagnostic work-up. Better education of medical professionals and broader utilization of highly sensitive diagnostic tools, such as high-resolution manometry, are strictly necessary in order to correctly diagnose affected patients and to offer therapy faster. © The Authors 2017. Published by Oxford University Press on behalf of International Society for Diseases of the Esophagus. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Isotopic biases for actinide-only burnup credit

    International Nuclear Information System (INIS)

    Rahimi, M.; Lancaster, D.; Hoeffer, B.; Nichols, M.

    1997-01-01

    The primary purpose of this paper is to present the new methodology for establishing bias and uncertainty associated with isotopic prediction in spent fuel assemblies for burnup credit analysis. The analysis applies to the design of criticality control systems for spent fuel casks. A total of 54 spent fuel samples were modeled and analyzed using the Shielding Analyses Sequence (SAS2H). Multiple regression analysis and a trending test were performed to develop isotopic correction factors for 10 actinide burnup credit isotopes. 5 refs., 1 tab

  13. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    Directory of Open Access Journals (Sweden)

    Andreas Tuerk

    2017-05-01

    Full Text Available Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (read "mix square"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We, further, observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.

  14. Corrections for the effects of significant wave height and attitude on Geosat radar altimeter measurements

    Science.gov (United States)

    Hayne, G. S.; Hancock, D. W., III

    1990-01-01

    Range estimates from a radar altimeter have biases which are a function of the significant wave height (SWH) and the satellite attitude angle (AA). Based on results of prelaunch Geosat modeling and simulation, a correction for SWH and AA was already applied to the sea-surface height estimates from Geosat's production data processing. By fitting a detailed model radar return waveform to Geosat waveform sampler data, it is possible to provide independent estimates of the height bias, the SWH, and the AA. The waveform fitting has been carried out for 10-sec averages of Geosat waveform sampler data over a wide range of SWH and AA values. The results confirm that Geosat sea-surface-height correction is good to well within the original dm-level specification, but that an additional height correction can be made at the level of several cm.

  15. Time-course of attention biases in social phobia.

    Science.gov (United States)

    Schofield, Casey A; Inhoff, Albrecht W; Coles, Meredith E

    2013-10-01

    Theoretical models of social phobia implicate preferential attention to social threat in the maintenance of anxiety symptoms, though there has been limited work characterizing the nature of these biases over time. The current study utilized eye-movement data to examine the time-course of visual attention over 1500ms trials of a probe detection task. Nineteen participants with a primary diagnosis of social phobia based on DSM-IV criteria and 20 non-clinical controls completed this task with angry, fearful, and happy face trials. Overt visual attention to the emotional and neutral faces was measured in 50ms segments across the trial. Over time, participants with social phobia attend less to emotional faces and specifically less to happy faces compared to controls. Further, attention to emotional relative to neutral expressions did not vary notably by emotion for participants with social phobia, but control participants showed a pattern after 1000ms in which over time they preferentially attended to happy expressions and avoided negative expressions. Findings highlight the importance of considering attention biases to positive stimuli as well as the pattern of attention between groups. These results suggest that attention "bias" in social phobia may be driven by a relative lack of the biases seen in non-anxious participants. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Competing Biases in Mental Arithmetic: When Division Is More and Multiplication Is Less.

    Science.gov (United States)

    Shaki, Samuel; Fischer, Martin H

    2017-01-01

    Mental arithmetic exhibits various biases. Among those is a tendency to overestimate addition and to underestimate subtraction outcomes. Does such "operational momentum" (OM) also affect multiplication and division? Twenty-six adults produced lines whose lengths corresponded to the correct outcomes of multiplication and division problems shown in symbolic format. We found a reliable tendency to over-estimate division outcomes, i.e., reverse OM. We suggest that anchoring on the first operand (a tendency to use this number as a reference for further quantitative reasoning) contributes to cognitive biases in mental arithmetic.

  17. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals

    DEFF Research Database (Denmark)

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G

    2018-01-01

    for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision...... are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation...
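
    A minimal sketch of the underlying calculation (my own formulation of the general idea, not the paper's Excel implementation): for a Gaussian reference distribution normalized to mean 0 and SD 1 with reference limits at ±1.96, the fraction of reference individuals falling outside the limits after adding analytical bias and imprecision can be computed with the normal CDF, the Python counterpart of the NORMINV/NORMDIST family:

```python
from scipy.stats import norm

# Hedged sketch: fraction of reference individuals outside common reference limits
# when analytical bias b and analytical imprecision s (both expressed as fractions
# of the biological SD) are added to a standardized Gaussian reference distribution.
# This is a generic formulation, not the paper's exact method; values are illustrative.

def fraction_outside(bias: float, imprecision: float, limit: float = 1.96) -> float:
    s_total = (1 + imprecision**2) ** 0.5
    low = norm.cdf((-limit - bias) / s_total)
    high = 1 - norm.cdf((limit - bias) / s_total)
    return low + high

print(fraction_outside(0.0, 0.0))    # ~0.05, the nominal 5% outside the limits
print(fraction_outside(0.25, 0.5))   # combined bias and imprecision inflate the fraction
```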

  18. Cognitive Reflection, Decision Biases, and Response Times.

    Science.gov (United States)

    Alós-Ferrer, Carlos; Garagnani, Michele; Hügelschäfer, Sabine

    2016-01-01

    We present novel evidence on response times and personality traits in standard questions from the decision-making literature where responses are relatively slow (medians around half a minute or above). To this end, we measured response times in a number of incentivized, framed items (decisions from description) including the Cognitive Reflection Test, two additional questions following the same logic, and a number of classic questions used to study decision biases in probability judgments (base-rate neglect, the conjunction fallacy, and the ratio bias). All questions create a conflict between an intuitive process and more deliberative thinking. For each item, we then created a non-conflict version by either making the intuitive impulse correct (resulting in an alignment question), shutting it down (creating a neutral question), or making it dominant (creating a heuristic question). For CRT questions, the differences in response times are as predicted by dual-process theories, with alignment and heuristic variants leading to faster responses and neutral questions to slower responses than the original, conflict questions. For decision biases (where responses are slower), evidence is mixed. To explore the possible influence of personality factors on both choices and response times, we used standard personality scales including the Rational-Experiential Inventory and the Big Five, and used them as controls in regression analysis.

  19. Return predictability and intertemporal asset allocation: Evidence from a bias-adjusted VAR model

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    with quarterly data from 1952 to 2006. The results show that correcting the VAR parameters for small-sample bias has both quantitatively and qualitatively important effects on the strategic intertemporal part of optimal portfolio choice, especially for bonds: for intermediate values of risk...
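
    The generic idea behind bias-adjusting such parameter estimates can be illustrated with a bootstrap correction of the slope of a persistent predictor; the sketch below uses a univariate AR(1) for brevity and illustrative numbers, and is not the paper's specific adjustment:

```python
import numpy as np

# Hedged sketch: bootstrap bias correction of an AR(1) slope estimate (the same
# logic extends to VAR coefficient matrices). Sample size and parameters are
# illustrative, not taken from the study above.

rng = np.random.default_rng(1)

def ols_ar1(y):
    x, ylead = y[:-1], y[1:]
    x = x - x.mean()
    ylead = ylead - ylead.mean()
    return float(x @ ylead / (x @ x))

# Simulate a persistent predictor and estimate its slope by OLS.
T, phi_true = 220, 0.95
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal()
phi_hat = ols_ar1(y)

# Bootstrap the bias: re-simulate from the estimated model, re-estimate,
# and subtract the average estimation bias from the original estimate.
boot = []
for _ in range(500):
    yb = np.zeros(T)
    for t in range(1, T):
        yb[t] = phi_hat * yb[t - 1] + rng.normal()
    boot.append(ols_ar1(yb))
phi_bc = phi_hat - (np.mean(boot) - phi_hat)    # bias-corrected estimate

print(round(phi_hat, 3), round(phi_bc, 3))      # OLS is biased downward; the correction pushes it back up
```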

  20. Analysis of operators' diagnosis tasks based on cognitive process

    International Nuclear Information System (INIS)

    Zhou Yong; Zhang Li

    2012-01-01

    Diagnosis tasks in nuclear power plants, characterized by high dynamics and uncertainty, are complex reasoning tasks. Diagnosis errors are the main causes of errors of commission. Firstly, based on mental model theory and perception/action cycle theory, a cognitive model for analyzing operators' diagnosis tasks is proposed. Then, the model is used to investigate a trip event which occurred at the Crystal River nuclear power plant. The application demonstrates typical cognitive biases and mistakes which operators may make when performing diagnosis tasks. They mainly include a strong confirmation tendency, difficulty in producing complete hypothesis sets, group mindset, and non-systematic errors in hypothesis testing. (authors)

  1. Counteracting estimation bias and social influence to improve the wisdom of crowds.

    Science.gov (United States)

    Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D

    2018-04-01

    Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).

  2. Off-nadir antenna bias correction using Amazon rain sigma(0) data

    Science.gov (United States)

    Birrer, I. J.; Dome, G. J.; Sweet, J.; Berthold, G.; Moore, R. K.

    1982-01-01

    The radar response from the Amazon rain forest was studied to determine the suitability of this region for use as a standard target to calibrate a scatterometer like that proposed for the National Oceanic Satellite System (NOSS). Backscattering observations made by the SEASAT Scatterometer System (SASS) showed the Amazon rain forest to be a homogeneous, azimuthally-isotropic radar target which was insensitive to polarization. The variation with angle of incidence was adequately modeled as a linear function, scattering coefficient (dB) = a + b*theta, with typical values for the incidence-angle coefficient b from 0.07 to 0.15 dB/deg. A small diurnal effect occurs, with measurements at sunrise being 0.5 dB to 1 dB higher than the rest of the day. Maximum-likelihood estimation algorithms presented here permit determination of relative bias and true pointing angle for each beam. Specific implementation of these algorithms for the proposed NOSS scatterometer system is also discussed.

  3. Subject de-biasing of data sets: A Bayesian approach

    International Nuclear Information System (INIS)

    Pate-Cornell, M.E.

    1994-01-01

    In this paper, the authors examine the relevance of data sets (for instance, of past incidents) for risk management decisions when there are reasons to believe that all types of incidents have not been reported at the same rate. Their objective is to infer from the data reports what actually happened in order to assess the potential benefits of different safety measures. The authors use a simple Bayesian model to correct (de-bias) the data sets given the nonreport rates, which are assessed (subjectively) by experts and encoded as the probabilities of reports given different characteristics of the events of interest. They compute a probability distribution for the past number of events given the past number of reports. They illustrate the method by the cases of two data sets: incidents in anesthesia in Australia, and oil spills in the Gulf of Mexico. In the first case, the de-biasing allows correcting for the fact that some types of incidents, such as technical malfunctions, are more likely to be reported when they occur than anesthetist mistakes. In the second case, the authors have to account for the fact that the rates of oil spill reports in different incident categories have increased over the years, perhaps at the same time as the rates of incidents themselves.
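
    The core computation can be sketched as follows: given a reported count and an expert-assessed report probability, place a prior on the true number of events and compute the posterior over that number. This is my own minimal illustration of the idea (Poisson prior, binomial reporting model), not the paper's exact model, and all numbers are hypothetical:

```python
import numpy as np
from scipy.stats import binom, poisson

# Hedged sketch: posterior over the true number of incidents N given R reports and
# an expert-assessed report probability p. Prior choice and all values are illustrative.

R, p = 12, 0.4                       # 12 reports observed, assumed 40% report rate
prior_mean = 30                      # illustrative prior expectation for N
N = np.arange(R, 201)                # the true count must be at least the reported count

posterior = poisson.pmf(N, prior_mean) * binom.pmf(R, N, p)
posterior /= posterior.sum()

print("posterior mean of true incidents:", (N * posterior).sum())
```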

  4. Moisture corrections in neutron coincidence counting of PuO2

    International Nuclear Information System (INIS)

    Stewart, J.E.; Menlove, H.O.

    1987-01-01

    Passive neutron coincidence counting is capable of 1% assay accuracy for pure, well-characterized PuO 2 samples that contain plutonium masses from a few tens of grams to several kilograms. Moisture in the sample can significantly bias the assay high by changing the (α,n) neutron production, the sample multiplication, and the detection efficiency. Monte Carlo calculations and an analytical model of coincidence counting have been used to quantify the individual and cumulative effects of moisture biases for two PuO 2 sample sizes and a range of moisture levels from 0 to 9 wt %. Results of the calculations suggest a simple correction procedure for moisture bias that is effective from 0 to 3 wt % H 2 O. The procedure requires that the moisture level in the sample be known before the coincidence measurement

  5. A procedure to correct proxy-reported weight in the National Health Interview Survey, 1976–2002

    Directory of Open Access Journals (Sweden)

    Utz Rebecca L

    2009-01-01

    Full Text Available Abstract Background: Data from the National Health Interview Survey (NHIS) show a larger-than-expected increase in mean BMI between 1996 and 1997. Proxy-reports of height and weight were discontinued as part of the 1997 NHIS redesign, suggesting that the sharp increase between 1996 and 1997 may be artifactual. Methods: We merged NHIS data from 1976–2002 into a single database consisting of approximately 1.7 million adults aged 18 and over. The analysis consisted of two parts: First, we estimated the magnitude of BMI differences by reporting status (i.e., self-reported versus proxy-reported height and weight). Second, we developed a procedure to correct biases in BMI introduced by reporting status. Results: Our analyses confirmed that proxy-reports of weight tended to be biased downward, with the degree of bias varying by race, sex, and other characteristics. We developed a correction procedure to minimize BMI underestimation associated with proxy-reporting, substantially reducing the larger-than-expected increase found in NHIS data between 1996 and 1997. Conclusion: It is imperative that researchers who use reported estimates of height and weight think carefully about flaws in their data and how existing correction procedures might fail to account for them. The development of this particular correction procedure represents an important step toward improving the quality of BMI estimates in a widely used source of epidemiologic data.

  6. CT on diagnosis and differential diagnosis of adrenal neuroblastoma from nephroblastoma in children

    International Nuclear Information System (INIS)

    Han Jingtian; Shen Guoqiang; Yang Huayuan

    2000-01-01

    Objective: To evaluate the value of CT in the diagnosis of adrenal neuroblastoma in children and in differentiating it from nephroblastoma. Materials and Methods: The CT manifestations of 36 cases of adrenal neuroblastoma and 32 cases of nephroblastoma, all confirmed by postoperative pathology, were analysed. Results: Adrenal neuroblastoma is an extrarenal tumor, so the kidney retained its original form and showed compressive changes. The incidence of tumor calcification, mostly coarse and speckled in form, was high. Nephroblastoma, in contrast, is a renal tumor; the surrounding renal parenchyma showed a characteristic 'new-moon shaped' enhancement. Conclusion: CT is one of the most valuable and effective means of examination to diagnose adrenal neuroblastoma and differentiate it from nephroblastoma. It can provide important information for making a correct diagnosis, planning proper therapy and assessing prognosis.

  7. Revising the diagnosis of congenital amusia with the Montreal Battery of Evaluation of Amusia.

    Science.gov (United States)

    Pfeifer, Jasmin; Hamann, Silke

    2015-01-01

    This article presents a critical survey of the prevalent usage of the Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al., 2003) to assess congenital amusia, a neuro-developmental disorder that has been claimed to be present in 4% of the population (Kalmus and Fry, 1980). It reviews and discusses the current usage of the MBEA in relation to cut-off scores, number of used subtests, manner of testing, and employed statistics, as these vary in the literature. Furthermore, data are presented from a large-scale experiment with 228 German undergraduate students who were assessed with the MBEA and a comprehensive questionnaire. This experiment tested the difference between scores that were obtained in a web-based study (at participants' homes) and those obtained under laboratory conditions with a computerized version of the MBEA. In addition to traditional statistical procedures, the data were evaluated using Signal Detection Theory (SDT; Green and Swets, 1966), taking into consideration the individual's ability to discriminate and their response bias. Results show that using SDT for scoring instead of proportion correct offers a bias-free and normally distributed measure of discrimination ability. It is also demonstrated that a diagnosis based on an average score leads to cases of misdiagnosis. The prevalence of congenital amusia is shown to depend highly on the statistical criterion that is applied as cut-off score and on the number of subtests that is considered for the diagnosis. In addition, three different subtypes of amusics were found in our sample. Lastly, significant differences between the web-based and the laboratory group were found, giving rise to questions about the validity of web-based experimentation.
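
    The SDT scoring the abstract advocates separates discrimination ability from response bias. A minimal sketch of the standard computation of d-prime and the criterion c from hit and false-alarm counts is shown below; the log-linear correction and the counts are illustrative, not MBEA data:

```python
from scipy.stats import norm

# Minimal sketch of SDT scoring for a same/different discrimination test:
# d-prime as a bias-free sensitivity index and c as the response bias.
# The log-linear correction and the counts below are illustrative assumptions.

def dprime_and_c(hits, misses, fas, crs):
    # Log-linear correction avoids infinite z-scores for perfect hit/false-alarm rates.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (fas + 0.5) / (fas + crs + 1)
    zh, zf = norm.ppf(h), norm.ppf(f)
    return zh - zf, -0.5 * (zh + zf)

d, c = dprime_and_c(hits=25, misses=5, fas=10, crs=20)
print(round(d, 2), round(c, 2))
```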

  8. Revising the diagnosis of congenital amusia with the Montreal Battery of Evaluation of Amusia

    Directory of Open Access Journals (Sweden)

    Jasmin ePfeifer

    2015-04-01

    Full Text Available This article presents a critical survey of the prevalent usage of the Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al., 2003) to assess congenital amusia, a neuro-developmental disorder that has been claimed to be present in 4% of the population (Kalmus and Fry, 1980). It reviews and discusses the current usage of the MBEA in relation to cut-off scores, number of used subtests, manner of testing, and employed statistics, as these vary in the literature. Furthermore, data are presented from a large-scale experiment with 228 German undergraduate students who were assessed with the MBEA and a comprehensive questionnaire. This experiment tested the difference between scores that were obtained in a web-based study (at participants’ homes) and those obtained under laboratory conditions with a computerized version of the MBEA. In addition to traditional statistical procedures, the data were evaluated using Signal Detection Theory (SDT; Green and Swets, 1966), taking into consideration the individual’s ability to discriminate and their response bias. Results show that using SDT for scoring instead of proportion correct offers a bias-free and normally distributed measure of discrimination ability. It is also demonstrated that a diagnosis based on an average score leads to cases of misdiagnosis. The prevalence of congenital amusia is shown to depend highly on the statistical criterion that is applied as cut-off score and on the number of subtests that is considered for the diagnosis. In addition, three different subtypes of amusics were found in our sample. Lastly, significant differences between the web-based and the laboratory group were found, giving rise to questions about the validity of web-based experimentation.

  9. Revising the diagnosis of congenital amusia with the Montreal Battery of Evaluation of Amusia

    Science.gov (United States)

    Pfeifer, Jasmin; Hamann, Silke

    2015-01-01

    This article presents a critical survey of the prevalent usage of the Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al., 2003) to assess congenital amusia, a neuro-developmental disorder that has been claimed to be present in 4% of the population (Kalmus and Fry, 1980). It reviews and discusses the current usage of the MBEA in relation to cut-off scores, number of used subtests, manner of testing, and employed statistics, as these vary in the literature. Furthermore, data are presented from a large-scale experiment with 228 German undergraduate students who were assessed with the MBEA and a comprehensive questionnaire. This experiment tested the difference between scores that were obtained in a web-based study (at participants’ homes) and those obtained under laboratory conditions with a computerized version of the MBEA. In addition to traditional statistical procedures, the data were evaluated using Signal Detection Theory (SDT; Green and Swets, 1966), taking into consideration the individual’s ability to discriminate and their response bias. Results show that using SDT for scoring instead of proportion correct offers a bias-free and normally distributed measure of discrimination ability. It is also demonstrated that a diagnosis based on an average score leads to cases of misdiagnosis. The prevalence of congenital amusia is shown to depend highly on the statistical criterion that is applied as cut-off score and on the number of subtests that is considered for the diagnosis. In addition, three different subtypes of amusics were found in our sample. Lastly, significant differences between the web-based and the laboratory group were found, giving rise to questions about the validity of web-based experimentation. PMID:25883562

  10. Does registration of serial MRI improve diagnosis of dementia?

    International Nuclear Information System (INIS)

    Barnes, Josephine; Kennedy, Jonathan; Barker, Suzie; Lehmann, Manja; Nordstrom, R.C.; Smith, Joseph R.; Rossor, Martin N.; Fox, Nick C.; Mitchell, L.A.; Bastos-Leite, Antonio J.; Frost, Chris; Garde, Ellen

    2010-01-01

    We aimed to assess the value of a second MR scan in the radiological diagnosis of dementia. One hundred twenty subjects with clinical follow-up of at least 1 year with two scans were selected from a cognitive disorders clinic. Scans were reviewed as a single first scan (method A), two unregistered scans presented side-by-side (method B) and a registered pair (method C). Scans were presented to two neuroradiologists and a clinician together with approximate scan interval (if applicable) and age. Raters decided on a main and subtype diagnosis. There was no evidence that differences between methods (expressed as relative odds of a correct response) differed between reviewers (p = 0.17 for degenerative condition or not, p = 0.5 for main diagnosis, p = 0.16 for subtype). Accordingly, results were pooled over reviewers. For distinguishing normal/non-progressors from degenerative conditions, the proportions correctly diagnosed were higher with methods B and C than with A (p = 0.001, both tests). The difference between methods B and C was not statistically significant (p = 0.18). For main diagnosis, the proportion of correct diagnoses was highest with method C for all three reviewers; however, this was not statistically significant compared with method A (p = 0.23) or with method B (p = 0.16). For subtype diagnosis, there was some evidence that method C was better than method A (p = 0.01) and B (p = 0.048). Serial MRI and registration may improve visual diagnosis in dementia. (orig.)

  11. Bias analysis to improve monitoring an HIV epidemic and its response: approach and application to a survey of female sex workers in Iran.

    Science.gov (United States)

    Mirzazadeh, Ali; Mansournia, Mohammad-Ali; Nedjat, Saharnaz; Navadeh, Soodabeh; McFarland, Willi; Haghdoost, Ali Akbar; Mohammad, Kazem

    2013-10-01

    We present probabilistic and Bayesian techniques to correct for bias in categorical and numerical measures and empirically apply them to a recent survey of female sex workers (FSW) conducted in Iran. We used bias parameters from a previous validation study to correct estimates of behaviours reported by FSW. Monte-Carlo Sensitivity Analysis and Bayesian bias analysis produced point and simulation intervals (SI). The apparent and corrected prevalence differed by a minimum of 1% for the number of 'non-condom use sexual acts' (36.8% vs 35.8%) to a maximum of 33% for 'ever associated with a venue to sell sex' (35.5% vs 68.0%). The negative predictive value of the questionnaire for 'history of STI' and 'ever associated with a venue to sell sex' was 36.3% (95% SI 4.2% to 69.1%) and 46.9% (95% SI 6.3% to 79.1%), respectively. Bias-adjusted numerical measures of behaviours increased by 0.1 year for 'age at first sex act for money' to 1.5 for 'number of sexual contacts in last 7 days'. The 'true' estimates of most behaviours are considerably higher than those reported and the related SIs are wider than conventional CIs. Our analysis indicates the need for and applicability of bias analysis in surveys, particularly in stigmatised settings.
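
    A minimal sketch of a probabilistic (Monte-Carlo) bias analysis for a misclassified binary measure is shown below: an apparent prevalence is corrected with sensitivity and specificity drawn from assumed validation distributions, yielding a point estimate and a simulation interval. The parameter values are illustrative assumptions, not those of the FSW survey:

```python
import numpy as np

# Hedged sketch: Monte-Carlo sensitivity analysis for misclassification of a
# reported behaviour, using a Rogan-Gladen style correction. All parameters are
# illustrative, not estimates from the study above.

rng = np.random.default_rng(2)
apparent = 0.355                                 # apparent prevalence of a reported behaviour
se = rng.beta(80, 20, size=50_000)               # assumed sensitivity, centred near 0.80
sp = rng.beta(90, 10, size=50_000)               # assumed specificity, centred near 0.90

corrected = (apparent + sp - 1) / (se + sp - 1)  # bias-corrected prevalence per draw
corrected = corrected[(corrected > 0) & (corrected < 1)]

print(np.median(corrected), np.percentile(corrected, [2.5, 97.5]))   # point estimate and 95% simulation interval
```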

  12. HYPERCORTISOLISM: CLASSIFICATION, PATHOGENESIS, CLINICAL MANIFESTATIONS. DIAGNOSIS OF ENDOGENOUS HYPERCORTISOLISM

    Directory of Open Access Journals (Sweden)

    Nikonova L. V.

    2017-02-01

    Full Text Available The relevance of studying Cushing's syndrome of different etiologies, as well as states of hypercorticism that are not associated with endogenous hypercortisolism, stems from the difficulty of diagnosing this disease. Accurate knowledge of the classification and criteria for the diagnosis of hypercorticism makes it possible subsequently to establish the correct diagnosis and to administer the appropriate treatment. The cause of hypercorticism can be endogenous or exogenous factors. There is a particular group of patients requiring screening for hypercorticism using special diagnostic tests. Only a clear understanding of the etiopathogenesis of hypercorticism and its clinical manifestations by the specialist, together with the correct interpretation of diagnostic results, makes it possible to establish the diagnosis, administer the appropriate treatment, significantly reduce the morbidity and mortality of patients of this profile and improve their quality of life.

  13. Estimation of ionized calcium, total calcium and albumin corrected calcium for the diagnosis of hypercalcaemia of malignancy

    International Nuclear Information System (INIS)

    Ijaz, A.; Mehmood, T.; Qureshi, A.H.; Anwar, M.; Dilawar, M.; Hussain, I.; Khan, F.A.; Khan, D.A.; Hussain, S.; Khan, I.A.

    2006-01-01

    Objective: To measure levels of ionized calcium, total calcium and albumin-corrected calcium in patients with different malignant disorders for the diagnosis of hypercalcaemia of malignancy. Design: A case control comparative study. Place and Duration of Study: The study was carried out in the Department of Pathology, Army Medical College Rawalpindi, Armed Forces Institute of Pathology and Department of Oncology CMH, Rawalpindi, from March 2003 to December 2003. Subjects and Methods: Ninety-seven patients with various malignant disorders, admitted to the Department of Oncology, CMH, Rawalpindi, and 39 age- and gender-matched disease-free persons (as controls) were included in the study. Blood ionized calcium (Ca++), pH, sodium (Na+) and potassium (K+) were analysed by ion-selective electrode (ISE) on an Easylyte auto analyser. Other related parameters were measured by colorimetric methods. Results: Blood Ca++ levels in patients suffering from malignant disorders were significantly higher (mean 1.30 ± 0.17 mmol/L) than in control subjects (mean 1.23 ± 0.03 mmol/L) (p<0.001). The number of patients with hypercalcaemia of malignancy detected by Ca++ estimation was significantly higher (38%) as compared to total calcium (8.4%) and albumin-corrected calcium (ACC) (10.6%) (p<0.001). There was no statistically significant difference in other parameters, e.g. phosphate, urea, creatinine, pH, Na+ and K+ levels, between study subjects and controls. Conclusion: Detection of hypercalcaemia can be markedly improved if ionized calcium estimation is used in patients with malignant disorders. (author)
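
    For context, albumin-corrected total calcium is commonly computed with the Payne adjustment; the abstract does not state which formula was used, so the sketch below is an assumption with illustrative values:

```python
# Hedged sketch: Payne-style albumin correction of total calcium (a common
# convention; whether the study used this exact formula is an assumption).

def albumin_corrected_calcium(total_ca_mmol_l: float, albumin_g_l: float) -> float:
    return total_ca_mmol_l + 0.02 * (40.0 - albumin_g_l)

# A hypoalbuminaemic oncology patient: total calcium looks normal until corrected.
print(albumin_corrected_calcium(2.55, 28))   # -> 2.79 mmol/L, above a typical upper limit of ~2.60
```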

  14. Ultrasound in the diagnosis of palpable abdominal masses in children.

    Science.gov (United States)

    Annuar, Z; Sakijan, A S; Annuar, N; Kooi, G H

    1990-12-01

    Ultrasound examinations were done to evaluate clinically palpable abdominal masses in 125 children. The examinations were normal in 21 patients. In 15 patients, the clinically palpable masses were actually anterior abdominal wall abscesses or hematomas. Final diagnosis was available in 87 of 89 patients with intraabdominal masses detected on ultrasound. The majority (71%) were retroperitoneal masses where two-thirds were of renal origin. Ultrasound diagnosis was correct in 68 patients (78%). All cases of hydronephrosis were correctly diagnosed based on characteristic ultrasound appearances. Correct diagnoses of all cases of adrenal hematoma, psoas abscess, liver hematoma, liver abscess and one case of liver metastases were achieved with correlation of relevant clinical information.

  15. G-stack modulated probe intensities on expression arrays - sequence corrections and signal calibration

    Directory of Open Access Journals (Sweden)

    Fasold Mario

    2010-04-01

    Full Text Available Abstract Background: The brightness of the probe spots on expression microarrays is intended to measure the abundance of specific mRNA targets. Probes with runs of at least three guanines (G) in their sequence show abnormally high intensities, which reflect probe effects rather than target concentrations. This G-bias requires correction prior to downstream expression analysis. Results: Longer runs of three or more consecutive G along the probe sequence, and in particular triple degenerate G at the probe's solution end (the (GGG)1-effect), are associated with exceptionally large probe intensities on GeneChip expression arrays. This intensity bias is related to non-specific hybridization and affects both perfect match and mismatch probes. The (GGG)1-effect tends to increase gradually for microarrays of later GeneChip generations. It was found for DNA/RNA as well as for DNA/DNA probe/target-hybridization chemistries. Amplification of sample RNA using T7-primers is associated with strong positive amplitudes of the G-bias, whereas alternative amplification protocols using random primers give rise to much smaller and partly even negative amplitudes. We applied positional-dependent sensitivity models to analyze the specifics of probe intensities in the context of all possible short sequence motifs of one to four adjacent nucleotides along the 25meric probe sequence. Most of the longer motifs are adequately described using a nearest-neighbor (NN) model. In contrast, runs of degenerate guanines require explicit consideration of next-nearest-neighbor (GGG) terms. Preprocessing methods such as vsn, RMA, dChip, MAS5 and gcRMA only insufficiently remove the G-bias from data. Conclusions: Positional- and motif-dependent sensitivity models account for sequence effects on oligonucleotide probe intensities. We propose a positional-dependent NN+GGG hybrid model to correct the intensity bias associated with probes containing poly-G motifs. It is implemented as a single-chip based calibration

  16. Sympathetic bias.

    Science.gov (United States)

    Levy, David M; Peart, Sandra J

    2008-06-01

    We wish to deal with investigator bias in a statistical context. We sketch how a textbook solution to the problem of "outliers" which avoids one sort of investigator bias, creates the temptation for another sort. We write down a model of the approbation seeking statistician who is tempted by sympathy for client to violate the disciplinary standards. We give a simple account of one context in which we might expect investigator bias to flourish. Finally, we offer tentative suggestions to deal with the problem of investigator bias which follow from our account. As we have given a very sparse and stylized account of investigator bias, we ask what might be done to overcome this limitation.

  17. Bias modification training can alter approach bias and chocolate consumption.

    Science.gov (United States)

    Schumacher, Sophie E; Kemps, Eva; Tiggemann, Marika

    2016-01-01

    Recent evidence has demonstrated that bias modification training has potential to reduce cognitive biases for attractive targets and affect health behaviours. The present study investigated whether cognitive bias modification training could be applied to reduce approach bias for chocolate and affect subsequent chocolate consumption. A sample of 120 women (18-27 years) were randomly assigned to an approach-chocolate condition or avoid-chocolate condition, in which they were trained to approach or avoid pictorial chocolate stimuli, respectively. Training had the predicted effect on approach bias, such that participants trained to approach chocolate demonstrated an increased approach bias to chocolate stimuli whereas participants trained to avoid such stimuli showed a reduced bias. Further, participants trained to avoid chocolate ate significantly less of a chocolate muffin in a subsequent taste test than participants trained to approach chocolate. Theoretically, results provide support for the dual process model's conceptualisation of consumption as being driven by implicit processes such as approach bias. In practice, approach bias modification may be a useful component of interventions designed to curb the consumption of unhealthy foods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Rational learning and information sampling: on the "naivety" assumption in sampling explanations of judgment biases.

    Science.gov (United States)

    Le Mens, Gaël; Denrell, Jerker

    2011-04-01

    Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them. Here, we show that this "naivety" assumption is not necessary. Systematically biased judgments can emerge even when decision makers process available information perfectly and are also aware of how the information sample has been generated. Specifically, we develop a rational analysis of Denrell's (2005) experience sampling model, and we prove that when information search is interested rather than disinterested, even rational information sampling and processing can give rise to systematic patterns of errors in judgments. Our results illustrate that a tendency to favor alternatives for which outcome information is more accessible can be consistent with rational behavior. The model offers a rational explanation for behaviors that had previously been attributed to cognitive and motivational biases, such as the in-group bias or the tendency to prefer popular alternatives. 2011 APA, all rights reserved

  19. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method

    Directory of Open Access Journals (Sweden)

    Jingyang Fu

    2018-04-01

    Full Text Available Different from GPS, GLONASS, GALILEO and BeiDou-3, it is confirmed that the code multipath bias (CMB), which originates from the satellite end and can be over 1 m, is commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate its adverse effects on absolute precise applications which use the code measurements, we propose in this paper an improved correction model to estimate the CMB. Different from the traditional model, which considers the correction values to be orbit-type dependent (estimating two sets of values for IGSO and MEO, respectively) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each BDS IGSO + MEO satellite on one hand, and use a denser elevation node separation of 5° to model the CMB variations on the other hand. Currently, institutions such as IGS-MGEX operate over 120 stations which provide daily BDS observations. These large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX are used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that for satellites on the same orbit type, obvious differences can be found in the CMB at the same node and frequency. Results show that the new correction model can improve the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, and shorten the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), as well as improve the PPP positioning accuracy. With our improved correction model, the usage of WL ambiguity is increased from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and improved CMB model have
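
    Applying an elevation-dependent, piecewise-linear code bias correction per satellite amounts to interpolating tabulated node values for each observation's elevation and subtracting the result from the pseudorange. The sketch below illustrates this with 5-degree nodes; the node values, satellite IDs and pseudorange are hypothetical placeholders, not the estimated CMB:

```python
import numpy as np

# Illustrative sketch: per-satellite, elevation-dependent piecewise-linear code bias
# correction with 5-degree nodes. Node values, PRNs and the pseudorange are hypothetical.

elev_nodes = np.arange(0, 91, 5)                                   # 5-degree node separation
cmb_at_nodes = {"C06": 0.3 * np.cos(np.radians(elev_nodes)),       # hypothetical IGSO satellite
                "C11": 0.2 * np.cos(np.radians(elev_nodes))}       # hypothetical MEO satellite

def corrected_pseudorange(prn: str, elevation_deg: float, pseudorange_m: float) -> float:
    correction = np.interp(elevation_deg, elev_nodes, cmb_at_nodes[prn])
    return pseudorange_m - correction

print(corrected_pseudorange("C06", 37.2, 22_123_456.78))
```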

  20. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
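
    The diagnostic itself is a simple correlation between reported effect sizes and sample sizes across studies; a clearly negative correlation is consistent with selection for significance. A minimal sketch with toy numbers (not the article's data) is:

```python
import numpy as np
from scipy.stats import spearmanr

# Minimal sketch of the diagnostic: correlate reported effect sizes with sample sizes
# across a set of studies. The numbers below are toy values, not the article's data.

n = np.array([20, 35, 50, 80, 120, 200, 400])                 # sample sizes
effect_r = np.array([0.45, 0.38, 0.30, 0.24, 0.20, 0.14, 0.10])  # reported effect sizes
rho, p = spearmanr(effect_r, n)
print(rho, p)   # a strongly negative rho would be consistent with publication bias
```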

  1. Relative codon adaptation: a generic codon bias index for prediction of gene expression.

    Science.gov (United States)

    Fox, Jesse M; Erill, Ivan

    2010-06-01

    The development of codon bias indices (CBIs) remains an active field of research due to their myriad applications in computational biology. Recently, the relative codon usage bias (RCBS) was introduced as a novel CBI able to estimate codon bias without using a reference set. The results of this new index when applied to Escherichia coli and Saccharomyces cerevisiae led the authors of the original publications to conclude that natural selection favours higher expression and enhanced codon usage optimization in short genes. Here, we show that this conclusion was flawed and based on the systematic oversight of an intrinsic bias for short sequences in the RCBS index and of biases in the small data sets used for validation in E. coli. Furthermore, we reveal how the RCBS can be corrected to produce useful results and how its underlying principle, which we here term relative codon adaptation (RCA), can be made into a powerful reference-set-based index that directly takes into account the genomic base composition. Finally, we show that RCA outperforms the codon adaptation index (CAI) as a predictor of gene expression when operating on the CAI reference set and that this improvement is significantly larger when analysing genomes with high mutational bias.
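
    As orientation for readers unfamiliar with reference-set-based indices, the sketch below shows generic CAI-style scoring: each codon's weight is its frequency in a reference set relative to the most-used synonymous codon, and a gene scores the geometric mean of its codon weights. This is not the exact RCA definition (which additionally accounts for base composition); the synonym table and sequences are toy examples:

```python
import math
from collections import Counter

# Hedged sketch of a reference-set-based codon bias index in the CAI family.
# Generic illustration only; toy synonym table and toy sequences.

SYNONYMS = {"F": ["TTT", "TTC"], "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"]}  # toy subset

def codons(seq):
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

def codon_weights(reference_seqs):
    counts = Counter(c for s in reference_seqs for c in codons(s))
    weights = {}
    for syns in SYNONYMS.values():
        best = max(counts.get(c, 0) for c in syns) or 1
        for c in syns:
            weights[c] = max(counts.get(c, 0), 0.5) / best   # pseudocount for unseen codons
    return weights

def gene_score(seq, weights):
    ws = [weights[c] for c in codons(seq) if c in weights]
    return math.exp(sum(math.log(w) for w in ws) / len(ws))   # geometric mean of weights

w = codon_weights(["TTCTTCCTGCTGTTC", "CTGTTCCTG"])           # toy "highly expressed" reference set
print(gene_score("TTTCTACTG", w))
```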

  2. Cognitive Reflection, Decision Biases, and Response Times

    Directory of Open Access Journals (Sweden)

    Carlos Alos-Ferrer

    2016-09-01

    Full Text Available We present novel evidence on decision times and personality traits in standard questions from the decision-making literature where responses are relatively slow (medians around half a minute or above). To this end, we measured decision times in a number of incentivized, framed items (decisions from description) including the Cognitive Reflection Test, two additional questions following the same logic, and a number of classic questions used to study decision biases in probability judgments (base-rate neglect, the conjunction fallacy, and the ratio bias). All questions create a conflict between an intuitive process and more deliberative thinking. For each item, we then created a non-conflict version by either making the intuitive impulse correct (resulting in an alignment question), shutting it down (creating a neutral question), or making it dominant (creating a heuristic question). For CRT questions, the differences in decision times are as predicted by dual-process theories, with alignment and heuristic variants leading to faster responses and neutral questions to slower responses than the original, conflict questions. For decision biases (where responses are slower), evidence is mixed. To explore the possible influence of personality factors on both choices and decision times, we used standard personality scales including the Rational-Experiential Inventory and the Big Five, and used them as controls in regression analysis.

  3. Addressing criticisms of existing predictive bias research: cognitive ability test scores still overpredict African Americans' job performance.

    Science.gov (United States)

    Berry, Christopher M; Zhao, Peng

    2015-01-01

    Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study's results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans. (c) 2015 APA, all rights reserved.
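    The over-/underprediction logic can be made concrete with a small simulation: fit a common regression line to pooled data, then inspect each subgroup's mean residual (a negative mean residual means the common line overpredicts that group). The group means, slopes, and intercepts below are arbitrary illustrative values, not the meta-analytic estimates used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_group(n, mean_x, intercept, slope, noise):
    x = rng.normal(mean_x, 1.0, n)
    y = intercept + slope * x + rng.normal(0, noise, n)
    return x, y

# Hypothetical subgroups with different predictor means and intercepts.
x_a, y_a = simulate_group(2000, mean_x=0.0, intercept=0.0, slope=0.5, noise=1.0)
x_b, y_b = simulate_group(2000, mean_x=-1.0, intercept=-0.2, slope=0.5, noise=1.0)

# Common (pooled) regression line, as used in predictive-bias analyses.
x = np.concatenate([x_a, x_b]); y = np.concatenate([y_a, y_b])
slope, intercept = np.polyfit(x, y, 1)

for label, xs, ys in [("group A", x_a, y_a), ("group B", x_b, y_b)]:
    mean_resid = np.mean(ys - (intercept + slope * xs))
    verdict = "overpredicted" if mean_resid < 0 else "underpredicted"
    print(f"{label}: mean residual = {mean_resid:+.3f} ({verdict})")
```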

  4. Biases in Understanding Attention Deficit Hyperactivity Disorder and Autism Spectrum Disorder in Japan

    Directory of Open Access Journals (Sweden)

    Mami Miyasaka

    2018-02-01

    Full Text Available Recent research has shown high rates of comorbidity between attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) and difficulties regarding differential diagnosis. Unlike in Western countries, the Japanese ADHD prevalence rate is lower relative to that of ASD. This inconsistency could have arisen because of cultural differences among professionals such as physicians. However, little is known about attitudes toward ADHD and ASD in non-Western cultural contexts. We conducted two experiments to identify biases in ASD and ADHD assessment. In Study 1, we examined attitudes toward these disorders in medical doctors and mental health professionals, using a web-based questionnaire. In Study 2, medical doctors and clinical psychologists assessed four fictional cases based on criteria for ADHD, ASD, oppositional defiant disorder, and disinhibited social engagement disorder (DSED). Diagnosis of ASD was considered more difficult relative to that of ADHD. Most participants assessed the fictional DSED case as ASD, rather than DSED or ADHD. The results provide evidence that Japanese professionals are more likely to attribute children’s behavioral problems to ASD, relative to other disorders. Therefore, Japanese therapists could be more sensitive to and likely to diagnose ASD, relative to therapists in other countries. These findings suggest that cultural biases could influence clinicians’ diagnosis of ADHD and ASD.

  5. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    Science.gov (United States)

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

    Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, bias was also found in the estimation of the highest genotypic mean when the ascertainment threshold was higher than or close to the true value of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.

  6. Clinical diagnosis versus autopsy diagnosis in head trauma

    Directory of Open Access Journals (Sweden)

    Velnic Andreea-Alexandra

    2017-12-01

    Full Text Available A correct and complete diagnosis is essential for the adequate care and favourable clinical evolution of patients with head trauma. Purpose: To identify the error rate in the clinical diagnosis of head injuries by comparison with the autopsy diagnosis, and to identify the most common sources of error. Material and method: We performed a retrospective study based on data from the medical files and the autopsy reports of patients with head trauma who died in hospital and underwent forensic autopsy. We collected demographic data, clinical and laboratory data, and autopsy findings. To quantify the concordance rate between the clinical diagnosis of death and the autopsy diagnosis we used a 4-class classification, ranging from 100% concordance (C1) to total discordance (C4), with two classes of partial discordance: C2 (partial discordance in favour of the clinical diagnosis - injuries missing from the autopsy reports) and C3 (partial discordance in favour of the autopsy diagnosis - injuries missing from the medical files). Data were analyzed with SPSS version 20.0. Results: We analyzed 194 cases of death due to head injuries. We found total concordance between the clinical death diagnosis and the autopsy diagnosis in 30.4% of cases and at least one discrepancy in 69.6% of cases. A longer duration of hospitalization correlated directly with the number of imaging investigations, which in turn correlated with an increased rate of diagnostic concordance. Among the patients with stage 3 coma who had an associated spinal cord injury, we found partial diagnostic discordance in 50% of cases and total discordance in 50% of cases, possibly due to the need for emergency imaging investigation and the need for surgical treatment. In cases with partially and totally discordant diagnoses, at least one lesion was omitted in 45.1% of the cases. The most commonly omitted injuries in C2 cases were subdural hematoma, intracerebral

  7. Efficient Photometry In-Frame Calibration (EPIC) Gaussian Corrections for Automated Background Normalization of Rate-Tracked Satellite Imagery

    Science.gov (United States)

    Griesbach, J.; Wetterer, C.; Sydney, P.; Gerber, J.

    Photometric processing of non-resolved Electro-Optical (EO) images has commonly required the use of dark and flat calibration frames, which are obtained to correct for charge coupled device (CCD) dark (thermal) noise and for CCD quantum efficiency/optical path vignetting effects, respectively. It is necessary to account/calibrate for these effects so that the brightness of objects of interest (e.g. stars or resident space objects (RSOs)) may be measured in a consistent manner across the CCD field of view. Detected objects typically require further calibration using aperture photometry to compensate for sky background (shot noise). For this, annuli are measured around each detected object, and their contained pixels are used to estimate an average background level that is subtracted from the detected pixel measurements. In a new photometric calibration software tool developed for AFRL/RD, called Efficient Photometry In-Frame Calibration (EPIC), an automated background normalization technique is proposed that eliminates the requirement to capture dark and flat calibration images. The proposed technique simultaneously corrects for dark noise, shot noise, and CCD quantum efficiency/optical path vignetting effects. With this, a constant detection threshold may be applied for constant false alarm rate (CFAR) object detection without the need for aperture photometry corrections. The detected pixels may be simply summed (without further correction) for an accurate instrumental magnitude estimate. The noise distribution associated with each pixel is assumed to be sampled from a Poisson distribution. Since Poisson-distributed data closely resemble Gaussian data for means greater than 10, the data may be corrected by applying bias subtraction and standard-deviation division. EPIC performs automated background normalization on rate-tracked satellite images using the following technique. A deck of approximately 50-100 images is combined by performing an independent median
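    The bias-subtraction and standard-deviation-division step can be illustrated on an image stack with NumPy. This is a generic per-pixel normalization sketch, not the EPIC implementation; the stack size, count levels, and detection threshold are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical deck of rate-tracked frames: (n_frames, rows, cols).
stack = rng.poisson(lam=200.0, size=(60, 128, 128)).astype(float)

# Per-pixel background statistics from the deck (the median is robust to the
# occasional star or RSO streak passing through a pixel).
bg_median = np.median(stack, axis=0)
bg_std = np.std(stack, axis=0)
bg_std[bg_std == 0] = 1.0          # guard against division by zero

def normalize(frame):
    """Bias subtraction and standard-deviation division: the result has an
    approximately zero-mean, unit-variance background, so a single constant
    threshold gives a roughly constant false-alarm rate."""
    return (frame - bg_median) / bg_std

snr_map = normalize(stack[0])
detections = snr_map > 5.0          # constant detection threshold
print(detections.sum(), "pixels above threshold in frame 0")
```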

  8. A review of bias flow liners for acoustic damping in gas turbine combustors

    Science.gov (United States)

    Lahiri, C.; Bake, F.

    2017-07-01

    The optimized design of bias flow liners is a key element in the development of low-emission combustion systems in modern gas turbines and aero-engines. Research on bias flow liners has a fairly long history, concerning both the parameter dependencies and the methods used to model the acoustic behaviour of bias flow liners under a variety of bias and grazing flow conditions. In order to establish an overview of the state of the art, this paper provides a comprehensive review of the published research on bias flow liners and modelling approaches, with an extensive study of the most relevant parameters determining the acoustic behaviour of these liners. The paper starts with a historical description of available investigations aiming at the characterization of the bias flow absorption principle. This chronological compendium is extended by the recent and ongoing developments in the field. Next, the fundamental acoustic property of a bias flow liner, its wall impedance, is introduced, and the different derivations and formulations of this impedance, which yield the different published model descriptions, are explained and compared. Finally, a parametric study reveals the most relevant parameters for the acoustic damping behaviour of bias flow liners and how these are reflected by the various model representations. Although the general trend of the investigated acoustic behaviour is captured fairly well by the different models for a certain range of parameters, in the transition region between the resonance-dominated and the purely bias-flow-related regime all models lack a correct damping prediction. This seems to be connected to the proper implementation of the reactance as a function of bias flow Mach number.

  9. SU-E-QI-03: Compartment Modeling of Dynamic Brain PET - The Effect of Scatter and Random Corrections On Parameter Errors

    International Nuclear Information System (INIS)

    Häggström, I; Karlsson, M; Larsson, A; Schmidtlein, C

    2014-01-01

    Purpose: To investigate the effects of corrections for random and scattered coincidences on kinetic parameters in brain tumors, by using ten Monte Carlo (MC) simulated dynamic FLT-PET brain scans. Methods: The GATE MC software was used to simulate ten repetitions of a 1 hour dynamic FLT-PET scan of a voxelized head phantom. The phantom comprised six normal head tissues, plus inserted regions for blood and tumor tissue. Different time-activity-curves (TACs) for all eight tissue types were used in the simulation and were generated in Matlab using a 2-tissue model with preset parameter values (K1,k2,k3,k4,Va,Ki). The PET data was reconstructed into 28 frames by both ordered-subset expectation maximization (OSEM) and 3D filtered back-projection (3DFBP). Five image sets were reconstructed, all with normalization and different additional corrections C (A=attenuation, R=random, S=scatter): Trues (AC), trues+randoms (ARC), trues+scatters (ASC), total counts (ARSC) and total counts (AC). Corrections for randoms and scatters were based on real random and scatter sinograms that were back-projected, blurred and then forward projected and scaled to match the real counts. Weighted non-linear least-squares fitting of TACs from the blood and tumor regions was used to obtain parameter estimates. Results: The bias was not significantly different for trues (AC), trues+randoms (ARC), trues+scatters (ASC) and total counts (ARSC) for either 3DFBP or OSEM (p<0.05). Total counts with only AC stood out however, with an up to 160% larger bias. In general, there was no difference in bias found between 3DFBP and OSEM, except in parameter Va and Ki. Conclusion: According to our results, the methodology of correcting the PET data for randoms and scatters performed well for the dynamic images where frames have much lower counts compared to static images. Generally, no bias was introduced by the corrections and their importance was emphasized since omitting them increased bias extensively.
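    As a simplified illustration of the kinetic-fitting step, the sketch below fits a one-tissue compartment model to a simulated time-activity curve with weighted non-linear least squares (the study itself uses a two-tissue model; the input function, frame times, noise model, and parameter values here are all invented).

```python
import numpy as np
from scipy.optimize import curve_fit

# Uniform frame mid-times (minutes) and a hypothetical plasma input function.
t = np.arange(0.25, 60.0, 0.5)
dt = t[1] - t[0]
cp = 50.0 * t * np.exp(-t / 2.0)          # made-up input curve

def one_tissue_model(t, K1, k2):
    """C_T(t) = K1 * integral of Cp(tau) * exp(-k2 * (t - tau)) dtau,
    evaluated by discrete convolution on the uniform time grid."""
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(cp, kernel)[: len(t)] * dt

# Simulate a noisy tumor TAC and fit it with weighted non-linear least squares.
rng = np.random.default_rng(3)
true_K1, true_k2 = 0.15, 0.30
tac = one_tissue_model(t, true_K1, true_k2)
sigma = 0.05 * np.maximum(tac, 1.0)       # frame-dependent noise level (weights)
noisy = tac + rng.normal(0, sigma)

popt, pcov = curve_fit(one_tissue_model, t, noisy, p0=[0.1, 0.1],
                       sigma=sigma, absolute_sigma=True)
print("fitted K1, k2:", popt, "+/-", np.sqrt(np.diag(pcov)))
```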

  10. Technical Basis for the Use of Alpha Absorption Corrections on RCF Gross Alpha Data

    International Nuclear Information System (INIS)

    Ceffalo, G.M.

    1999-01-01

    This document provides the supporting data and rationale for applying absorption corrections to gross alpha data to account for losses due to absorption in the sample matrix. For some time there has been concern that the gross alpha data produced by the Environmental Restoration Contractor Radiological Counting Facility, particularly gross alpha analyses on soils, were biased toward low results, as no correction for self-absorption was applied to the counting data. The process was investigated, and a new methodology for alpha self-absorption correction has been developed.

  11. Application of bias factor method using random sampling technique for prediction accuracy improvement of critical eigenvalue of BWR

    International Nuclear Information System (INIS)

    Ito, Motohiro; Endo, Tomohiro; Yamamoto, Akio; Kuroda, Yusuke; Yoshii, Takashi

    2017-01-01

    The bias factor method based on the random sampling technique is applied to the benchmark problem of Peach Bottom Unit 2. The validity and applicability of the method, i.e. its ability to correct calculation results and reduce uncertainty, are confirmed, and its features and performance are examined. In the present study, core characteristics in cycle 3 are corrected with the proposed method using predicted and 'measured' critical eigenvalues in cycles 1 and 2. As the source of uncertainty, the variance-covariance of cross sections is considered. The calculation results indicate that the bias between predicted and measured results, and the uncertainty owing to cross sections, can be reduced. Extension to other uncertainties such as thermal-hydraulic properties will be a future task. (author)
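    In its simplest form, a bias-factor correction scales the new prediction by the measured-to-predicted ratio observed in earlier cycles, while the random-sampling part propagates input uncertainty by repeating the calculation over sampled inputs. The sketch below shows only that skeleton with invented eigenvalues; it ignores the cycle-to-cycle correlations that give the actual method its uncertainty reduction.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical predicted and "measured" critical eigenvalues for earlier cycles.
predicted_prev = np.array([1.0035, 1.0041])   # cycles 1 and 2, calculated
measured_prev  = np.array([1.0002, 1.0006])   # cycles 1 and 2, measured

# Bias factor: geometric mean of the measured/predicted ratios.
bias_factor = np.exp(np.mean(np.log(measured_prev / predicted_prev)))

# New-cycle predictions from a random-sampling run (e.g. perturbed cross
# sections); the spread represents the prior uncertainty of the calculation.
samples_cycle3 = rng.normal(1.0046, 0.0015, size=500)

corrected = bias_factor * samples_cycle3
print(f"bias factor           : {bias_factor:.5f}")
print(f"raw prediction        : {samples_cycle3.mean():.5f} +/- {samples_cycle3.std():.5f}")
print(f"bias-corrected result : {corrected.mean():.5f} +/- {corrected.std():.5f}")
```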

  12. Efficacy of cardiovascular complications correction in patients with breast cancer

    International Nuclear Information System (INIS)

    Savchenko, A.S.

    2011-01-01

    The work was performed to assess the efficacy of correcting cardiovascular complications during combination treatment for breast cancer (BC). Timely diagnosis and correction of cardiovascular diseases in BC patients, with the use of inhaled cardioactive drugs (nitrates and calcium antagonists), improved the efficacy of accompanying therapy, prevented the progression of early and late RT complications, and improved quality of life.

  13. Validation and empirical correction of MODIS AOT and AE over ocean

    Directory of Open Access Journals (Sweden)

    N. A. J. Schutgens

    2013-09-01

    Full Text Available We present a validation study of Collection 5 MODIS level 2 Aqua and Terra AOT (aerosol optical thickness) and AE (Ångström exponent) over ocean by comparison to coastal and island AERONET (AErosol RObotic NETwork) sites for the years 2003–2009. We show that MODIS (MODerate-resolution Imaging Spectroradiometer) AOT exhibits significant biases due to wind speed and cloudiness of the observed scene, while MODIS AE, although overall unbiased, exhibits less spatial contrast on global scales than the AERONET observations. The same behaviour can be seen when MODIS AOT is compared against Maritime Aerosol Network (MAN) data, suggesting that the spatial coverage of our datasets does not preclude global conclusions. Thus, we develop empirical correction formulae for MODIS AOT and AE that significantly improve agreement between MODIS and AERONET observations. We show these correction formulae to be robust. Finally, we study random errors in the corrected MODIS AOT and AE and show that they mainly depend on AOT itself, although small contributions are present due to wind speed and cloud fraction in AOT random errors, and due to AE and cloud fraction in AE random errors. Our analysis yields significantly higher random AOT errors than the official MODIS error estimate (0.03 + 0.05 τ), while random AE errors are smaller than might be expected. This new dataset of bias-corrected MODIS AOT and AE over ocean is intended for aerosol model validation and assimilation studies, but also has consequences as a stand-alone observational product. For instance, the corrected dataset suggests that much less fine-mode aerosol is transported across the Pacific and Atlantic oceans.
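    An empirical correction of this kind can be built by regressing the satellite-minus-ground difference on the suspected drivers (wind speed, cloud fraction) and subtracting the fitted bias. A minimal least-squares sketch on synthetic collocations (the linear functional form and all coefficients are illustrative, not the published correction formulae):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic collocations: satellite AOT with a wind- and cloud-dependent bias.
n = 5000
wind = rng.uniform(0, 15, n)               # wind speed, m/s
cloud = rng.uniform(0, 0.8, n)             # cloud fraction of the scene
aeronet_aot = rng.gamma(2.0, 0.07, n)      # ground-based "truth"
modis_aot = aeronet_aot + 0.004 * wind + 0.03 * cloud + rng.normal(0, 0.02, n)

# Fit bias = a + b*wind + c*cloud by ordinary least squares.
X = np.column_stack([np.ones(n), wind, cloud])
coef, *_ = np.linalg.lstsq(X, modis_aot - aeronet_aot, rcond=None)

corrected = modis_aot - X @ coef
print("fitted bias coefficients:", np.round(coef, 4))
print("mean bias before/after  :",
      round(np.mean(modis_aot - aeronet_aot), 4),
      round(np.mean(corrected - aeronet_aot), 4))
```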

  14. Halo assembly bias and the tidal anisotropy of the local halo environment

    Science.gov (United States)

    Paranjape, Aseem; Hahn, Oliver; Sheth, Ravi K.

    2018-05-01

    We study the role of the local tidal environment in determining the assembly bias of dark matter haloes. Previous results suggest that the anisotropy of a halo's environment (i.e. whether it lies in a filament or in a more isotropic region) can play a significant role in determining the eventual mass and age of the halo. We statistically isolate this effect, using correlations between the large-scale and small-scale environments of simulated haloes at z = 0 with masses between 10^11.6 ≲ (m/h^-1 M⊙) ≲ 10^14.9. We probe the large-scale environment, using a novel halo-by-halo estimator of linear bias. For the small-scale environment, we identify a variable α_R that captures the tidal anisotropy in a region of radius R = 4R_200b around the halo and correlates strongly with halo bias at fixed mass. Segregating haloes by α_R reveals two distinct populations. Haloes in highly isotropic local environments (α_R ≲ 0.2) behave as expected from the simplest, spherically averaged analytical models of structure formation, showing a negative correlation between their concentration and large-scale bias at all masses. In contrast, haloes in anisotropic, filament-like environments (α_R ≳ 0.5) tend to show a positive correlation between bias and concentration at any mass. Our multiscale analysis cleanly demonstrates how the overall assembly bias trend across halo mass emerges as an average over these different halo populations, and provides valuable insights towards building analytical models that correctly incorporate assembly bias. We also discuss potential implications for the nature and detectability of galaxy assembly bias.

  15. Classification bias in commercial business lists for retail food stores in the U.S.

    Science.gov (United States)

    2012-01-01

    Background Aspects of the food environment such as the availability of different types of food stores have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results based on secondary datasets and the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in Dun & Bradstreet (D&B) and InfoUSA commercial business lists. Methods We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. Results D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Conclusion Researchers can rely on classification of food stores in commercial datasets for supermarkets and grocery stores whereas classifications for convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition. PMID:22512874

  16. Classification bias in commercial business lists for retail food stores in the U.S.

    Directory of Open Access Journals (Sweden)

    Han Euna

    2012-04-01

    Full Text Available Abstract Background Aspects of the food environment such as the availability of different types of food stores have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results based on secondary datasets and the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in Dun & Bradstreet (D&B) and InfoUSA commercial business lists. Methods We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. Results D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Conclusion Researchers can rely on classification of food stores in commercial datasets for supermarkets and grocery stores whereas classifications for convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition.

  17. Classification bias in commercial business lists for retail food stores in the U.S.

    Science.gov (United States)

    Han, Euna; Powell, Lisa M; Zenk, Shannon N; Rimkus, Leah; Ohri-Vachaspati, Punam; Chaloupka, Frank J

    2012-04-18

    Aspects of the food environment such as the availability of different types of food stores have recently emerged as key modifiable factors that may contribute to the increased prevalence of obesity. Given that many of these studies have derived their results based on secondary datasets and the relationship of food stores with individual weight outcomes has been reported to vary by store type, it is important to understand the extent to which often-used secondary data correctly classify food stores. We evaluated the classification bias of food stores in Dun & Bradstreet (D&B) and InfoUSA commercial business lists. We performed a full census in 274 randomly selected census tracts in the Chicago metropolitan area and collected detailed store attributes inside stores for classification. Store attributes were compared by classification match status and store type. Systematic classification bias by census tract characteristics was assessed in multivariate regression. D&B had a higher classification match rate than InfoUSA for supermarkets and grocery stores, while InfoUSA was higher for convenience stores. Both lists were more likely to correctly classify large supermarkets, grocery stores, and convenience stores with more cash registers and different types of service counters (supermarkets and grocery stores only). The likelihood of a correct classification match for supermarkets and grocery stores did not vary systematically by tract characteristics whereas convenience stores were more likely to be misclassified in predominantly Black tracts. Researchers can rely on classification of food stores in commercial datasets for supermarkets and grocery stores whereas classifications for convenience and specialty food stores are subject to some systematic bias by neighborhood racial/ethnic composition.

  18. Arthrographic diagnosis of ruptured calcaneofibular ligament. I

    International Nuclear Information System (INIS)

    Vuust, M.

    1980-01-01

    A new projection, oblique axial, is recommended for the arthrography of the acute sprained ankle for the correct diagnosis of a ruptured calcaneofibular ligament. Its value is experimentally confirmed. (Auth.)

  19. Non-common path aberration correction in an adaptive optics scanning ophthalmoscope.

    Science.gov (United States)

    Sulai, Yusufu N; Dubra, Alfredo

    2014-09-01

    The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channel in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 in root-mean-squared (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals without reducing its bandwidth.

  20. Short-Term Wind Speed Hybrid Forecasting Model Based on Bias Correcting Study and Its Application

    OpenAIRE

    Mingfei Niu; Shaolong Sun; Jie Wu; Yuanlei Zhang

    2015-01-01

    The accuracy of wind speed forecasting is becoming increasingly important to improve and optimize renewable wind power generation. In particular, reliable short-term wind speed forecasting can enable model predictive control of wind turbines and real-time optimization of wind farm operation. However, due to the strong stochastic nature and dynamic uncertainty of wind speed, the forecasting of wind speed data using different patterns is difficult. This paper proposes a novel combination bias c...

  1. Circularity bias in abusive head trauma studies could be diminished with a new ranking scale

    DEFF Research Database (Denmark)

    Högberg, Göran; Colville-Ebeling, Bonnie; Högberg, Ulf

    2016-01-01

    Causality in abusive head trauma has never been fully established and hence no gold standard exists for the diagnosis. Implications hereof include bias introduced by circular reasoning and a shift from a trustful doctor patient relationship to a distrustful one when the caregiver statement...

  2. Costs of detection bias in index-based population monitoring

    Science.gov (United States)

    Moore, C.T.; Kendall, W.L.

    2004-01-01

    Managers of wildlife populations commonly rely on indirect, count-based measures of the population in making decisions regarding conservation, harvest, or control. The main appeal in the use of such counts is their low material expense compared to methods that directly measure the population. However, their correct use rests on the rarely-tested but often-assumed premise that they proportionately reflect population size, i.e., that they constitute a population index. This study investigates forest management for the endangered Red-cockaded Woodpecker (Picoides borealis) and the Wood Thrush (Hylocichla mustelina) at the Piedmont National Wildlife Refuge in central Georgia, U.S.A. Optimal decision policies for a joint species objective were derived for two alternative models of Wood Thrush population dynamics. Policies were simulated under scenarios of unbiasedness, consistent negative bias, and habitat-dependent negative bias in observed Wood Thrush densities. Differences in simulation outcomes between biased and unbiased detection scenarios indicated the expected loss in resource objectives (here, forest habitat and birds) through decision-making based on biased population counts. Given the models and objective function used in our analysis, expected losses were as great as 11%, a degree of loss perhaps not trivial for applications such as endangered species management. Our analysis demonstrates that costs of uncertainty about the relationship between the population and its observation can be measured in units of the resource, costs which may offset apparent savings achieved by collecting uncorrected population counts.

  3. Learning-based diagnosis and repair

    NARCIS (Netherlands)

    Roos, Nico

    2017-01-01

    This paper proposes a new form of diagnosis and repair based on reinforcement learning. Self-interested agents learn locally which agents may provide a low quality of service for a task. The correctness of learned assessments of other agents is proved under conditions on exploration versus

  4. Absence of bias in clinician ratings of everyday functioning among African American, Hispanic and Caucasian patients with schizophrenia

    OpenAIRE

    Sabbag, Samir; Prestia, Davide; Robertson, Belinda; Ruiz, Pedro; Durand, Dante; Strassnig, Martin; Harvey, Philip D.

    2015-01-01

    A substantial research literature implicates potential racial/ethnic bias in the diagnosis of schizophrenia and in clinical ratings of psychosis. There is no similar information regarding bias effects on ratings of everyday functioning. Our aims were to determine if Caucasian raters vary in their ratings of the everyday functioning of schizophrenia patients of different ethnicities, to find out which factors determine accurate self-report of everyday functioning in different ethnic groups, an...

  5. Correcting Estimates of the Occurrence Rate of Earth-like Exoplanets for Stellar Multiplicity

    Science.gov (United States)

    Cantor, Elliot; Dressing, Courtney D.; Ciardi, David R.; Christiansen, Jessie

    2018-06-01

    One of the most prominent questions in the exoplanet field has been determining the true occurrence rate of potentially habitable Earth-like planets. NASA’s Kepler mission has been instrumental in answering this question by searching for transiting exoplanets, but follow-up observations of Kepler target stars are needed to determine whether or not the surveyed Kepler targets are in multi-star systems. While many researchers have searched for companions to Kepler planet host stars, few studies have investigated the larger target sample. Regardless of physical association, the presence of nearby stellar companions biases our measurements of a system’s planetary parameters and reduces our sensitivity to small planets. Assuming that all Kepler target stars are single (as is done in many occurrence rate calculations) would overestimate our search completeness and result in an underestimate of the frequency of potentially habitable Earth-like planets. We aim to correct for this bias by characterizing the set of targets for which Kepler could have detected Earth-like planets. We are using adaptive optics (AO) imaging to reveal potential stellar companions and near-infrared spectroscopy to refine stellar parameters for a subset of the Kepler targets that are most amenable to the detection of Earth-like planets. We will then derive correction factors to correct for the biases in the larger set of target stars and determine the true frequency of systems with Earth-like planets. Due to the prevalence of stellar multiples, we expect to calculate an occurrence rate for Earth-like exoplanets that is higher than current figures.

  6. Permutation importance: a corrected feature importance measure.

    Science.gov (United States)

    Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas

    2010-05-15

    In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R CONTACT: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de Supplementary data are available at Bioinformatics online.
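    The PIMP idea (repeatedly permute the outcome vector, refit the model, and use the resulting null distribution of importances to assign a p-value to each feature) can be sketched with scikit-learn as below. This is a generic re-implementation of the principle, not the authors' R code, and it uses simple empirical p-values where the original method also offers parametric fits to the null distribution.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)

def importances(X, y):
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    return rf.fit(X, y).feature_importances_

observed = importances(X, y)

# Null distribution: importances obtained after permuting the outcome vector,
# which breaks any real association between features and outcome.
n_perm = 50
null = np.array([importances(X, rng.permutation(y)) for _ in range(n_perm)])

# PIMP-style empirical p-value: fraction of null importances >= observed.
pvals = (np.sum(null >= observed, axis=0) + 1) / (n_perm + 1)
for j, (imp, p) in enumerate(zip(observed, pvals)):
    print(f"feature {j}: importance = {imp:.3f}, permutation p = {p:.3f}")
```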

  7. REAL-TIME ANALYSIS AND SELECTION BIASES IN THE SUPERNOVA LEGACY SURVEY

    International Nuclear Information System (INIS)

    Perrett, K.; Conley, A.; Carlberg, R.; Balam, D.; Hook, I. M.; Sullivan, M.; Pritchet, C.; Astier, P.; Balland, C.; Guy, J.; Hardin, D.; Pain, R.; Regnault, N.; Basa, S.; Fouchez, D.; Howell, D. A.

    2010-01-01

    The Supernova Legacy Survey (SNLS) has produced a high-quality, homogeneous sample of Type Ia supernovae (SNe Ia) out to redshifts greater than z = 1. In its first four years of full operation (to 2007 June), the SNLS discovered more than 3000 transient candidates, 373 of which have been spectroscopically confirmed as SNe Ia. Use of these SNe Ia in precision cosmology critically depends on an analysis of the observational biases incurred in the SNLS survey due to the incomplete sampling of the underlying SN Ia population. This paper describes our real-time supernova detection and analysis procedures, and uses detailed Monte Carlo simulations to examine the effects of Malmquist bias and spectroscopic sampling. Such sampling effects are found to become apparent at z ∼ 0.6, with a significant shift in the average magnitude of the spectroscopically confirmed SN Ia sample toward brighter values for z ≳ 0.75. We describe our approach to correct for these selection biases in our three-year SNLS cosmological analysis (SNLS3) and present a breakdown of the systematic uncertainties involved.

  8. Time-delayed fronts from biased random walks

    International Nuclear Information System (INIS)

    Fort, Joaquim; Pujol, Toni

    2007-01-01

    We generalize a previous model of time-delayed reaction-diffusion fronts (Fort and Mendez 1999 Phys. Rev. Lett. 82 867) to allow for a bias in the microscopic random walk of particles or individuals. We also present a second model which takes the time order of events (diffusion and reproduction) into account. As an example, we apply them to the human invasion front across the USA in the 19th century. The corrections relative to the previous model are substantial. Our results are relevant to physical and biological systems with anisotropic fronts, including particle diffusion in disordered lattices, population invasions, the spread of epidemics, etc

  9. Diversity Matters in Academic Radiology: Acknowledging and Addressing Unconscious Bias.

    Science.gov (United States)

    Allen, Brenda J; Garg, Kavita

    2016-12-01

    To meet challenges related to changing demographics, and to optimize the promise of diversity, radiologists must bridge the gap between numbers of women and historically underrepresented minorities in radiology and radiation oncology as contrasted with other medical specialties. Research reveals multiple ways that women and underrepresented minorities can benefit radiology education, research, and practice. To achieve those benefits, promising practices promote developing and implementing strategies that support diversity as an institutional priority and cultivate shared responsibility among all members to create inclusive learning and workplace environments. Strategies also include providing professional development to empower and equip members to accomplish diversity-related goals. Among topics for professional development about diversity, unconscious bias has shown positive results. Unconscious bias refers to ways humans unknowingly draw upon assumptions about individuals and groups to make decisions about them. Researchers have documented unconscious bias in a variety of contexts and professions, including health care, in which they have studied differential treatment, diagnosis, prescribed care, patient well-being and compliance, physician-patient interactions, clinical decision making, and medical school education. These studies demonstrate unfavorable impacts on members of underrepresented groups and women. Learning about and striving to counteract unconscious bias points to promising practices for increasing the numbers of women and underrepresented minorities in the radiology and radiation oncology workforce. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  10. Hypopituitarism in Children. Modern Laboratory and Genetic Diagnosis

    Directory of Open Access Journals (Sweden)

    Ye.V. Hloba

    2016-04-01

    Full Text Available The lecture presents the current international guidelines on the diagnosis of hypopituitarism in children, in particular the rules of stimulation tests for the diagnosis of growth hormone deficiency, secondary hypogonadism and hypocorticism. It is recommended to use the anti-Müllerian hormone and inhibin B to diagnose different forms of hypogonadism. Genetic methods are also recommended to make a correct diagnosis, to prescribe a proper treatment and to provide a medical and genetic counseling of family members.

  11. Impact of Dengue Vaccination on Serological Diagnosis: Insights From Phase III Dengue Vaccine Efficacy Trials.

    Science.gov (United States)

    Plennevaux, Eric; Moureau, Annick; Arredondo-García, José L; Villar, Luis; Pitisuttithum, Punnee; Tran, Ngoc H; Bonaparte, Matthew; Chansinghakul, Danaya; Coronel, Diana L; L'Azou, Maïna; Ochiai, R Leon; Toh, Myew-Ling; Noriega, Fernando; Bouckenooghe, Alain

    2018-04-03

    We previously reported that vaccination with the tetravalent dengue vaccine (CYD-TDV; Dengvaxia) may bias the diagnosis of dengue based on immunoglobulin M (IgM) and immunoglobulin G (IgG) assessments. We undertook a post hoc pooled analysis of febrile episodes that occurred during the active surveillance phase (the 25 months after the first study injection) of 2 pivotal phase III, placebo-controlled CYD-TDV efficacy studies that involved ≥31000 children aged 2-16 years across 10 countries in Asia and Latin America. Virologically confirmed dengue (VCD) episode was defined with a positive test for dengue nonstructural protein 1 antigen or dengue polymerase chain reaction. Probable dengue episode was serologically defined as (1) IgM-positive acute- or convalescent-phase sample, or (2) IgG-positive acute-phase sample and ≥4-fold IgG increase between acute- and convalescent-phase samples. There were 1284 VCD episodes (575 and 709 in the CYD-TDV and placebo groups, respectively) and 17673 other febrile episodes (11668 and 6005, respectively). Compared with VCD, the sensitivity and specificity of probable dengue definition were 93.1% and 77.2%, respectively. Overall positive and negative predictive values were 22.9% and 99.5%, respectively, reflecting the much lower probability of correctly confirming probable dengue in a population including a vaccinated cohort. Vaccination-induced bias toward false-positive diagnosis was more pronounced among individuals seronegative at baseline. Caution will be required when interpreting IgM and IgG data obtained during routine surveillance in those vaccinated with CYD-TDV. There is an urgent need for new practical, dengue-specific diagnostic algorithms now that CYD-TDV is approved in a number of dengue-endemic countries. NCT01373281 and NCT01374516.
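    The diagnostic-performance figures quoted above follow from a 2x2 table of the probable-dengue definition against virological confirmation. A small helper showing the arithmetic, with invented cell counts (the abstract does not report the individual cells), illustrates why a sensitive and fairly specific definition can still yield a low positive predictive value when confirmed dengue is the minority outcome:

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table where the
    'probable dengue' definition is the test and VCD is the reference."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts only, chosen to show the low-PPV effect with rare cases.
metrics = diagnostic_performance(tp=1200, fp=4000, fn=90, tn=13600)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```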

  12. Eliminating the Effect of Rating Bias on Reputation Systems

    Directory of Open Access Journals (Sweden)

    Leilei Wu

    2018-01-01

    Full Text Available The ongoing rapid development of e-commerce and interest-based websites makes it increasingly pressing to evaluate objects’ quality accurately before recommendation. An object’s quality is often calculated from its historical information, such as selection records or rating scores. Usually, high-quality products obtain higher average ratings than low-quality products, regardless of rating biases or errors. However, many empirical cases demonstrate that consumers may be misled by rating scores added by unreliable users or by deliberate tampering. In this case, users’ reputation, that is, their ability to rate reliably and precisely, makes a big difference in the evaluation process. Thus, one of the main challenges in designing reputation systems is eliminating the effects of users’ rating bias. To give an objective evaluation of each user’s reputation and uncover an object’s intrinsic quality, we propose an iterative balance (IB) method to correct users’ rating biases. Experiments on two datasets show that the IB method is a highly self-consistent and robust algorithm, and that it can accurately quantify movies’ actual quality and users’ rating stability. Compared with existing methods, the IB method has a higher ability to find the “dark horses,” that is, not-so-popular yet good movies, in the Academy Awards.
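    A generic iterative reputation scheme of this flavour alternates between estimating item quality as a reputation-weighted mean of de-biased ratings and updating each user's bias and reputation from how far their ratings fall from the current quality estimates. The update rules below are a plausible sketch, not necessarily the exact IB algorithm, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)

n_users, n_items = 200, 50
true_quality = rng.uniform(1, 5, n_items)
user_bias = rng.normal(0, 0.8, n_users)            # systematic over/under-rating
user_noise = rng.uniform(0.1, 1.0, n_users)        # heterogeneous rating noise
ratings = (true_quality + user_bias[:, None]
           + rng.normal(0, user_noise[:, None], (n_users, n_items)))

reputation = np.ones(n_users)
bias_hat = np.zeros(n_users)
for _ in range(30):
    debiased = ratings - bias_hat[:, None]
    # Item quality: reputation-weighted mean of de-biased ratings.
    quality = (reputation @ debiased) / reputation.sum()
    # Each user's bias: mean offset of their ratings from current quality.
    bias_hat = np.mean(ratings - quality, axis=1)
    # Reputation: inverse of the remaining (bias-corrected) squared error.
    err = np.mean((ratings - bias_hat[:, None] - quality) ** 2, axis=1)
    reputation = 1.0 / (err + 1e-6)

print("corr(estimated, true quality):",
      round(np.corrcoef(quality, true_quality)[0, 1], 3))
print("corr(estimated, true user bias):",
      round(np.corrcoef(bias_hat, user_bias)[0, 1], 3))
```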

  13. It's all relative: The role of object weight in toddlers' gravity bias.

    Science.gov (United States)

    Hast, Michael

    2018-02-01

    Work over the past 20 years has demonstrated a gravity bias in toddlers; when an object is dropped into a curved tube, they will frequently search at a point immediately beneath the entry of the tube rather than in the object's actual location. The current study tested 2- to 3½-year-olds' (N = 88) gravity bias under consideration of object weight. They were tested with either a heavy or light ball, and they had information about either one of the balls only or both balls. Evaluating their first search behavior showed that participants generally displayed the same age trends as other studies had demonstrated, with older toddlers passing more advanced task levels by being able to locate objects in the correct location. Object weight appeared to have no particular impact on the direction of these trends. However, where weight was accessible as relative information, toddlers were younger at passing levels and older at failing levels, although significantly so only from around 3 years of age onward. When they failed levels, toddlers made significantly more gravity errors with the heavy ball when they had information about both balls and made more correct choices with the light ball. As a whole, the findings suggest that nonvisual object variables, such as weight, affect young children's search behaviors in the gravity task, but only if these variables are presented in relation to other objects. This relational information has the potential to enhance or diminish the gravity bias. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and the attitude registration. As standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example caused by a small base length, such an image orientation does not lead to the achievable accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and an attitude recording of only 4 Hz, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length magnifies small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS

  15. Statistical reconstruction for cone-beam CT with a post-artifact-correction noise model: application to high-quality head imaging

    International Nuclear Information System (INIS)

    Dang, H; Stayman, J W; Sisniega, A; Xu, J; Zbijewski, W; Siewerdsen, J H; Wang, X; Foos, D H; Aygun, N; Koliatsos, V E

    2015-01-01

    Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ∼40–80 HU, size  >  1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that includes accurate modeling of the noise characteristics associated with the two dominant artifact corrections (scatter and beam-hardening) in CBCT and utilizes modified weights to compensate for noise amplification imparted by each correction. Experiments included real data acquired on a FPD-CBCT test-bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise especially in challenging regions such as skull base. The results support the hypothesis that with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of

  16. Zero-bias tunneling anomaly at a vortex core

    International Nuclear Information System (INIS)

    Overhauser, A.W.; Daemen, L.L.

    1989-01-01

    The sharp peak in the tunneling conductance at a vortex core, reported by Hess et al. in NbSe2, is attributed to self-energy corrections of the normal electrons (in the core) caused by their coupling to excitations of the superconducting region (outside the core). The shape of the zero-bias anomaly is reproduced without the benefit of adjustable parameters, though the predicted size is a little too large. If the critical currents in the superconducting region (outside the core) are recognized by letting the excitation density (at zero energy) be finite, then a perfect fit can be obtained.

  17. Quantifying sources of bias in National Healthcare Safety Network laboratory-identified Clostridium difficile infection rates.

    Science.gov (United States)

    Haley, Valerie B; DiRienzo, A Gregory; Lutterloh, Emily C; Stricof, Rachel L

    2014-01-01

    To assess the effect of multiple sources of bias on state- and hospital-specific National Healthcare Safety Network (NHSN) laboratory-identified Clostridium difficile infection (CDI) rates. Sensitivity analysis. A total of 124 New York hospitals in 2010. New York NHSN CDI events from audited hospitals were matched to New York hospital discharge billing records to obtain additional information on patient age, length of stay, and previous hospital discharges. "Corrected" hospital-onset (HO) CDI rates were calculated after (1) correcting inaccurate case reporting found during audits, (2) incorporating knowledge of laboratory results from outside hospitals, (3) excluding days when patients were not at risk from the denominator of the rates, and (4) adjusting for patient age. Data sets were simulated with each of these sources of bias reintroduced individually and combined. The simulated rates were compared with the corrected rates. Performance (ie, better, worse, or average compared with the state average) was categorized, and misclassification compared with the corrected data set was measured. Counting days patients were not at risk in the denominator reduced the state HO rate by 45% and resulted in 8% misclassification. Age adjustment and reporting errors also shifted rates (7% and 6% misclassification, respectively). Changing the NHSN protocol to require reporting of age-stratified patient-days and adjusting for patient-days at risk would improve comparability of rates across hospitals. Further research is needed to validate the risk-adjustment model before these data should be used as hospital performance measures.
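    The denominator issue is easy to make concrete: a hospital-onset rate computed per 10,000 total patient-days is diluted relative to one computed per 10,000 patient-days at risk. A toy calculation with invented counts (chosen only to mirror the roughly 45% effect reported above):

```python
def cdi_rate(events, patient_days):
    """Hospital-onset CDI events per 10,000 patient-days."""
    return 10000.0 * events / patient_days

# Hypothetical hospital: days during which patients were not yet at risk
# (e.g. the first days of each stay) inflate the naive denominator.
ho_events = 40
total_patient_days = 100_000
not_at_risk_days = 45_000        # excluded under the "at risk" definition

naive = cdi_rate(ho_events, total_patient_days)
at_risk = cdi_rate(ho_events, total_patient_days - not_at_risk_days)
print(f"rate per 10,000 total patient-days   : {naive:.1f}")
print(f"rate per 10,000 patient-days at risk : {at_risk:.1f}")
print(f"relative understatement of the naive rate: {1 - naive / at_risk:.0%}")
```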

  18. Labor Dystocia: A Common Approach to Diagnosis.

    Science.gov (United States)

    Neal, Jeremy L; Lowe, Nancy K; Schorn, Mavis N; Holley, Sharon L; Ryan, Sharon L; Buxton, Margaret; Wilson-Liverman, Angela M

    2015-01-01

    Contemporary labor and birth population norms should be the basis for evaluating labor progression and determining slow progress that may benefit from intervention. The aim of this article is to present guidelines for a common, evidence-based approach for determination of active labor onset and diagnosis of labor dystocia based on a synthesis of existing professional guidelines and relevant contemporary publications. A 3-point approach for diagnosing active labor onset and classifying labor dystocia-related labor aberrations into well-defined, mutually exclusive categories that can be used clinically and validated by researchers is proposed. The approach comprises identification of 1) an objective point that strictly defines active labor onset (point of active labor determination); 2) an objective point that identifies when labor progress becomes atypical, beyond which interventions aimed at correcting labor dystocia may be justified (point of protraction diagnosis); and 3) an objective point that identifies when interventions aimed at correcting labor dystocia, if used, can first be determined to be unsuccessful, beyond which assisted vaginal or cesarean birth may be justified (earliest point of arrest diagnosis). Widespread adoption of a common approach for diagnosing labor dystocia will facilitate consistent evaluation of labor progress, improve communications between clinicians and laboring women, indicate when intervention aimed at speeding labor progress or facilitating birth may be appropriate, and allow for more efficient translation of safe and effective management strategies into clinical practice. Correct application of the diagnosis of labor dystocia may lead to a decrease in the rate of cesarean birth, decreased health care costs, and improved health of childbearing women and neonates. © 2015 by the American College of Nurse-Midwives.

  19. Pain, Precautions and Present-biased Preferences: A Theory of Health Insurance

    OpenAIRE

    Schumacher, Heiner; Kesternich, Iris

    2010-01-01

    We develop an insurance market model where consumers (i) exhibit present-biased preferences, and (ii) suffer from physical pain in case of (health-) damage. They can exert preventive effort to reduce the probability of damage. Sophisticated consumers correctly anticipate their effort and purchase full insurance. Naive consumers overestimate their future effort, purchase no insurance and end up with less effort than sophisticated ones. We allow consumers to differ in their wealth and risk pref...

  20. Potential fitting biases resulting from grouping data into variable width bins

    International Nuclear Information System (INIS)

    Towers, S.

    2014-01-01

    When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that experiments have made good faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time consuming to carry out, thus if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we will show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting model parameter estimates when fitting to the binned data can be significantly biased, leading us to too often accept the model hypothesis when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that data bin sizes need to be constant, but we show that fitting to data grouped into variable width bins is particularly prone to produce biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible to do so, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible
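    The recommended cross-check, comparing an unbinned maximum-likelihood fit with binned fits under different binning schemes, is straightforward to set up. The sketch below does this for an exponential decay model on toy data; the specific bin edges are arbitrary, and the point is only that the unbinned estimate is independent of any binning choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
data = rng.exponential(scale=2.0, size=2000)       # true decay constant = 2.0

# Unbinned MLE for an exponential is simply the sample mean.
tau_unbinned = data.mean()

def binned_nll(tau, edges):
    """Poisson negative log-likelihood of the binned counts."""
    counts, _ = np.histogram(data, bins=edges)
    # Expected counts per bin from the exponential CDF.
    cdf = 1.0 - np.exp(-edges / tau)
    expected = np.clip(len(data) * np.diff(cdf), 1e-12, None)
    return np.sum(expected - counts * np.log(expected))

def fit_binned(edges):
    res = minimize_scalar(binned_nll, bounds=(0.1, 10.0), args=(edges,),
                          method="bounded")
    return res.x

equal_bins = np.linspace(0.0, 12.0, 25)
variable_bins = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])

print("unbinned MLE       :", round(tau_unbinned, 3))
print("equal-width bins   :", round(fit_binned(equal_bins), 3))
print("variable-width bins:", round(fit_binned(variable_bins), 3))
```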