WorldWideScience

Sample records for regression dilution bias

  1. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

Random errors in the measurement of a risk factor will bias the estimated association with a disease or a disease marker toward the null. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies, with emphasis on the selection of individuals for a repeated measurement, the assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is continuous. We also describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model, assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. We also supply programs for estimating the number of individuals needed in the reliability study and for choosing its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
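
The slope correction described above can be sketched in a few lines. The following is a minimal simulation (not the authors' software; the parameter values are illustrative), in which the reliability ratio is estimated from a repeated measurement on a subsample and used to de-attenuate the naive slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# True risk factor and a continuous outcome
x_true = rng.normal(0.0, 1.0, n)
beta = 0.5
y = beta * x_true + rng.normal(0.0, 1.0, n)

# Main-study measurement carries random error
sigma_e = 0.8
x_obs = x_true + rng.normal(0.0, sigma_e, n)

# Naive slope is attenuated toward zero
naive = np.polyfit(x_obs, y, 1)[0]

# Reliability study: a second, independent measurement on a subsample
m = 1000
x_rep = x_true[:m] + rng.normal(0.0, sigma_e, m)
# The correlation between replicates estimates the reliability ratio
# lambda = var(true) / (var(true) + var(error))
lam = np.corrcoef(x_obs[:m], x_rep)[0, 1]

corrected = naive / lam
print(naive, corrected)
```

With these settings the naive slope lands near 0.3 while the corrected slope recovers the true value of 0.5, illustrating why uncorrected analyses can understate the effects of noisily measured risk factors.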

  2. Removing Malmquist bias from linear regressions

    Science.gov (United States)

    Verter, Frances

    1993-01-01

Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Those sources which can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but it does distort the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. A demonstration of the corrected linear regression, which is computed in four steps, is presented.

  3. Exchange bias in diluted-antiferromagnet/antiferromagnet bilayers

    International Nuclear Information System (INIS)

    Mao, Zhongquan; Zhan, Xiaozhi; Chen, Xi

    2015-01-01

    The hysteresis-loop properties of a diluted-antiferromagnetic (DAF) layer exchange coupling to an antiferromagnetic (AF) layer are investigated by means of numerical simulations. Remarkable loop shift and coercivity enhancement are observed in such DAF/AF bilayers, while they are absent in the uncoupled DAF single layer. The influences of pinned domains, dilution, cooling field and DAF layer thickness on the loop shift are investigated systematically. The result unambiguously confirms an exchange bias (EB) effect in the DAF/AF bilayers. It also reveals that the EB effect originates from the pinned AF domains within the DAF layer. In contrast to conventional EB systems, frozen uncompensated spins are not found at the interface of the AF pinning layer. (paper)

  4. Tax Evasion, Information Reporting, and the Regressive Bias Hypothesis

    DEFF Research Database (Denmark)

    Boserup, Simon Halphen; Pinje, Jori Veng

    A robust prediction from the tax evasion literature is that optimal auditing induces a regressive bias in effective tax rates compared to statutory rates. If correct, this will have important distributional consequences. Nevertheless, the regressive bias hypothesis has never been tested empirically...

  5. Tax Evasion, Information Reporting, and the Regressive Bias Prediction

    DEFF Research Database (Denmark)

    Boserup, Simon Halphen; Pinje, Jori Veng

    2013-01-01

Models of rational tax evasion and optimal enforcement invariably predict a regressive bias in the effective tax system, which reduces redistribution in the economy. Using Danish administrative data, we show that a calibrated structural model of this type replicates moments and correlations of tax evasion and audit probabilities once we account for information reporting in the tax compliance game. When conditioning on information reporting, we find that both reduced-form evidence and simulations exhibit the predicted regressive bias. However, in the overall economy, this bias is negated by the tax…

  6. Bias in regression coefficient estimates upon different treatments of ...

    African Journals Online (AJOL)

MS and PW consistently overestimated the population parameter. EM and RI, on the other hand, tended to consistently underestimate the population parameter under a non-monotonic missingness pattern. Keywords: missing data, bias, regression, percent missing, non-normality, missing pattern. (East African Journal of Statistics)

  7. Two biased estimation techniques in linear regression: Application to aircraft

    Science.gov (United States)

    Klein, Vladislav

    1988-01-01

Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit the damaging effect of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.

  8. Bias due to Preanalytical Dilution of Rodent Serum for Biochemical Analysis on the Siemens Dimension Xpand Plus

    Directory of Open Access Journals (Sweden)

    Jennifer L. Johns

    2018-02-01

Clinical pathology testing of rodents is often challenging due to insufficient sample volume. One solution in clinical veterinary and exploratory research environments is dilution of samples prior to analysis. However, published information on the impact of preanalytical sample dilution on rodent biochemical data is incomplete. The objective of this study was to evaluate the effects of preanalytical sample dilution on biochemical analysis of mouse and rat serum samples utilizing the Siemens Dimension Xpand Plus. Rats were obtained from end-of-study research projects. Mice were obtained from sentinel testing programs. For both, whole blood was collected via terminal cardiocentesis into empty tubes and serum was harvested. Biochemical parameters were measured on fresh and thawed frozen samples run undiluted and at dilution factors of 2–10. Dilutions were performed manually, utilizing either ultrapure water or enzyme diluent per manufacturer recommendations. All diluted samples were generated directly from the undiluted sample. Preanalytical dilution caused clinically unacceptable bias in most analytes at dilution factors of four and above. Dilution-induced bias in total calcium, creatinine, total bilirubin, and uric acid was considered unacceptable with any degree of dilution, based on the more conservative of two definitions of acceptability. Dilution often caused electrolyte values to fall below the assay range, precluding evaluation of bias. Dilution-induced bias occurred in most biochemical parameters to varying degrees and may render dilution unacceptable in the exploratory research and clinical veterinary environments. Additionally, differences between results obtained at different dilution factors may confound statistical comparisons in research settings. Comparison of data obtained at a single dilution factor is highly recommended.

  9. Performance of a New Restricted Biased Estimator in Logistic Regression

    Directory of Open Access Journals (Sweden)

    Yasin ASAR

    2017-12-01

It is known that the variance of the maximum likelihood estimator (MLE) inflates when the explanatory variables are correlated. This situation is called the multicollinearity problem. As a result, the estimates of the model may not be trustworthy. Therefore, this paper introduces a new restricted Liu-type estimator (RLTE) that may be applied to overcome the multicollinearity when the parameters lie in some linear subspace in logistic regression. The mean squared errors (MSE) and the matrix mean squared errors (MMSE) of the estimators considered in this paper are given. A Monte Carlo experiment is designed to evaluate the performances of the proposed estimator, the restricted MLE (RMLE), the MLE, and the Liu-type estimator (LTE). The criterion of performance is chosen to be MSE. Moreover, a real data example is presented. According to the results, the proposed estimator performs better than the MLE, RMLE, and LTE.

  10. Declining Bias and Gender Wage Discrimination? A Meta-Regression Analysis

    Science.gov (United States)

    Jarrell, Stephen B.; Stanley, T. D.

    2004-01-01

The meta-regression analysis reveals a strong tendency for discrimination estimates to fall over time, though wage discrimination against women still exists. The biasing effects of researchers' gender and of not correcting for selection bias have weakened, and changes in the labor market have made them less important.

  11. Large biases in regression-based constituent flux estimates: causes and diagnostic tools

    Science.gov (United States)

    Hirsch, Robert M.

    2014-01-01

It has been documented in the literature that, in some cases, widely used regression-based models can produce severely biased estimates of long-term mean river fluxes of various constituents. These models, estimated using sample values of concentration, discharge, and date, are used to compute estimated fluxes for a multiyear period at a daily time step. This study compares results of the LOADEST seven-parameter model, the LOADEST five-parameter model, and the Weighted Regressions on Time, Discharge, and Season (WRTDS) model, using subsampling of six very large datasets to better understand this bias problem. The analysis considers sample datasets for dissolved nitrate and total phosphorus. The results show that LOADEST-7 and LOADEST-5, although they often produce very nearly unbiased results, can also produce highly biased results. This study identifies three conditions that can give rise to these severe biases: (1) lack of fit of the log of concentration vs. log discharge relationship, (2) substantial differences in the shape of this relationship across seasons, and (3) severely heteroscedastic residuals. The WRTDS model is more resistant to the bias problem than the LOADEST models but is not immune to it. Understanding the causes of the bias problem is crucial to selecting an appropriate method for flux computations. Diagnostic tools for identifying the potential for bias problems are introduced, and strategies for resolving bias problems are described.
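
A closely related retransformation bias can be shown in a few lines: naively exponentiating predictions from a log-log regression underestimates the mean concentration. The sketch below (simulated data; this is not the LOADEST or WRTDS code, and those models include their own bias corrections) uses Duan's smearing estimator to remove most of that bias:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

log_q = rng.normal(0.0, 1.0, n)                      # log discharge
log_c = 0.3 + 0.8 * log_q + rng.normal(0.0, 0.6, n)  # log concentration

# Fit the log-log regression and retransform
coef = np.polyfit(log_q, log_c, 1)
pred_log = np.polyval(coef, log_q)
resid = log_c - pred_log

naive = np.exp(pred_log).mean()                           # biased low
smear = (np.exp(pred_log) * np.exp(resid).mean()).mean()  # Duan's smearing
truth = np.exp(log_c).mean()
print(naive, smear, truth)
```

With lognormal errors the naive retransformation is low by roughly a factor exp(sigma^2 / 2); multiplying by the mean of the exponentiated residuals corrects for this without assuming normality.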

  12. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    Science.gov (United States)

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.

  13. Length bias correction in gene ontology enrichment analysis using logistic regression.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Emerson, Sarah; Cumbie, Jason S; Chang, Jeff H

    2012-01-01

    When assessing differential gene expression from RNA sequencing data, commonly used statistical tests tend to have greater power to detect differential expression of genes encoding longer transcripts. This phenomenon, called "length bias", will influence subsequent analyses such as Gene Ontology enrichment analysis. In the presence of length bias, Gene Ontology categories that include longer genes are more likely to be identified as enriched. These categories, however, are not necessarily biologically more relevant. We show that one can effectively adjust for length bias in Gene Ontology analysis by including transcript length as a covariate in a logistic regression model. The logistic regression model makes the statistical issue underlying length bias more transparent: transcript length becomes a confounding factor when it correlates with both the Gene Ontology membership and the significance of the differential expression test. The inclusion of the transcript length as a covariate allows one to investigate the direct correlation between the Gene Ontology membership and the significance of testing differential expression, conditional on the transcript length. We present both real and simulated data examples to show that the logistic regression approach is simple, effective, and flexible.
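
The covariate-adjustment idea is easy to demonstrate with a small simulation. In the sketch below (illustrative data, not the authors' pipeline; logistic regression is fit by plain Newton-Raphson so the block stays self-contained), category membership depends only on transcript length, yet a length-ignoring model attributes an effect to the differential expression flag:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Logistic regression via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(2)
n = 20000
log_len = rng.normal(0.0, 1.0, n)  # standardized log transcript length
# Longer genes are more often flagged as differentially expressed (length bias)
de = rng.random(n) < 1.0 / (1.0 + np.exp(-1.2 * log_len))
# Category membership depends on length only, not on DE status
member = (rng.random(n) < 1.0 / (1.0 + np.exp(-0.8 * log_len))).astype(float)

ones = np.ones(n)
naive = logistic_fit(np.column_stack([ones, de]), member)
adj = logistic_fit(np.column_stack([ones, de, log_len]), member)
print(naive[1], adj[1])  # DE coefficient without / with length adjustment
```

The unadjusted DE coefficient is spuriously positive, while conditioning on length drives it toward zero, mirroring the paper's argument that length is a confounder of the membership-significance relationship.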

  14. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    Science.gov (United States)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
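
The shrinkage behind ridge estimation, one of the biased estimators discussed above, can be shown directly from its closed form, beta = (X'X + kI)^(-1) X'y. A minimal numpy sketch (illustrative data and ridge constant, not the report's examples) on two nearly collinear predictors:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
z = rng.normal(0.0, 1.0, n)

# Two nearly collinear predictors: the normal equations approach singularity
x1 = z + 0.01 * rng.normal(0.0, 1.0, n)
x2 = z + 0.01 * rng.normal(0.0, 1.0, n)
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(0.0, 1.0, n)

# Ordinary least squares: unstable individual coefficients
ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: add k to the diagonal of the Gram matrix
k = 1.0
ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
print(ols, ridge)
```

The estimable function (the coefficient sum) is recovered well by both, but ridge constrains the parameter vector: its norm is always smaller than the OLS norm, which is exactly the trade of bias for variance described in the abstract.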

  15. An evaluation of bias in propensity score-adjusted non-linear regression models.

    Science.gov (United States)

    Wan, Fei; Mitra, Nandita

    2018-03-01

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
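
Non-collapsibility of the odds ratio, the mechanism identified above, can be verified exactly without simulation. In the sketch below (illustrative coefficients, not the authors' framework), the covariate is independent of treatment, yet averaging risks over the covariate yields a marginal odds ratio smaller than the conditional one:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Binary covariate Z ~ Bernoulli(0.5), independent of treatment T (randomized)
# Conditional model: logit P(Y=1 | T, Z) = -1 + 2*T + 2*Z
beta_t, beta_z = 2.0, 2.0

marginal = {}
for t in (0, 1):
    # Average the risk (not the log-odds) over the covariate distribution
    marginal[t] = 0.5 * sigmoid(-1 + beta_t * t) + \
                  0.5 * sigmoid(-1 + beta_t * t + beta_z)

odds = lambda p: p / (1 - p)
marginal_or = odds(marginal[1]) / odds(marginal[0])
conditional_or = math.exp(beta_t)
print(conditional_or, marginal_or)  # marginal OR is attenuated toward 1
```

Even with zero confounding, the marginal odds ratio differs from the conditional one because risks, not log-odds, average over the covariate; the same arithmetic applied to a rate ratio would show no discrepancy, matching the paper's finding.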

  16. Bias and Uncertainty in Regression-Calibrated Models of Groundwater Flow in Heterogeneous Media

    DEFF Research Database (Denmark)

    Cooley, R.L.; Christensen, Steen

    2006-01-01

…small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate θ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear … are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis.

  17. The effect of dust on electron heating and dc self-bias in hydrogen diluted silane discharges

    International Nuclear Information System (INIS)

    Schüngel, E; Mohr, S; Iwashita, S; Schulze, J; Czarnetzki, U

    2013-01-01

    In capacitive hydrogen diluted silane discharges the formation of dust affects plasma processes used, e.g. for thin film solar cell manufacturing. Thus, a basic understanding of the interaction between plasma and dust is required to optimize such processes. We investigate a highly diluted silane discharge experimentally using phase-resolved optical emission spectroscopy to study the electron dynamics, laser light scattering on the dust particles to relate the electron dynamics with the spatial distribution of dust, and current and voltage measurements to characterize the electrical symmetry of the discharge via the dc self-bias. The measurements are performed in single and dual frequency discharges. A mode transition from the α-mode to a bulk drift mode (Ω-mode) is found, if the amount of silane and, thereby, the amount of dust and negative ions is increased. By controlling the electrode temperatures, the dust can be distributed asymmetrically between the electrodes via the thermophoretic force. This affects both the electron heating and the discharge symmetry, i.e. a dc self-bias develops in a single frequency discharge. Using the Electrical Asymmetry Effect (EAE), the dc self-bias can be controlled in dual frequency discharges via the phase angle between the two applied frequencies. The Ω-mode is observed for all phase angles and is explained by a simple model of the electron power dissipation. The model shows that the mode transition is characterized by a phase shift between the applied voltage and the electron conduction current, and that the plasma density profile can be estimated using the measured phase shift. The control interval of the dc self-bias obtained using the EAE will be shifted, if an asymmetric dust distribution is present. However, the width of the interval remains unchanged, because the dust distribution is hardly affected by the phase angle. (paper)

  18. Bias due to two-stage residual-outcome regression analysis in genetic association studies.

    Science.gov (United States)

    Demissie, Serkalem; Cupples, L Adrienne

    2011-11-01

Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual outcome is calculated from a regression of the outcome variable on covariates, and then the relationship between the adjusted outcome and the SNP is evaluated by a simple linear regression of the adjusted outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation between the SNP and the covariate (r²). For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when the SNP-covariate correlation is negligible, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
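
The attenuation described above is easy to reproduce. In the sketch below (simulated data with SNP-covariate correlation r = 0.6; illustrative effect sizes), the two-stage residual-outcome estimate shrinks toward zero by roughly a factor of 1 − r², while the joint regression recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50000
r = 0.6  # SNP-covariate correlation

snp = rng.normal(0.0, 1.0, n)
cov = r * snp + np.sqrt(1 - r**2) * rng.normal(0.0, 1.0, n)
y = 0.5 * snp + 0.7 * cov + rng.normal(0.0, 1.0, n)

# Two-stage: residualize the outcome on the covariate, then regress on the SNP
resid = y - np.polyval(np.polyfit(cov, y, 1), cov)
two_stage = np.polyfit(snp, resid, 1)[0]

# Multiple linear regression: joint adjustment
X = np.column_stack([np.ones(n), snp, cov])
mlr = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(two_stage, mlr)  # expect ~ 0.5 * (1 - r**2) vs ~ 0.5
```

A short covariance calculation confirms the pattern: the stage-one slope absorbs part of the SNP effect through the correlated covariate, leaving a residual whose regression on the SNP equals 0.5(1 − r²) = 0.32 here.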

  19. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection

    OpenAIRE

    Kwan, Johnny S. H.; Kung, Annie W. C.; Sham, Pak C.

    2011-01-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias. © The Author(s) 2011.

  20. Assessment of participation bias in cohort studies: systematic review and meta-regression analysis

    Directory of Open Access Journals (Sweden)

    Sérgio Henrique Almeida da Silva Junior

    2015-11-01

The proportion of non-participation in cohort studies, if associated with both the exposure and the probability of occurrence of the event, can introduce bias in the estimates of interest. The aim of this study is to evaluate the impact of participation and its characteristics in longitudinal studies. A systematic review (MEDLINE, Scopus and Web of Science) for articles describing the proportion of participation at the baseline of cohort studies was performed. Among the 2,964 initially identified, 50 were selected. The average proportion of participation was 64.7%. Using a meta-regression model with mixed effects, only age, year of baseline contact and study region (the last with borderline significance) were associated with participation. Considering the decrease in participation in recent years, and the cost of cohort studies, it is essential to gather information to assess the potential for non-participation before committing resources. Finally, journals should require the presentation of this information in published papers.

  21. Bias and efficiency loss in regression estimates due to duplicated observations: a Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Francesco Sarracino

    2017-04-01

Recent studies documented that survey data contain duplicate records. We assess how duplicate records affect regression estimates, and we evaluate the effectiveness of solutions to deal with them. Results show that the chances of obtaining unbiased estimates when data contain 40 doublets (about 5% of the sample) range between 3.5% and 11.5%, depending on the distribution of duplicates. If 7 quintuplets are present in the data (2% of the sample), then the probability of obtaining biased estimates ranges between 11% and 20%. Weighting the duplicate records by the inverse of their multiplicity, or dropping superfluous duplicates, outperforms the other solutions in all considered scenarios. Our results illustrate the risk of using data in the presence of duplicate records and call for further research on strategies to analyse affected data.
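
The inverse-multiplicity weighting recommended above has a neat property: when every copy of a duplicated record is down-weighted by its multiplicity, weighted least squares reproduces the clean-sample estimates exactly. A minimal sketch (simulated data, not the authors' Monte Carlo design):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)

# Reference estimate on the clean sample
X = np.column_stack([np.ones(n), x])
clean = np.linalg.lstsq(X, y, rcond=None)[0]

# Contaminate: duplicate the first 40 records (multiplicity 2)
dup = np.arange(40)
xd = np.concatenate([x, x[dup]])
yd = np.concatenate([y, y[dup]])
Xd = np.column_stack([np.ones(len(xd)), xd])
contaminated = np.linalg.lstsq(Xd, yd, rcond=None)[0]

# Weight each record by the inverse of its multiplicity (0.5 for each copy)
w = np.ones(len(xd))
w[dup] = 0.5
w[n:] = 0.5
sw = np.sqrt(w)
weighted = np.linalg.lstsq(Xd * sw[:, None], yd * sw, rcond=None)[0]
print(clean, contaminated, weighted)
```

Each duplicated point receives a total weight of one (0.5 per copy), so the weighted normal equations coincide with those of the clean sample; the unweighted contaminated fit does not.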

  22. The Collinearity Free and Bias Reduced Regression Estimation Project: The Theory of Normalization Ridge Regression. Report No. 2.

    Science.gov (United States)

    Bulcock, J. W.; And Others

    Multicollinearity refers to the presence of highly intercorrelated independent variables in structural equation models, that is, models estimated by using techniques such as least squares regression and maximum likelihood. There is a problem of multicollinearity in both the natural and social sciences where theory formulation and estimation is in…

  23. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    International Nuclear Information System (INIS)

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-01-01

    Highlights: → Mitochondrial dysfunction is central to many diseases of oxidative stress. → 95% of the mitochondrial genome is duplicated in the nuclear genome. → Dilution of untreated genomic DNA leads to dilution bias. → Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes cause a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome; unique single copy region in the nuclear genome and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
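
For reference, the Mt/N ratio itself is usually derived from the difference in qPCR threshold cycles between the mitochondrial target and a single-copy nuclear target. The helper below is a hedged sketch of that arithmetic only; it assumes 100% amplification efficiency and a diploid nuclear locus, and it does not address the pseudogene or dilution-bias corrections the authors describe:

```python
def mt_n_ratio(ct_mito: float, ct_nuclear: float) -> float:
    """Mitochondrial-to-nuclear genome ratio from qPCR threshold cycles.

    Assumes perfect doubling per cycle; the factor of 2 accounts for the
    single-copy nuclear locus being present on two chromosomes.
    """
    delta_ct = ct_nuclear - ct_mito
    return 2.0 * 2.0 ** delta_ct

# Example: the mitochondrial target crosses threshold 7 cycles earlier
print(mt_n_ratio(ct_mito=18.0, ct_nuclear=25.0))  # 256.0
```

The paper's point is that this simple calculation is only trustworthy once primers avoid nuclear pseudogenes and the template is pretreated so that diluting genomic DNA does not itself shift the apparent ratio.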

  24. Internal correction of spectral interferences and mass bias for selenium metabolism studies using enriched stable isotopes in combination with multiple linear regression.

    Science.gov (United States)

    Lunøe, Kristoffer; Martínez-Sierra, Justo Giner; Gammelgaard, Bente; Alonso, J Ignacio García

    2012-03-01

The analytical methodology for the in vivo study of selenium metabolism using two enriched selenium isotopes has been modified, allowing for the internal correction of spectral interferences and mass bias both for total selenium and speciation analysis. The method is based on the combination of an already described dual-isotope procedure with a new data treatment strategy based on multiple linear regression. A metabolic enriched isotope ((77)Se) is given orally to the test subject and a second isotope ((74)Se) is employed for quantification. In our approach, all possible polyatomic interferences occurring in the measurement of the isotope composition of selenium by collision cell quadrupole ICP-MS are taken into account and their relative contributions calculated by multiple linear regression after minimisation of the residuals. As a result, all spectral interferences and mass bias are corrected internally, allowing the fast and independent quantification of natural abundance selenium ((nat)Se) and enriched (77)Se. In this sense, the calculation of the tracer/tracee ratio in each sample is straightforward. The method has been applied to study the time-related tissue incorporation of (77)Se in male Wistar rats while maintaining (nat)Se steady-state conditions. Additionally, metabolically relevant information such as selenoprotein synthesis and selenium elimination in urine could be studied using the proposed methodology. In this case, serum proteins were separated by affinity chromatography while reverse phase was employed for urine metabolites. In both cases, (74)Se was used as a post-column isotope dilution spike. The application of multiple linear regression to the whole chromatogram allowed us to calculate the contributions of bromine hydride, selenium hydride, argon polyatomics and mass bias to the observed selenium isotope patterns. By minimising the square sum of residuals for the whole chromatogram, internal correction of spectral interferences and mass bias was achieved.

  25. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression.

    Science.gov (United States)

    Hunt, Andrew P; Bach, Aaron J E; Borg, David N; Costello, Joseph T; Stewart, Ian B

    2017-01-01

An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of -0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C, and limits of agreement (95%) of 0.00-0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) - 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).
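
The individualized correction described above amounts to fitting the reference temperature as a linear function of each sensor's readings. A minimal sketch with hypothetical readings (the sensor values below are invented for illustration; only the bath setpoints echo the study):

```python
import numpy as np

# Reference thermometer readings and one hypothetical sensor's readings
reference = np.array([35.12, 37.33, 39.48, 41.58, 43.47])
sensor = np.array([35.30, 37.52, 39.70, 41.78, 43.70])

# Fit sensor -> reference linear correction (individualized calibration)
slope, intercept = np.polyfit(sensor, reference, 1)

def correct(t):
    """Apply the fitted linear correction to a raw sensor temperature."""
    return slope * t + intercept

residual = correct(sensor) - reference
print(slope, intercept, np.max(np.abs(residual)))
```

After calibration the residual bias at every bath temperature is far inside the ±0.1°C criterion, which is the mechanism by which the individualized functions eliminate systematic bias.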

  26. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression

    Directory of Open Access Journals (Sweden)

    Andrew P. Hunt

    2017-04-01

An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of −0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C, and limits of agreement (95%) of 0.00–0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) − 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).

  7. Bias in logistic regression due to imperfect diagnostic test results and practical correction approaches.

    Science.gov (United States)

    Valle, Denis; Lima, Joanna M Tucker; Millar, Justin; Amratia, Punam; Haque, Ubydul

    2015-11-04

    Logistic regression is a statistical model widely used in cross-sectional and cohort studies to identify and quantify the effects of potential disease risk factors. However, the impact of imperfect tests on adjusted odds ratios (and thus on the identification of risk factors) is under-appreciated. The purpose of this article is to draw attention to the problem associated with modelling imperfect diagnostic tests, and propose simple Bayesian models to adequately address this issue. A systematic literature review was conducted to determine the proportion of malaria studies that appropriately accounted for false-negatives/false-positives in a logistic regression setting. Inference from the standard logistic regression was also compared with that from three proposed Bayesian models using simulations and malaria data from the western Brazilian Amazon. A systematic literature review suggests that malaria epidemiologists are largely unaware of the problem of using logistic regression to model imperfect diagnostic test results. Simulation results reveal that statistical inference can be substantially improved when using the proposed Bayesian models versus the standard logistic regression. Finally, analysis of original malaria data with one of the proposed Bayesian models reveals that microscopy sensitivity is strongly influenced by how long people have lived in the study region, and an important risk factor (i.e., participation in forest extractivism) is identified that would have been missed by standard logistic regression. Given the numerous diagnostic methods employed by malaria researchers and the ubiquitous use of logistic regression to model the results of these diagnostic tests, this paper provides critical guidelines to improve data analysis practice in the presence of misclassification error. Easy-to-use code that can be readily adapted to WinBUGS is provided, enabling straightforward implementation of the proposed Bayesian models.
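The mechanics of the problem can be illustrated with the classical misclassification identity P(test+) = Se·p + (1−Sp)·(1−p) and its inversion (the Rogan–Gladen estimator). This is a minimal sketch of why naive analyses of imperfect test results are biased, not a reproduction of the paper's Bayesian models:

```python
def observed_prevalence(p: float, se: float, sp: float) -> float:
    # apparent prevalence under imperfect sensitivity (se) and specificity (sp)
    return se * p + (1 - sp) * (1 - p)

def rogan_gladen(p_obs: float, se: float, sp: float) -> float:
    # invert the identity to recover the true prevalence
    return (p_obs + sp - 1) / (se + sp - 1)
```

For example, with true prevalence 0.30, Se = 0.90 and Sp = 0.95, the apparent prevalence is 0.305; a logistic regression fit to the raw test results models that distorted quantity, which is what the proposed Bayesian models correct for.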

  8. Avoiding and Correcting Bias in Score-Based Latent Variable Regression with Discrete Manifest Items

    Science.gov (United States)

    Lu, Irene R. R.; Thomas, D. Roland

    2008-01-01

    This article considers models involving a single structural equation with latent explanatory and/or latent dependent variables where discrete items are used to measure the latent variables. Our primary focus is the use of scores as proxies for the latent variables and carrying out ordinary least squares (OLS) regression on such scores to estimate…

  9. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    Science.gov (United States)

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
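The key observation — each window's biased free energy equals the global profile plus a window-specific additive constant, so the constants can be solved for by least squares instead of WHAM iteration — can be sketched in one dimension. This is a noise-free toy, not the paper's multivariate regression framework:

```python
import numpy as np

# toy 1-D profile: each umbrella window observes F(x) plus an unknown constant
x = np.linspace(0.0, 1.0, 50)
F = (x - 0.5) ** 2
A1 = F[:30] + 1.7          # window 1: grid points 0..29, offset unknown
A2 = F[20:] - 0.4          # window 2: grid points 20..49, offset unknown
# windows overlap on grid points 20..29; for a single pair of windows the
# least-squares offset is simply the mean difference over the overlap
shift = float(np.mean(A1[20:30] - A2[:10]))
A2_aligned = A2 + shift    # consistent with window 1 up to one global constant
```

With many windows the same idea becomes one sparse linear least-squares problem in all the offsets, which is the non-iterative character the paper exploits.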

  10. Wave-optics uncertainty propagation and regression-based bias model in GNSS radio occultation bending angle retrievals

    Directory of Open Access Journals (Sweden)

    M. E. Gorbunov

    2018-01-01

Full Text Available A new reference occultation processing system (rOPS) will include a Global Navigation Satellite System (GNSS) radio occultation (RO) retrieval chain with integrated uncertainty propagation. In this paper, we focus on wave-optics bending angle (BA) retrieval in the lower troposphere and introduce (1) an empirically estimated boundary layer bias (BLB) model then employed to reduce the systematic uncertainty of excess phases and bending angles in about the lowest 2 km of the troposphere and (2) the estimation of (residual) systematic uncertainties and their propagation together with random uncertainties from excess phase to bending angle profiles. Our BLB model describes the estimated bias of the excess phase transferred from the estimated bias of the bending angle, for which the model is built, informed by analyzing refractivity fluctuation statistics shown to induce such biases. The model is derived from regression analysis using a large ensemble of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO observations and concurrent European Centre for Medium-Range Weather Forecasts (ECMWF) analysis fields. It is formulated in terms of predictors and adaptive functions (powers and cross products of predictors), where we use six main predictors derived from observations: impact altitude, latitude, bending angle and its standard deviation, canonical transform (CT) amplitude, and its fluctuation index. Based on an ensemble of test days, independent of the days of data used for the regression analysis to establish the BLB model, we find the model very effective for bias reduction and capable of reducing bending angle and corresponding refractivity biases by about a factor of 5. The estimated residual systematic uncertainty, after the BLB profile subtraction, is lower bounded by the uncertainty from the (indirect) use of ECMWF analysis fields but is significantly lower than the systematic uncertainty without BLB correction. The

  11. Wave-optics uncertainty propagation and regression-based bias model in GNSS radio occultation bending angle retrievals

    Science.gov (United States)

    Gorbunov, Michael E.; Kirchengast, Gottfried

    2018-01-01

    A new reference occultation processing system (rOPS) will include a Global Navigation Satellite System (GNSS) radio occultation (RO) retrieval chain with integrated uncertainty propagation. In this paper, we focus on wave-optics bending angle (BA) retrieval in the lower troposphere and introduce (1) an empirically estimated boundary layer bias (BLB) model then employed to reduce the systematic uncertainty of excess phases and bending angles in about the lowest 2 km of the troposphere and (2) the estimation of (residual) systematic uncertainties and their propagation together with random uncertainties from excess phase to bending angle profiles. Our BLB model describes the estimated bias of the excess phase transferred from the estimated bias of the bending angle, for which the model is built, informed by analyzing refractivity fluctuation statistics shown to induce such biases. The model is derived from regression analysis using a large ensemble of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO observations and concurrent European Centre for Medium-Range Weather Forecasts (ECMWF) analysis fields. It is formulated in terms of predictors and adaptive functions (powers and cross products of predictors), where we use six main predictors derived from observations: impact altitude, latitude, bending angle and its standard deviation, canonical transform (CT) amplitude, and its fluctuation index. Based on an ensemble of test days, independent of the days of data used for the regression analysis to establish the BLB model, we find the model very effective for bias reduction and capable of reducing bending angle and corresponding refractivity biases by about a factor of 5. The estimated residual systematic uncertainty, after the BLB profile subtraction, is lower bounded by the uncertainty from the (indirect) use of ECMWF analysis fields but is significantly lower than the systematic uncertainty without BLB correction. 
The systematic and

  12. Bias and Uncertainty in Regression-Calibrated Models of Groundwater Flow in Heterogeneous Media

    DEFF Research Database (Denmark)

    Cooley, R.L.; Christensen, Steen

    2006-01-01

Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation γθ*, where γ is an interpolation matrix and θ* is a stochastic vector of parameters. Vector θ* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that model function f(γθ*) written in terms […] small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate θ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear...

  13. Bias correction by use of errors-in-variables regression models in studies with K-X-ray fluorescence bone lead measurements.

    Science.gov (United States)

    Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard

    2011-01-01

In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
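A minimal simulation (hypothetical numbers, not the authors' data) shows the attenuation OLS suffers under covariate measurement error and the classical reliability-ratio correction that EIV-type methods build on:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_true = rng.normal(0.0, 1.0, n)            # true exposure (e.g. bone lead)
x_obs = x_true + rng.normal(0.0, 1.0, n)    # measurement with error variance 1
y = 2.0 * x_true + rng.normal(0.0, 1.0, n)  # true slope is 2.0

# naive OLS slope is attenuated by the reliability ratio var(true)/var(observed)
b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
reliability = np.var(x_true, ddof=1) / np.var(x_obs, ddof=1)  # ~0.5 here
b_corrected = b_naive / reliability                           # ~2.0, nearly unbiased
```

Here the naive slope lands near 1.0 (half the true effect), and dividing by the reliability coefficient recovers the true slope; this is the same regression dilution mechanism the record's EIV models address.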

  14. Investigation of the UK37' vs. SST relationship for Atlantic Ocean suspended particulate alkenones: An alternative regression model and discussion of possible sampling bias

    Science.gov (United States)

    Gould, Jessica; Kienast, Markus; Dowd, Michael

    2017-05-01

Alkenone unsaturation, expressed as the UK37' index, is closely related to growth temperature of prymnesiophytes, thus providing a reliable proxy to infer past sea surface temperatures (SSTs). Here we address two lingering uncertainties related to this SST proxy. First, calibration models developed for core-top sediments and those developed for surface suspended particulate organic material (SPOM) show systematic offsets, raising concerns regarding the transfer of the primary signal into the sedimentary record. Second, questions remain regarding changes in slope of the UK37' vs. growth temperature relationship at the temperature extremes. Based on (re)analysis of 31 new and 394 previously published SPOM UK37' data from the Atlantic Ocean, a new regression model to relate UK37' to SST is introduced: the Richards curve (Richards, 1959). This non-linear regression model provides a robust calibration of the UK37' vs. SST relationship for Atlantic SPOM samples and uniquely accounts both for the fact that the UK37' index is a proportion, and so must lie between 0 and 1, and for the observed reduction in slope at the warm and cold ends of the temperature range. As with prior fits of SPOM UK37' vs. SST, the Richards model is offset from traditional regression models of sedimentary UK37' vs. SST. We posit that (some of) this offset can be attributed to the seasonally and depth-biased sampling of SPOM material.
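Under one common parameterization of the Richards growth curve, fitting a bounded, sigmoid UK37'-vs-SST relationship can be sketched with `scipy.optimize.curve_fit`. The data below are synthetic and noise-free, and the parameter values are illustrative, not the paper's calibration:

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(x, A, K, B, Q, nu):
    # one common parameterization of the Richards curve:
    # lower asymptote A, upper asymptote K, growth rate B, shape Q and nu
    return A + (K - A) / (1.0 + Q * np.exp(-B * x)) ** (1.0 / nu)

sst = np.linspace(0.0, 30.0, 60)                   # hypothetical SST range (degC)
uk37 = richards(sst, 0.05, 0.95, 0.25, 8.0, 1.0)   # synthetic UK37' observations

popt, _ = curve_fit(richards, sst, uk37,
                    p0=[0.0, 1.0, 0.2, 5.0, 1.0], maxfev=20000)
```

The asymptotes A and K keep predictions inside the [0, 1] range of a proportion, and the curvature near the asymptotes reproduces the reduced slope at the warm and cold ends that the abstract highlights.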

  15. Dilution Confusion: Conventions for Defining a Dilution

    Science.gov (United States)

    Fishel, Laurence A.

    2010-01-01

    Two conventions for preparing dilutions are used in clinical laboratories. The first convention defines an "a:b" dilution as "a" volumes of solution A plus "b" volumes of solution B. The second convention defines an "a:b" dilution as "a" volumes of solution A diluted into a final volume of "b". Use of the incorrect dilution convention could affect…
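The arithmetic gap between the two conventions is easy to make concrete; in this hypothetical worked example, a "1:4" dilution of a 100 units/mL stock yields 20 units/mL under the first convention but 25 units/mL under the second:

```python
def conc_convention_1(a: float, b: float, stock: float) -> float:
    # "a:b" = a volumes of solution A plus b volumes of diluent (final volume a + b)
    return stock * a / (a + b)

def conc_convention_2(a: float, b: float, stock: float) -> float:
    # "a:b" = a volumes of solution A diluted to a final volume of b
    return stock * a / b
```

A 25% discrepancy from the same written instruction is exactly the kind of error the record warns about.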

  16. Human immunophenotyping via low-variance, low-bias, interpretive regression modeling of small, wide data sets: Application to aging and immune response to influenza vaccination.

    Science.gov (United States)

    Holmes, Tyson H; He, Xiao-Song

    2016-10-01

Small, wide data sets are commonplace in human immunophenotyping research. As defined here, a small, wide data set is constructed by sampling a small to modest quantity n […] small, wide data sets. These prescriptions are distinctive in their especially heavy emphasis on minimizing the use of out-of-sample information for conducting statistical inference. This allows the working immunologist to proceed without being encumbered by imposed and often untestable statistical assumptions. Problems of unmeasured confounders, confidence-interval coverage, feature selection, and shrinkage/denoising are defined clearly and treated in detail. We propose an extension of an existing nonparametric technique for improved small-sample confidence-interval tail coverage from the univariate case (single immune feature) to the multivariate (many, possibly correlated immune features). An important role for derived features in the immunological interpretation of regression analyses is stressed. Areas of further research are discussed. Presented principles and methods are illustrated through application to a small, wide data set of adults spanning a wide range in ages and multiple immunophenotypes that were assayed before and after immunization with inactivated influenza vaccine (IIV). Our regression modeling prescriptions identify some potentially important topics for future immunological research. 1) Immunologists may wish to distinguish age-related differences in immune features from changes in immune features caused by aging. 2) A form of the bootstrap that employs linear extrapolation may prove to be an invaluable analytic tool because it allows the working immunologist to obtain accurate estimates of the stability of immune parameter estimates with a bare minimum of imposed assumptions. 3) Liberal inclusion of immune features in phenotyping panels can facilitate accurate separation of biological signal of interest from noise. In addition, through a combination of denoising and

  17. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

  18. Regression Phalanxes

    OpenAIRE

    Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

    2017-01-01

    Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

  19. Hybridizing pines with diluted pollen

    Science.gov (United States)

    Robert Z. Callaham

    1967-01-01

Diluted pollens would have many uses by the tree breeder. Dilutions would be particularly advantageous in making many controlled pollinations with a limited amount of pollen. They also would be useful in artificial mass pollinations of orchards or single trees. Diluted pollens might help overcome troublesome genetic barriers to crossing. Feasibility of using diluted...

  20. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

This study considers indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
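The quantity at issue — the correlation between Y and E(Y|X) under a Poisson log-link model — can be sketched as follows. For illustration the true mean is used in place of a fitted GLM, and all coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
mu = np.exp(0.3 + 0.5 * x)   # E[Y|X] under a Poisson log-link model
y = rng.poisson(mu)          # Poisson-distributed dependent variable

# regression correlation coefficient: Pearson correlation of Y with E[Y|X]
r = float(np.corrcoef(y, mu)[0, 1])
```

In practice E[Y|X] would come from a fitted Poisson regression, and the record's proposal modifies this coefficient to behave better with multiple, possibly collinear predictors.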

  1. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoreti...

  2. Helium dilution refrigerator

    International Nuclear Information System (INIS)

    1973-01-01

A new system of continuous heat exchange for a helium dilution refrigerator is proposed. The 3He effluent tube is concurrent with the affluent mixed helium tube in a vertical downward direction. Heat exchange efficiency is enhanced by placing in series a number of elements with an enlarged surface area

  3. Isotope dilution analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fudge, A.

    1978-12-15

    The following aspects of isotope dilution analysis are covered in this report: fundamental aspects of the technique; elements of interest in the nuclear field, choice and standardization of spike nuclide; pre-treatment to achieve isotopic exchange and chemical separation; sensitivity; selectivity; and accuracy.
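The quantitation principle behind isotope dilution — measure the isotope ratio of a sample-spike mixture and invert the mixing relation for the unknown amount — can be sketched as follows. All isotope abundances and amounts below are hypothetical:

```python
def mixture_ratio(n_x, n_s, ax_a, ax_b, as_a, as_b):
    # isotope ratio (isotope a / isotope b) measured on the sample+spike mixture:
    # n_x, n_s = amounts of element from sample and spike;
    # ax_*, as_* = isotopic abundances of sample and spike
    return (n_x * ax_a + n_s * as_a) / (n_x * ax_b + n_s * as_b)

def sample_amount(n_s, r_m, ax_a, ax_b, as_a, as_b):
    # invert the mixing relation for the unknown sample amount
    return n_s * (as_a - r_m * as_b) / (r_m * ax_b - ax_a)
```

Because the result depends only on a measured ratio and the known spike, the method is insensitive to partial losses after spike-sample equilibration, which underlies the accuracy and separation-tolerance claims in the report.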

  4. Autistic Regression

    Science.gov (United States)

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  5. Defects in dilute nitrides

    International Nuclear Information System (INIS)

    Chen, W.M.; Buyanova, I.A.; Tu, C.W.; Yonezu, H.

    2005-01-01

We provide a brief review of our recent results from optically detected magnetic resonance studies of grown-in non-radiative defects in dilute nitrides, i.e. Ga(In)NAs and Ga(Al,In)NP. Defect complexes involving intrinsic defects such as AsGa antisites and Gai self-interstitials were positively identified. Effects of growth conditions, chemical compositions and post-growth treatments on the formation of the defects are closely examined. These grown-in defects are shown to play an important role in non-radiative carrier recombination and thus in degrading the optical quality of the alloys, harmful to the performance of potential optoelectronic and photonic devices based on these dilute nitrides. (author)

  6. Regression Analysis of the Effect of Bias Voltage on Nano- and Macrotribological Properties of Diamond-Like Carbon Films Deposited by a Filtered Cathodic Vacuum Arc Ion-Plating Method

    Directory of Open Access Journals (Sweden)

    Shojiro Miyake

    2014-01-01

Full Text Available Diamond-like carbon (DLC) films are deposited by the bend filtered cathodic vacuum arc (FCVA) technique with DC and pulsed bias voltage. The effects of varying bias voltage on nanoindentation and nanowear properties were evaluated by atomic force microscopy. DLC films deposited with a DC bias voltage of −50 V exhibited the greatest hardness at approximately 50 GPa, a low modulus of dissipation, a low ratio of elastic modulus to nanoindentation hardness, and high nanowear resistance. Nanoindentation hardness was positively correlated with the Raman peak ratio Id/Ig, whereas wear depth was negatively correlated with this ratio. These nanotribological properties highly depend on the films’ nanostructures. The tribological properties of the FCVA-DLC films were also investigated using a ball-on-disk test. The average friction coefficient of DLC films deposited with DC bias voltage was lower than that of DLC films deposited with pulsed bias voltage. The friction coefficient calculated from the ball-on-disk test was correlated with the nanoindentation hardness in dry conditions. However, under boundary lubrication conditions, the friction coefficient and specific wear rate had little correlation with nanoindentation hardness, and wear behavior seemed to be influenced by other factors such as adhesion strength between the film and substrate.

  7. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  8. SEPARATION PHENOMENA LOGISTIC REGRESSION

    Directory of Open Access Journals (Sweden)

    Ikaro Daniel de Carvalho Barreto

    2014-03-01

Full Text Available This paper applies concepts from maximum likelihood estimation of the binomial logistic regression model to the separation phenomenon. Separation generates bias in the estimation, leads to different interpretations of the estimates under the different statistical tests (Wald, Likelihood Ratio and Score), and yields different estimates under the different iterative methods (Newton-Raphson and Fisher Scoring). We also present an example that demonstrates the direct implications for the validation of the model and of variables, and for estimates of odds ratios and confidence intervals generated from the Wald statistic. Furthermore, we briefly present the Firth correction to circumvent the phenomenon of separation.
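Separation can be seen directly in the likelihood: for perfectly separated data the log-likelihood of the logistic model increases monotonically as the slope grows, so no finite maximum likelihood estimate exists. A minimal sketch on a tiny made-up data set:

```python
import numpy as np

# perfectly separated data: every x < 0 has y = 0, every x > 0 has y = 1
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0, 0, 1, 1])

def loglik(beta: float) -> float:
    # log-likelihood of a no-intercept logistic model with slope beta
    p = 1.0 / (1.0 + np.exp(-beta * x))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

# the likelihood keeps improving as beta grows without bound
lls = [loglik(b) for b in (1.0, 5.0, 10.0)]
```

Newton-Raphson therefore diverges on such data; the Firth penalty mentioned in the abstract restores a finite maximizer.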

  9. Sympathetic bias.

    Science.gov (United States)

    Levy, David M; Peart, Sandra J

    2008-06-01

We wish to deal with investigator bias in a statistical context. We sketch how a textbook solution to the problem of "outliers", which avoids one sort of investigator bias, creates the temptation for another sort. We write down a model of the approbation-seeking statistician who is tempted by sympathy for the client to violate disciplinary standards. We give a simple account of one context in which we might expect investigator bias to flourish. Finally, we offer tentative suggestions for dealing with the problem of investigator bias that follow from our account. As we have given a very sparse and stylized account of investigator bias, we ask what might be done to overcome this limitation.

  10. Simulating publication bias

    DEFF Research Database (Denmark)

    Paldam, Martin

Economic research typically runs J regressions for each selected for publication – it is often selected as the ‘best’ of the regressions. The paper examines five possible meanings of the word ‘best’: SR0 is ideal selection with no bias; SR1 is polishing: selection by statistical fit; SR2 is censoring: selection by the size of estimate; SR3 selects the optimal combination of fit and size; and SR4 selects the first satisficing result. The last four SRs are steered by priors and result in bias. The MST and the FAT-PET have been developed for detection and correction of such bias. The simulations are made by data variation, while the model is the same. It appears that SR0 generates narrow funnels much at odds with observed funnels, while the other four funnels look more realistic. SR1 to SR4 give the mean a substantial bias that confirms the prior causing the bias. The FAT-PET MRA works well...

  11. Reducing Bias and Increasing Precision by Adding Either a Pretest Measure of the Study Outcome or a Nonequivalent Comparison Group to the Basic Regression Discontinuity Design: An Example from Education

    Science.gov (United States)

    Tang, Yang; Cook, Thomas D.; Kisbu-Sakarya, Yasemin

    2015-01-01

Regression discontinuity design (RD) has been widely used to produce reliable causal estimates. Researchers have validated the accuracy of the RD design using within-study comparisons (Cook, Shadish & Wong, 2008; Cook & Steiner, 2010; Shadish et al., 2011). A within-study comparison examines the validity of a quasi-experiment by comparing its…

  12. Controlling attribute effect in linear regression

    KAUST Repository

    Calders, Toon; Karim, Asim A.; Kamiran, Faisal; Ali, Wasif Mohammad; Zhang, Xiangliang

    2013-01-01

    In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE.
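One simple way to see the underlying idea is a within-group residualization sketch (illustrative only, not the paper's propensity-modeling or constrained-optimization method): removing group means of both feature and outcome yields a fit whose predictions carry no mean attribute effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
g = rng.integers(0, 2, n)                     # sensitive attribute (e.g. batch id)
x = rng.normal(size=n) + 1.5 * g              # feature correlated with the attribute
y = 2.0 * x + 3.0 * g + rng.normal(size=n)    # outcome carrying an attribute effect

def demean_by_group(v):
    # subtract each group's own mean
    return v - np.where(g == 1, v[g == 1].mean(), v[g == 0].mean())

xr, yr = demean_by_group(x), demean_by_group(y)
slope = float(xr @ yr / (xr @ xr))   # within-group effect of x on y (~2.0)
pred = slope * xr                    # group means of predictions are equal by construction
```

The paper's contribution goes further, using propensity modeling to keep only the part of the attribute effect that legitimate explanatory variables can justify, and imposing the constraint inside the least-squares problem itself.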

  13. Controlling attribute effect in linear regression

    KAUST Repository

    Calders, Toon

    2013-12-01

    In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE.

  14. Journal bias or author bias?

    Science.gov (United States)

    Harris, Ian

    2016-01-01

    I read with interest the comment by Mark Wilson in the Indian Journal of Medical Ethics regarding bias and conflicts of interest in medical journals. Wilson targets one journal (the New England Journal of Medicine: NEJM) and one particular "scandal" to make his point that journals' decisions on publication are biased by commercial conflicts of interest (CoIs). It is interesting that he chooses the NEJM which, by his own admission, had one of the strictest CoI policies and had published widely on this topic. The feeling is that if the NEJM can be guilty, they can all be guilty.

  15. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a

  16. Multi-element isotope dilution analyses using ICP-MS

    International Nuclear Information System (INIS)

    Volpe, A.M.

    1996-01-01

    Presently, 37 elements ranging from light (Li,B) through transition metals, noble, rare earth and heavy elements, to actinides and transuranics (Pu, Am, Cm) are measured by isotope dilution at Lawrence Livermore National Laboratory. Projects range from geological and hydrological to biological. The research goal is to measure accurately many elements present in diverse matrices at trace (ppb) levels using isotope dilution methods. Major advantages of isotope dilution methods are accuracy, elimination of ion intensity calibration, and quantitation for samples that require chemical separation. Accuracy depends on tracer isotope calibration, tracer-sample isotopic equilibration, and appropriate background, isobaric and mass bias corrections. Propagation of isotope ratio error due to improper tracer isotope addition is a major concern with multi-element analyses when abundances vary widely. 11 refs., 3 figs

  17. Biased Supervision

    OpenAIRE

    Josse Delfgaauw; Michiel Souverijn

    2014-01-01

    When verifiable performance measures are imperfect, organizations often resort to subjective performance pay. This may give supervisors the power to direct employees towards tasks that mainly benefit the supervisor rather than the organization. We cast a principal-supervisor-agent model in a multitask setting, where the supervisor has an intrinsic preference towards specific tasks. We show that subjective performance pay based on evaluation by a biased supervisor ...

  18. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.

  19. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% of regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Dilute chemical decontamination program review

    International Nuclear Information System (INIS)

    Anstine, L.D.; Blomgren, J.C.; Pettit, P.J.

    1980-01-01

    The objective of the Dilute Chemical Decontamination Program is to develop and evaluate a process which utilizes reagents in dilute concentrations for the decontamination of BWR primary systems and for the maintenance of dose rates on the out-of-core surfaces at acceptable levels. A discussion is presented of the process concept, solvent development, advantages and disadvantages of reagent systems, and VNC loop tests. Based on the work completed to date it is concluded that (1) rapid decontamination of BWRs using dilute reagents is feasible; (2) reasonable reagent conditions for rapid chemical decontamination are: 0.01 M oxalic acid + 0.005 M citric acid, pH 3.0, 90 °C, 0.5 to 1.0 ppm dissolved oxygen; (3) control of dissolved oxygen concentration is important, since high levels suppress the rate of decontamination and low levels allow precipitation of ferrous oxalate. 4 refs

  1. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  2. Forecaster Behaviour and Bias in Macroeconomic Forecasts

    OpenAIRE

    Roy Batchelor

    2007-01-01

    This paper documents the presence of systematic bias in the real GDP and inflation forecasts of private sector forecasters in the G7 economies in the years 1990–2005. The data come from the monthly Consensus Economics forecasting service, and bias is measured and tested for significance using parametric fixed effect panel regressions and nonparametric tests on accuracy ranks. We examine patterns across countries and forecasters to establish whether the bias reflects the inefficient use of i...

  3. Primary system boron dilution analysis

    International Nuclear Information System (INIS)

    Crump, R.J.; Naretto, C.J.; Borgen, R.A.; Rockhold, H.C.

    1978-01-01

    The results are presented for an analysis conducted to determine the potential paths through which nonborated water or water with insufficient boron concentration might enter the LOFT primary coolant piping system or reactor vessel to cause dilution of the borated primary coolant water. No attempt was made in the course of this analysis to identify possible design modifications or to suggest changes in administrative procedures or controls.

  4. Cryogen-free dilution refrigerators

    International Nuclear Information System (INIS)

    Uhlig, K

    2012-01-01

    We review briefly our first cryogen-free dilution refrigerator (CF-DR) which was precooled by a GM cryocooler. We then show how today's dry DRs with pulse tube precooling have developed. A few examples of commercial DRs are explained and noteworthy features pointed out. Thereby we describe the general advantages of cryogen-free DRs, but also show where improvements are still desirable. At present, our dry DR has a base temperature of 10 mK and a cooling capacity of 700 μW at a mixing chamber temperature of 100 mK. In our cryostat, in most recent work, an additional refrigeration loop was added to the dilution circuit. This 4He circuit has a lowest temperature of about 1 K and a refrigeration capacity of up to 100 mW at temperatures slightly above 1 K; the dilution circuit and the 4He circuit can be run separately or together. The purpose of this additional loop is to increase the cooling capacity for experiments where the cooling power of the still of the DR is not sufficient to cool cold amplifiers and cables, e.g. in studies on superconducting quantum circuits or astrophysical applications.

  5. Cryogen-free dilution refrigerators

    Science.gov (United States)

    Uhlig, K.

    2012-12-01

    We review briefly our first cryogen-free dilution refrigerator (CF-DR) which was precooled by a GM cryocooler. We then show how today's dry DRs with pulse tube precooling have developed. A few examples of commercial DRs are explained and noteworthy features pointed out. Thereby we describe the general advantages of cryogen-free DRs, but also show where improvements are still desirable. At present, our dry DR has a base temperature of 10 mK and a cooling capacity of 700 μW at a mixing chamber temperature of 100 mK. In our cryostat, in most recent work, an additional refrigeration loop was added to the dilution circuit. This 4He circuit has a lowest temperature of about 1 K and a refrigeration capacity of up to 100 mW at temperatures slightly above 1 K; the dilution circuit and the 4He circuit can be run separately or together. The purpose of this additional loop is to increase the cooling capacity for experiments where the cooling power of the still of the DR is not sufficient to cool cold amplifiers and cables, e.g. in studies on superconducting quantum circuits or astrophysical applications.

  6. Plutonium determination by isotope dilution

    International Nuclear Information System (INIS)

    Lucas, M.

    1980-01-01

    The principle is to add to a known amount of the analysed solution a known amount of a spike solution consisting of plutonium 242. The isotopic composition of the resulting mixture is then determined by surface ionization mass spectrometry, and the plutonium concentration in the solution is deduced from this measurement. For irradiated fuel neutronic studies or for fissile materials balance measurements, requiring knowledge of the ratio U/Pu or of the concentrations of both uranium and plutonium, it is better to use the double spike isotope dilution method, with a spike solution of known 233U/242Pu ratio. Using this method, the ratio of uranium to plutonium concentration in the irradiated fuel solution can be determined without any accurate measurement of the mixed amounts of sample and spike solutions. For fissile material balance measurements, the uranium concentration is determined by using single isotope dilution, and the plutonium concentration is deduced from the ratio Pu/U and the U concentration. The main advantages of isotope dilution are its selectivity, accuracy and very high sensitivity. The recent improvements made to surface ionization mass spectrometers have considerably increased the precision of the measurements; a relative precision of about 0.2% to 0.3% is currently obtained, but it could be reduced to 0.1% in the future with careful control of the experimental procedures. The detection limit is around 0.1 ppb.
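The single-spike calculation described above reduces to one algebraic identity: solving the blend's isotope-ratio equation for the amount of analyte. The sketch below uses invented abundances and a made-up function name; it is not code from the laboratory procedure:

```python
def isotope_dilution_moles(n_spike, r_mix, ab_ref_sample, ab_spk_sample,
                           ab_ref_spike, ab_spk_spike):
    """Moles of analyte from the measured mixture isotope ratio.

    r_mix  -- measured (reference isotope)/(spike isotope) ratio in the blend
    ab_ref_* / ab_spk_* -- fractional abundances of the reference isotope
    (e.g. 239Pu) and the spike isotope (e.g. 242Pu) in sample and spike.
    Derived by solving r_mix = (Nx*Ax + Ns*As) / (Nx*Bx + Ns*Bs) for Nx.
    """
    return n_spike * (ab_ref_spike - r_mix * ab_spk_spike) / \
           (r_mix * ab_spk_sample - ab_ref_sample)

# Example with invented numbers: 1.0 unit of spike (2% ref / 98% spike
# isotope) added to a sample (95% / 5%) truly containing 2.0 units.
r_mix = (2.0 * 0.95 + 1.0 * 0.02) / (2.0 * 0.05 + 1.0 * 0.98)
n_sample = isotope_dilution_moles(1.0, r_mix, 0.95, 0.05, 0.02, 0.98)
# n_sample recovers 2.0, the simulated amount
```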

  7. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...
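One common way to compute the estimator, sketched here with an identity error-covariance weighting (canonical reduced rank regression weights this projection by the error covariance), is an OLS fit followed by an SVD projection of the fitted values:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Identity-weighted reduced rank regression (an illustrative variant):
    fit OLS, then project the coefficient matrix onto the leading
    right-singular directions of the fitted values."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)        # unrestricted fit
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                                       # top response directions
    return B_ols @ V @ V.T                                # rank-constrained coefficients

# Noise-free check: a true rank-1 coefficient matrix is recovered exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
C = np.outer(rng.normal(size=5), rng.normal(size=3))      # rank-1 truth
Y = X @ C
B1 = reduced_rank_regression(X, Y, rank=1)
```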

  8. Bias against research on gender bias.

    Science.gov (United States)

    Cislak, Aleksandra; Formanowicz, Magdalena; Saguy, Tamar

    2018-01-01

    The bias against women in academia is a documented phenomenon that has had detrimental consequences, not only for women, but also for the quality of science. First, gender bias in academia affects female scientists, resulting in their underrepresentation in academic institutions, particularly in higher ranks. The second type of gender bias in science relates to research findings that are based only on male participants, which produces biased knowledge. Here, we identify a third potentially powerful source of gender bias in academia: the bias against research on gender bias. In a bibliometric investigation covering a broad range of social sciences, we analyzed published articles on gender bias and race bias and established that articles on gender bias are funded less often and published in journals with a lower Impact Factor than articles on comparable instances of social discrimination. This result suggests the possibility of an underappreciation of the phenomenon of gender bias and related research within the academic community. Addressing this meta-bias is crucial for the further examination of gender inequality, which severely affects many women across the world.

  9. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    Science.gov (United States)

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I2 statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it IGX2. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of IGX2), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We
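As a rough sketch, an I2-type statistic can be computed from the SNP-exposure estimates and their standard errors exactly as in meta-analysis: Cochran's Q minus its degrees of freedom, relative to Q. The function name and this generic form are illustrative and may differ in detail from the paper's IGX2 definition:

```python
import numpy as np

def i_squared(gamma_hat, se_gamma):
    """Meta-analysis-style I^2 from SNP-exposure estimates and their SEs.

    Values near 1 suggest NOME holds approximately for MR-Egger;
    1 - I^2 approximates the expected relative dilution of the
    MR-Egger causal estimate (illustrative, unweighted form).
    """
    g = np.asarray(gamma_hat, dtype=float)
    w = 1.0 / np.asarray(se_gamma, dtype=float) ** 2   # inverse-variance weights
    mu = np.sum(w * g) / np.sum(w)                     # weighted mean effect
    q = np.sum(w * (g - mu) ** 2)                      # Cochran's Q
    k = g.size
    if q <= 0.0:
        return 0.0
    return max(0.0, (q - (k - 1)) / q)

# Example: tightly estimated but heterogeneous SNP-exposure effects
val = i_squared([0.1, 0.2, 0.3], [0.01, 0.01, 0.01])   # ≈ 0.99
```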

  10. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these
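The attenuation (regression dilution) mechanism mentioned here, and its correction using replicate measurements of the error-prone variable, can be sketched with simulated data; all numbers are illustrative:

```python
import numpy as np

# Sketch of regression dilution and its correction from a reliability
# study with replicate measurements (invented data, not the paper's).
rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(0.0, 1.0, n)             # true regressor
y = 0.5 * x + rng.normal(0.0, 1.0, n)   # outcome; true slope 0.5

# Two independent error-prone replicates (error variance = signal variance)
w1 = x + rng.normal(0.0, 1.0, n)
w2 = x + rng.normal(0.0, 1.0, n)

naive = np.cov(w1, y)[0, 1] / np.var(w1)    # attenuated slope, near 0.25
lam = np.cov(w1, w2)[0, 1] / np.var(w1)     # reliability ratio, near 0.5
corrected = naive / lam                     # near 0.5, the true slope
```

The replicate covariance estimates the signal variance, because the two measurement errors are independent; dividing the naive slope by this reliability ratio undoes the attenuation.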

  11. A miniaturized plastic dilution refrigerator

    International Nuclear Information System (INIS)

    Bindilatti, V.; Oliveira, N.F.Jr.; Martin, R.V.; Frossati, G.

    1996-01-01

    We have built and tested a miniaturized dilution refrigerator, completely contained (still, heat exchanger and mixing chamber) inside a plastic (PVC) tube of 10 mm diameter and 170 mm length. With a 25 cm² CuNi heat exchanger, it reached temperatures below 50 mK, for circulation rates below 70 μmol/s. The cooling power at 100 mK and 63 μmol/s was 45 μW. The experimental space could accommodate samples up to 6 mm in diameter. (author)

  12. Competition explains limited attention and perceptual resources: implications for perceptual load and dilution theories

    Directory of Open Access Journals (Sweden)

    Paige E. Scalf

    2013-05-01

    Full Text Available Both perceptual load theory and dilution theory purport to explain when and why task-irrelevant information, or so-called distractors are processed. Central to both explanations is the notion of limited resources, although the theories differ in the precise way in which those limitations affect distractor processing. We have recently proposed a neurally plausible explanation of limited resources in which neural competition among stimuli hinders their representation in the brain. This view of limited capacity can also explain distractor processing, whereby the competitive interactions and bias imposed to resolve the competition determine the extent to which a distractor is processed. This idea is compatible with aspects of both perceptual load and dilution models of distractor processing, but also serves to highlight their differences. Here we review the evidence in favor of a biased competition view of limited resources and relate these ideas to both classic perceptual load theory and dilution theory.

  13. Competition explains limited attention and perceptual resources: implications for perceptual load and dilution theories.

    Science.gov (United States)

    Scalf, Paige E; Torralbo, Ana; Tapia, Evelina; Beck, Diane M

    2013-01-01

    Both perceptual load theory and dilution theory purport to explain when and why task-irrelevant information, or so-called distractors are processed. Central to both explanations is the notion of limited resources, although the theories differ in the precise way in which those limitations affect distractor processing. We have recently proposed a neurally plausible explanation of limited resources in which neural competition among stimuli hinders their representation in the brain. This view of limited capacity can also explain distractor processing, whereby the competitive interactions and bias imposed to resolve the competition determine the extent to which a distractor is processed. This idea is compatible with aspects of both perceptual load and dilution models of distractor processing, but also serves to highlight their differences. Here we review the evidence in favor of a biased competition view of limited resources and relate these ideas to both classic perceptual load theory and dilution theory.

  14. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  15. Buffer erosion in dilute groundwater

    International Nuclear Information System (INIS)

    Schatz, T.; Kanerva, N.; Martikainen, J.; Sane, P.; Olin, M.; Seppaelae, A.; Koskinen, K.

    2013-08-01

    One scenario of interest for repository safety assessment involves the loss of bentonite buffer material in contact with dilute groundwater flowing through a transmissive fracture interface. In order to examine the extrusion/erosion behavior of bentonite buffer material under such circumstances, a series of experiments were performed in a flow-through, 1 mm aperture, artificial fracture system. These experiments covered a range of solution chemistry (salt concentration and composition), material composition (sodium montmorillonite and admixtures with calcium montmorillonite), and flow velocity conditions. No erosion was observed for sodium montmorillonite against solution compositions from 0.5 g/L to 10 g/L NaCl. No erosion was observed for 50/50 calcium/sodium montmorillonite against 0.5 g/L NaCl. Erosion was observed for both sodium montmorillonite and 50/50 calcium/sodium montmorillonite against solution compositions ≤ 0.25 g/L NaCl. The calculated erosion rates for the tests with the highest levels of measured erosion, i.e., the tests run under the most dilute conditions (ionic strength (IS) < ∼1 mM), were well-correlated to flow velocity, whereas the calculated erosion rates for the tests with lower levels of measured erosion, i.e., the tests run under somewhat less dilute conditions (∼1 mM < IS < ∼4 mM), were not similarly correlated indicating that material and solution composition can significantly affect erosion rates. In every experiment, both erosive and non-erosive, emplaced buffer material extruded into the fracture and was observed to be impermeable to water flowing in the fracture effectively forming an extended diffusive barrier around the intersecting fracture/buffer interface. Additionally, a model which was developed previously to predict the rate of erosion of bentonite buffer material in low ionic strength water in rock fracture environments was applied to three different cases: sodium montmorillonite expansion in a vertical tube, a

  16. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based...
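A toy illustration of quantile-specific slopes: under heteroscedastic noise the 10th- and 90th-percentile slopes differ even though the mean slope is constant. A brute-force pinball-loss grid search stands in for a real quantile regression solver; the setup and names are invented:

```python
import numpy as np

def pinball(u, tau):
    """Check (pinball) loss of residuals u at quantile level tau."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def qreg_slope(x, y, tau, grid):
    """Through-origin quantile regression slope by brute-force grid search."""
    losses = [pinball(y - b * x, tau).sum() for b in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(2)
n = 20_000
x = rng.uniform(0.5, 2.0, n)
y = x + x * rng.normal(0.0, 0.5, n)     # noise spread grows with x

grid = np.linspace(0.0, 2.0, 401)
b_low = qreg_slope(x, y, 0.1, grid)     # near 1 + 0.5*z_0.10, about 0.36
b_high = qreg_slope(x, y, 0.9, grid)    # near 1 + 0.5*z_0.90, about 1.64
```

The conditional quantile of y given x here is x*(1 + 0.5*z_tau), so upper quantiles have steeper slopes than lower ones, which is exactly the kind of pattern quantile regression detects and mean regression cannot.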

  17. Desynchronization in diluted neural networks

    International Nuclear Information System (INIS)

    Zillmer, Ruediger; Livi, Roberto; Politi, Antonio; Torcini, Alessandro

    2006-01-01

    The dynamical behavior of a weakly diluted fully inhibitory network of pulse-coupled spiking neurons is investigated. Upon increasing the coupling strength, a transition from regular to stochasticlike regime is observed. In the weak-coupling phase, a periodic dynamics is rapidly approached, with all neurons firing with the same rate and mutually phase locked. The strong-coupling phase is characterized by an irregular pattern, even though the maximum Lyapunov exponent is negative. The paradox is solved by drawing an analogy with the phenomenon of 'stable chaos', i.e., by observing that the stochasticlike behavior is 'limited' to an exponentially long (with the system size) transient. Remarkably, the transient dynamics turns out to be stationary

  18. A compact rotating dilution refrigerator

    Science.gov (United States)

    Fear, M. J.; Walmsley, P. M.; Chorlton, D. A.; Zmeev, D. E.; Gillott, S. J.; Sellers, M. C.; Richardson, P. P.; Agrawal, H.; Batey, G.; Golov, A. I.

    2013-10-01

    We describe the design and performance of a new rotating dilution refrigerator that will primarily be used for investigating the dynamics of quantized vortices in superfluid 4He. All equipment required to operate the refrigerator and perform experimental measurements is mounted on two synchronously driven, but mechanically decoupled, rotating carousels. The design allows for relative simplicity of operation and maintenance and occupies a minimal amount of space in the laboratory. Only two connections between the laboratory and rotating frames are required for the transmission of electrical power and helium gas recovery. Measurements on the stability of rotation show that rotation is smooth to around 10⁻³ rad s⁻¹ up to angular velocities in excess of 2.5 rad s⁻¹. The behavior of a high-Q mechanical resonator during rapid changes in rotation has also been investigated.

  19. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications and, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Combination of biased forecasts: Bias correction or bias based weights?

    OpenAIRE

    Wenzel, Thomas

    1999-01-01

    Most of the literature on combination of forecasts deals with the assumption of unbiased individual forecasts. Here, we consider the case of biased forecasts and discuss two different combination techniques resulting in an unbiased forecast. On the one hand we correct the individual forecasts, and on the other we calculate bias based weights. A simulation study gives some insight in the situations where we should use the different methods.

  1. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical predictive equation by suggesting a data generating process, where returns are generated as linear functions of a lagged latent I(0) risk process. The observed predictor is a function of this latent I(0) process, but it is corrupted by a fractionally integrated noise. Such a process may arise due to aggregation or unexpected level shifts. In this setup, the practitioner estimates a misspecified, unbalanced, and endogenous predictive regression. We show that the OLS estimate of this regression is inconsistent, but standard inference is possible. To obtain a consistent slope estimate, we then suggest...

  2. Understanding logistic regression analysis

    OpenAIRE

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using ex...
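A minimal sketch of the procedure described here, with simulated data and invented variable names ('smoker', 'age'); the fitting routine is a generic Newton-Raphson (IRLS), not code from the article:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))             # fitted probabilities
        grad = X.T @ (y - p)                            # score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X       # information matrix
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulated data with an invented true effect: log-odds 0.7 for "smoker"
rng = np.random.default_rng(3)
n = 50_000
smoker = rng.integers(0, 2, n).astype(float)
age = rng.normal(0.0, 1.0, n)
logit = -1.0 + 0.7 * smoker + 0.3 * age
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), smoker, age])          # intercept + covariates
beta = logistic_fit(X, y)
or_smoker = np.exp(beta[1])   # adjusted odds ratio, near exp(0.7) ≈ 2.0
```

Exponentiating a coefficient gives the odds ratio for that variable with the other covariates held fixed, which is the confounding-adjusted quantity the abstract describes.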

  3. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using Xlisp-Stat language called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  4. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest. Nonlinear Regression Analysis and its Applications Douglas M. Bates and Donald G. Watts "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models...highly recommend[ed]...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  5. Adaptable history biases in human perceptual decisions.

    Science.gov (United States)

    Abrahamyan, Arman; Silva, Laura Luz; Dakin, Steven C; Carandini, Matteo; Gardner, Justin L

    2016-06-21

    When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject's default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.

  6. Benefits of being biased!

    Indian Academy of Sciences (India)

    Administrator

    Journal of Genetics, Vol. 83, No. 2, August 2004. Keywords: codon bias; alcohol dehydrogenase; Darwinian ... RESEARCH COMMENTARY. Benefits of being biased! SUTIRTH DEY*, Evolutionary Biology Laboratory, Evolutionary & Organismal Biology Unit, Jawaharlal Nehru Centre for Advanced Scientific Research.

  7. Dynamics of dilute polymer solutions

    International Nuclear Information System (INIS)

    Nicholson, L.K.; Higgins, J.S.

    1980-01-01

    Neutrons scattered by nuclei undergoing slow motion, e.g. the internal motion within polymer chains, lose or gain very small amounts of energy. It is therefore the quasi-elastic region of the neutron scattering spectrum which is of interest, and in particular the time correlation function (or intermediate scattering law S(Q,t)) which is ideally required to define the motion. The neutron spin echo spectrometer (IN11) at the ILL facilitates the measurement of very small energy changes (down to 10 neV) on scattering from a sample, by changing and keeping track of neutron beam polarization non-parallel to the magnetic guide-field (1). The resultant neutron beam polarization, when normalized against a standard (totally elastic) scatterer, is directly proportional to the cosine Fourier transform of the scattering law S(Q,ω), which is to say the time correlation function is measured directly. Dilute solutions of deuterated polystyrene (PSD) and deuterated polytetrahydrofuran (PTDF) in carbon disulphide, and of their hydrogenous counterparts (PSH and PTHF respectively) in deuterated benzene, were investigated in the Q range 0.027 Å⁻¹ to … Å⁻¹, at 30 °C. (orig./FKS)

  8. Storm Sewage Dilution in Smaller Streams

    DEFF Research Database (Denmark)

    Larsen, Torben; Vestergaard, Kristian

    1987-01-01

    A numerical model has been used to show how dilution in smaller streams can be affected by unsteady hydraulic conditions caused by a storm sewage overflow.

  9. Cost effectiveness of dilute chemical decontamination

    International Nuclear Information System (INIS)

    LeSurf, J.E.; Weyman, G.D.

    The basic principles of dilute chemical decontamination are described, as well as the method of application. Methods of computing savings in radiation dose and costs are presented, with results from actual experience and illustrative examples. It is concluded that dilute chemical decontamination is beneficial in many cases. It reduces radiation exposure of workers, saves money, and simplifies maintenance work.

  10. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
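    A minimal sketch of the beta regression building blocks (not the boosting algorithm itself): the mean of the bounded response is tied to covariates through a logit link, and the response follows a beta density reparametrized by mean mu and precision phi. All coefficient values below are illustrative assumptions.

```python
import math
import numpy as np

def inv_logit(eta):
    return 1.0 / (1.0 + np.exp(-eta))

# Mean/precision parametrization used in beta regression:
# y ~ Beta(mu*phi, (1-mu)*phi), so E[y] = mu and phi controls dispersion.
b0, b1, phi = -0.5, 1.2, 8.0      # illustrative coefficients and precision
x = 0.3
mu = float(inv_logit(b0 + b1 * x))  # mean response in (0,1) for this covariate
a, b = mu * phi, (1.0 - mu) * phi   # classical beta shape parameters

# Numerical sanity check: the density integrates to 1 and has mean mu.
ys = np.linspace(1e-6, 1.0 - 1e-6, 200001)
log_norm = math.lgamma(phi) - math.lgamma(a) - math.lgamma(b)
pdf = np.exp(log_norm + (a - 1.0) * np.log(ys) + (b - 1.0) * np.log(1.0 - ys))

dy = ys[1] - ys[0]
total = float(np.sum(pdf) * dy)      # ~1
mean = float(np.sum(ys * pdf) * dy)  # ~mu
```

    Boosting then amounts to repeatedly taking small steps that improve this likelihood, with the mean (and optionally phi) modeled through covariates.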

  11. Understanding logistic regression analysis.

    Science.gov (United States)

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.
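    The odds-ratio interpretation described above can be shown with a small simulation; the variable names and true effect sizes are invented for the example, and fitting uses a plain Newton-Raphson routine rather than any particular package.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, iters=30):
    """Newton-Raphson maximum-likelihood fit of a logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        H = X.T @ (X * (p * (1.0 - p))[:, None])
        w += np.linalg.solve(H, X.T @ (y - p))
    return w

# Two explanatory variables: a binary exposure and a standardized covariate.
n = 50000
exposure = rng.integers(0, 2, n).astype(float)
covar = rng.standard_normal(n)
X = np.column_stack([np.ones(n), exposure, covar])
beta_true = np.array([-1.0, 0.7, -0.4])           # log-odds scale
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.random(n) < p).astype(float)

beta_hat = fit_logistic(X, y)
odds_ratios = np.exp(beta_hat)   # OR for exposure, adjusted for the covariate
```

    Exponentiating each coefficient gives the adjusted odds ratio for that variable, which is exactly how analyzing all variables together avoids confounding.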

  12. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." --International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus…

  13. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  14. Spatial correlation in Bayesian logistic regression with misclassification

    DEFF Research Database (Denmark)

    Bihrmann, Kristine; Toft, Nils; Nielsen, Søren Saxmose

    2014-01-01

    Standard logistic regression assumes that the outcome is measured perfectly. In practice, this is often not the case, which could lead to biased estimates if not accounted for. This study presents Bayesian logistic regression with adjustment for misclassification of the outcome applied to data...

  15. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary, and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage; i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
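    The coverage simulation the commentary describes can be sketched as follows. The data-generating values are arbitrary, and the 1.96 normal quantile stands in for the exact t quantile; skewed (exponential) errors deliberately violate the normality assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def slope_ci(x, y, z=1.96):
    """OLS slope with a normal-approximation 95% confidence interval."""
    n = len(x)
    xc = x - x.mean()
    slope = np.dot(xc, y) / np.dot(xc, xc)
    intercept = y.mean() - slope * x.mean()
    resid = y - intercept - slope * x
    se = np.sqrt(np.dot(resid, resid) / (n - 2) / np.dot(xc, xc))
    return slope - z * se, slope + z * se

# Right-skewed errors violate normality, yet slope-CI coverage stays
# close to the nominal 95% at moderate sample sizes.
true_slope, n, reps = 2.0, 200, 2000
hits = 0
for _ in range(reps):
    x = rng.standard_normal(n)
    err = rng.exponential(1.0, n) - 1.0   # mean-zero but skewed
    y = 1.0 + true_slope * x + err
    lo, hi = slope_ci(x, y)
    hits += (lo <= true_slope <= hi)
coverage = hits / reps                    # fraction of CIs covering the truth
```

    Transforming y (e.g. taking logs) to "fix" the skewness would change the estimand and bias the slope, which is the commentary's central point.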

  16. Understanding poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
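    A sketch of a standard Poisson regression fit, together with a check of the overdispersion issue the article raises. The gamma-Poisson (negative binomial) mixture used to generate overdispersed counts and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_poisson(X, y, iters=40):
    """Newton-Raphson fit of a log-linear Poisson regression."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ w)
        H = X.T @ (X * mu[:, None])
        w += np.linalg.solve(H, X.T @ (y - mu))
    return w

# Overdispersed counts: Poisson rates multiplied by gamma noise.
n = 5000
x = rng.uniform(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
mu_true = np.exp(0.5 + 1.0 * x)
shape = 2.0                                     # smaller shape => more overdispersion
y = rng.poisson(mu_true * rng.gamma(shape, 1.0 / shape, n))

w_hat = fit_poisson(X, y)
mu_hat = np.exp(X @ w_hat)

# Pearson dispersion statistic: ~1 for pure Poisson data, >1 here.
dispersion = np.sum((y - mu_hat) ** 2 / mu_hat) / (n - X.shape[1])
```

    A dispersion estimate well above 1 is the usual signal to add an overdispersion parameter (quasi-Poisson) or to switch to negative binomial regression, as the article recommends.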

  17. Theoretical modeling of diluted antiferromagnetic systems

    International Nuclear Information System (INIS)

    Pozo, J; Elgueta, R; Acevedo, R

    2000-01-01

    Some magnetic properties of a Diluted Antiferromagnetic System (DAFS) are studied. The two-sublattice model of antiferromagnetism is used and a Heisenberg-type Hamiltonian is proposed, where the spin operators are expressed in terms of boson operators within the spin-wave approximation. The behavior of the diluted system's ground state depends basically on the competition between the anisotropy field and the Weiss molecular field. The approach used allows the diluted system to be treated for strong anisotropies as well as when these are very weak

  18. A comparison of the performances of an artificial neural network and a regression model for GFR estimation.

    Science.gov (United States)

    Liu, Xun; Li, Ning-shan; Lv, Lin-sheng; Huang, Jian-hua; Tang, Hua; Chen, Jin-xia; Ma, Hui-juan; Wu, Xiao-ming; Lou, Tan-qi

    2013-12-01

    Accurate estimation of glomerular filtration rate (GFR) is important in clinical practice. Current models derived from regression are limited by the imprecision of GFR estimates. We hypothesized that an artificial neural network (ANN) might improve the precision of GFR estimates. A study of diagnostic test accuracy. 1,230 patients with chronic kidney disease were enrolled, including the development cohort (n=581), internal validation cohort (n=278), and external validation cohort (n=371). Estimated GFR (eGFR) using a new ANN model and a new regression model using age, sex, and standardized serum creatinine level derived in the development and internal validation cohorts, and the CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) 2009 creatinine equation. Measured GFR (mGFR). GFR was measured using a diethylenetriaminepentaacetic acid renal dynamic imaging method. Serum creatinine was measured with an enzymatic method traceable to isotope-dilution mass spectrometry. In the external validation cohort, mean mGFR was 49±27 (SD) mL/min/1.73 m² and biases (median difference between mGFR and eGFR) for the CKD-EPI, new regression, and new ANN models were 0.4, 1.5, and -0.5 mL/min/1.73 m², respectively (P…). The percentages of estimates deviating less than 30% from mGFR (P30) were 50.9%, 77.4%, and 78.7%, respectively (P…). Limitations include a possible source of systematic bias in comparisons of new models to CKD-EPI, and both the derivation and validation cohorts consisted of a group of patients who were referred to the same institution. An ANN model using 3 variables did not perform better than a new regression model. Whether ANN can improve GFR estimation using more variables requires further investigation. Copyright © 2013 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  19. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
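    The complex-number representation of vector regression can be sketched directly with complex least squares; the coefficients and data below are invented for illustration. A complex slope coefficient acts on an independent vector variable as a rotation plus a scaling, which is what makes the coefficients themselves vector-valued.

```python
import numpy as np

rng = np.random.default_rng(4)

# Represent 2-D vectors as complex numbers: v = vx + i*vy.
n = 300
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # independent vector variable
beta_true = np.array([1.0 + 2.0j, 0.5 - 1.0j])             # vector-valued coefficients
noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

X = np.column_stack([np.ones(n), w])
z = X @ beta_true + noise                                   # dependent vector variable

# Complex least squares solves for the vector coefficients in one shot.
beta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)

# The slope scales by |beta| and rotates by arg(beta).
rotation_deg = float(np.degrees(np.angle(beta_hat[1])))
```

    The equivalent real-valued formulation (the "isomorphism" the abstract mentions) would stack the real and imaginary parts into a 2x2-block design matrix; the complex form is simply more compact.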

  20. Multicollinearity and Regression Analysis

    Science.gov (United States)

    Daoud, Jamal I.

    2017-12-01

    In regression analysis it is expected to have correlation between the response and predictor(s), but having correlation among predictors is undesirable. The number of predictors included in the regression model depends on many factors, among which are historical data, experience, etc. In the end, the selection of the most important predictors is a subjective choice made by the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; if this happens, the standard error of the coefficients will increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may not be found to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its reasons and its consequences for the reliability of the regression model.
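    The standard diagnostic for the problem described above is the variance inflation factor, VIF = 1/(1-R²), where R² comes from regressing each predictor on the others. A small illustration with simulated predictors (the data are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(5)

def r_squared(X, y):
    """R^2 of an OLS regression of y on X (intercept included)."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def vif(X, j):
    """Variance inflation factor of predictor j given the other predictors."""
    others = np.delete(X, j, axis=1)
    return 1.0 / (1.0 - r_squared(others, X[:, j]))

n = 2000
x1 = rng.standard_normal(n)
x2 = x1 + 0.1 * rng.standard_normal(n)   # nearly collinear with x1
x3 = rng.standard_normal(n)              # independent of the others
X = np.column_stack([x1, x2, x3])

vifs = [vif(X, j) for j in range(3)]     # expect: large, large, ~1
```

    A common rule of thumb flags VIF above 5 or 10; here the two collinear predictors inflate each other's coefficient variance roughly a hundredfold while the independent predictor is unaffected.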

  1. CPI Bias in Korea

    Directory of Open Access Journals (Sweden)

    Chul Chung

    2007-12-01

    Full Text Available We estimate the CPI bias in Korea by employing the approach of Engel’s Law as suggested by Hamilton (2001). This paper is the first attempt to estimate the bias using Korean panel data, the Korean Labor and Income Panel Study (KLIPS). Following Hamilton’s model with nonlinear specification correction, our estimation result shows that the cumulative CPI bias over the sample period (2000-2005) was 0.7 percent annually. This CPI bias implies that about 21 percent of the inflation rate during the period can be attributed to the bias. In light of purchasing power parity, we provide an interpretation of the estimated bias.

  2. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear- and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice but whether it has theoretical justification is still an open question.

  3. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by-product … functionals. The software presented here is implemented in the riskRegression package.

  4. Gluconeogenesis from labeled carbon: estimating isotope dilution

    International Nuclear Information System (INIS)

    Kelleher, J.K.

    1986-01-01

    To estimate the rate of gluconeogenesis from steady-state incorporation of labeled 3-carbon precursors into glucose, isotope dilution must be considered so that the rate of labeling of glucose can be quantitatively converted to the rate of gluconeogenesis. An expression for the value of this isotope dilution can be derived using mathematical techniques and a model of the tricarboxylic acid (TCA) cycle. The present investigation employs a more complex model than that used in previous studies. This model includes the following pathways that may affect the correction for isotope dilution: 1) flux of 3-carbon precursor to the oxaloacetate pool via acetyl-CoA and the TCA cycle; 2) flux of 4- or 5-carbon compounds into the TCA cycle; 3) reversible flux between oxaloacetate (OAA) and pyruvate and between OAA and fumarate; 4) incomplete equilibrium between OAA pools; and 5) isotope dilution of 3-carbon tracers between the experimentally measured pool and the precursor for the TCA-cycle OAA pool. Experimental tests are outlined which investigators can use to determine whether these pathways are significant in a specific steady-state system. The study indicated that flux through these five pathways can significantly affect the correction for isotope dilution. To correct for the effects of these pathways an alternative method for calculating isotope dilution is proposed using citrate to relate the specific activities of acetyl-CoA and OAA

  5. Multiple linear regression analysis

    Science.gov (United States)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
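    The FORTRAN routine itself is not reproduced in this record; a minimal forward-selection sketch in Python conveys the idea: at each step, add the candidate predictor whose coefficient has the largest |t| statistic, stopping when no candidate passes a cutoff. The fixed |t| > 3 threshold below is an assumption standing in for the user-specified confidence level.

```python
import numpy as np

rng = np.random.default_rng(6)

def t_stats(X, y):
    """OLS coefficient t statistics (intercept as column 0)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    dof = len(y) - X.shape[1]
    sigma2 = (resid @ resid) / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return coef / np.sqrt(np.diag(cov))

def forward_select(Z, y, t_cut=3.0):
    """Greedy forward selection over candidate predictor columns of Z."""
    selected, remaining = [], list(range(Z.shape[1]))
    while remaining:
        best, best_t = None, t_cut
        for j in remaining:
            cols = selected + [j]
            X = np.column_stack([np.ones(len(y))] + [Z[:, c] for c in cols])
            tj = abs(t_stats(X, y)[-1])      # t of the newly added column
            if tj > best_t:
                best, best_t = j, tj
        if best is None:
            break                            # no candidate is significant
        selected.append(best)
        remaining.remove(best)
    return selected

# Five candidate predictors; only columns 0 and 2 truly matter.
n = 500
Z = rng.standard_normal((n, 5))
y = 2.0 * Z[:, 0] - 1.5 * Z[:, 2] + rng.standard_normal(n)
chosen = forward_select(Z, y)
```

    The final model then contains only the statistically significant coefficients, mirroring the program's behavior of returning a minimal set of variables.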

  6. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuissance parameters, the Jacobian transformation is an

  7. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject. * Expanded coverage of diagnostics and methods of model fitting. * Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models. * More than 200 problems throughout the book plus outline solutions for the exercises. * This revision has been extensively class-tested.

  8. Nonlinear Regression with R

    CERN Document Server

    Ritz, Christian; Parmigiani, Giovanni

    2009-01-01

    R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.

  9. Bayesian ARTMAP for regression.

    Science.gov (United States)

    Sasu, L M; Andonie, R

    2013-10-01

    Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA was used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We … with the proposed explicit noise-model extension.

  11. and Multinomial Logistic Regression

    African Journals Online (AJOL)

    This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for classifying students based on their academic performance. The predictive accuracy for each model was measured by their average Classification Correct Rate (CCR).

  12. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  13. Accounting for measurement error in log regression models with applications to accelerated testing.

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    Full Text Available In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  14. Accounting for measurement error in log regression models with applications to accelerated testing.

    Science.gov (United States)

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  15. Sampler bias -- Phase 1

    International Nuclear Information System (INIS)

    Blanchard, R.J.

    1995-01-01

    This document presents Phase 1 determinations on sampler-induced bias for four sampler types used in tank characterization. Each sampler, grab sampler or bottle-on-a-string, auger sampler, sludge sampler and universal sampler, is briefly discussed and their physical limits noted. Phase 2 of this document will define additional testing and analysis to further define sampler bias.

  16. Photovoltaic Bias Generator

    Science.gov (United States)

    2018-02-01

    Fig. 3: Interior view of the photovoltaic bias generator showing wrapped-wire side of circuit board. Fig. 4: Interior view of the photovoltaic bias generator showing component side of circuit board.

  17. Biases in categorization

    NARCIS (Netherlands)

    Das-Smaal, E.A.

    1990-01-01

    On what grounds can we conclude that an act of categorization is biased? In this chapter, it is contended that in the absence of objective norms of what categories actually are, biases in categorization can only be specified in relation to theoretical understandings of categorization. Therefore, the

  18. Ridge Regression Signal Processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
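    The recursive ridge estimator derived in the thesis is specialized to the EKF setting and is not reproduced here. The sketch below shows only the basic batch, closed-form ridge estimate and its stabilizing effect under a poor-geometry analogue (nearly collinear columns); the data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Poor-geometry analogue: nearly collinear columns make X'X ill-conditioned,
# so the unregularized solution is highly variable.
n = 200
u = rng.standard_normal(n)
X = np.column_stack([u, u + 1e-3 * rng.standard_normal(n)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(n)

b_ols = ridge(X, y, 0.0)                 # ordinary least squares (lam = 0)
b_reg = ridge(X, y, 1.0)                 # ridge-stabilized estimate

# Coefficient norm shrinks monotonically as the penalty grows.
norms = [float(np.linalg.norm(ridge(X, y, lam))) for lam in (0.0, 0.1, 1.0, 10.0)]
```

    In the navigation setting, the same penalty term plays the role of keeping the estimate bounded when satellite geometry makes the normal equations nearly singular; the thesis develops a recursive version of this estimator.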

  19. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition: * A separate chapter on Bayesian methods * Complete revision of the chapter on estimation * A major example from the field of near infrared spectroscopy * More emphasis on cross-validation * Greater focus on bootstrapping * Stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible * Software available on the Internet for implementing many of the algorithms presented * More examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  20. Better Autologistic Regression

    Directory of Open Access Journals (Sweden)

    Mark A. Wolters

    2017-11-01

    Full Text Available Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine) to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding—the two numbers used to represent the two possible states of the variables—might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.
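    The coding distinction is easy to see on the smallest possible example: two neighbouring binary sites with the same unary parameter alpha and pairwise parameter lambda give genuinely different joint distributions under (0,1) and (-1,+1) coding. The parameter values below are illustrative, not from the paper.

```python
import itertools
import numpy as np

def joint(levels, alpha, lam):
    """Exact joint distribution of a two-site autologistic model."""
    states = list(itertools.product(levels, repeat=2))
    w = np.array([np.exp(alpha * (z1 + z2) + lam * z1 * z2)
                  for z1, z2 in states])
    return states, w / w.sum()          # normalize over the 4 states

alpha, lam = 0.2, 0.5
states01, p01 = joint((0.0, 1.0), alpha, lam)    # (zero, one) coding
statespm, ppm = joint((-1.0, 1.0), alpha, lam)   # (minus one, plus one) coding

# Probability that both sites take the "high" level under each coding:
p_both_high_01 = float(p01[states01.index((1.0, 1.0))])
p_both_high_pm = float(ppm[statespm.index((1.0, 1.0))])
```

    With identical parameter values the two codings assign clearly different probability to the all-high configuration, illustrating the paper's point that the variants are distinct, non-nested models rather than reparametrizations of one another.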

  1. Regression in organizational leadership.

    Science.gov (United States)

    Kernberg, O F

    1979-02-01

    The choice of good leaders is a major task for all organizations. Information regarding the prospective administrator's personality should complement questions regarding his previous experience, his general conceptual skills, his technical knowledge, and the specific skills in the area for which he is being selected. The growing psychoanalytic knowledge about the crucial importance of internal, in contrast to external, object relations, and about the mutual relationships of regression in individuals and in groups, constitutes an important practical tool for the selection of leaders.

  2. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  3. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect: great clarity. The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...

  4. Dilution effects on ultrafine particle emissions from Euro 5 and Euro 6 diesel and gasoline vehicles

    Science.gov (United States)

    Louis, Cédric; Liu, Yao; Martinet, Simon; D'Anna, Barbara; Valiente, Alvaro Martinez; Boreave, Antoinette; R'Mili, Badr; Tassel, Patrick; Perret, Pascal; André, Michel

    2017-11-01

    Dilution and temperature used during sampling of vehicle exhaust can modify particle number concentration and size distribution. Two experiments were performed on a chassis dynamometer to assess the effects of exhaust dilution and temperature on particle number and particle size distribution for Euro 5 and Euro 6 vehicles. In the first experiment, the effects of dilution (ratios from 8 to 4,000) and temperature (ranging from 50 °C to 150 °C) on particle quantification were investigated directly at the tailpipe for diesel and gasoline Euro 5 vehicles. In the second experiment, particle emissions from Euro 6 diesel and gasoline vehicles sampled directly from the tailpipe were compared to constant volume sampling (CVS) measurements under similar sampling conditions. Low primary dilutions (3-5) increased the particle number concentration by a factor of 2 compared to high primary dilutions (12-20). Low dilution temperatures (50 °C) yielded 1.4-3 times higher particle number concentrations than high dilution temperatures (150 °C). For the Euro 6 gasoline vehicle with direct injection, CVS particle number concentrations were higher than those at the tailpipe by factors of 6, 80 and 22 for the Artemis urban, road and motorway cycles, respectively. For the same vehicle, the particle size distribution measured at the tailpipe was centred on 10 nm, with particles smaller than those measured after the CVS, whose distribution was centred between 50 nm and 70 nm. The high particle concentration (≈10^6 #/cm^3) and the growth of diameter measured in the CVS highlight aerosol transformations, such as nucleation, condensation and coagulation, occurring in the sampling system, which might have biased the particle measurements.
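    As a minimal illustration of why the dilution ratio matters for quantification, the sketch below (hypothetical numbers, not the study's data) converts a diluted particle-number reading back to a raw-exhaust concentration. It assumes ideal mixing with particle-free dilution air and no in-line transformations; the abstract's point is precisely that nucleation, condensation and coagulation can violate this assumption.

```python
def raw_concentration(measured, dilution_ratio):
    """Recover the raw-exhaust concentration from a diluted sample reading.

    Assumes particle-free dilution air, ideal mixing, and no particle losses
    or transformations in the sampling line.
    """
    return measured * dilution_ratio

# Hypothetical reading: 5e4 #/cm^3 measured after a 20:1 dilution.
c_raw = raw_concentration(5e4, 20)   # implied raw-exhaust concentration, #/cm^3
```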

  5. Approximate Bias Correction in Econometrics

    OpenAIRE

    James G. MacKinnon; Anthony A. Smith Jr.

    1995-01-01

    This paper discusses ways to reduce the bias of consistent estimators that are biased in finite samples. It is necessary that the bias function, which relates parameter values to bias, should be estimable by computer simulation or by some other method. If so, bias can be reduced or, in some cases that may not be unrealistic, even eliminated. In general, several evaluations of the bias function will be required to do this. Unfortunately, reducing bias may increase the variance, or even the mea...
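    The idea of estimating the bias function by simulation can be sketched in a few lines. The example below is my illustration, not the paper's procedure: it corrects the downward-biased maximum-likelihood variance estimator by simulating fresh samples at the plug-in estimate and subtracting the estimated bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def var_mle(x):
    """Maximum-likelihood variance (divides by n): biased downward in finite samples."""
    return np.mean((x - np.mean(x)) ** 2)

def bias_corrected_var(x, n_sim=2000):
    """One-step simulation-based bias correction: evaluate the bias function at
    the plug-in estimate by simulating fresh samples from the fitted model,
    then subtract the estimated bias from the naive estimate."""
    n = len(x)
    theta_hat = var_mle(x)
    sims = rng.normal(np.mean(x), np.sqrt(theta_hat), size=(n_sim, n))
    bias_hat = sims.var(axis=1).mean() - theta_hat   # np.var(ddof=0) is the n-divisor MLE
    return theta_hat - bias_hat

x = rng.normal(0.0, 2.0, size=30)    # true variance is 4
naive = var_mle(x)
corrected = bias_corrected_var(x)    # pushed back up toward the truth
```

As the paper notes, several evaluations of the bias function may be needed in general; one evaluation at the plug-in estimate, as here, is the simplest case.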

  6. Steganalysis using logistic regression

    Science.gov (United States)

    Lubenko, Ivans; Ker, Andrew D.

    2011-02-01

    We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-art 686-dimensional SPAM feature set, in three image sets.

  7. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface......-product we obtain fast access to the baseline hazards (compared to survival::basehaz()) and predictions of survival probabilities, their confidence intervals and confidence bands. Confidence intervals and confidence bands are based on point-wise asymptotic expansions of the corresponding statistical...

  8. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows automatic adjustment of the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  9. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
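    A minimal sketch of the idea shared by these two papers, under my own simplifying assumptions: a Gaussian product kernel and a crude grid search in place of the authors' gradient-based minimisation. Each input dimension is rescaled by a diagonal metric, and the scaling minimising a leave-one-out estimate of the generalisation error is kept, so an irrelevant dimension can be down-weighted automatically.

```python
import numpy as np

rng = np.random.default_rng(1)

def nw_loo_error(X, y, scales):
    """Leave-one-out squared error of a Nadaraya-Watson estimator whose input
    metric rescales each dimension by `scales` (a diagonal adaptive metric)."""
    Z = X * scales
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2)
    np.fill_diagonal(K, 0.0)            # exclude each point from its own prediction
    pred = K @ y / K.sum(axis=1)
    return np.mean((pred - y) ** 2)

# y depends on x0 only; x1 is an irrelevant noise dimension.
X = rng.uniform(-2, 2, size=(120, 2))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=120)

isotropic = nw_loo_error(X, y, np.array([1.0, 1.0]))
# Crude metric adaptation: grid-search per-dimension scalings by LOO error.
grid = [0.0, 0.25, 1.0, 4.0]
best_scales, adapted = min(
    (((s0, s1), nw_loo_error(X, y, np.array([s0, s1])))
     for s0 in grid for s1 in grid),
    key=lambda t: t[1],
)
```

Because the isotropic metric is itself in the grid, the adapted metric can never do worse on the cross-validation criterion, which is the sense in which the adaptation is safe.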

  10. Composite systems of dilute and dense couplings

    International Nuclear Information System (INIS)

    Raymond, J R; Saad, D

    2008-01-01

    Composite systems, where couplings are of two types, a combination of strong dilute and weak dense couplings of Ising spins, are examined through the replica method. The dilute and dense parts are considered to have independent canonical disordered or uniform bond distributions; mixing the models by variation of a parameter γ alongside inverse temperature β, we analyse the respective thermodynamic solutions. We describe the variation in high temperature transitions as mixing occurs; in the vicinity of these transitions we exactly analyse the competing effects of the dense and sparse models. Using the replica symmetric ansatz and population dynamics, we describe the low temperature behaviour of mixed systems

  11. Computer automation of a dilution cryogenic system

    International Nuclear Information System (INIS)

    Nogues, C.

    1992-09-01

    This study has been realized in the framework of studies on developing new technic for low temperature detectors for neutrinos and dark matter. The principles of low temperature physics and helium 4 and dilution cryostats, are first reviewed. The cryogenic system used and the technic for low temperature thermometry and regulation systems are then described. The computer automation of the dilution cryogenic system involves: numerical measurement of the parameter set (pressure, temperature, flow rate); computer assisted operating of the cryostat and the pump bench; numerical regulation of pressure and temperature; operation sequence full automation allowing the system to evolve from a state to another (temperature descent for example)

  12. Interaction Studies of Dilute Aqueous Oxalic Acid

    Directory of Open Access Journals (Sweden)

    Kiran Kandpal

    2007-01-01

    Full Text Available Molar conductance λm, relative viscosity and density of oxalic acid at different concentrations in dilute aqueous solution were measured at 293 K. The conductance data were used to calculate the value of the association constant. Viscosity and density data were used to calculate the A and B coefficients of the Jones-Dole equation and the apparent molar volume, respectively. The viscosity results were used to test the applicability of the modified Jones-Dole and Staudinger equations. The mono-oxalate anion acts as a structure maker, and solute-solvent interactions are present in dilute aqueous oxalic acid.
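    For reference, the Jones-Dole equation expresses the relative viscosity as eta_r = 1 + A*sqrt(c) + B*c, so A and B can be obtained by linear least squares on (eta_r - 1)/sqrt(c) versus sqrt(c). A sketch with synthetic data (illustrative coefficient values, not the paper's measurements):

```python
import numpy as np

# Synthetic relative-viscosity data following the Jones-Dole equation
# eta_r = 1 + A*sqrt(c) + B*c with known (illustrative) coefficients.
A_true, B_true = 0.006, 0.24
c = np.array([0.01, 0.02, 0.05, 0.1, 0.2])          # mol/L
eta_r = 1 + A_true * np.sqrt(c) + B_true * c

# Rearranged: (eta_r - 1)/sqrt(c) = A + B*sqrt(c), a straight line in sqrt(c).
X = np.column_stack([np.ones_like(c), np.sqrt(c)])
A_fit, B_fit = np.linalg.lstsq(X, (eta_r - 1) / np.sqrt(c), rcond=None)[0]
```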

  13. Dilution refrigeration with multiple mixing chambers

    International Nuclear Information System (INIS)

    Coops, G.M.

    1981-01-01

    A dilution refrigerator is an instrument to reach temperatures in the mK region in a continuous way. The temperature range can be extended and the cooling power can be enlarged by adding an extra mixing chamber. In this way we obtain a double mixing chamber system. In this thesis the theory of the multiple mixing chamber is presented and tested on its validity by comparison with the measurements. Measurements on a dilution refrigerator with a circulation rate up to 2.5 mmol/s are also reported. (Auth.)

  14. Diluted magnetic semiconductor nanowires exhibiting magnetoresistance

    Science.gov (United States)

    Yang, Peidong [El Cerrito, CA; Choi, Heonjin [Seoul, KR; Lee, Sangkwon [Daejeon, KR; He, Rongrui [Albany, CA; Zhang, Yanfeng [El Cerrito, CA; Kuykendal, Tevye [Berkeley, CA; Pauzauskie, Peter [Berkeley, CA

    2011-08-23

    A method is disclosed for fabricating diluted magnetic semiconductor (DMS) nanowires by providing a catalyst-coated substrate and subjecting at least a portion of the substrate to a semiconductor and dopant via chloride-based vapor transport to synthesize the nanowires. Using this novel chloride-based chemical vapor transport process, single crystalline diluted magnetic semiconductor nanowires Ga(1-x)Mn(x)N (x=0.07) were synthesized. The nanowires, which have diameters of ~10 nm to 100 nm and lengths of up to tens of micrometers, show ferromagnetism with a Curie temperature above room temperature, and magnetoresistance up to 250 Kelvin.

  15. Bias aware Kalman filters

    DEFF Research Database (Denmark)

    Drecourt, J.-P.; Madsen, H.; Rosbjerg, Dan

    2006-01-01

    This paper reviews two different approaches that have been proposed to tackle the problems of model bias with the Kalman filter: the use of a colored noise model and the implementation of a separate bias filter. Both filters are implemented with and without feedback of the bias into the model state....... The colored noise filter formulation is extended to correct both time correlated and uncorrelated model error components. A more stable version of the separate filter without feedback is presented. The filters are implemented in an ensemble framework using Latin hypercube sampling. The techniques...... are illustrated on a simple one-dimensional groundwater problem. The results show that the presented filters outperform the standard Kalman filter and that the implementations with bias feedback work in more general conditions than the implementations without feedback. 2005 Elsevier Ltd. All rights reserved....

  16. Biases in casino betting

    Directory of Open Access Journals (Sweden)

    James Sundali

    2006-07-01

    Full Text Available We examine two departures of individual perceptions of randomness from probability theory: the hot hand and the gambler's fallacy, and their respective opposites. This paper's first contribution is to use data from the field (individuals playing roulette in a casino to demonstrate the existence and impact of these biases that have been previously documented in the lab. Decisions in the field are consistent with biased beliefs, although we observe significant individual heterogeneity in the population. A second contribution is to separately identify these biases within a given individual, then to examine their within-person correlation. We find a positive and significant correlation across individuals between hot hand and gambler's fallacy biases, suggesting a common (root cause of the two related errors. We speculate as to the source of this correlation (locus of control) and suggest future research which could test this speculation.

  17. Introduction to Unconscious Bias

    Science.gov (United States)

    Schmelz, Joan T.

    2010-05-01

    We all have biases, and we are (for the most part) unaware of them. In general, men and women BOTH unconsciously devalue the contributions of women. This can have a detrimental effect on grant proposals, job applications, and performance reviews. Sociology is way ahead of astronomy in these studies. When evaluating identical application packages, male and female University psychology professors preferred 2:1 to hire "Brian” over "Karen” as an assistant professor. When evaluating a more experienced record (at the point of promotion to tenure), reservations were expressed four times more often when the name was female. This unconscious bias has a repeated negative effect on Karen's career. This talk will introduce the concept of unconscious bias and also give recommendations on how to address it using an example for a faculty search committee. The process of eliminating unconscious bias begins with awareness, then moves to policy and practice, and ends with accountability.

  18. Australia's Bond Home Bias

    OpenAIRE

    Anil V. Mishra; Umaru B. Conteh

    2014-01-01

    This paper constructs the float adjusted measure of home bias and explores the determinants of bond home bias by employing the International Monetary Fund's high quality dataset (2001 to 2009) on cross-border bond investment. The paper finds that Australian investors' prefer investing in countries with higher economic development and more developed bond markets. Exchange rate volatility appears to be an impediment for cross-border bond investment. Investors prefer investing in countries with ...

  19. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to that of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
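    The simulation design can be sketched in miniature. The toy Monte Carlo below (illustrative parameters, not the paper's full design) draws two beta-distributed samples and checks that the difference-in-means estimator, which is what OLS with a group dummy reduces to in a two-sample design, is essentially unbiased for the true mean difference.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Two-sample design: group 0 ~ Beta(2, 6) (mean 0.25), group 1 ~ Beta(6, 2) (mean 0.75).
true_diff = 6 / 8 - 2 / 8
n, reps = 25, 2000
est = np.empty(reps)
for r in range(reps):
    y0 = rng.beta(2, 6, size=n)
    y1 = rng.beta(6, 2, size=n)
    # OLS of y on a group dummy reduces to the difference in sample means.
    est[r] = y1.mean() - y0.mean()

bias = est.mean() - true_diff       # should be near zero
```

Note this only probes bias of the point estimate; the paper's comparisons of type-1 error and power additionally require the models' standard errors, which this sketch omits.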

  20. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

    This paper examines the relationship between foreign aid and growth in real GDP per capita as it emerges from simple augmentations of popular cross country growth specifications. It is shown that aid in all likelihood increases the growth rate, and this result is not conditional on ‘good’ policy....... investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regressions are used for policy purposes.......This paper examines the relationship between foreign aid and growth in real GDP per capita as it emerges from simple augmentations of popular cross country growth specifications. It is shown that aid in all likelihood increases the growth rate, and this result is not conditional on ‘good’ policy...

  1. Comparison of dye dilution method to radionuclide techniques for cardiac output determination in dogs

    International Nuclear Information System (INIS)

    Eng, S.S.; Robayo, J.R.; Porter, W.; Smith, R.E.

    1980-01-01

    A study was undertaken to identify the most accurate 99mTc-labeled radiopharmaceutical and to determine the accuracy of a noninvasive radionuclide technique for cardiac output determinations. Phase I employed sodium pertechnetate, stannous pyrophosphate with sodium pertechnetate, 99mTc red blood cells, and 99mTc human serum albumin as radionuclide tracers. Cardiac output was determined by the dye dilution method and then by the invasive radionuclide technique. A paired t test and regression analysis indicated that 99mTc human serum albumin was the most accurate radiopharmaceutical for cardiac output determinations, and the results compared favorably to those obtained by the dye dilution method. In Phase II, 99mTc human serum albumin was used as the radionuclide tracer for cardiac output determinations with the noninvasive technique. The results compared favorably to those obtained by the dye dilution method.
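    Both the dye dilution and radionuclide techniques rest on the Stewart-Hamilton relation, flow = dose / integral of C(t) dt. A minimal sketch with a synthetic triangular indicator curve (illustrative numbers, not the study's data):

```python
import numpy as np

# Triangular indicator-dilution curve: concentration rises to 2 mg/L at
# t = 8 s and returns to zero at t = 24 s, so the area under the curve is
# 0.5 * 24 s * 2 mg/L = 24 mg*s/L.
t = np.array([0.0, 8.0, 24.0])       # s
c = np.array([0.0, 2.0, 0.0])        # mg/L
dose = 2.0                            # mg of indicator injected

# Trapezoidal rule, exact for a piecewise-linear curve sampled at its vertices.
area = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))
flow_l_per_s = dose / area            # Stewart-Hamilton equation
cardiac_output_l_per_min = 60 * flow_l_per_s
```

In practice the tail of the curve is distorted by recirculation of the indicator and must be extrapolated before integrating; the synthetic curve above ignores this.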

  2. Quantifying dilution caused by execution efficiency

    Directory of Open Access Journals (Sweden)

    Taís Renata Câmara

    Full Text Available Abstract In open pit mining, dilution is not always a factor systematically analyzed and calculated. Often it is only an adjusted number, calculated or even empirically determined for a certain operational condition and then perpetuated over time as a constant applied to reserve calculations or mine planning to satisfy audit requirements. Dilution and loss are factors that should always be considered in tonnage and grade estimates. These factors are always associated and can be determined considering several particularities of the deposit and the operation itself. In this study, a methodology was developed to identify blocks adjacent to the blocks previously planned to be mined. Thus, it is possible to estimate the dilution caused by poor operating efficiency, taking into account the inability of the equipment to perfectly remove each block while respecting its limits. Mining dilution is defined as the incorporation of waste material into ore due to the operational incapacity to efficiently separate the materials during the mining process, considering the physical processes and the operating and geometric configurations of the mining operation with the equipment available.
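    The block-adjacency idea can be sketched on a toy 2D block model (my illustration, not the paper's methodology; equal block tonnages and a 4-neighbourhood are assumed):

```python
import numpy as np

def adjacent_unplanned(planned):
    """Boolean mask of blocks adjacent (4-neighbourhood) to planned blocks
    but not planned themselves: candidates for operational dilution."""
    p = planned.astype(bool)
    neigh = np.zeros_like(p)
    neigh[1:, :] |= p[:-1, :]     # block below a planned block
    neigh[:-1, :] |= p[1:, :]     # block above
    neigh[:, 1:] |= p[:, :-1]     # block to the right
    neigh[:, :-1] |= p[:, 1:]     # block to the left
    return neigh & ~p

# Toy bench: a 2x2 planned panel inside a 4x5 block model.
planned = np.zeros((4, 5), dtype=bool)
planned[1:3, 1:3] = True
halo = adjacent_unplanned(planned)

# If the equipment also takes every adjacent block, dilution is the waste
# fraction of total mined tonnage (equal block tonnages assumed).
dilution = halo.sum() / (halo.sum() + planned.sum())
```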

  3. Atomic displacements in bcc dilute alloys

    Indian Academy of Sciences (India)

    be attributed to the reliability of the measured distances, which fall off quickly with each shell. Therefore, in ... field and electric field gradients due to impurities in vanadium [13]. The effective .... Expanding Δφ(|Rn'|) in a power series of u(R0n), one gets ... The results of each dilute alloy system are presented separately and ...

  4. A century of indicator dilution technique

    DEFF Research Database (Denmark)

    Henriksen, Jens H; Jensen, Gorm B; Larsson, Henrik B W

    2014-01-01

    This review imparts the history and the present status of the indicator dilution technique with quantitative bolus injection. The first report on flow measurement with this technique appeared 100 years ago. In 1928, the use of intravascular dyes made possible a widespread application in animals...

  5. Liquid volumes measurements by isotopic dilution

    International Nuclear Information System (INIS)

    Herrera M, J.M.

    1981-01-01

    Using the nuclear technique of isotopic dilution, industrial liquid volumes may be measured in large recipients of irregular shape using radiotracers. In the present work, laboratory and pilot tests were made with 2 radiotracers to optimize the technique, which was later applied on an industrial scale, obtaining a maximum deviation of ±2%. Some recommendations are given to improve the performance of the technique. (author)
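    The underlying arithmetic is a single division, under the idealized assumptions of complete mixing and negligible decay and sample withdrawal (illustrative numbers, not those of the study):

```python
# Isotope-dilution volume measurement: inject a known activity A of tracer,
# mix thoroughly, then measure the activity concentration c of a small sample.
# The tank volume follows from V = A / c (decay and the withdrawn sample
# volume are neglected in this sketch).
A_injected = 3.7e7       # Bq of tracer added
c_sample = 74.0          # Bq per litre measured after complete mixing
volume_litres = A_injected / c_sample
```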

  6. Atomic displacements in bcc dilute alloys

    Indian Academy of Sciences (India)

    We present here a systematic investigation of the atomic displacements in bcc transition metal (TM) dilute alloys. We have calculated the atomic displacements in bcc (V, Cr, Fe, Nb, Mo, Ta and W) transition metals (TMs) due to 3d, 4d and 5d TMs at the substitutional site using the Kanzaki lattice static method. Wills and ...

  7. Continuous deionization of a dilute nickel solution

    NARCIS (Netherlands)

    Spoor, P.B.; Koene, L.; Veen, ter W.R.; Janssen, L.J.J.

    2002-01-01

    This paper describes the continuous removal of nickel ions from a dilute solution using a hybrid ion-exchange/electrodialysis process. Emphasis was placed on the ionic state of the bed during the process, and the mass balance of ions in the system. Much of this information was obtained by analysing

  8. Dilution kicker for the SPS beam dump

    CERN Multimedia

    1974-01-01

    In order to reduce thermal stress on the SPS dump material, the fast-ejected beam was swept horizontally across the dump. This was done with the "dilution kicker" MKDH, still in use at the time of writing. The person on the left is Manfred Mayer. See also 7404072X.

  9. Magnetic properties of diluted magnetic semiconductors

    NARCIS (Netherlands)

    Jonge, de W.J.M.; Swagten, H.J.M.

    1991-01-01

    A review will be given of the magnetic characteristics of diluted magnetic semiconductors and the relation with the driving exchange mechanisms. II–VI as well as IV–VI compounds will be considered. The relevance of the long-range interaction and the role of the carrier concentration will be

  10. Optimism Bias in Fans and Sports Reporters.

    Science.gov (United States)

    Love, Bradley C; Kopeć, Łukasz; Guest, Olivia

    2015-01-01

    People are optimistic about their prospects relative to others. However, existing studies can be difficult to interpret because outcomes are not zero-sum. For example, one person avoiding cancer does not necessitate that another person develops cancer. Ideally, optimism bias would be evaluated within a closed formal system to establish with certainty the extent of the bias and the associated environmental factors, such that optimism bias is demonstrated when a population is internally inconsistent. Accordingly, we asked NFL fans to predict how many games teams they liked and disliked would win in the 2015 season. Fans, like ESPN reporters assigned to cover a team, were overly optimistic about their team's prospects. The opposite pattern was found for teams that fans disliked. Optimism may flourish because year-to-year team results are marked by auto-correlation and regression to the group mean (i.e., good teams stay good, but bad teams improve).
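    The zero-sum bookkeeping behind this design can be stated in two lines: in a closed league every game produces exactly one win, so internally consistent predictions must sum to the fixed total. A sketch with hypothetical fan predictions (the league arithmetic is real; the predictions are invented):

```python
# 2015 NFL: 32 teams, 16 games each, every game yielding one win
# (ignoring the rare tie), so total wins available = 32 * 16 / 2 = 256.
n_teams, games_each = 32, 16
total_wins_available = n_teams * games_each // 2

# Hypothetical fan predictions: every fan base expects 10 wins for its team.
fan_predictions = [10] * n_teams
excess_optimism = sum(fan_predictions) - total_wins_available   # phantom wins
```

Any positive excess demonstrates that the population's beliefs are internally inconsistent, which is the closed-system logic the study exploits.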

  11. Optimism Bias in Fans and Sports Reporters

    Science.gov (United States)

    Love, Bradley C.

    2015-01-01

    People are optimistic about their prospects relative to others. However, existing studies can be difficult to interpret because outcomes are not zero-sum. For example, one person avoiding cancer does not necessitate that another person develops cancer. Ideally, optimism bias would be evaluated within a closed formal system to establish with certainty the extent of the bias and the associated environmental factors, such that optimism bias is demonstrated when a population is internally inconsistent. Accordingly, we asked NFL fans to predict how many games teams they liked and disliked would win in the 2015 season. Fans, like ESPN reporters assigned to cover a team, were overly optimistic about their team’s prospects. The opposite pattern was found for teams that fans disliked. Optimism may flourish because year-to-year team results are marked by auto-correlation and regression to the group mean (i.e., good teams stay good, but bad teams improve). PMID:26352146

  12. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes into consideration of mislabeled responses. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
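    The density-power weighting idea can be caricatured as an iteratively reweighted fit. The sketch below is my simplified illustration of the spirit of the approach: each observation is weighted by its fitted model density raised to the power γ, so poorly fitting (likely mislabeled) points are automatically down-weighted. It is not the authors' exact estimating equation, and the gradient-ascent fitter is deliberately basic.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weighted_logistic(X, y, w, n_iter=200, lr=0.5):
    """Weighted logistic regression fitted by plain gradient ascent."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)
        beta += lr * X.T @ (w * (y - p)) / len(y)
    return beta

def gamma_logistic(X, y, gamma=0.5, n_outer=20):
    """Iteratively reweighted fit: weight each observation by its model density
    to the power gamma, refit, and repeat."""
    w = np.ones(len(y))
    beta = weighted_logistic(X, y, w)
    for _ in range(n_outer):
        p = sigmoid(X @ beta)
        dens = np.where(y == 1, p, 1 - p)   # fitted density of each observed label
        w = dens ** gamma                   # mislabeled points get small weights
        beta = weighted_logistic(X, y, w)
    return beta, w

# Well-separated data with a few flipped (mislabeled) responses.
n = 200
x = rng.normal(size=n)
y = (x > 0).astype(float)
flip = rng.choice(n, size=10, replace=False)
y[flip] = 1 - y[flip]
X = np.column_stack([np.ones(n), x])

beta, w = gamma_logistic(X, y)
clean = np.setdiff1d(np.arange(n), flip)
```

The weighting scheme makes the robustness interpretable: inspecting the final weights directly flags the suspect observations, without ever modeling the mislabel probabilities.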

  13. Bias-corrected Pearson estimating functions for Taylor's power law applied to benthic macrofauna data

    DEFF Research Database (Denmark)

    Jørgensen, Bent; Demétrio, Clarice G. B.; Kristensen, Erik

    2011-01-01

    Estimation of Taylor’s power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating...
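    For orientation, the naive approach referred to here is OLS of log empirical variance on log empirical mean across species. The sketch below uses synthetic, non-sparse data where the naive fit works well (the paper's point is that sparse data bias this fit, which the bias-corrected Pearson estimating functions address); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulate abundances for 40 "species" obeying Taylor's power law
# var = a * mean^b with a = 2 and b = 2 (illustrative values).
a_true, b_true = 2.0, 2.0
means = np.exp(rng.uniform(0.5, 3.5, size=40))
counts = rng.normal(loc=means[:, None],
                    scale=np.sqrt(a_true) * means[:, None],
                    size=(40, 200))

# Naive estimate: OLS of log empirical variance on log empirical mean.
log_m = np.log(counts.mean(axis=1))
log_v = np.log(counts.var(axis=1, ddof=1))
b_hat, log_a_hat = np.polyfit(log_m, log_v, 1)   # slope estimates the exponent b
```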

  14. Canonical variate regression.

    Science.gov (United States)

    Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

    2016-07-01

    In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
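    The association half of canonical variate regression is classical canonical correlation analysis, which can be computed by whitening the two views and taking an SVD of the whitened cross-covariance. A minimal numpy sketch on illustrative data (this is plain CCA, not the paper's method, which adds the predictive criterion, multiple variates, and the splitting algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

def first_canonical_correlation(X, Y):
    """First canonical correlation via whitening + SVD (classical CCA)."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    n = len(X)
    Sxx, Syy = Xc.T @ Xc / n, Yc.T @ Yc / n
    Sxy = Xc.T @ Yc / n
    Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T   # whitened cross-covariance
    return np.linalg.svd(M, compute_uv=False)[0]         # largest singular value

# Two views sharing one latent variable z.
n = 500
z = rng.normal(size=n)
X = np.column_stack([z, rng.normal(size=n)]) + 0.3 * rng.normal(size=(n, 2))
Y = np.column_stack([-z, rng.normal(size=n)]) + 0.3 * rng.normal(size=(n, 2))
rho = first_canonical_correlation(X, Y)
```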

  15. Measuring Agricultural Bias

    DEFF Research Database (Denmark)

    Jensen, Henning Tarp; Robinson, Sherman; Tarp, Finn

    The measurement issue is the key issue in the literature on trade policy-induced agri-cultural price incentive bias. This paper introduces a general equilibrium effective rate of protection (GE-ERP) measure, which extends and generalizes earlier partial equilibrium nominal protection measures...... shares and intersectoral linkages - are crucial for determining the sign and magnitude of trade policy bias. The GE-ERP measure is therefore uniquely suited to capture the full impact of trade policies on agricultural price incentives. A Monte Carlo procedure confirms that the results are robust....... For the 15 sample countries, the results indicate that the agricultural price incentive bias, which was generally perceived to exist during the 1980s, was largely eliminated during the 1990s. The results also demonstrate that general equilibrium effects and country-specific characteristics - including trade...

  16. EXAFS of dilute systems: fluorescence detection

    International Nuclear Information System (INIS)

    Hastings, J.B.

    1979-01-01

    From the first observations in the thirties of the variation of the x-ray absorption coefficient above energy thresholds until the early seventies, measurements and analyses of these variations were aimed mainly at understanding the underlying physics. Recently, with the recognition of the information available about the local atomic structure in the neighborhood of the absorbing species and the availability of high-intensity synchrotron radiation sources, EXAFS has become a powerful structural tool. In these discussions, the details of the measurements for very dilute species are presented. It is shown that for the more dilute systems, measurement of the emission rather than the direct absorption is the more favorable technique.

  17. Phase diagrams of diluted transverse Ising nanowire

    Energy Technology Data Exchange (ETDEWEB)

    Bouhou, S.; Essaoudi, I. [Laboratoire de Physique des Matériaux et Modélisation, des Systèmes, (LP2MS), Unité Associée au CNRST-URAC 08, University of Moulay Ismail, Physics Department, Faculty of Sciences, B.P. 11201 Meknes (Morocco); Ainane, A., E-mail: ainane@pks.mpg.de [Laboratoire de Physique des Matériaux et Modélisation, des Systèmes, (LP2MS), Unité Associée au CNRST-URAC 08, University of Moulay Ismail, Physics Department, Faculty of Sciences, B.P. 11201 Meknes (Morocco); Max-Planck-Institut für Physik Complexer Systeme, Nöthnitzer Str. 38 D-01187 Dresden (Germany); Saber, M. [Laboratoire de Physique des Matériaux et Modélisation, des Systèmes, (LP2MS), Unité Associée au CNRST-URAC 08, University of Moulay Ismail, Physics Department, Faculty of Sciences, B.P. 11201 Meknes (Morocco); Max-Planck-Institut für Physik Complexer Systeme, Nöthnitzer Str. 38 D-01187 Dresden (Germany); Ahuja, R. [Condensed Matter Theory Group, Department of Physics and Astronomy, Uppsala University, 75120 Uppsala (Sweden); Dujardin, F. [Laboratoire de Chimie et Physique des Milieux Complexes (LCPMC), Institut de Chimie, Physique et Matériaux (ICPM), 1 Bd. Arago, 57070 Metz (France)

    2013-06-15

    In this paper, the phase diagrams of a diluted Ising nanowire consisting of a core and a surface shell coupled by the J_cs exchange interaction are studied using the effective field theory with a probability distribution technique, in the presence of transverse fields in the core and in the surface shell. We find a number of characteristic phenomena. In particular, the effects of the concentration c of magnetic atoms, the core/shell exchange interaction, the surface exchange, and the transverse fields in the core and in the surface shell on the phase diagrams are investigated. - Highlights: ► We use the EFT to investigate the phase diagrams of Ising transverse nanowire. ► Ferrimagnetic and ferromagnetic cases are investigated. ► The effects of the dilution and the transverse fields in core and shell are studied. ► Behavior of the transition temperature with the exchange interaction is given.

  18. Phase diagrams of diluted transverse Ising nanowire

    International Nuclear Information System (INIS)

    Bouhou, S.; Essaoudi, I.; Ainane, A.; Saber, M.; Ahuja, R.; Dujardin, F.

    2013-01-01

    In this paper, the phase diagrams of a diluted Ising nanowire consisting of a core and a surface shell coupled by the J_cs exchange interaction are studied using the effective field theory with a probability distribution technique, in the presence of transverse fields in the core and in the surface shell. We find a number of characteristic phenomena. In particular, the effects of the concentration c of magnetic atoms, the core/shell exchange interaction, the surface exchange, and the transverse fields in the core and in the surface shell on the phase diagrams are investigated. - Highlights: ► We use the EFT to investigate the phase diagrams of Ising transverse nanowire. ► Ferrimagnetic and ferromagnetic cases are investigated. ► The effects of the dilution and the transverse fields in core and shell are studied. ► Behavior of the transition temperature with the exchange interaction is given.

  19. Isotope dilution analysis of environmental samples

    International Nuclear Information System (INIS)

    Tolgyessy, J.; Lesny, J.; Korenova, Z.; Klas, J.; Klehr, E.H.

    1986-01-01

    Isotope dilution analysis has been used for the determination of several trace elements - especially metals - in a variety of environmental samples, including aerosols, water, soils, biological materials and geological materials. Variations of the basic concept include classical IDA, substoichiometric IDA, and more recently, sub-superequivalence IDA. Each variation has its advantages and limitations. A periodic chart has been used to identify those elements which have been measured in environmental samples using one or more of these methods. (author)
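The classical IDA variant mentioned above rests on a single relation: adding a known mass of labelled tracer and measuring the fall in specific activity gives the unknown mass. A minimal sketch with invented numbers:

```python
# Illustrative sketch of classical isotope dilution analysis (IDA).
# All numbers are hypothetical, chosen only to show the arithmetic.

def classical_ida(m_tracer, s_tracer, s_mixture):
    """Mass of unknown analyte from the drop in specific activity.

    m_tracer  : mass of added labelled tracer (mg)
    s_tracer  : specific activity of the tracer alone (counts/s per mg)
    s_mixture : specific activity measured after isotopic mixing
    """
    return m_tracer * (s_tracer / s_mixture - 1.0)

# Adding 1 mg of tracer (1000 cps/mg) dilutes the specific activity
# to 100 cps/mg, so the sample must contain 9 mg of the element.
m_unknown = classical_ida(1.0, 1000.0, 100.0)
```

Substoichiometric and sub-superequivalence IDA refine this scheme but share the same dilution principle.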

  20. Fractal effects on excitations in diluted ferromagnets

    International Nuclear Information System (INIS)

    Kumar, D.

    1981-08-01

    The low energy spin-wave like excitations in diluted ferromagnets near percolation threshold are studied. For this purpose an explicit use of the fractal model for the backbone of the infinite percolating cluster due to Kirkpatrick is made. Three physical effects are identified, which cause the softening of spin-waves as the percolation point is approached. The importance of fractal effects in the calculation of density of states and the low temperature thermodynamics is pointed out. (author)

  1. Dilution physics modeling: Dissolution/precipitation chemistry

    International Nuclear Information System (INIS)

    Onishi, Y.; Reid, H.C.; Trent, D.S.

    1995-09-01

    This report documents progress made to date on integrating dilution/precipitation chemistry and new physical models into the TEMPEST thermal-hydraulics computer code. Implementation of dissolution/precipitation chemistry models is necessary for predicting nonhomogeneous, time-dependent, physical/chemical behavior of tank wastes with and without a variety of possible engineered remediation and mitigation activities. Such behavior includes chemical reactions, gas retention, solids resuspension, solids dissolution and generation, solids settling/rising, and convective motion of physical and chemical species. Thus this model development is important from the standpoint of predicting the consequences of various engineered activities, such as mitigation by dilution, retrieval, or pretreatment, that can affect safe operations. The integration of a dissolution/precipitation chemistry module allows the various phase species concentrations to enter into the physical calculations that affect the TEMPEST hydrodynamic flow calculations. The yield strength model of non-Newtonian sludge correlates yield stress to a power function of solids concentration. Likewise, shear stress is concentration-dependent, and the dissolution/precipitation chemistry calculations develop the species concentration evolution that produces fluid flow resistance changes. Dilution of waste with pure water, molar concentrations of sodium hydroxide, and other chemical streams can be analyzed for the reactive species changes and hydrodynamic flow characteristics.
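The yield-strength correlation described above (yield stress as a power function of solids concentration) can be sketched as follows; the coefficients and concentrations are hypothetical, not TEMPEST's actual correlation:

```python
# Hypothetical power-law yield-strength model: tau_y = a * C**b.
# The coefficients a, b and the concentrations are invented values;
# the TEMPEST correlation itself is not given in this abstract.

def yield_stress(c_solids, a=2.0, b=2.5):
    """Yield stress (Pa) as a power function of solids fraction."""
    return a * c_solids ** b

c0 = 0.40                                     # undiluted solids fraction
c1 = c0 / 2.0                                 # 1:1 dilution with water
ratio = yield_stress(c1) / yield_stress(c0)   # = (1/2)**2.5, about 0.18
```

With b > 1, even a modest dilution cuts the yield stress sharply, which is why dilution changes the flow resistance in the hydrodynamic calculations.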

  2. Deuterium oxide dilution kinetics to predict body composition in dairy goats

    International Nuclear Information System (INIS)

    Brown, D.L.; Taylor, S.J.

    1986-01-01

    Body composition and D2O dilution kinetics were studied in 15 female goats ranging from 38.0 to 70.1 kg live weight. Infrared spectrophotometric analyses of blood samples drawn during the 4 d following D2O injections were used to estimate D2O space. All does were slaughtered without shrinking and analyzed for dry matter, fat, nitrogen, and ash content. Estimates of D2O space from the late slope of the dilution curve, together with live weight, were used to predict body composition. Conclusions were 1) deuterium oxide space with live body weight accounts for about 90% of the variation in dairy goat empty body fat, empty body nitrogen, and empty body dry matter; 2) less than half the variation in empty body ash is related to live weight and D2O space; and 3) D2O space estimates would be biased by accelerations in water turnover
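The back-extrapolation step implied above (estimating D2O space from the late slope of the dilution curve) can be sketched as follows; the dose, decay constant, and sampling times are invented, not the study's data:

```python
import math

# Sketch: estimate D2O space from the late slope of the dilution curve.
# Data are synthetic, generated exactly from C(t) = C0 * exp(-k t).
times = [24.0, 48.0, 72.0, 96.0]          # h after injection (assumed)
c0_true, k = 1.05, 0.01                   # g/L and 1/h (assumed)
conc = [c0_true * math.exp(-k * t) for t in times]

# Least-squares line of ln C on t, computed by hand.
n = len(times)
tbar = sum(times) / n
ybar = sum(math.log(c) for c in conc) / n
slope = sum((t - tbar) * (math.log(c) - ybar) for t, c in zip(times, conc)) \
        / sum((t - tbar) ** 2 for t in times)
intercept = ybar - slope * tbar           # = ln C0 back-extrapolated to t = 0

dose = 30.0                               # g of D2O injected (assumed)
d2o_space = dose / math.exp(intercept)    # litres of body water
```

The estimated space, together with live weight, would then enter the body-composition prediction equations; as the abstract notes, accelerated water turnover (a steeper slope) biases the estimate.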

  3. Measuring agricultural policy bias

    DEFF Research Database (Denmark)

    Jensen, Henning Tarp; Robinson, Sherman; Tarp, Finn

    2010-01-01

    Measurement is a key issue in the literature on price incentive bias induced by trade policy. We introduce a general equilibrium measure of the relative effective rate of protection, which generalizes earlier protection measures. For our fifteen sample countries, results indicate that the agricul...

  4. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, this paper first demonstrates the broad usage of polynomial functions and deduces their parameters with the ordinary least squares estimate. Then a significance test method for the polynomial regression function is derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
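The two steps described above, an ordinary least-squares polynomial fit followed by a significance test of the regression, can be sketched on synthetic data (the decay-heat measurements themselves are not given in the abstract):

```python
import numpy as np

# Synthetic data: a quadratic plus a small fixed perturbation.
x = np.arange(10, dtype=float)
noise = np.array([0.05, -0.03, 0.02, -0.04, 0.01,
                  0.03, -0.02, 0.04, -0.01, -0.05])
y = 2.0 + 3.0 * x + 0.5 * x**2 + noise

p = 2                                   # polynomial degree
coef = np.polyfit(x, y, p)              # highest power first
resid = y - np.polyval(coef, x)

# Overall F test of the regression: large F rejects "no relationship".
sse = float(resid @ resid)              # residual sum of squares
sst = float(((y - y.mean()) ** 2).sum())
f_stat = ((sst - sse) / p) / (sse / (len(x) - p - 1))
```

The F statistic is compared with the F(p, n-p-1) critical value, exactly as in the multivariable linear regression case the paper leans on.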

  5. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.
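The brief gives no equations, so the sketch below only mirrors the idea of searching for the minimum satisfactory model order; a true recursive implementation would update the coefficients in place rather than refit each order:

```python
import numpy as np

# Synthetic data: exactly cubic, so order 3 is the minimum that fits.
x = np.linspace(-1.0, 1.0, 21)
y = 1.0 - 2.0 * x + 0.5 * x**3

def select_order(x, y, max_order=6, tol=1e-8):
    """Raise the polynomial order until the residual is negligible."""
    for order in range(1, max_order + 1):
        coef = np.polyfit(x, y, order)
        sse = float(((y - np.polyval(coef, x)) ** 2).sum())
        if sse < tol:                    # satisfactory fit reached
            return order, coef
    return max_order, coef

order, coef = select_order(x, y)
```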

  6. Molecular analysis of two mouse dilute locus deletion mutations: Spontaneous dilute lethal20J and radiation-induced dilute prenatal lethal Aa2 alleles

    International Nuclear Information System (INIS)

    Strobel, M.C.; Seperack, P.K.; Copeland, N.G.; Jenkins, N.A.

    1990-01-01

    The dilute (d) coat color locus of mouse chromosome 9 has been identified by more than 200 spontaneous and mutagen-induced recessive mutations. With the advent of molecular probes for this locus, the molecular lesion associated with different dilute alleles can be recognized and precisely defined. In this study, two dilute mutations, dilute-lethal20J (dl20J) and dilute prenatal lethal Aa2, have been examined. Using a dilute locus genomic probe in Southern blot analysis, we detected unique restriction fragments in dl20J and Aa2 DNA. Subsequent analysis of these fragments showed that they represented deletion breakpoint fusion fragments. DNA sequence analysis of each mutation-associated deletion breakpoint fusion fragment suggests that both genomic deletions were generated by nonhomologous recombination events. The spontaneous dl20J mutation is caused by an interstitial deletion that removes a single coding exon of the dilute gene. The correlation between this discrete deletion and the expression of all dilute-associated phenotypes in dl20J homozygotes defines the dl20J mutation as a functional null allele of the dilute gene. The radiation-induced Aa2 allele is a multilocus deletion that, by complementation analysis, affects both the dilute locus and the proximal prenatal lethal-3 (pl-3) functional unit. Molecular analysis of the Aa2 deletion breakpoint fusion fragment has provided access to a previously undefined gene proximal to d. Initial characterization of this new gene suggests that it may represent the genetically defined pl-3 functional unit

  7. Relative volatility of dilute solutions of Rb-Cs system

    International Nuclear Information System (INIS)

    Gromov, P.B.; Izotov, V.P.; Nisel'son, L.A.

    1984-01-01

    The relative volatility of dilute Rb-Cs solutions in the temperature range 650-820 K and the pressure range 13-200 hPa has been studied. In the range of dilute solutions, the Rb-Cs system obeys Henry's law. It is shown that the liquid-vapour equilibrium in dilute solutions of cesium in rubidium is characterized by a negative deviation from ideality.

  8. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications there is typically insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or a distribution skewed against, e.g., turnover. This can be rectified by imposing bounds on the alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
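A bounded regression of the kind described, least squares subject to box constraints on the weights, can be sketched with projected gradient descent; the design matrix, bounds, and step-size choice are illustrative, not the paper's own algorithm or data:

```python
import numpy as np

# Minimise ||A w - b||^2 subject to lb <= w <= ub by projected gradient.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
lb, ub = 0.0, 0.8                        # e.g. long-only, capped weights

step = 1.0 / np.linalg.eigvalsh(A.T @ A).max()   # safe step size
w = np.zeros(2)
for _ in range(2000):
    w = w - step * (A.T @ (A @ w - b))   # gradient step
    w = np.clip(w, lb, ub)               # project back into the box
```

Here the unconstrained solution would allocate more than the 0.8 cap to the second stream; the projection enforces the bound and redistributes the fit.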

  9. Regression in autistic spectrum disorders.

    Science.gov (United States)

    Stefanatos, Gerry A

    2008-12-01

    A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsivity, but often entails both. This paper critically reviews the phenomenon of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

  10. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
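The OLS bisector favored above combines the two OLS slopes; the formula below follows Isobe et al., checked on toy data lying exactly on y = 2x, where all three slopes must coincide:

```python
import numpy as np

# Toy data on an exact line, so OLS(Y|X), OLS(X|Y) and the bisector agree.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x

sxx = ((x - x.mean()) ** 2).sum()
syy = ((y - y.mean()) ** 2).sum()
sxy = ((x - x.mean()) * (y - y.mean())).sum()

b1 = sxy / sxx                  # OLS(Y|X) slope
b2 = syy / sxy                  # OLS(X|Y), expressed as dY/dX
b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
```

On scattered data b1 < b3 < b2, and the choice among them depends on whether the variables deserve symmetrical treatment, as the article discusses.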

  11. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least squares), either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available, and the R syntax used to run the example analyses is included in the Appendix.
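The tutorial itself works in R; as a language-neutral sketch, the iteratively reweighted least squares (IRLS) fitting behind a Poisson regression can be written out directly. The synthetic "counts" below satisfy the model exactly, so the true coefficients are recovered:

```python
import numpy as np

# IRLS for a Poisson GLM with a log link.  The responses are idealised
# noise-free values (not integer counts), purely to make the recovery exact.
x = np.linspace(0.0, 2.0, 25)
X = np.column_stack([np.ones_like(x), x])
beta_true = np.array([0.5, 0.3])
y = np.exp(X @ beta_true)

beta = np.zeros(2)
for _ in range(25):                      # Newton/IRLS iterations
    mu = np.exp(X @ beta)                # current mean
    W = mu                               # Poisson working weights
    z = X @ beta + (y - mu) / mu         # working response
    XtW = X.T * W
    beta = np.linalg.solve(XtW @ X, XtW @ z)
```

This is the computation that `glm(..., family = poisson)` in R (or a negative binomial variant with an extra dispersion step) performs under the hood.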

  12. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
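Two of the concepts above, dummy variables and the least-squares fit, can be sketched together; the clinical numbers are invented, with the treatment group shifted by a constant 5 units:

```python
import numpy as np

# Regression with an intercept, a continuous predictor, and a 0/1 dummy.
x = np.array([1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0])
d = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])  # dummy: group
y = 10.0 + 2.0 * x + 5.0 * d            # exact, so coefficients recover

X = np.column_stack([np.ones_like(x), x, d])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # method of least squares
```

The dummy coefficient is read as the vertical offset between the two groups' parallel regression lines, the standard interpretation reviewed in the article.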

  13. Investigating vulnerability to eating disorders: biases in emotional processing.

    Science.gov (United States)

    Pringle, A; Harmer, C J; Cooper, M J

    2010-04-01

    Biases in emotional processing and cognitions about the self are thought to play a role in the maintenance of eating disorders (EDs). However, little is known about whether these difficulties exist pre-morbidly and how they might contribute to risk. Female dieters (n=82) completed a battery of tasks designed to assess the processing of social cues (facial emotion recognition), cognitions about the self [Self-Schema Processing Task (SSPT)] and ED-specific cognitions about eating, weight and shape (emotional Stroop). The 26-item Eating Attitudes Test (EAT-26; Garner et al. 1982) was used to assess subclinical ED symptoms; this was used as an index of vulnerability within this at-risk group. Regression analyses showed that biases in the processing of both neutral and angry faces were predictive of our measure of vulnerability (EAT-26). In the self-schema task, biases in the processing of negative self descriptors previously found to be common in EDs predicted vulnerability. Biases in the processing of shape-related words on the Stroop task were also predictive; however, these biases were more important in dieters who also displayed biases in the self-schema task. We were also able to demonstrate that these biases are specific and separable from more general negative biases that could be attributed to depressive symptoms. These results suggest that specific biases in the processing of social cues, cognitions about the self, and also about eating, weight and shape information, may be important in understanding risk and preventing relapse in EDs.

  14. Polaron in the dilute critical Bose condensate

    Science.gov (United States)

    Pastukhov, Volodymyr

    2018-05-01

    The properties of an impurity immersed in a dilute D-dimensional Bose gas at temperatures close to its second-order phase transition point are considered. Particularly by means of the 1/N-expansion, we calculate the leading-order polaron energy and the damping rate in the limit of vanishing boson–boson interaction. It is shown that the perturbative effective mass and the quasiparticle residue diverge logarithmically in the long-length limit, signalling the non-analytic behavior of the impurity spectrum and pole-free structure of the polaron Green’s function in the infrared region, respectively.

  15. Confluence Model or Resource Dilution Hypothesis?

    DEFF Research Database (Denmark)

    Jæger, Mads

    Studies on family background often explain the negative effect of sibship size on educational attainment by one of two theories: the Confluence Model (CM) or the Resource Dilution Hypothesis (RDH). However, as both theories – for substantively different reasons – predict that sibship size should have a negative effect on educational attainment, most studies cannot distinguish empirically between the CM and the RDH. In this paper, I use the different theoretical predictions in the CM and the RDH on the role of cognitive ability as a partial or complete mediator of the sibship size effect...

  16. Estimation bias and bias correction in reduced rank autoregressions

    DEFF Research Database (Denmark)

    Nielsen, Heino Bohn

    2017-01-01

    This paper characterizes the finite-sample bias of the maximum likelihood estimator (MLE) in a reduced rank vector autoregression and suggests two simulation-based bias corrections. One is a simple bootstrap implementation that approximates the bias at the MLE. The other is an iterative root...

  17. A machine learning model with human cognitive biases capable of learning from small and biased datasets.

    Science.gov (United States)

    Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro

    2018-05-09

    Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.

  18. Validation of an ultrasound dilution technology for cardiac output measurement and shunt detection in infants and children.

    Science.gov (United States)

    Lindberg, Lars; Johansson, Sune; Perez-de-Sa, Valeria

    2014-02-01

    To validate cardiac output measurements by ultrasound dilution technology (COstatus monitor) against those obtained by a transit-time ultrasound technology with a perivascular flow probe, and to investigate the ability of ultrasound dilution to estimate the pulmonary to systemic blood flow ratio in children. Prospective observational clinical trial. Pediatric cardiac operating theater in a university hospital. In 21 children (6.1 ± 2.6 kg, mean ± SD) undergoing heart surgery, cardiac output was simultaneously recorded by ultrasound dilution (extracorporeal arteriovenous loop connected to existing arterial and central venous catheters) and a transit-time ultrasound probe applied to the ascending aorta, and when possible, the main pulmonary artery. The pulmonary to systemic blood flow ratio estimated from ultrasound dilution curve analysis was compared with that estimated from transit-time ultrasound technology. Bland-Altman analysis of the whole cohort (90 pairs, before and after surgery) showed a bias between transit-time ultrasound (1.01 ± 0.47 L/min) and ultrasound dilution technology (1.03 ± 0.51 L/min) of -0.02 L/min, limits of agreement -0.3 to 0.3 L/min, and percentage error of 31%. In children with no residual shunts, the bias was -0.04 L/min, limits of agreement -0.28 to 0.2 L/min, and percentage error 19%. The pooled coefficient of variation for the whole cohort was 3.5% (transit-time ultrasound) and 6.3% (ultrasound dilution), and in children without shunt, it was 2.9% (transit-time ultrasound) and 4% (ultrasound dilution), respectively. Ultrasound dilution identified the presence of shunts (pulmonary to systemic blood flow ≠ 1) with a sensitivity of 100% and a specificity of 92%. Mean pulmonary to systemic blood flow ratio by transit-time ultrasound was 2.6 ± 1.0 and by ultrasound dilution 2.2 ± 0.7 (not significant). The COstatus monitor is a reliable technique to measure cardiac output in children with high sensitivity and specificity for detecting the
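The Bland-Altman quantities reported above (bias, limits of agreement, percentage error) can be computed as follows; the paired cardiac-output values are invented, not the study's data:

```python
import numpy as np

# Bland-Altman agreement analysis for two methods measuring the same
# quantity.  The paired values below are hypothetical.
co_ref = np.array([0.8, 1.0, 1.2, 0.9, 1.1])   # transit-time US, L/min
co_ud  = np.array([0.9, 1.0, 1.1, 1.0, 1.2])   # ultrasound dilution, L/min

diff = co_ud - co_ref
bias = diff.mean()                              # systematic difference
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)      # limits of agreement
pct_error = 1.96 * sd / ((co_ref.mean() + co_ud.mean()) / 2.0) * 100.0
```

A percentage error under roughly 30% is the conventional criterion for clinical interchangeability of cardiac output methods.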

  19. Does neurocognitive function affect cognitive bias toward an emotional stimulus? Association between general attentional ability and attentional bias toward threat

    Directory of Open Access Journals (Sweden)

    Yuko eHakamata

    2014-08-01

    Background: Although poorer cognitive performance has been found to be associated with anxiety, it remains unclear whether neurocognitive function affects biased cognitive processing toward emotional information. We investigated whether general cognitive function evaluated with a standard neuropsychological test predicts biased cognition, focusing on attentional bias toward threat. Methods: One hundred and five healthy young adults completed a dot-probe task measuring attentional bias and the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) measuring general cognitive function, which consists of five domains: immediate memory, visuospatial/constructional, language, attention, and delayed memory. Stepwise multiple regression analysis was performed to examine the relationships between attentional bias and cognitive function. Results: The attentional domain was the best predictor of attentional bias toward threat (β = -0.26, p = 0.006). Within the attentional domain, digit symbol coding was negatively correlated with attentional bias (r = -0.28, p = 0.005). Conclusions: The present study provides the first evidence that general attentional ability, which was assessed with a standard neuropsychological test, affects attentional bias toward threatening information. Individual cognitive profiles might be important for the measurement and modification of cognitive biases.

  20. SLDAssay: A software package and web tool for analyzing limiting dilution assays.

    Science.gov (United States)

    Trumble, Ilana M; Allmon, Andrew G; Archin, Nancie M; Rigdon, Joseph; Francis, Owen; Baldoni, Pedro L; Hudgens, Michael G

    2017-11-01

    Serial limiting dilution (SLD) assays are used in many areas of infectious disease related research. This paper presents SLDAssay, a free and publicly available R software package and web tool for analyzing data from SLD assays. SLDAssay computes the maximum likelihood estimate (MLE) for the concentration of target cells, with corresponding exact and asymptotic confidence intervals. Exact and asymptotic goodness of fit p-values, and a bias-corrected (BC) MLE are also provided. No other publicly available software currently implements the BC MLE or the exact methods. For validation of SLDAssay, results from Myers et al. (1994) are replicated. Simulations demonstrate the BC MLE is less biased than the MLE. Additionally, simulations demonstrate that exact methods tend to give better confidence interval coverage and goodness-of-fit tests with lower type I error than the asymptotic methods. Additional advantages of using exact methods are also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
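SLDAssay's exact machinery is not reproduced here, but the single-hit Poisson model underlying SLD assays, where a well at dilution d is positive with probability 1 - exp(-c*d), admits a simple grid-search MLE for the concentration c; the assay data below are invented:

```python
import math

# Grid-search MLE for target-cell concentration under the single-hit
# Poisson model.  Dilutions, replicate counts and positives are made up.
dilutions = [1.0, 0.2, 0.04]        # relative cell input per well
wells     = [12, 12, 12]            # replicates at each dilution
positive  = [11, 6, 2]              # positive wells observed

def loglik(c):
    """Binomial log-likelihood; log(1 - p) simplifies to -c*d."""
    ll = 0.0
    for d, n, k in zip(dilutions, wells, positive):
        p = 1.0 - math.exp(-c * d)
        ll += k * math.log(p) + (n - k) * (-c * d)
    return ll

grid = [0.01 * i for i in range(1, 1001)]       # c in (0, 10]
c_mle = max(grid, key=loglik)
```

SLDAssay additionally supplies exact confidence intervals, goodness-of-fit p-values, and the bias-corrected MLE, none of which this sketch attempts.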

  1. Nonparametric additive regression for repeatedly measured data

    KAUST Repository

    Carroll, R. J.

    2009-05-20

    We develop an easily computed smooth backfitting algorithm for additive model fitting in repeated measures problems. Our methodology easily copes with various settings, such as when some covariates are the same over repeated response measurements. We allow for a working covariance matrix for the regression errors, showing that our method is most efficient when the correct covariance matrix is used. The component functions achieve the known asymptotic variance lower bound for the scalar argument case. Smooth backfitting also leads directly to design-independent biases in the local linear case. Simulations show our estimator has smaller variance than the usual kernel estimator. This is also illustrated by an example from nutritional epidemiology. © 2009 Biometrika Trust.
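Ordinary backfitting, of which the paper's smooth backfitting for repeated measures is a refined variant, can be sketched with a crude running-mean smoother; the data and smoother are illustrative only:

```python
import numpy as np

# Backfitting sketch for y = f1(x1) + f2(x2): alternately smooth the
# partial residuals against each covariate until the fits settle.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 1.0, 60)
x2 = rng.uniform(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x1) + (x2 - 0.5) ** 2

def smooth(x, r, width=5):
    """Running-mean smoother of residual r against sorted x."""
    order = np.argsort(x)
    rs = r[order]
    out = np.empty_like(r)
    for i in range(len(rs)):
        lo, hi = max(0, i - width), min(len(rs), i + width + 1)
        out[order[i]] = rs[lo:hi].mean()
    return out

f1 = np.zeros_like(y)
f2 = np.zeros_like(y)
for _ in range(20):                      # backfitting sweeps
    f1 = smooth(x1, y - y.mean() - f2)
    f1 -= f1.mean()                      # centre for identifiability
    f2 = smooth(x2, y - y.mean() - f1)
    f2 -= f2.mean()

fitted = y.mean() + f1 + f2
mse = float(((y - fitted) ** 2).mean())
```

Smooth backfitting replaces this ad hoc smoother with kernel weights constructed so that, as the paper shows, the local linear version attains design-independent biases.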

  2. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  3. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
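The objective that the simplex formulation optimizes is the pinball (check) loss; minimizing it over a constant reproduces the sample quantile, which a small grid search can verify. The data here are toy values, not the wind-power set:

```python
import numpy as np

# Pinball loss: the quantile-regression objective that the LP minimises.
tau = 0.9
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])

def pinball(q, y, tau):
    r = y - q
    return float(np.where(r >= 0, tau * r, (tau - 1.0) * r).sum())

# Minimising over constants recovers the tau-quantile (here, the
# minimiser set is the flat interval [9, 10] for tau = 0.9).
grid = np.linspace(0.0, 11.0, 1101)
q_hat = grid[np.argmin([pinball(q, y, tau) for q in grid])]
```

In the full regression problem q is replaced by a linear (here, spline-based) predictor, and the piecewise-linear loss is exactly what makes the simplex method, and its time-adaptive updates, applicable.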

  4. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.
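The "nightmare of the first kind", coefficients jumping when a descriptor enters the model, is easy to reproduce with two nearly collinear synthetic descriptors:

```python
import numpy as np

# Synthetic descriptors: x2 is nearly collinear with x1, so adding it
# to the regression shifts the coefficient of x1 substantially.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = x1 + np.array([0.1, -0.1, 0.1, -0.1, 0.1, -0.1])
y = 1.0 + 2.0 * x1 + 0.5 * x2            # exact two-descriptor model

X_small = np.column_stack([np.ones_like(x1), x1])
X_big = np.column_stack([np.ones_like(x1), x1, x2])
b_small, *_ = np.linalg.lstsq(X_small, y, rcond=None)
b_big, *_ = np.linalg.lstsq(X_big, y, rcond=None)

shift = abs(b_small[1] - b_big[1])       # x1's slope jumps when x2 enters
```

With x1 alone, its coefficient absorbs most of x2's contribution (about 2.49); once x2 enters, it drops back to 2.0. Retro-regression is designed to make such coefficient trajectories interpretable rather than erratic.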

  5. Neutron scattering study of dilute supercritical solutions

    International Nuclear Information System (INIS)

    Cochran, H.D.; Wignall, G.D.; Shah, V.M.; Londono, J.D.; Bienkowski, P.R.

    1994-01-01

    Dilute solutions in supercritical solvents exhibit interesting microstructures that are related to their dramatic macroscopic behavior. In typical attractive solutions, solutes are believed to be surrounded by clusters of solvent molecules, and solute molecules are believed to congregate in the vicinity of one another. Repulsive solutions, on the other hand, exhibit a local region of reduced solvent density around the solute with solute-solute congregation. Such microstructures influence solubility, partial molar volume, reaction kinetics, and many other properties. We have undertaken to observe these interesting microstructures directly by neutron scattering experiments on dilute noble gas systems including Ar. The three partial structure factors for such systems and the corresponding pair correlation functions can be determined by using the isotope substitution technique. The systems studied are uniquely suited for our objectives because of the large coherent neutron scattering length of the isotope 36Ar and because of the accurate potential energy functions that are available for use in molecular simulations and theoretical calculations to be compared with the scattering results. We will describe our experiment, the unique apparatus we have built for it, and the neutron scattering results from our initial allocations of beam time. We will also describe planned scattering experiments to follow those with noble gases, including study of long-chain molecules in supercritical solvents. Such studies will involve hydrocarbon mixtures with and without deuteration to provide contrast

  6. The Statistical Mechanics of Dilute, Disordered Systems

    Science.gov (United States)

    Blackburn, Roger Michael

    Available from UMI in association with The British Library. Requires signed TDF. A graph partitioning problem with variable inter-partition costs is studied by exploiting its mapping on to the Ashkin-Teller spin glass. The cavity method is used to derive the TAP equations and free energy for both extensively connected and dilute systems. Unlike Ising and Potts spin glasses, the self-consistent equation for the distribution of effective fields does not have a solution solely made up of delta functions. Numerical integration is used to find the stable solution, from which the ground state energy is calculated. Simulated annealing is used to test the results. The retrieving activity distribution for networks of Boolean functions trained as associative memories for optimal capacity is derived. For infinite networks, outputs are shown to be frozen, in contrast to dilute asymmetric networks trained with the Hebb rule. For finite networks, a steady leaking to the non-retrieving attractor is demonstrated. Simulations of quenched networks are reported which show a departure from this picture: some configurations remain frozen for all time, while others follow cycles of small periods. An estimate of the critical capacity from the simulations is found to be in broad agreement with recent analytical results. The existing theory is extended to include noise on recall, and the behaviour is found to be robust to noise up to order 1/c^2 for networks with connectivity c.

  7. Universal water-dilutable inhibited protective lubricants

    International Nuclear Information System (INIS)

    Mamtseva, M.V.; Kardash, N.V.; Latynina, M.B.

    1993-01-01

    In the interest of environmental protection, improvement of working conditions, and reduced fire hazard in production operations, water-based protective lubricants are now available in a wide assortment, and the production volume has increased greatly. The term water-dilutable inhibited protective lubricants (WDIPL) means water-soluble, water-emulsifiable, or water-dispersible products with the dual function of reducing friction and wear and protecting metal surfaces against corrosion for specified periods of time. According to the standard Unified System of Protection Against Corrosion and Aging (GOST 9.103-78), WDIPLs are classed as products for the temporary corrosion protection of metals and end-items. In the general class of WDIPLs one can identify water-dilutable combination corrosion inhibitors, film-forming inhibited petroleum compositions (FIPC-d), detergent-preservative fluids, operational-preservative lubricating-cooling process compounds (ICPC), and, finally, universal multifunctional products. Combined corrosion inhibitors may consist of water-soluble organic and inorganic compounds; water/oil and oil-soluble surfactants - corrosion inhibitors of the chemisorption type or donor and/or acceptor types; shielding inhibitors of the adsorption type; and fast-acting water-displacing components. 23 refs

  8. Capsize of polarization in dilute photonic crystals.

    Science.gov (United States)

    Gevorkian, Zhyrair; Hakhoumian, Arsen; Gasparian, Vladimir; Cuevas, Emilio

    2017-11-29

    We investigate, experimentally and theoretically, polarization rotation effects in dilute photonic crystals with transverse permittivity inhomogeneity perpendicular to the traveling direction of waves. A capsize, namely a drastic change of polarization to the perpendicular direction, is observed in a one-dimensional photonic crystal in the frequency range 10-140 GHz. To gain more insight into the rotational mechanism, we have developed a theoretical model of a dilute photonic crystal, based on Maxwell's equations with a spatially dependent two-dimensional inhomogeneous dielectric permittivity. We show that the polarization rotation can be explained by an optical splitting parameter appearing naturally in Maxwell's equations for the magnetic or electric field components. This parameter is an optical analogue of the Rashba-like spin-orbit interaction parameter present in quantum waves: it introduces a correction to the band structure of the two-dimensional Bloch states, creates a dynamical phase shift between the waves propagating in the orthogonal directions, and finally leads to capsizing of the initial polarization. Excellent agreement between theory and experiment is found.

  9. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of Quantile Regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues of model validity and diagnostic tools. Each methodological aspect is explored and

  10. Empirical Comparison of Publication Bias Tests in Meta-Analysis.

    Science.gov (United States)

    Lin, Lifeng; Chu, Haitao; Murad, Mohammad Hassan; Hong, Chuan; Qu, Zhiyong; Cole, Stephen R; Chen, Yong

    2018-04-16

    Decision makers rely on meta-analytic estimates to trade off benefits and harms. Publication bias impairs the validity and generalizability of such estimates. The performance of various statistical tests for publication bias has been largely compared using simulation studies and has not been systematically evaluated in empirical data. This study compares seven commonly used publication bias tests (i.e., Begg's rank test, trim-and-fill, Egger's, Tang's, Macaskill's, Deeks', and Peters' regression tests) based on 28,655 meta-analyses available in the Cochrane Library. Egger's regression test detected publication bias more frequently than other tests (15.7% in meta-analyses of binary outcomes and 13.5% in meta-analyses of non-binary outcomes). The proportion of statistically significant publication bias tests was greater for larger meta-analyses, especially for Begg's rank test and the trim-and-fill method. The agreement among Tang's, Macaskill's, Deeks', and Peters' regression tests for binary outcomes was moderately strong (most κ's were around 0.6). Tang's and Deeks' tests had fairly similar performance (κ > 0.9). The agreement among Begg's rank test, the trim-and-fill method, and Egger's regression test was weak or moderate (κ < 0.5). Given the relatively low agreement between many publication bias tests, meta-analysts should not rely on a single test and may apply multiple tests with various assumptions. Non-statistical approaches to evaluating publication bias (e.g., searching clinical trials registries, records of drug approving agencies, and scientific conference proceedings) remain essential.
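
    As a sketch of the most frequently triggered test above, Egger's regression test regresses the standardized effect size on precision and checks whether the intercept differs from zero. The following is a minimal illustration on simulated (unbiased) meta-analysis data, not the Cochrane data used in the study:

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test: regress standardized effects on precision.
    An intercept significantly different from zero suggests small-study
    effects, often interpreted as possible publication bias."""
    z = effects / ses            # standardized effect sizes
    precision = 1.0 / ses
    fit = stats.linregress(precision, z)
    # t-test for the intercept of the fitted line
    n = len(effects)
    resid = z - (fit.intercept + fit.slope * precision)
    s2 = np.sum(resid ** 2) / (n - 2)
    sxx = np.sum((precision - precision.mean()) ** 2)
    se_intercept = np.sqrt(s2 * (1.0 / n + precision.mean() ** 2 / sxx))
    t = fit.intercept / se_intercept
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return fit.intercept, fit.slope, p

# Simulated unbiased meta-analysis: the intercept should be near zero and
# the slope near the common true effect (0.3 here).
rng = np.random.default_rng(1)
ses = rng.uniform(0.05, 0.5, size=40)
effects = 0.3 + rng.normal(scale=ses)
intercept, slope, p = eggers_test(effects, ses)
print(round(intercept, 3), round(slope, 3), round(p, 3))
```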

  11. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  12. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13 H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100

  13. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

    2017-01-01

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13 H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100

  14. Logistic Regression: Concept and Application

    Science.gov (United States)

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
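
    A minimal sketch of the idea, with hypothetical two-group data: binary logistic regression models the log-odds of group membership as a linear function of the predictors, and the fitted probabilities classify individuals into the two groups.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Fit binary logistic regression by gradient descent (intercept included)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

# Two hypothetical groups separated along one predictor.
rng = np.random.default_rng(0)
x0 = rng.normal(loc=-1.0, size=(100, 1))
x1 = rng.normal(loc=+1.0, size=(100, 1))
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(100), np.ones(100)])
w = fit_logistic(X, y)
prob = 1.0 / (1.0 + np.exp(-(w[0] + X[:, 0] * w[1])))
accuracy = ((prob > 0.5) == y).mean()           # classification by fitted odds
print(w, accuracy)
```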

  15. Laminar Flame Velocity and Temperature Exponent of Diluted DME-Air Mixture

    Science.gov (United States)

    Naseer Mohammed, Abdul; Anwar, Muzammil; Juhany, Khalid A.; Mohammad, Akram

    2017-03-01

    In this paper, the laminar flame velocity and temperature exponent of diluted dimethyl ether (DME)-air mixtures are reported. Laminar premixed mixtures of DME-air with volumetric dilutions of carbon dioxide (CO2) and nitrogen (N2) are considered. Experiments were conducted using a preheated mesoscale high aspect-ratio diverging channel with inlet dimensions of 25 mm × 2 mm. In this method, flame velocities are extracted from planar flames stabilized near adiabatic conditions inside the channel. The flame velocities are then plotted against the ratio of the mixture temperature to the initial reference temperature. A non-linear power-law regression is found suitable. This regression analysis gives the laminar flame velocity at the initial reference temperature and the temperature exponent. A decrease in the laminar flame velocity and an increase in the temperature exponent are observed for CO2- and N2-diluted mixtures. The addition of CO2 has a more profound influence than the addition of N2 on both flame velocity and temperature exponent. Numerical predictions for similar mixtures using a detailed reaction mechanism were obtained. The computational mechanism predicts higher magnitudes of laminar flame velocity and smaller magnitudes of temperature exponent compared to the experimental data.
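
    The regression step described above can be sketched as follows, fitting U = U0·(T/T0)^α by nonlinear least squares; the numbers here are invented for illustration and are not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(T_ratio, U0, alpha):
    """U = U0 * (T/T0)**alpha: flame velocity vs reduced temperature."""
    return U0 * T_ratio ** alpha

# Hypothetical data: true U0 = 40 cm/s, temperature exponent alpha = 1.7.
rng = np.random.default_rng(2)
T_ratio = np.linspace(1.0, 1.5, 20)
U = 40.0 * T_ratio ** 1.7 + rng.normal(scale=0.5, size=T_ratio.size)

# The fit returns the flame velocity at the reference temperature (U0)
# and the temperature exponent (alpha).
(U0, alpha), _ = curve_fit(power_law, T_ratio, U, p0=[30.0, 1.0])
print(round(U0, 1), round(alpha, 2))
```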

  16. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
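
    A small simulation makes the contrast concrete. With measurement error in both variables, the OLS slope is attenuated toward zero (the regression dilution effect), while geometric mean and orthogonal regression respond differently; the data and error magnitudes below are hypothetical:

```python
import numpy as np

def regression_slopes(x, y):
    """OLS, geometric-mean, and orthogonal regression slopes for paired data."""
    sxx = np.var(x)
    syy = np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    ols = sxy / sxx                                  # errors in y only
    gmr = np.sign(sxy) * np.sqrt(syy / sxx)          # geometric mean regression
    # Orthogonal regression: minimizes perpendicular distances.
    ortho = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return ols, gmr, ortho

# Both variables carry measurement error; the true slope is 2.
rng = np.random.default_rng(3)
true_x = rng.normal(size=2000)
x = true_x + rng.normal(scale=0.5, size=2000)    # error in the "independent" variable
y = 2.0 * true_x + rng.normal(scale=0.5, size=2000)
ols, gmr, ortho = regression_slopes(x, y)
print(round(ols, 2), round(gmr, 2), round(ortho, 2))  # OLS is attenuated
```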

  18. Tumor regression patterns in retinoblastoma

    International Nuclear Information System (INIS)

    Zafar, S.N.; Siddique, S.N.; Zaheer, N.

    2016-01-01

    To observe the types of tumor regression after treatment, and identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record response to the treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Out of 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 tumors belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

  19. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    Our experiment implies that scholars using regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally.

  20. Alpha-clustering in dilute nucleonic sea

    International Nuclear Information System (INIS)

    Tohsaki, Akihiro

    1999-01-01

    α-clusters are expected to emerge here and there in a nucleonic sea, owing to the energetic benefit, as its density is diluted. We propose a precise treatment to elucidate the α-clustering process in the nucleonic sea after the breakdown of uniformity. To do this, an infinite number of nucleons is considered, taking account of both the Pauli exclusion principle and effective internucleon forces. This method is called a microscopic approach, and it has been successful for α-cluster structure in light nuclei. In particular, we shed light on overcoming the difficulties of a static model within the microscopic framework. This improvement is verified by using the empirical values in Weizsäcker's mass formula. (author)

  1. Water Metabolism of Walruses by Isotope Dilution

    DEFF Research Database (Denmark)

    Acquarone, M.; Born, E. W.; Chwalibog, A.

    In August 2000, the hydrogen isotope dilution method was used on 7 adult male Atlantic walruses (Odobenus rosmarus rosmarus) (weight: 1197±148 kg, mean±SD, range 1013-1508 kg) at a terrestrial haul-out in Northeastern Greenland to determine their body water pool sizes and body water turnover rates. During immobilization by use of etorphine HCl (reversed with diprenorphine HCl), a first blood sample was taken to measure background isotope levels. The animals were then enriched with deuterium oxide by infusion into the epidural vein. During recovery, while the animals were still on the beach, blood was sampled via an epidural catheter, at regular intervals, for up to seven hours after the initial enrichment to assess isotope equilibration in the body water pools. Five individuals returned to the haul-out after feeding trips of varying duration (158±86 hr, 44-287 hr) where they were immobilized again......

  2. Tunnel backfill erosion by dilute water

    Energy Technology Data Exchange (ETDEWEB)

    Olin, M. [VTT Technical Research Centre of Finland, Espoo (Finland)

    2014-03-15

    The goal was to estimate smectite release from tunnel backfill due to a dilute groundwater pulse during post-glacial conditions. The plan was to apply VTT's two different implementations (BESW_D and BESW_S) of the well-known model of Neretnieks et al. (2009). It proved difficult to produce repeatable results using this model in the COMSOL 4.2 environment; therefore, a semi-analytical approximate approach was applied, which made it possible to take into account both the different geometry and the different smectite content of the tunnel backfill as compared to the buffer case. The results are quite similar to the buffer results, owing to the decreasing effect of the smaller smectite content and the increasing effect of the larger radius. (orig.)

  3. Ultrafast magnetization dynamics in diluted magnetic semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Morandi, O [INRIA Nancy Grand-Est and Institut de Recherche en Mathematiques Avancees, 7 rue Rene Descartes, F-67084 Strasbourg (France); Hervieux, P-A; Manfredi, G [Institut de Physique et Chimie des Materiaux de Strasbourg, 23 rue du Loess, F-67037 Strasbourg (France)], E-mail: morandi@dipmat.univpm.it

    2009-07-15

    We present a dynamical model that successfully explains the observed time evolution of the magnetization in diluted magnetic semiconductor quantum wells after weak laser excitation. Based on the pseudo-fermion formalism and a second-order many-particle expansion of the exact p-d exchange interaction, our approach goes beyond the usual mean-field approximation. It includes both the sub-picosecond demagnetization dynamics and the slower relaxation processes that restore the initial ferromagnetic order in a nanosecond timescale. In agreement with experimental results, our numerical simulations show that, depending on the value of the initial lattice temperature, a subsequent enhancement of the total magnetization may be observed within the timescale of a few hundred picoseconds.

  4. Mechanisms of urine concentration and dilution (1961)

    International Nuclear Information System (INIS)

    Morel, F.; Guinnebault, M.

    1961-01-01

    This paper is devoted to the analysis of a problem in the field of renal physiology which has seen many new developments during the course of the last few years. The following are treated successively: a) the data obtained from measurements of free-water clearance and their interpretation; b) the data provided by nephron morphology and the comparative anatomy of the kidney; c) the data relative to the existence of an intrarenal osmotic gradient; d) the principle of concentration multiplication by a counter-current technique; e) the present-day theory of counter-current concentration of urine; and f) the physiological control of urine dilution and concentration mechanisms. Lastly, the advantages of the modern theory and the unknown factors which remain are discussed. (authors) [fr

  5. Tunnel backfill erosion by dilute water

    International Nuclear Information System (INIS)

    Olin, M.

    2014-03-01

    The goal was to estimate smectite release from tunnel backfill due to a dilute groundwater pulse during post-glacial conditions. The plan was to apply VTT's two different implementations (BESW_D and BESW_S) of the well-known model of Neretnieks et al. (2009). It proved difficult to produce repeatable results using this model in the COMSOL 4.2 environment; therefore, a semi-analytical approximate approach was applied, which made it possible to take into account both the different geometry and the different smectite content of the tunnel backfill as compared to the buffer case. The results are quite similar to the buffer results, owing to the decreasing effect of the smaller smectite content and the increasing effect of the larger radius. (orig.)

  6. Critical exponents for diluted resistor networks.

    Science.gov (United States)

    Stenull, O; Janssen, H K; Oerding, K

    1999-05-01

    An approach by Stephen [Phys. Rev. B 17, 4444 (1978)] is used to investigate the critical properties of randomly diluted resistor networks near the percolation threshold by means of renormalized field theory. We reformulate an existing field theory by Harris and Lubensky [Phys. Rev. B 35, 6964 (1987)]. By a decomposition of the principal Feynman diagrams, we obtain diagrams which again can be interpreted as resistor networks. This interpretation provides an alternative way of evaluating the Feynman diagrams for random resistor networks. We calculate the resistance crossover exponent φ up to second order in ε = 6 − d, where d is the spatial dimension. Our result φ = 1 + ε/42 + 4ε²/3087 verifies a previous calculation by Lubensky and Wang, which itself was based on the Potts-model formulation of the random resistor network.
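
    Evaluated numerically from the epsilon expansion, with ε = 6 − d, the crossover exponent in three dimensions (ε = 3) is:

```python
# Crossover exponent phi = 1 + eps/42 + 4*eps**2/3087, evaluated at d = 3.
eps = 6 - 3
phi = 1 + eps / 42 + 4 * eps**2 / 3087
print(round(phi, 3))  # → 1.083
```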

  7. A horizontal dilution refrigerator for polarized target

    International Nuclear Information System (INIS)

    Isagawa, S.; Ishimoto, S.; Masaike, A.; Morimoto, K.

    1978-01-01

    A horizontal dilution refrigerator was constructed for use with a spin-frozen target and a deuteron polarized target. A high cooling power at high temperature, such as 3.7 mW at 400 mK, serves to overcome the microwave heat load involved in polarizing the nuclear spins in the target material. The cooling power at 50 mK was 50 μW, which is sufficient to hold a high nuclear polarization for a long time. The lowest temperature reached was 26 mK. The refrigerator has rather simple heat exchangers: a long stainless steel double-tube heat exchanger and two coaxial-type heat exchangers with sintered copper. The mixing chamber is made of polytetrafluoroethylene (PTFE) and is demountable so that the target material can be easily put into it. (Auth.)

  8. Cost effectiveness of dilute chemical decontamination

    International Nuclear Information System (INIS)

    Le Surf, J.E.; Weyman, G.D.

    1983-01-01

    The origin and basic principles of the dilute chemical decontamination (DCD) concept are described and illustrated by reference to the CAN-DECON process. The estimated dose savings from actual applications of the process at several reactors are presented and discussed. Two methods of performing a cost/benefit appraisal are described and discussed. This methodology requires more study by the nuclear industry, including collection by station staff of relevant data on which future cost/benefit appraisals may be based. Finally, three illustrative cases are examined to show the break-even point and the potential savings achievable by DCD with different initial radiation fields and different amounts of work to be done. The overall conclusion is that there are many situations in which DCD is desirable to reduce the radiation exposure of workers, to save costs for the station, and to ease the performance of maintenance and repair work on reactor systems

  9. Guideline on Isotope Dilution Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Gaffney, Amy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-19

    Isotope dilution mass spectrometry is used to determine the concentration of an element of interest in a bulk sample. It is a destructive analysis technique that is applicable to a wide range of analytes and bulk sample types. With this method, a known amount of a rare isotope, or 'spike', of the element of interest is added to a known amount of sample. The element of interest is chemically purified from the bulk sample, the isotope ratio of the spiked sample is measured by mass spectrometry, and the concentration of the element of interest is calculated from this result. This method is widely used, although the mass spectrometer required for this analysis may be fairly expensive.
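
    The final calculation step can be sketched from the isotope mass balance: for a two-isotope element with reference isotope A and spike isotope B, the amount of analyte follows from the spike amount and the three isotope ratios. The compositions below are invented for a self-consistency check, not real spike certificates:

```python
def idms_amount(n_spike, b_spike, b_sample, R_spike, R_sample, R_mix):
    """Moles of analyte from isotope dilution.

    R_* are A/B isotope ratios of spike, sample, and measured mixture;
    b_* are atom fractions of the spike isotope B in spike and sample.
    Follows from the mass balance of isotopes A and B in the mixture."""
    return n_spike * (b_spike / b_sample) * (R_mix - R_spike) / (R_sample - R_mix)

# Self-consistency check with made-up isotopic compositions.
n_x, n_s = 2.0, 1.0                  # true mol of analyte, mol of spike added
a_s, b_s = 0.02, 0.98                # spike atom fractions of isotopes A, B
a_x, b_x = 0.90, 0.10                # sample atom fractions of isotopes A, B
R_s, R_x = a_s / b_s, a_x / b_x      # isotope ratios A/B of spike and sample
R_m = (n_x * a_x + n_s * a_s) / (n_x * b_x + n_s * b_s)   # mixture ratio
recovered = idms_amount(n_s, b_s, b_x, R_s, R_x, R_m)
print(recovered)  # recovers n_x = 2.0
```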

  10. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
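
    A minimal numerical sketch of multiple linear regression (hypothetical data, two predictors): the coefficients are obtained by least squares, and each one describes the effect of its predictor with the other held fixed.

```python
import numpy as np

# Hypothetical outcome depending on two predictors; the coefficients are
# recovered by least squares via np.linalg.lstsq.
rng = np.random.default_rng(4)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), x1, x2])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))                     # close to [1.0, 2.0, -0.5]
```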

  11. Logic regression and its extensions.

    Science.gov (United States)

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
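
    The search idea can be sketched in a toy form: enumerate simple Boolean combinations of binary predictors and keep the one that best explains a binary outcome. Real logic regression scores candidate logic trees inside a generalized linear model and uses stochastic search rather than the exhaustive agreement score used here:

```python
import numpy as np
from itertools import combinations

def best_logic_term(X, y):
    """Exhaustive search over pairwise AND/OR terms of binary predictors,
    scored by agreement with a binary outcome (a toy stand-in for the
    model-based scoring used in logic regression)."""
    best = None
    for i, j in combinations(range(X.shape[1]), 2):
        for name, term in (("AND", X[:, i] & X[:, j]), ("OR", X[:, i] | X[:, j])):
            score = (term == y).mean()
            if best is None or score > best[0]:
                best = (score, f"X{i} {name} X{j}")
    return best

# Outcome driven by the interaction X0 AND X2, with 5% label noise.
rng = np.random.default_rng(5)
X = rng.integers(0, 2, size=(500, 4))
noise = rng.random(500) < 0.05
y = (X[:, 0] & X[:, 2]) ^ noise
score, term = best_logic_term(X, y)
print(score, term)  # the X0 AND X2 interaction is recovered
```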

  12. Asymptotic Distribution of Eigenvalues of Weakly Dilute Wishart Matrices

    Energy Technology Data Exchange (ETDEWEB)

    Khorunzhy, A. [Institute for Low Temperature Physics (Ukraine)], E-mail: khorunjy@ilt.kharkov.ua; Rodgers, G. J. [Brunel University, Uxbridge, Department of Mathematics and Statistics (United Kingdom)], E-mail: g.j.rodgers@brunel.ac.uk

    2000-03-15

    We study the eigenvalue distribution of large random matrices that are randomly diluted. We consider two random matrix ensembles that in the pure (nondilute) case have a limiting eigenvalue distribution with a singular component at the origin. These include the Wishart random matrix ensemble and Gaussian random matrices with correlated entries. Our results show that the singularity in the eigenvalue distribution is rather unstable under dilution and that even weak dilution destroys it.

  13. Detection of bias in animal model pedigree indices of heifers

    Directory of Open Access Journals (Sweden)

    M. LIDAUER

    2008-12-01

    The objective of the study was to test whether the pedigree indices (PI) of heifers are biased, and if so, whether the magnitude of the bias varies in different groups of heifers. Therefore, two animal model evaluations with two different data sets were computed. Data with all the records from the national evaluation in December 1994 were used to obtain estimated breeding values (EBV) for 305-day milk yield and protein yield. In the second evaluation, the PIs were estimated for cows calving for the first time in 1993 by excluding all their production records from the data. Three different statistics, a simple t-test, the linear regression of EBV on PI, and the polynomial regression of the difference in the predictions (EBV-PI) on PI, were computed for three groups of first-parity Ayrshire cows: daughters of proven sires, daughters of young sires, and daughters of bull dam candidates. A practically relevant bias was found only in the PIs for the daughters of young sires. On average their PIs were biased upwards by 0.20 standard deviations (78.8 kg) for the milk yield and by 0.21 standard deviations (2.2 kg) for the protein yield. The polynomial regression analysis showed that the magnitude of the bias in the PIs changed somewhat with the size of the PIs.
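
    The first two checks above can be sketched on simulated data: for unbiased pedigree indices, the mean of EBV − PI should be near zero (t-test) and the regression of EBV on PI should have slope near one. The numbers below are invented, not the study's data:

```python
import numpy as np
from scipy import stats

# Simulated unbiased case: later EBVs scatter around the pedigree indices
# with no systematic shift, so the mean difference is ~0 and the slope ~1.
rng = np.random.default_rng(6)
pi = rng.normal(scale=100.0, size=1000)           # pedigree indices, kg
ebv = pi + rng.normal(scale=40.0, size=1000)      # later EBVs, no bias added

t, p = stats.ttest_1samp(ebv - pi, 0.0)           # simple t-test on the mean
slope = stats.linregress(pi, ebv).slope           # regression of EBV on PI
print(round((ebv - pi).mean(), 1), round(slope, 2))
```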

  14. Comparison between Linear and Nonlinear Regression in a Laboratory Heat Transfer Experiment

    Science.gov (United States)

    Gonçalves, Carine Messias; Schwaab, Marcio; Pinto, José Carlos

    2013-01-01

In order to interpret laboratory experimental data, undergraduate students are accustomed to performing linear regression with linearized versions of nonlinear models. However, the use of linearized models can lead to statistically biased parameter estimates. Even so, it is not an easy task to introduce nonlinear regression and show the students…

  15. 21 CFR 866.2500 - Microtiter diluting and dispensing device.

    Science.gov (United States)

    2010-04-01

    ... SERVICES (CONTINUED) MEDICAL DEVICES IMMUNOLOGY AND MICROBIOLOGY DEVICES Microbiology Devices § 866.2500... a mechanical device intended for medical purposes to dispense or serially dilute very small...

  16. Dilute acid/metal salt hydrolysis of lignocellulosics

    Science.gov (United States)

    Nguyen, Quang A.; Tucker, Melvin P.

    2002-01-01

    A modified dilute acid method of hydrolyzing the cellulose and hemicellulose in lignocellulosic material under conditions to obtain higher overall fermentable sugar yields than is obtainable using dilute acid alone, comprising: impregnating a lignocellulosic feedstock with a mixture of an amount of aqueous solution of a dilute acid catalyst and a metal salt catalyst sufficient to provide higher overall fermentable sugar yields than is obtainable when hydrolyzing with dilute acid alone; loading the impregnated lignocellulosic feedstock into a reactor and heating for a sufficient period of time to hydrolyze substantially all of the hemicellulose and greater than 45% of the cellulose to water soluble sugars; and recovering the water soluble sugars.

  17. Exchange bias theory

    International Nuclear Information System (INIS)

    Kiwi, Miguel

    2001-01-01

    Research on the exchange bias (EB) phenomenon has witnessed a flurry of activity during recent years, which stems from its use in magnetic sensors and as stabilizers in magnetic reading heads. EB was discovered in 1956 but it attracted only limited attention until these applications, closely related to giant magnetoresistance, were developed during the last decade. In this review, I initially give a short introduction, listing the most salient experimental results and what is required from an EB theory. Next, I indicate some of the obstacles in the road towards a satisfactory understanding of the phenomenon. The main body of the text reviews and critically discusses the activity that has flourished, mainly during the last 5 years, in the theoretical front. Finally, an evaluation of the progress made, and a critical assessment as to where we stand nowadays along the road to a satisfactory theory, is presented

  18. Bias modification training can alter approach bias and chocolate consumption.

    Science.gov (United States)

    Schumacher, Sophie E; Kemps, Eva; Tiggemann, Marika

    2016-01-01

    Recent evidence has demonstrated that bias modification training has potential to reduce cognitive biases for attractive targets and affect health behaviours. The present study investigated whether cognitive bias modification training could be applied to reduce approach bias for chocolate and affect subsequent chocolate consumption. A sample of 120 women (18-27 years) were randomly assigned to an approach-chocolate condition or avoid-chocolate condition, in which they were trained to approach or avoid pictorial chocolate stimuli, respectively. Training had the predicted effect on approach bias, such that participants trained to approach chocolate demonstrated an increased approach bias to chocolate stimuli whereas participants trained to avoid such stimuli showed a reduced bias. Further, participants trained to avoid chocolate ate significantly less of a chocolate muffin in a subsequent taste test than participants trained to approach chocolate. Theoretically, results provide support for the dual process model's conceptualisation of consumption as being driven by implicit processes such as approach bias. In practice, approach bias modification may be a useful component of interventions designed to curb the consumption of unhealthy foods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Abstract Expression Grammar Symbolic Regression

    Science.gov (United States)

    Korns, Michael F.

This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial-strength symbolic regression systems.

  20. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying; Carroll, Raymond J.

    2009-01-01

    . The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a

  1. Verifying mixing in dilution tunnels How to ensure cookstove emissions samples are unbiased

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Daniel L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rapp, Vi H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Caubel, Julien J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Sharon S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gadgil, Ashok J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-12-15

A well-mixed diluted sample is essential for unbiased measurement of cookstove emissions. Most cookstove testing labs employ a dilution tunnel, also referred to as a “duct,” to mix clean dilution air with cookstove emissions before sampling. It is important that the emissions be well-mixed and unbiased at the sampling port so that instruments can take representative samples of the emission plume. Some groups have employed mixing baffles to ensure the gaseous and aerosol emissions from cookstoves are well-mixed before reaching the sampling location [2, 4]. The goal of these baffles is to dilute and mix the emissions stream with the room air entering the fume hood by creating a local zone of high turbulence. However, potential drawbacks of mixing baffles include increased flow resistance (larger blowers needed for the same exhaust flow), nuisance cleaning of baffles as soot collects, and, importantly, the potential for loss of PM2.5 particles on the baffles themselves, thus biasing results. A cookstove emission monitoring system with baffles will collect particles faster than the duct’s walls alone. This is mostly driven by the available surface area for deposition by the processes of Brownian diffusion (through the boundary layer) and turbophoresis (i.e., impaction). The greater the surface area available for diffusive and advection-driven deposition, the greater the particle loss will be at the sampling port. As a layer of larger-particle “fuzz” builds on the mixing baffles, even greater PM2.5 loss could occur. The microstructure of the deposited aerosol will lead to increased rates of particle loss by interception and a tendency for smaller particles to deposit by impaction on small features of the microstructure. If the flow stream could be well-mixed without the need for baffles, these drawbacks could be avoided and the cookstove emissions sampling system would be more robust.

  2. Statistical methods for accurately determining criticality code bias

    International Nuclear Information System (INIS)

    Trumble, E.F.; Kimball, K.D.

    1997-01-01

    A system of statistically treating validation calculations for the purpose of determining computer code bias is provided in this paper. The following statistical treatments are described: weighted regression analysis, lower tolerance limit, lower tolerance band, and lower confidence band. These methods meet the criticality code validation requirements of ANS 8.1. 8 refs., 5 figs., 4 tabs
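One of the listed treatments, the one-sided lower tolerance limit, can be sketched from a set of validation k-eff results. The k-factor below uses the classic normal approximation to the one-sided tolerance factor; production validation under ANS 8.1 would typically use exact noncentral-t factors and the regression treatments also listed. All k-eff values here are hypothetical.

```python
from statistics import NormalDist, mean, stdev

def lower_tolerance_limit(keffs, p=0.95, conf=0.95):
    """Lower tolerance limit covering proportion p with confidence conf,
    using the classic normal approximation for the one-sided k-factor
    (exact practice would use the noncentral-t distribution)."""
    n = len(keffs)
    z_p = NormalDist().inv_cdf(p)
    z_c = NormalDist().inv_cdf(conf)
    a = 1 - z_c**2 / (2 * (n - 1))
    b = z_p**2 - z_c**2 / n
    k = (z_p + (z_p**2 - a * b) ** 0.5) / a
    return mean(keffs) - k * stdev(keffs)

# Hypothetical validation suite: calculated k-eff for known-critical benchmarks.
keffs = [0.994, 0.998, 1.001, 0.991, 0.996, 0.999, 0.993, 0.997,
         1.002, 0.995, 0.990, 0.998, 0.996, 1.000, 0.992, 0.997]
ltl = lower_tolerance_limit(keffs)
```

The resulting limit lies below every observed k-eff; the code bias is then the amount by which this limit falls short of the expected benchmark value.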

  3. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

Rasch models provide a framework for measurement and modelling latent variables. Having measured a latent variable in a population a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties.... This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables and latent regression models based on the distribution of the score.

  4. Testing Heteroscedasticity in Robust Regression

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

Roč. 1, č. 4 (2011), s. 25-28 ISSN 2045-3345 Grant - others:GA ČR(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords: robust regression * heteroscedasticity * regression quantiles * diagnostics Subject RIV: BB - Applied Statistics, Operational Research http://www.researchjournals.co.uk/documents/Vol4/06%20Kalina.pdf

  5. Regression methods for medical research

    CERN Document Server

    Tai, Bee Choo

    2013-01-01

Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

  6. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

One of the most widely used tools in statistical forecasting, the single equation regression model, is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series, and the autocorrelation patterns of regression disturbances. It also includes six case studies.

  7. Dilute Surfactant Methods for Carbonate Formations

    Energy Technology Data Exchange (ETDEWEB)

    Kishore K. Mohanty

    2006-02-01

    There are many fractured carbonate reservoirs in US (and the world) with light oil. Waterflooding is effective in fractured reservoirs, if the formation is water-wet. Many fractured carbonate reservoirs, however, are mixed-wet and recoveries with conventional methods are low (less than 10%). The process of using dilute anionic surfactants in alkaline solutions has been investigated in this work for oil recovery from fractured oil-wet carbonate reservoirs both experimentally and numerically. This process is a surfactant-aided gravity drainage where surfactant diffuses into the matrix, lowers IFT and contact angle, which decrease capillary pressure and increase oil relative permeability enabling gravity to drain the oil up. Anionic surfactants have been identified which at dilute concentration of 0.05 wt% and optimal salinity can lower the interfacial tension and change the wettability of the calcite surface to intermediate/water-wet condition as well or better than the cationic surfactant DTAB with a West Texas crude oil. The force of adhesion in AFM of oil-wet regions changes after anionic surfactant treatment to values similar to those of water-wet regions. The AFM topography images showed that the oil-wetting material was removed from the surface by the anionic surfactant treatment. Adsorption studies indicate that the extent of adsorption for anionic surfactants on calcite minerals decreases with increase in pH and with decrease in salinity. Surfactant adsorption can be minimized in the presence of Na{sub 2}CO{sub 3}. Laboratory-scale surfactant brine imbibition experiments give high oil recovery (20-42% OOIP in 50 days; up to 60% in 200 days) for initially oil-wet cores through wettability alteration and IFT reduction. Small (<10%) initial gas saturation does not affect significantly the rate of oil recovery in the imbibition process, but larger gas saturation decreases the oil recovery rate. 
As the core permeability decreases, the rate of oil recovery also decreases.

  8. Religious Attitudes and Home Bias

    OpenAIRE

    C. Reggiani; G. Rossini

    2008-01-01

Home bias affects trade in goods, services and financial assets. It is mostly generated by "natural" trade barriers. Among these dividers we may list many behavioral and sociological factors, such as status quo biases and a few kinds of ‘embeddedness’. Unfortunately these factors are difficult to measure. An important part of ‘embeddedness’ may be related to religious attitudes. Is there any relation between economic home bias and religious attitudes at the individual tier? Our aim is to provi...

  9. Cognitive Reflection, Decision Biases, and Response Times

    Directory of Open Access Journals (Sweden)

    Carlos Alos-Ferrer

    2016-09-01

We present novel evidence on decision times and personality traits in standard questions from the decision-making literature where responses are relatively slow (medians around half a minute or above). To this end, we measured decision times in a number of incentivized, framed items (decisions from description) including the Cognitive Reflection Test, two additional questions following the same logic, and a number of classic questions used to study decision biases in probability judgments (base-rate neglect, the conjunction fallacy, and the ratio bias). All questions create a conflict between an intuitive process and more deliberative thinking. For each item, we then created a non-conflict version by either making the intuitive impulse correct (resulting in an alignment question), shutting it down (creating a neutral question), or making it dominant (creating a heuristic question). For CRT questions, the differences in decision times are as predicted by dual-process theories, with alignment and heuristic variants leading to faster responses and neutral questions to slower responses than the original, conflict questions. For decision biases (where responses are slower), evidence is mixed. To explore the possible influence of personality factors on both choices and decision times, we used standard personality scales including the Rational-Experiential Inventory and the Big Five, and used them as controls in regression analysis.

  10. Cognitive Reflection, Decision Biases, and Response Times.

    Science.gov (United States)

    Alós-Ferrer, Carlos; Garagnani, Michele; Hügelschäfer, Sabine

    2016-01-01

    We present novel evidence on response times and personality traits in standard questions from the decision-making literature where responses are relatively slow (medians around half a minute or above). To this end, we measured response times in a number of incentivized, framed items (decisions from description) including the Cognitive Reflection Test, two additional questions following the same logic, and a number of classic questions used to study decision biases in probability judgments (base-rate neglect, the conjunction fallacy, and the ratio bias). All questions create a conflict between an intuitive process and more deliberative thinking. For each item, we then created a non-conflict version by either making the intuitive impulse correct (resulting in an alignment question), shutting it down (creating a neutral question), or making it dominant (creating a heuristic question). For CRT questions, the differences in response times are as predicted by dual-process theories, with alignment and heuristic variants leading to faster responses and neutral questions to slower responses than the original, conflict questions. For decision biases (where responses are slower), evidence is mixed. To explore the possible influence of personality factors on both choices and response times, we used standard personality scales including the Rational-Experiential Inventory and the Big Five, and used them as controls in regression analysis.

  11. Bias in clinical intervention research

    DEFF Research Database (Denmark)

    Gluud, Lise Lotte

    2006-01-01

    Research on bias in clinical trials may help identify some of the reasons why investigators sometimes reach the wrong conclusions about intervention effects. Several quality components for the assessment of bias control have been suggested, but although they seem intrinsically valid, empirical...... evidence is needed to evaluate their effects on the extent and direction of bias. This narrative review summarizes the findings of methodological studies on the influence of bias in clinical trials. A number of methodological studies suggest that lack of adequate randomization in published trial reports...

  12. Impact of multicollinearity on small sample hydrologic regression models

    Science.gov (United States)

    Kroll, Charles N.; Song, Peter

    2013-06-01

Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how best to address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
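The OLS-versus-PCR contrast can be sketched in a small Monte Carlo (a hedged illustration with assumed values, not the study's design): two strongly correlated explanatory variables inflate the variance of the OLS coefficients, while regressing on the leading principal component stabilizes them. Here the true coefficient vector happens to align with that component, the favorable case for PCR.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n=25, rho=0.95, sigma=1.0, reps=2000):
    """Compare the variability of OLS and principal-component regression
    (PCR) coefficient estimates under strong collinearity (toy setup)."""
    beta = np.array([1.0, 1.0])               # true coefficients
    cov = np.array([[1.0, rho], [rho, 1.0]])  # collinear design
    L = np.linalg.cholesky(cov)
    ols_b1, pcr_b1 = [], []
    for _ in range(reps):
        X = rng.standard_normal((n, 2)) @ L.T
        y = X @ beta + rng.normal(0, sigma, n)
        # OLS on both correlated predictors
        b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        ols_b1.append(b_ols[0])
        # PCR: project onto the leading principal component, regress, map back
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        z = X @ Vt[0]                         # scores on the first PC
        g = z @ y / (z @ z)                   # one-dimensional OLS
        b_pcr = g * Vt[0]                     # back to original coordinates
        pcr_b1.append(b_pcr[0])
    return np.var(ols_b1), np.var(pcr_b1)

var_ols, var_pcr = simulate()
```

The small sample size and high correlation mirror the regimes where the abstract reports multicollinearity doing the most damage; dropping the discarded component trades a little bias for a large variance reduction.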

  13. Information environment, behavioral biases, and home bias in analysts’ recommendations

    DEFF Research Database (Denmark)

    Farooq, Omar; Taouss, Mohammed

    2012-01-01

    Can information environment of a firm explain home bias in analysts’ recommendations? Can the extent of agency problems explain optimism difference between foreign and local analysts? This paper answers these questions by documenting the effect of information environment on home bias in analysts’...

  14. Threat bias, not negativity bias, underpins differences in political ideology.

    Science.gov (United States)

    Lilienfeld, Scott O; Latzman, Robert D

    2014-06-01

    Although disparities in political ideology are rooted partly in dispositional differences, Hibbing et al.'s analysis paints with an overly broad brush. Research on the personality correlates of liberal-conservative differences points not to global differences in negativity bias, but to differences in threat bias, probably emanating from differences in fearfulness. This distinction bears implications for etiological research and persuasion efforts.

  15. Effects of Inventory Bias on Landslide Susceptibility Calculations

    Science.gov (United States)

    Stanley, T. A.; Kirschbaum, D. B.

    2017-01-01

    Many landslide inventories are known to be biased, especially inventories for large regions such as Oregon's SLIDO or NASA's Global Landslide Catalog. These biases must affect the results of empirically derived susceptibility models to some degree. We evaluated the strength of the susceptibility model distortion from postulated biases by truncating an unbiased inventory. We generated a synthetic inventory from an existing landslide susceptibility map of Oregon, then removed landslides from this inventory to simulate the effects of reporting biases likely to affect inventories in this region, namely population and infrastructure effects. Logistic regression models were fitted to the modified inventories. Then the process of biasing a susceptibility model was repeated with SLIDO data. We evaluated each susceptibility model with qualitative and quantitative methods. Results suggest that the effects of landslide inventory bias on empirical models should not be ignored, even if those models are, in some cases, useful. We suggest fitting models in well-documented areas and extrapolating across the study region as a possible approach to modeling landslide susceptibility with heavily biased inventories.

  16. Effect of Malmquist bias on correlation studies with IRAS data base

    Science.gov (United States)

    Verter, Frances

    1993-01-01

The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B), as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method invented here, in which each galaxy observation is weighted by its sampling volume. The results of the correlations and regressions are significantly changed in the anticipated sense: the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias removes the nonlinear rise in luminosity that has caused some authors to hypothesize additional components of FIR emission.
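The inverse-sampling-volume weighting can be sketched on simulated data (a hedged toy example with assumed survey parameters, not the IRAS sample): in a flux-limited survey the raw mean log-luminosity of detected sources is biased high, and weighting each source by 1/V_max, the inverse of the volume over which it could have been detected, pulls the estimate back toward the true population mean.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical flux-limited survey: true log-luminosities are N(0, 1),
# sources are uniform in a sphere of radius R_max, and a source is
# detected only if its flux L / (4 pi D^2) exceeds f_min.
N, R_max, f_min = 200_000, 100.0, 1e-5
logL = rng.normal(0.0, 1.0, N)
L = 10.0 ** logL
D = R_max * rng.random(N) ** (1.0 / 3.0)        # uniform in volume
det = L / (4.0 * np.pi * D**2) >= f_min

# Malmquist bias: luminous sources are over-represented among detections,
# so the raw mean of the detected log-luminosities is biased high.
raw_mean = logL[det].mean()

# Correction: weight each detected source by the inverse of its sampling
# volume V_max (the volume within which it would still have been detected).
D_max = np.minimum(np.sqrt(L[det] / (4.0 * np.pi * f_min)), R_max)
weights = 1.0 / D_max**3                        # V_max proportional to D_max^3
corrected_mean = np.average(logL[det], weights=weights)
```

This illustrates the inverse-volume weighting used for the correlations; note that the regression correction described in the abstract instead weights each observation by the sampling volume itself.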

  17. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
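The dichotomization setup can be sketched as follows (a hedged illustration of the modeling situation, not the shared-parameter hurdle estimator; all parameter values are assumed): counts are generated from a Poisson log-linear model, collapsed to zero versus positive, and analyzed with ordinary logistic regression fitted by Newton-Raphson.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed data-generating process: Poisson counts with a log link.
n = 5000
x = rng.normal(0, 1, n)
mu = np.exp(-0.5 + 0.4 * x)            # assumed Poisson means
counts = rng.poisson(mu)
y = (counts > 0).astype(float)         # dichotomized outcome: any vs none

# Ordinary logistic regression of y on x via Newton-Raphson.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))          # fitted probabilities
    grad = X.T @ (y - p)                         # score
    hess = X.T @ (X * (p * (1 - p))[:, None])    # observed information
    beta = beta + np.linalg.solve(hess, grad)

intercept, slope = beta
```

Under the Poisson truth the exact link for P(Y > 0) is complementary log-log, so the logistic fit is only an approximation here, and discarding the magnitudes of the positive counts forgoes information, which is the efficiency question the abstract quantifies.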

  18. Dilute antiferromagnetism in magnetically doped phosphorene

    Directory of Open Access Journals (Sweden)

    Andrew Allerdt

    2017-11-01

We study the competition between Kondo physics and indirect exchange on monolayer black phosphorus using a realistic description of the band structure in combination with the density matrix renormalization group (DMRG) method. The Hamiltonian is reduced to a one-dimensional problem via an exact canonical transformation that makes it amenable to DMRG calculations, yielding exact results that fully incorporate the many-body physics. We find that a perturbative description of the problem is not appropriate and cannot account for the slow decay of the correlations and the complete lack of ferromagnetism. In addition, at some particular distances, the impurities decouple, forming their own independent Kondo states. This can be predicted from the nodes of the Lindhard function. Our results indicate a possible route toward realizing dilute antiferromagnetism in phosphorene. Received: 19 September 2017, Accepted: 12 October 2017; Edited by: K. Hallberg; DOI: http://dx.doi.org/10.4279/PIP.090008 Cite as: A Allerdt, A E Feiguin, Papers in Physics 9, 090008 (2017)

  19. Behaviour of humic-bentonite aggregates in diluted suspensions ...

    African Journals Online (AJOL)

    Formation and disaggregation of micron-size aggregates in a diluted suspension made up of HSs and bentonite (B) were studied by tracing distribution of aggregate sizes and their counts in freshly prepared and aged suspensions, and at high (10 000) and low (1.0) [HS]/[B] ratios. Diluted HSB suspensions are unstable ...

  20. Dilution in Transition Zone between Rising Plumes and Surface Plumes

    DEFF Research Database (Denmark)

    Larsen, Torben

    2004-01-01

The paper presents some physical experiments with the dilution of sea outfall plumes, with emphasis on the transition zone where the relatively fast-flowing vertical plume turns into a horizontal surface plume following the slow sea surface currents. The experiments show that a considerable dilution...

  1. Magnetic ordering in dilute YTb and YEr alloys

    International Nuclear Information System (INIS)

    Rainford, B.D.; Kilcoyne, S.H.; Mohammed, K.A.; Lanchester, P.C.; Stanley, H.B.; Caudron, R.

    1988-01-01

    Dilute YEr alloys (Er concentration between 3% and 10%) show the existence of sinusoidally modulated antiferromagnetism down to the lowest impurity concentrations studied. Extrapolation of the Neel temperatures for both YEr and YTb suggests a critical concentration is ≅ 0.8% Tb, Er. Ordering in such dilute alloys may result from exchange enhancement in the yttrium host

  2. Magnetic ordering in dilute YTb and YEr alloys

    Energy Technology Data Exchange (ETDEWEB)

    Rainford, B.D.; Kilcoyne, S.H.; Mohammed, K.A.; Lanchester, P.C.; Stanley, H.B.; Caudron, R.

    1988-12-01

Dilute YEr alloys (Er concentration between 3% and 10%) show the existence of sinusoidally modulated antiferromagnetism down to the lowest impurity concentrations studied. Extrapolation of the Neel temperatures for both YEr and YTb suggests a critical concentration is ≅ 0.8% Tb, Er. Ordering in such dilute alloys may result from exchange enhancement in the yttrium host.

  3. The Melt-Dilute Treatment Technology Offgas Development Status Report

    International Nuclear Information System (INIS)

    Adams, T. M.

    1999-01-01

    The melt-dilute treatment technology is being developed to facilitate the ultimate disposition of highly enriched Al-Base DOE spent nuclear fuels in a geologic repository such as that proposed for Yucca Mountain. The melt-dilute process is a method of preparing DOE spent nuclear fuel for long term storage

  4. Near-wall molecular ordering of dilute ionic liquids

    NARCIS (Netherlands)

    Jitvisate, Monchai; Seddon, James Richard Thorley

    2017-01-01

    The interfacial behavior of ionic liquids promises tunable lubrication as well as playing an integral role in ion diffusion for electron transfer. Diluting the ionic liquids optimizes bulk parameters, such as electric conductivity, and one would expect dilution to disrupt the near-wall molecular

  5. Effect of dietary dilution of energy and nutrients during different ...

    African Journals Online (AJOL)

A completely randomized design was conducted to evaluate the effect of dietary dilution of energy and nutrients during different growing periods on compensatory growth of Ross broilers. Four replicate pens were assigned per seven treatments. Chicks in each treatment received concentrated and diluted diets in different ...

  6. The dilute random field Ising model by finite cluster approximation

    International Nuclear Information System (INIS)

    Benyoussef, A.; Saber, M.

    1987-09-01

    Using the finite cluster approximation, phase diagrams of bond and site diluted three-dimensional simple cubic Ising models with a random field have been determined. The resulting phase diagrams have the same general features for both bond and site dilution. (author). 7 refs, 4 figs

  7. Enhancement of surface magnetism due to bulk bond dilution

    International Nuclear Information System (INIS)

    Tsallis, C.; Sarmento, E.F.; Albuquerque, E.L. de

    1985-01-01

Within a renormalization group scheme, the phase diagram of a semi-infinite simple cubic Ising ferromagnet is discussed, with arbitrary surface and bulk coupling constants, and including possible dilution of the bulk bonds. It is found that dilution facilitates the appearance of surface magnetism in the absence of bulk magnetism. (Author) [pt

  8. Producing The New Regressive Left

    DEFF Research Database (Denmark)

    Crone, Christine

members, this thesis investigates a growing political trend and ideological discourse in the Arab world that I have called The New Regressive Left. On the premise that a media outlet can function as a forum for ideology production, the thesis argues that an analysis of this material can help to trace...... the contexture of The New Regressive Left. If the first part of the thesis lays out the theoretical approach and draws the contextual framework, through an exploration of the surrounding Arab media- and ideoscapes, the second part is an analytical investigation of the discourse that permeates the programmes aired...... becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political, strategic resistance

  9. Dynamic dilution exponent in monodisperse entangled polymer solutions

    DEFF Research Database (Denmark)

    Shahid, T.; Huang, Qian; Oosterlinck, F.

    2017-01-01

We study and model the linear viscoelastic properties of several entangled semi-dilute and concentrated solutions of linear chains of different molar masses and at different concentrations dissolved in their oligomers. We discuss the dilution effect of the oligomers on the entangled long chains. In particular, we investigate the influence of both concentration and molar mass on the value of the effective dynamic dilution exponent determined from the level of the storage plateau at low and intermediate frequencies. We show that the experimental results can be quantitatively explained by considering … of concentration but also depends on the molar mass of the chains. While the proposed approach successfully explains the viscoelastic properties of a large number of semi-dilute solutions of polymers in their own oligomers, important discrepancies are found for semi-dilute entangled polymers in small-molecule …

  10. The dilution effect on the extinction of wall diffusion flame

    Directory of Open Access Journals (Sweden)

    Ghiti Nadjib

    2014-12-01

The dynamic process of the interaction between a turbulent methane jet diffusion flame and a lateral wall was studied experimentally. The evolution of the flame temperature field with nitrogen dilution of the methane jet was examined, and the interaction between the diffusion flame and the lateral wall was investigated for different distances between the wall and the central axis of the jet. Dilution is found to play the central role in the flame extinction process: as the lateral wall approaches from infinity, increasing the dilution rate makes extinction more rapid than for the undiluted flame, and as the nitrogen dilution rate increases, the flame temperature decreases.

  11. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    Science.gov (United States)

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often suffer serious bias, or even fail to exist, because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problem. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither method solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.
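As an illustration of the kind of estimator this record describes, the sketch below combines Firth's modified score with a ridge term in a Newton iteration. The data, `ridge` value, and stopping rule are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def double_penalized_logistic(X, y, ridge=0.1, n_iter=50, tol=1e-8):
    """Newton iteration for Firth's modified score plus a ridge penalty.

    Firth's adjusted score is U*_j = sum_i (y_i - p_i + h_i (0.5 - p_i)) x_ij,
    where h_i are leverages of the weighted hat matrix; the ridge term adds
    -ridge * beta to the score and ridge * I to the information.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = prob * (1.0 - prob)
        WX = X * w[:, None]
        fisher = X.T @ WX                      # unpenalized information X'WX
        h = np.einsum('ij,ji->i', WX, np.linalg.solve(fisher, X.T))
        score = X.T @ (y - prob + h * (0.5 - prob)) - ridge * beta
        step = np.linalg.solve(fisher + ridge * np.eye(p), score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# separable toy data: the ordinary ML estimate diverges, while the
# double penalized estimate stays finite
X = np.column_stack([np.ones(6), np.arange(6.0)])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
beta = double_penalized_logistic(X, y)
print(np.round(beta, 3))
```

On this separable toy set a plain logistic fit would push the slope to infinity; the two penalties keep both coefficients finite.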

  12. A Matlab program for stepwise regression

    Directory of Open Access Journals (Sweden)

    Yanhong Qi

    2016-03-01

Stepwise linear regression is a multivariable regression method for identifying the statistically significant variables in a linear regression equation. In the present study, we present a Matlab program for stepwise regression.
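A forward-selection variant of the procedure can be sketched in a few lines; this is a generic Python illustration (the record's program is in Matlab), and the entry threshold and toy data are assumptions:

```python
import numpy as np
from scipy import stats

def forward_stepwise(X, y, alpha_in=0.05):
    """Forward stepwise OLS: repeatedly add the candidate column whose
    coefficient has the smallest t-test p-value, while that p-value stays
    below alpha_in. A generic sketch, not the article's Matlab program."""
    n, p = X.shape
    selected = []
    while True:
        best_j, best_p = None, None
        for j in range(p):
            if j in selected:
                continue
            A = np.column_stack([np.ones(n)] + [X[:, c] for c in selected + [j]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            dof = n - A.shape[1]
            rss = float(np.sum((y - A @ beta) ** 2))
            cov = rss / dof * np.linalg.inv(A.T @ A)
            t_new = beta[-1] / np.sqrt(cov[-1, -1])   # t for the new column
            pval = 2.0 * stats.t.sf(abs(t_new), dof)
            if best_p is None or pval < best_p:
                best_j, best_p = j, pval
        if best_j is None or best_p > alpha_in:
            break
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 3.0 * X[:, 2] + rng.normal(scale=0.5, size=200)
print(forward_stepwise(X, y))
```

With the strong signal on columns 0 and 2, those two columns are selected and the pure-noise columns are (usually) left out.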

  13. Correlation and simple linear regression.

    Science.gov (United States)

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
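The contrast between the two coefficients is easy to demonstrate on synthetic data; the monotone nonlinear relation below is invented for illustration, not taken from the article:

```python
import numpy as np
from scipy import stats

# monotone but nonlinear relation: Spearman's rho is exactly 1, while
# Pearson's r, which measures linearity, is noticeably smaller
x = np.linspace(1.0, 10.0, 50)
y = np.exp(x / 3.0)

pearson_r, _ = stats.pearsonr(x, y)
spearman_rho, _ = stats.spearmanr(x, y)
print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")

# simple linear regression of y on x
fit = stats.linregress(x, y)
print(f"y ~ {fit.intercept:.2f} + {fit.slope:.2f} x")
```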

  14. Regression filter for signal resolution

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-01-01

The problem considered is that of resolving a measured pulse-height spectrum of a material mixture, e.g. a gamma-ray or Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to a multiple linear regression, for which a stepwise linear regression procedure was constructed. The efficiency of this method was then tested by implementing the procedure as a computer program, which was used to unfold test spectra obtained by mixing spectra from a library of arbitrarily chosen spectra and adding a noise component. (U.K.)
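The unfolding step described here — fitting a measured spectrum as a weighted sum of library spectra — can be sketched as a non-negative least-squares fit. The library shapes, weights, and noise level are invented; the article itself uses a stepwise regression procedure rather than NNLS:

```python
import numpy as np
from scipy.optimize import nnls

# toy "library" of three component spectra over 100 channels (Gaussian
# peaks at assumed positions) -- shapes and weights are invented
channels = np.arange(100)

def peak(center, width=6.0):
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

library = np.column_stack([peak(20), peak(50), peak(75)])

# mix with known weights, add noise, then resolve by non-negative least squares
true_w = np.array([2.0, 0.5, 1.0])
rng = np.random.default_rng(1)
measured = library @ true_w + rng.normal(scale=0.02, size=channels.size)

est_w, rnorm = nnls(library, measured)
print(np.round(est_w, 2))
```

Because the peaks overlap only slightly, the mixing weights are recovered almost exactly despite the added noise.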

  15. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.
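A parametric cousin of this model can be fitted with a short EM loop. The sketch below uses two linear components and invented data, whereas the article's components are estimated nonparametrically by kernel regression:

```python
import numpy as np

def em_mixture_regression(x, y, n_iter=200):
    """EM for a two-component mixture of simple linear regressions with a
    common noise variance. A parametric sketch of the model class; the
    article's component functions are estimated by kernel methods."""
    n = x.size
    X = np.column_stack([np.ones(n), x])
    # initialise responsibilities from the residual sign of a single OLS fit
    resid0 = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    R = np.column_stack([resid0 > 0, resid0 <= 0]).astype(float)
    betas = np.zeros((2, 2))
    for _ in range(n_iter):
        for k in range(2):                       # M-step: weighted least squares
            XtW = X.T * R[:, k]
            betas[k] = np.linalg.solve(XtW @ X, XtW @ y)
        resid = y[:, None] - X @ betas.T         # n x 2 residual matrix
        sigma2 = float(np.sum(R * resid ** 2) / n)
        pi = R.mean(axis=0)
        dens = pi * np.exp(-0.5 * resid ** 2 / sigma2)   # E-step (common sigma2)
        R = dens / dens.sum(axis=1, keepdims=True)
    return betas, pi, sigma2

# invented data from two lines, y = 3 + 0.5 x and y = -3 + 2 x
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 4.0, 400)
comp = rng.random(400) < 0.5
y = np.where(comp, 3.0 + 0.5 * x, -3.0 + 2.0 * x) + rng.normal(scale=0.4, size=400)
betas, pi, sigma2 = em_mixture_regression(x, y)
print(np.round(np.sort(betas[:, 1]), 2))     # recovered slopes
```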

  16. Heuristic Biases in Mathematical Reasoning

    Science.gov (United States)

    Inglis, Matthew; Simpson, Adrian

    2005-01-01

    In this paper we briefly describe the dual process account of reasoning, and explain the role of heuristic biases in human thought. Concentrating on the so-called matching bias effect, we describe a piece of research that indicates a correlation between success at advanced level mathematics and an ability to override innate and misleading…

  17. Gender bias affects forests worldwide

    Science.gov (United States)

    Marlène Elias; Susan S Hummel; Bimbika S Basnett; Carol J.P. Colfer

    2017-01-01

    Gender biases persist in forestry research and practice. These biases result in reduced scientific rigor and inequitable, ineffective, and less efficient policies, programs, and interventions. Drawing from a two-volume collection of current and classic analyses on gender in forests, we outline five persistent and inter-related themes: gendered governance, tree tenure,...

  18. Anti-Bias Education: Reflections

    Science.gov (United States)

    Derman-Sparks, Louise

    2011-01-01

It has been more than 20 years since NAEYC published "Anti-Bias Curriculum: Tools for Empowering Young Children" (Derman-Sparks & ABC Task Force, 1989). Since then, anti-bias education concepts have become part of the early childhood education (ECE) narrative in the United States and many other countries. It has brought a fresh way of thinking about…

  19. Dilute Oxygen Combustion Phase I Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, H.M.; Riley, M.F.; Kobayashi, H.

    1997-10-31

    A novel burner, in which fuel (natural gas) and oxidant (oxygen or air) are separately injected into a furnace, shows promise for achieving very low nitrogen oxide(s) (NOx) emissions for commercial furnace applications. The dilute oxygen combustion (DOC) burner achieves very low NOx through in-furnace dilution of the oxidant stream prior to combustion, resulting in low flame temperatures, thus inhibiting thermal NOx production. The results of a fundamental and applied research effort on the development of the DOC burner are presented. In addition, the results of a market survey detailing the potential commercial impact of the DOC system are disclosed. The fundamental aspects of the burner development project involved examining the flame characteristics of a natural gas turbulent jet in a high-temperature (~1366 K) oxidant (7-27% O2 vol. wet). Specifically, the mass entrainment rate, the flame lift-off height, the velocity field and major species field of the jet were evaluated as a function of surrounding-gas temperature and composition. The measured entrainment rate of the fuel jet decreased with increasing oxygen content in the surrounding high-temperature oxidant, and was well represented by the d+ scaling correlation found in the literature. The measured flame lift-off height decreased with increasing oxygen content and increasing temperature of the surrounding gas. An increase in surrounding-gas oxygen content and/or temperature inhibited the velocity decay within the jet periphery as a function of axial distance as compared to isothermal turbulent jets. However, the velocity measurements were only broadly represented by the d+ scaling correlation. Several DOC burner configurations were tested in a laboratory-scale furnace at a nominal firing rate of 185 kW (~0.63 MMBtu/h). The flue gas composition was recorded as a function of furnace nitrogen content, furnace temperature, burner geometric arrangement, firing rate, and fuel injection velocity. 

  1. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  2. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  3. Cactus: An Introduction to Regression

    Science.gov (United States)

    Hyde, Hartley

    2008-01-01

When the author first used "VisiCalc," he thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates, he learned to use multiple linear regression software and suddenly it all clicked into…

  4. Regression Models for Repairable Systems

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2015-01-01

    Roč. 17, č. 4 (2015), s. 963-972 ISSN 1387-5841 Institutional support: RVO:67985556 Keywords : Reliability analysis * Repair models * Regression Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.782, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

  5. Survival analysis II: Cox regression

    NARCIS (Netherlands)

    Stel, Vianda S.; Dekker, Friedo W.; Tripepi, Giovanni; Zoccali, Carmine; Jager, Kitty J.

    2011-01-01

In contrast to the Kaplan-Meier method, Cox proportional hazards regression can provide an effect estimate by quantifying the difference in survival between patient groups and can adjust for confounding effects of other variables. The purpose of this article is to explain the basic concepts of the …

  6. Kernel regression with functional response

    OpenAIRE

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.
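The scalar version of the kernel regression estimate is only a few lines; this sketch assumes a Gaussian kernel and scalar data, whereas the article treats functional predictors and responses, and the bandwidth and toy data are assumptions:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.3):
    """Nadaraya-Watson estimate: a locally weighted average with Gaussian
    kernel weights K((x - x_i)/h)."""
    d = (x_query[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 2.0 * np.pi, 300)
y = np.sin(x) + rng.normal(scale=0.1, size=300)
xq = np.linspace(1.0, 5.0, 50)       # interior points, away from boundary bias
max_err = float(np.max(np.abs(nadaraya_watson(x, y, xq) - np.sin(xq))))
print(round(max_err, 3))
```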

  7. Role of reductants in dilute chemical decontamination formulations

    Energy Technology Data Exchange (ETDEWEB)

    Ranganathan, S. [Univ. of New Brunswick (Canada). Dept. of Chemical Engineering; Srinivasan, M.P.; Narasimhan, S.V. [Bhabha Atomic Research Centre (BARC), Trombay, Mumbai (India). Water and Steam Chemistry Lab.; Raghavan, P.S. [Madras Christian Coll., Chennai (India); Gopalan, R. [Madras Christian Coll., Chennai (India). Dept. of Chemistry

    2004-10-01

Iron(III) oxides are the major corrosion products formed in boiling water reactors. They occur in two forms, hematite (α-Fe2O3) and maghemite (γ-Fe2O3). The dissolution of these oxides is by no means simple, because of the labile nature of the Fe(III)-O bond towards chelants. The leaching of metal ions is partially controlled by reductive dissolution. To understand the role of the reductant, it is essential to study the dissolution behaviour of a system such as Fe2O3, which does not contain any Fe2+ in the crystal lattice. The present study was carried out with γ-Fe2O3 and dilute chemical decontamination (DCD) formulations containing ascorbic acid and citric acid, with Fe(II)-L added as a reductant. The chelants used for the dissolution process were nitrilotriacetic acid, 2,6-pyridinedicarboxylic acid and ethylenediaminetetraacetic acid. γ-Fe2O3 was chosen because earlier studies revealed that the dissolution kinetics of α-Fe2O3 are slow and that it is difficult to dissolve even with strong complexing agents, whereas γ-Fe2O3 dissolves comparatively easily; this is due to the structural difference between the two oxides. The studies also revealed that the dissolution was partly influenced by the nature of the chelating agents but mainly controlled by the power of the reductants used in the formulation. The dissolution behaviour of γ-Fe2O3 under various experimental conditions is discussed and compared with that of magnetite, in order to arrive at a suitable mechanism for the dissolution of iron oxides and to emphasize the role of reductants in DCD formulations. (orig.)

  8. Ionic liquids behave as dilute electrolyte solutions

    Science.gov (United States)

    Gebbie, Matthew A.; Valtiner, Markus; Banquy, Xavier; Fox, Eric T.; Henderson, Wesley A.; Israelachvili, Jacob N.

    2013-01-01

    We combine direct surface force measurements with thermodynamic arguments to demonstrate that pure ionic liquids are expected to behave as dilute weak electrolyte solutions, with typical effective dissociated ion concentrations of less than 0.1% at room temperature. We performed equilibrium force–distance measurements across the common ionic liquid 1-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide ([C4mim][NTf2]) using a surface forces apparatus with in situ electrochemical control and quantitatively modeled these measurements using the van der Waals and electrostatic double-layer forces of the Derjaguin–Landau–Verwey–Overbeek theory with an additive repulsive steric (entropic) ion–surface binding force. Our results indicate that ionic liquids screen charged surfaces through the formation of both bound (Stern) and diffuse electric double layers, where the diffuse double layer is comprised of effectively dissociated ionic liquid ions. Additionally, we used the energetics of thermally dissociating ions in a dielectric medium to quantitatively predict the equilibrium for the effective dissociation reaction of [C4mim][NTf2] ions, in excellent agreement with the measured Debye length. Our results clearly demonstrate that, outside of the bound double layer, most of the ions in [C4mim][NTf2] are not effectively dissociated and thus do not contribute to electrostatic screening. We also provide a general, molecular-scale framework for designing ionic liquids with significantly increased dissociated charge densities via judiciously balancing ion pair interactions with bulk dielectric properties. Our results clear up several inconsistencies that have hampered scientific progress in this important area and guide the rational design of unique, high–free-ion density ionic liquids and ionic liquid blends. PMID:23716690
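The Debye-length arithmetic behind this argument can be sketched directly. The ion-pair concentration, the assumed dissociated fraction, and the dielectric constant below are order-of-magnitude assumptions for illustration, not the paper's fitted values:

```python
import numpy as np
from scipy.constants import e, k, epsilon_0, N_A

def debye_length(c_molar, eps_r, T=298.0):
    """Debye screening length (in metres) of a 1:1 electrolyte of molar
    concentration c_molar in a medium of relative permittivity eps_r."""
    n = c_molar * 1000.0 * N_A               # ions of each sign per m^3
    return float(np.sqrt(epsilon_0 * eps_r * k * T / (2.0 * n * e ** 2)))

# Neat [C4mim][NTf2] is roughly 3.3 M in ion pairs; if only ~0.1% of them are
# effectively dissociated, the screening ions are ~3.3e-3 M. eps_r = 12 is an
# assumed order-of-magnitude static dielectric constant, not a fitted value.
lam = debye_length(3.3e-3, eps_r=12.0)
print(f"Debye length ~ {lam * 1e9:.1f} nm")
```

The point of the exercise: a sub-percent dissociated fraction yields a nanometre-scale screening length, orders of magnitude longer than if all ions screened.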

  9. Correction of Selection Bias in Survey Data: Is the Statistical Cure Worse Than the Bias?

    Science.gov (United States)

    Hanley, James A

    2017-04-01

In previous articles in the American Journal of Epidemiology (Am J Epidemiol. 2013;177(5):431-442) and American Journal of Public Health (Am J Public Health. 2013;103(10):1895-1901), Masters et al. reported age-specific hazard ratios for the contrasts in mortality rates between obesity categories. They corrected the observed hazard ratios for selection bias caused by what they postulated was the nonrepresentativeness of the participants in the National Health Interview Survey, which increased with age, obesity, and ill health. However, it is possible that their regression approach to remove the alleged bias has not produced, and in general cannot produce, sensible hazard ratio estimates. First, we must consider how many nonparticipants there might have been in each category of obesity and of age at entry, and how much higher the mortality rates would have to be in nonparticipants than in participants in these same categories. What plausible set of numerical values would convert the ("biased") decreasing-with-age hazard ratios seen in the data into the ("unbiased") increasing-with-age ratios that they computed? Can these values be encapsulated in (and can sensible values be recovered from) one additional internal variable in a regression model? Second, one must examine the age pattern of the hazard ratios that have been adjusted for selection. Without the correction, the hazard ratios are attenuated with increasing age. With it, the hazard ratios at older ages are considerably higher, but those at younger ages are well below one. Third, one must test whether the regression approach suggested by Masters et al. would correct the nonrepresentativeness, increasing with age and ill health, that I introduced into real and hypothetical data sets. I found that the approach did not recover the hazard ratio patterns present in the unselected data sets: the corrections overshot the target at older ages and undershot it at lower ages.
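The mechanism at issue can be illustrated with a toy simulation in which a constant true rate ratio is attenuated among participants because frail individuals, who die more, are less likely to participate, and more so if obese. All numbers are invented; this is not a reanalysis of the survey data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
age = rng.uniform(40.0, 80.0, n)
obese = rng.random(n) < 0.3
frailty = rng.gamma(2.0, 0.5, n)             # latent frailty, mean 1

# one-period death probability: rises with age; true rate ratio 1.5 for obesity
rate = 0.002 * np.exp(0.08 * (age - 40.0)) * np.where(obese, 1.5, 1.0) * frailty
died = rng.random(n) < np.clip(rate, 0.0, 1.0)

# participation falls with age and, among the obese, steeply with frailty
logit = 0.05 * (age - 60.0) + np.where(obese, 2.0, 0.5) * (frailty - 1.0)
participates = rng.random(n) < 1.0 / (1.0 + np.exp(logit))

def rate_ratio(mask):
    return died[mask & obese].mean() / died[mask & ~obese].mean()

old = age > 65.0
print("population RR, ages 65+:  ", round(rate_ratio(old), 2))
print("participant RR, ages 65+: ", round(rate_ratio(old & participates), 2))
```

In the full simulated population the rate ratio at older ages sits near its true value of 1.5, while among participants it is attenuated, mimicking the decreasing-with-age pattern discussed above.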

  10. Use of diluted urine for cultivation of Chlorella vulgaris.

    Science.gov (United States)

    Jaatinen, Sanna; Lakaniemi, Aino-Maija; Rintala, Jukka

    2016-01-01

Our aim was to study the biomass growth of the microalga Chlorella vulgaris using diluted human urine as the sole nutrient source. Batch cultivations (21 days) were conducted at five urine dilutions (1:25-1:300), in 1:100-diluted urine both as such and with added trace elements, and, as a reference, in artificial growth medium. The highest biomass density was obtained in 1:100-diluted urine with and without additional trace elements (0.73 and 0.60 g L-1, respectively). Similar growth trends and densities were obtained with 1:25- and 1:300-diluted urine (0.52 vs. 0.48 g VSS L-1), indicating that urine at a 1:25 dilution can be used to cultivate microalga-based biomass. Interestingly, even 1:300-diluted urine contained sufficient nutrients and trace elements to support biomass growth. Biomass production was similar despite pH variation from <5 to 9 across incubations, indicating the robustness of the growth. Ammonium formation did not inhibit overall biomass growth. At the beginning of cultivation the majority of the biomass consisted of living algal cells, while towards the end their share decreased and the estimated share of bacteria and cell debris increased.

  11. Influence of extractant dilution on light rare-earth separation

    International Nuclear Information System (INIS)

    Korpusova, R.D.; Smirnova, N.N.

    1978-01-01

The effect of diluting the extractant on the separation of rare-earth elements (REE) in the presence of 6 g-equiv. of LiNO3 has been studied. The experiments used TBP diluted with kerosene or butylbenzene (40, 50, 70 vol.%). The separation coefficients were determined under saturation conditions. The content of trace amounts of the components was determined by the weight method, and the content of macroimpurities by the radiometric method. It was established that the Ce-La and Pr-La separation coefficients are not affected by dilution of the extractant. The only exception is the 142Pr-La pair: in the presence of trace amounts of the better-extracted element and at two-fold dilution, the separation coefficient increases by almost 150%. For the Pr-Ce pair the effect of dilution is more noticeable when the better-extracted element is present in trace amounts. However, a comparison of the effect of dilution on the separation coefficients of all REE pairs studied shows that the effect is strongest for the samarium-neodymium pair. The data suggest that kerosene, as a diluent, affects the steric factor and coordination; therefore, upon dilution, the separation coefficient of the samarium-neodymium pair is affected most of all.

  12. Dilution thermodynamics of the biologically relevant cation mixtures

    International Nuclear Information System (INIS)

    Kaczyński, Marek; Borowik, Tomasz; Przybyło, Magda; Langner, Marek

    2014-01-01

    Graphical abstract: - Highlights: • The dilution energetics of Ca2+ can be altered by the ionic composition of the aqueous phase. • The heat dissipated upon Ca2+ dilution is drastically reduced in the presence of K+. • The reduction of the enthalpy change upon Ca2+ dilution depends on the K+ concentration. • The cooperativity of Ca2+ hydration may be of great biological relevance, providing a thermodynamic argument for the specific ionic composition of the intracellular environment. - Abstract: The ionic composition of the intracellular space is rigorously controlled by a variety of processes consuming large quantities of energy. Since energetic efficiency is an important evolutionary criterion, ion fluxes within the cell should be optimized with respect to the accompanying energy consumption. In this paper we present experimental evidence that the dilution enthalpies of the biologically relevant cations calcium and magnesium depend on the presence of the monovalent cations sodium and potassium. The heat flow generated during the dilution of ionic mixtures was measured by isothermal titration calorimetry. When calcium was diluted together with potassium, the dilution enthalpy was drastically reduced as a function of the potassium concentration in the solution. No such effect was observed when the potassium ions were replaced with sodium. When the dilution of magnesium was investigated, the dependence of the dilution enthalpy on the accompanying monovalent cation was much weaker. To interpret these observations, ionic cluster formation is postulated; the specific organization of such a cluster should depend on the ions' charges, sizes, and the organization of their hydration layers.

  13. Cognitive Bias in Systems Verification

    Science.gov (United States)

    Larson, Steve

    2012-01-01

Working definition of cognitive bias: patterns by which information is sought and interpreted that can lead to systematic errors in decisions. Cognitive bias is studied in diverse fields: economics, politics, intelligence, and marketing, to name a few. Attempts to ground cognitive science in physical characteristics of the cognitive apparatus exceed our knowledge. Studies are based on correlations; strict cause and effect is difficult to pinpoint. The effects cited in the paper and discussed here have been replicated many times over and appear sound. Many biases have been described, but it is still unclear whether they are all distinct; there may be only a handful of fundamental biases, which manifest in various ways. Bias can affect system verification in many ways: overconfidence -> questionable decisions to deploy; availability -> inability to conceive critical tests; representativeness -> overinterpretation of results; positive test strategies -> confirmation bias. Debiasing at the individual level is very difficult. The potential effect of bias on the verification process can be managed, but not eliminated, and is worth considering at key points in the process.

  14. Administrative bias in South Africa

    Directory of Open Access Journals (Sweden)

    E S Nwauche

    2005-01-01

This article reviews the interpretation of section 6(2)(a)(ii) of the Promotion of Administrative Justice Act, which makes an administrator being "biased or reasonably suspected of bias" a ground of judicial review. In this regard, the paper reviews the determination of administrative bias in South Africa, especially highlighting the concept of institutional bias. The paper notes that, in spite of the formulation of the bias ground of review, the test for administrative bias is the reasonable apprehension test laid down in President of South Africa v South African Rugby Football Union (2), which on close examination is not the same thing. Accordingly, the paper urges an alternative interpretation based on the reasonable suspicion test enunciated in BTR Industries South Africa (Pty) Ltd v Metal and Allied Workers Union and R v Roberts. Within this context, the paper constructs a model for interpreting the bias ground of review that combines the reasonable suspicion test as interpreted in BTR Industries and R v Roberts, the possibility of the waiver of administrative bias, the curative mechanism of administrative appeal, and some level of judicial review exemplified by the jurisprudence on article 6(1) of the European Convention on Human Rights, especially in the light of the contemplation of the South African Magistrates' Court as a jurisdictional route of judicial review.

  15. Specific heat in diluted magnetic semiconductor quantum ring

    Science.gov (United States)

    Babanlı, A. M.; Ibragimov, B. G.

    2017-11-01

    In the present paper, we have calculated the specific heat and magnetization of a quantum ring of a diluted magnetic semiconductor (DMS) material in the presence of a magnetic field. We take into account the effects of the Rashba spin-orbit interaction, the exchange interaction, and the Zeeman term on the specific heat. We have calculated the energy spectrum of the electrons in a diluted magnetic semiconductor quantum ring, and from it the dependence of the specific heat on the magnetic field and Mn concentration at finite temperature.

  16. On the Wigner law in dilute random matrices

    Science.gov (United States)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.

  17. Bias and misleading concepts in an Arnica research study. Comments to improve experimental Homeopathy

    Directory of Open Access Journals (Sweden)

    Salvatore Chirumbolo

    2018-01-01

    Full Text Available Basic experimental models in Homeopathy are of major interest because they can yield insightful data about the ability of high dilutions to act on a biological system. Given the extreme difficulty of highlighting any possible effect and trusting its reliability, methods should be particularly stringent and highly standardized. Confounders, the handling process, pre-analytical errors, misleading statistics, and misinterpretations may lead to experimental biases. This article tries to elucidate those factors causing bias, taking into account some recently reported evidence in the field.

  18. Critical Thinking and Cognitive Bias

    Directory of Open Access Journals (Sweden)

    Jeffrey Maynes

    2015-05-01

    Full Text Available Teaching critical thinking skills is a central pedagogical aim in many courses. These skills, it is hoped, will be both portable (applicable in a wide range of contexts) and durable (not forgotten quickly). Yet both of these virtues are challenged by pervasive and potent cognitive biases, such as motivated reasoning, false consensus bias, and hindsight bias. In this paper, I argue that a focus on the development of metacognitive skill shows promise as a means to inculcate debiasing habits in students. Such habits will help students become more critical reasoners. I close with suggestions for implementing this strategy.

  19. Dilute Oxygen Combustion Phase IV Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Riley, M.F.

    2003-04-30

    Novel furnace designs based on Dilute Oxygen Combustion (DOC) technology were developed under subcontract by Techint Technologies, Coraopolis, PA, to fully exploit the energy and environmental capabilities of DOC technology and to provide a competitive offering for new furnace construction opportunities. Capital cost; fuel, oxygen, and utility costs; NOx emissions; oxide scaling performance; and maintenance requirements were compared for five DOC-based designs and three conventional air-fired designs using a 10-year net present value calculation. A furnace fired completely with DOC burners offers low capital cost, low fuel rate, and minimal NOx emissions. However, these benefits do not offset the cost of oxygen, and a full DOC-fired furnace is projected to cost $1.30 per ton more to operate than a conventional air-fired furnace. The incremental cost of the improved NOx performance is roughly $6/lb NOx, compared with an estimated $3/lb NOx for equipping a conventional furnace with selective catalytic reduction (SCR) technology. A furnace fired with DOC burners in the heating zone and ambient-temperature (cold) air-fired burners in the soak zone offers low capital cost with less oxygen consumption. However, the improvement in fuel rate is not as great as with the full DOC-fired design, and the DOC-cold soak design is also projected to cost $1.30 per ton more to operate than a conventional air-fired furnace. The NOx improvement with the DOC-cold soak design is likewise not as great as with the full DOC-fired design, and the incremental cost of the improved NOx performance is nearly $9/lb NOx. These results indicate that a DOC-based furnace design will not be generally competitive with conventional technology for new furnace construction under current market conditions. Fuel prices of $7/MMBtu or oxygen prices of $23/ton are needed to make the DOC furnace economics favorable. Niche applications may exist, particularly where access to capital is limited or floor space limitations apply.

  20. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to the theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting, which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is provided.

  1. Regression algorithm for emotion detection

    OpenAIRE

    Berthelon , Franck; Sander , Peter

    2013-01-01

    We present here two components of a computational system for emotion detection. PEMs (Personalized Emotion Maps) store links between bodily expressions and emotion values, and are individually calibrated to capture each person's emotion profile. They are an implementation based on aspects of Scherer's theoretical complex system model of emotion (Scherer 2000, 2009). We also present a regression algorithm that determines a person's emotional feeling from sensor measurements.

  2. Directional quantile regression in R

    Czech Academy of Sciences Publication Activity Database

    Boček, Pavel; Šiman, Miroslav

    2017-01-01

    Roč. 53, č. 3 (2017), s. 480-492 ISSN 0023-5954 R&D Projects: GA ČR GA14-07234S Institutional support: RVO:67985556 Keywords : multivariate quantile * regression quantile * halfspace depth * depth contour Subject RIV: BD - Theory of Information OBOR OECD: Applied mathematics Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/bocek-0476587.pdf

  3. Polylinear regression analysis in radiochemistry

    International Nuclear Information System (INIS)

    Kopyrin, A.A.; Terent'eva, T.N.; Khramov, N.N.

    1995-01-01

    A number of radiochemical problems have been formulated in the framework of polylinear regression analysis, which permits the use of conventional mathematical methods for their solution. The authors have considered features of the use of polylinear regression analysis for estimating the contributions of various sources to atmospheric pollution, for studying irradiated nuclear fuel, for estimating concentrations from spectral data, for measuring neutron fields of a nuclear reactor, for estimating crystal lattice parameters from X-ray diffraction patterns, for interpreting data of X-ray fluorescence analysis, for estimating complex formation constants, and for analyzing results of radiometric measurements. The problem of estimating the target parameters can be ill-posed for certain properties of the system under study. The authors showed the possibility of regularization by adding a fictitious set of data "obtained" from an orthogonal design. To estimate only a part of the parameters under consideration, the authors used incomplete rank models. In this case, it is necessary to take into account the possibility of confounding of estimates. An algorithm for evaluating the degree of confounding is presented, which can be implemented using standard regression analysis software.

  4. Efficient bias correction for magnetic resonance image denoising.

    Science.gov (United States)

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
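
Under the Rician model described in this abstract, the bias is easy to exhibit numerically. The sketch below simulates Rician-distributed magnitudes and applies the classical second-moment correction E[M^2] = A^2 + 2σ^2; this is the textbook moment-based fix, not the regression/Monte Carlo formula proposed in the paper, and all variable names are illustrative.

```python
import numpy as np

# Rician data: magnitude of a complex signal with i.i.d. Gaussian noise
# in the real and imaginary channels.
rng = np.random.default_rng(0)
A, sigma, n = 3.0, 1.0, 200_000          # true intensity, noise level
m = np.abs(A + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))

naive = m.mean()                                # biased upward (~A + sigma^2/(2A))
a_hat = np.sqrt((m**2).mean() - 2 * sigma**2)   # second-moment correction

print(round(naive - A, 2))   # positive bias, about +0.16 at this SNR
print(round(a_hat, 2))       # → 3.0
```

The naive mean magnitude overshoots the true intensity, while the corrected estimate recovers it; the bias grows as the signal-to-noise ratio falls, which is why bias correction matters for denoising.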

  5. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to address this issue. In this paper, we will focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model will be investigated. We will discuss inference: how to estimate the parameters and hyper-parameters, and how to predict. Furthermore, simulation studies will be explained in the last section.

  6. A theory of stable-isotope dilution mass spectrometry

    International Nuclear Information System (INIS)

    Pickup, J.F.; McPherson, C.K.

    1977-01-01

    In order to perform quantitative analysis using stable isotope dilution with mass spectrometry, an equation is derived which describes the relationship between the relative proportions of natural and labelled material and measured isotope ratios
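
The core isotope-dilution balance can be sketched numerically; the function and abundance values below are illustrative assumptions, not the equation derived in the paper.

```python
# Isotope-dilution sketch: a sample containing an unknown amount of
# natural material is spiked with a known amount of labelled material,
# and the natural amount is recovered from the measured isotope ratio.

def amount_from_ratio(n_spike, r_meas, a_nat, a_spike):
    """Solve r_meas = (n_nat*a_nat[0] + n_spike*a_spike[0]) /
                      (n_nat*a_nat[1] + n_spike*a_spike[1]) for n_nat.

    a_nat, a_spike: (labelled, reference) isotope abundances in the
    natural material and in the spike, respectively."""
    num = n_spike * (a_spike[0] - r_meas * a_spike[1])
    den = r_meas * a_nat[1] - a_nat[0]
    return num / den

# Round trip: build a ratio from a known natural amount, then recover it.
n_nat, n_spike = 2.0, 1.0
a_nat, a_spike = (0.1, 0.9), (0.95, 0.05)
r = (n_nat*a_nat[0] + n_spike*a_spike[0]) / (n_nat*a_nat[1] + n_spike*a_spike[1])
print(round(amount_from_ratio(n_spike, r, a_nat, a_spike), 6))  # → 2.0
```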

  7. Paradigms in isotope dilution mass spectrometry for elemental speciation analysis

    International Nuclear Information System (INIS)

    Meija, Juris; Mester, Zoltan

    2008-01-01

    Isotope dilution mass spectrometry currently stands out as the method providing results with unchallenged precision and accuracy in elemental speciation. However, the recent history of isotope dilution mass spectrometry has shown that the extent to which this primary ratio measurement method can deliver accurate results is still a subject of active research. In this review, we summarize the fundamental prerequisites behind isotope dilution mass spectrometry and discuss their practical limits of validity and their effects on the accuracy of the obtained results. This review is not to be viewed as a critique of isotope dilution; rather, its purpose is to highlight the lesser studied aspects that will ensure and elevate the current supremacy of the results obtained from this method.

  8. An overview the boron dilution issue in PWRs

    International Nuclear Information System (INIS)

    Hyvaerinen, J.

    1994-01-01

    The presentation is an overview of boron (boric acid) dilution in pressurized water reactors (PWRs). Boric acid has long been widely used in PWRs as a dissolved poison, one of the main means of reactivity control, from nearly, though not quite, the beginning of the design, construction, and operation of PWRs in the present-day sense. The specific safety issue, namely the risk of uncontrolled reactivity insertion due to inadvertent boron dilution, is discussed first, followed by a brief look at the history of boron usage in PWRs. A discussion of boron dilution phenomenology is then presented in general terms. Some particular concerns that boron dilution phenomena raise for a regulator are also presented before concluding with a brief look at the future of dissolved poisons. (11 refs.)

  9. A Cold Cycle Dilution Refrigerator for Space Applications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The cold cycle dilution refrigerator is a continuous refrigerator capable of cooling to temperatures below 100 mK that makes use of a novel thermal magnetic pump....

  10. Exploration, Sampling, And Reconstruction of Free Energy Surfaces with Gaussian Process Regression.

    Science.gov (United States)

    Mones, Letif; Bernstein, Noam; Csányi, Gábor

    2016-10-11

    Practical free energy reconstruction algorithms involve three separate tasks: biasing, measuring some observable, and finally reconstructing the free energy surface from those measurements. In more than one dimension, adaptive schemes make it possible to explore only relatively low lying regions of the landscape by progressively building up the bias toward the negative of the free energy surface so that free energy barriers are eliminated. Most schemes use the final bias as their best estimate of the free energy surface. We show that large gains in computational efficiency, as measured by the reduction of time to solution, can be obtained by separating the bias used for dynamics from the final free energy reconstruction itself. We find that biasing with metadynamics, measuring a free energy gradient estimator, and reconstructing using Gaussian process regression can give an order of magnitude reduction in computational cost.
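
As a toy illustration of the reconstruction step, the sketch below fits a Gaussian process (squared-exponential kernel) to noisy pointwise "free energy" samples along a 1-D collective variable. It is a generic GPR fit under assumed hyperparameters, not the gradient-based metadynamics scheme of the paper.

```python
import numpy as np

def rbf(a, b, ell=0.5, sig=1.0):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return sig**2 * np.exp(-0.5 * (d / ell)**2)

rng = np.random.default_rng(0)
x_tr = np.linspace(0.0, 2*np.pi, 15)             # collective-variable grid
f_tr = np.cos(x_tr) + 0.05*rng.normal(size=15)   # noisy "free energy" samples

K = rbf(x_tr, x_tr) + 0.05**2 * np.eye(15)       # kernel + noise variance
alpha = np.linalg.solve(K, f_tr)

x_te = np.array([np.pi])                         # predict at the barrier top
f_te = (rbf(x_te, x_tr) @ alpha)[0]
print(f_te)   # close to cos(pi) = -1
```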

  11. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events per coefficient settings, bias and coverage still deviated from nominal.

  12. Determination of zinc stable isotopes in biological materials using isotope dilution inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Patterson, K.Y.; Veillon, Claude

    1992-01-01

    A method is described for using isotope dilution to determine both the amount of natural zinc and enriched isotopes of zinc in biological samples. Isotope dilution inductively coupled plasma mass spectrometry offers a way to quantify not only the natural zinc found in a sample but also enriched isotope tracers of zinc. Accurate values for the enriched isotopes and natural zinc are obtained by adjusting the mass count rate data for measurable instrumental biases. Analytical interferences from the matrix are avoided by extracting the zinc from the sample matrix using diethylammonium diethyldithiocarbamate. The extraction technique separates the zinc from elements which form interfering molecular ions at the same nominal masses as the zinc isotopes. Accuracy of the method is verified using standard reference materials. The detection limit is 0.06 μg Zn per sample. Precision of the abundance ratios ranges from 0.3-0.8% R.S.D., and natural zinc concentrations are about 200-600 μg g⁻¹. The accuracy and precision of the measurements make it possible to follow enriched isotopic tracers of zinc in biological samples in metabolic tracer studies. (author). 19 refs.; 1 fig., 4 tabs

  13. Influence Of Dilution Factor For Activity Measurement Of 60CO

    International Nuclear Information System (INIS)

    Hermawan-Candra; Nazaroh; Ermi-Juita

    2003-01-01

    The influence of the dilution factor on activity measurement of 60Co has been studied. The aim of this research is to compare the measured activity of 60Co before and after dilution. Measurements were done using ionization chamber detector systems and a gamma spectrometry system with a NaI(Tl) detector. Discrepancies among the three ionization chamber measurements were 0.2% - 2.1%, and for NaI(Tl) 3.5% - 6%. (author)

  14. Attentional sets influence perceptual load effects, but not dilution effects.

    Science.gov (United States)

    Benoni, Hanna; Zivony, Alon; Tsal, Yehoshua

    2014-01-01

    Perceptual load theory [Lavie, N. (1995). Perceptual load as a necessary condition for selective attention. Journal of Experimental Psychology: Human Perception and Performance, 21, 451-468.; Lavie, N., & Tsal, Y. (1994) Perceptual load as a major determinant of the locus of selection in visual attention. Perception & Psychophysics, 56, 183-197.] proposes that interference from distractors can only be avoided in situations of high perceptual load. This theory has been supported by blocked design manipulations separating low load (when the target appears alone) and high load (when the target is embedded among neutral letters). Tsal and Benoni [(2010a). Diluting the burden of load: Perceptual load effects are simply dilution effects. Journal of Experimental Psychology: Human Perception and Performance, 36, 1645-1656.; Benoni, H., & Tsal, Y. (2010). Where have we gone wrong? Perceptual load does not affect selective attention. Vision Research, 50, 1292-1298.] have recently shown that these manipulations confound perceptual load with "dilution" (the mere presence of additional heterogeneous items in high-load situations). Theeuwes, Kramer, and Belopolsky [(2004). Attentional set interacts with perceptual load in visual search. Psychonomic Bulletin & Review, 11, 697-702.] independently questioned load theory by suggesting that attentional sets might also affect distractor interference. When high load and low load were intermixed, and participants could not prepare for the presentation that followed, both the low-load and high-load trials showed distractor interference. This result may also challenge the dilution account, which proposes a stimulus-driven mechanism. In the current study, we presented subjects with both fixed and mixed blocks, including a mix of dilution trials with low-load trials and with high-load trials. We thus separated the effect of dilution from load and tested the influence of attentional sets on each component. 
The results revealed that whereas ...

  15. Preferences, country bias, and international trade

    NARCIS (Netherlands)

    S. Roy (Santanu); J.M.A. Viaene (Jean-Marie)

    1998-01-01

    Analyzes international trade where consumer preferences exhibit country bias: why country biases arise; how trade can occur in the presence of country bias; implications for the pattern of trade and specialization.

  16. Effects of dissolved species on radiolysis of diluted seawater

    International Nuclear Information System (INIS)

    Hata, Kuniki; Hanawa, Satoshi; Kasahara, Shigeki; Motooka, Takafumi; Tsukada, Takashi; Muroya, Yusa; Yamashita, Shinichi; Katsumura, Yosuke

    2014-01-01

    The Fukushima Daiichi Nuclear Power Plants (NPPs) experienced seawater injection into the cores and fuel pools as an emergency measure after the accident. Since the accident, the retained water has been continuously desalinated, and consequently the concentration of chloride ion (Cl-) is now kept at a low level. The ions in seawater are known to affect water radiolysis, which produces radiolytic species such as hydrogen peroxide (H2O2), molecular hydrogen (H2), and molecular oxygen (O2). However, the effects of seawater-derived dissolved ions on the production of these stable radiolytic products are not well understood in diluted seawater. To understand this production behavior under radiation, radiolysis calculations were carried out. Production of H2 is effectively suppressed by diluting the seawater to 10 vol%. The concentrations of oxidants (H2O2 and O2) are also suppressed by dilution of the dissolved species. The effect of oxidants on corrosion of materials is thought to be low when the seawater is diluted to less than 1 vol% with water. It is also shown that deaeration is an effective measure to suppress the concentrations of oxidants at a low level under any dilution conditions. (author)

  17. Dilution Refrigeration of Multi-Ton Cold Masses

    CERN Document Server

    Wikus, P; CERN. Geneva

    2007-01-01

    Dilution refrigeration is the only means of providing continuous cooling at temperatures below 250 mK. Future experiments featuring multi-ton cold masses require a new generation of dilution refrigeration systems, capable of providing a heat sink below 10 mK at cooling powers which considerably exceed the performance of present systems. This thesis presents some advances towards dilution refrigeration of multi-ton masses in this temperature range. A new method using numerical simulation to predict the cooling power of a dilution refrigerator of a given design has been developed in the framework of this thesis project. This method makes it possible not only to take into account the differences between an actual and an ideal continuous heat exchanger, but also to quantify the impact of an additional heat load on an intermediate section of the dilute stream. In addition, transient behavior can be simulated. The numerical model has been experimentally verified with a dilution refrigeration system which has been designed, ...

  18. Heuristics and bias in rectal surgery.

    Science.gov (United States)

    MacDermid, Ewan; Young, Christopher J; Moug, Susan J; Anderson, Robert G; Shepherd, Heather L

    2017-08-01

    Deciding to defunction after anterior resection can be difficult, requiring cognitive tools or heuristics. From our previous work, increasing age and risk-taking propensity were identified as heuristic biases for surgeons in Australia and New Zealand (CSSANZ), and inversely proportional to the likelihood of creating defunctioning stomas. We aimed to assess these factors for colorectal surgeons in the British Isles, and identify other potential biases. The Association of Coloproctology of Great Britain and Ireland (ACPGBI) was invited to complete an online survey. Questions included demographics, risk-taking propensity, sensitivity to professional criticism, self-perception of anastomotic leak rate and propensity for creating defunctioning stomas. Chi-squared testing was used to assess differences between ACPGBI and CSSANZ respondents. Multiple regression analysis identified independent surgeon predictors of stoma formation. One hundred fifty (19.2%) eligible members of the ACPGBI replied. Demographics between ACPGBI and CSSANZ groups were well-matched. Significantly more ACPGBI surgeons admitted to anastomotic leak in the last year (p < 0.001). ACPGBI surgeon age over 50 (p = 0.02), higher risk-taking propensity across several domains (p = 0.044), self-belief in a lower-than-average anastomotic leak rate (p = 0.02) and belief that the average risk of leak after anterior resection is 8% or lower (p = 0.007) were all independent predictors of less frequent stoma formation. Sensitivity to criticism from colleagues was not a predictor of stoma formation. Unrecognised surgeon factors including age, everyday risk-taking, self-belief in surgical ability and lower probability bias of anastomotic leak appear to exert an effect on decision-making in rectal surgery.

  19. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of sample-selection bias correction, robustness to non-normality, and prediction is demonstrated through simulation results that attest to its good finite-sample performance.

  20. The number of subjects per variable required in linear regression analyses.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R^2 of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R^2, although adjusted R^2 estimates behaved well. The bias in estimating the model R^2 statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
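
The SPV finding for coefficients can be checked with a small Monte Carlo sketch (an illustrative setup, not the authors' full simulation design): with only 2 subjects per variable, OLS coefficient estimates still average out close to their true values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vars, spv = 10, 2
n = n_vars * spv                       # 20 subjects, 10 covariates
beta = np.ones(n_vars)                 # true coefficients

est = []
for _ in range(2000):
    X = rng.normal(size=(n, n_vars))
    y = X @ beta + rng.normal(size=n)
    b, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
    est.append(b[1:])                  # drop the intercept

rel_bias = (np.mean(est, axis=0) - beta) / beta
print(np.max(np.abs(rel_bias)))        # small: OLS coefficients are unbiased
```

Individual fits are noisy at this sample size; it is the near-zero average relative bias, not the per-fit precision, that the 2-SPV rule speaks to.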

  1. Anticipated Regret and Omission Bias in HPV Vaccination Decisions

    DEFF Research Database (Denmark)

    Jensen, Niels Holm

    2017-01-01

    This study investigated effects of anticipated regret on parents' HPV vaccination intentions, and effects of omission bias on HPV vaccination intentions and vaccine uptake. An online survey was completed by 851 parents of adolescent girls in Denmark, a country where HPV vaccine safety is currently heavily debated. Multivariate regression analyses revealed anticipated inaction regret as a significant positive predictor of vaccination intentions, and anticipated action regret as a significant negative predictor of vaccination intentions. Multivariate analyses also revealed omission bias in a hypothetical vaccination vignette as a significant negative predictor of HPV vaccination intention as well as vaccine uptake. Finally, the study tested effects of anticipated regret and omission bias on evaluations of two existing Danish pro-vaccine campaign videos. Here, the results revealed anticipated ...

  2. Spontaneous regression of pulmonary bullae

    International Nuclear Information System (INIS)

    Satoh, H.; Ishikawa, H.; Ohtsuka, M.; Sekizawa, K.

    2002-01-01

    The natural history of pulmonary bullae is often characterized by gradual, progressive enlargement. Spontaneous regression of bullae is, however, very rare. We report a case in which complete resolution of pulmonary bullae in the left upper lung occurred spontaneously. The management of pulmonary bullae is occasionally made difficult because of gradual progressive enlargement associated with abnormal pulmonary function. Some patients have multiple bullae in both lungs and/or have a history of pulmonary emphysema; others have a giant bulla without emphysematous change in the lungs. Our patient had a treated lung cancer with no evidence of local recurrence. He showed no emphysematous change on lung function testing and had no complaints, although high-resolution CT showed evidence of underlying minimal emphysematous changes. Ortin and Gurney presented three cases of spontaneous reduction in the size of bullae; interestingly, one of them had a marked decrease in the size of a bulla in association with thickening of the wall of the bulla, which was also observed in our patient. The case we describe is of interest not only because of the rarity with which regression of pulmonary bullae has been reported in the literature, but also because of the spontaneous improvement in the radiological picture in the absence of overt infection or tumor. Copyright (2002) Blackwell Science Pty Ltd

  3. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  4. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
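    As a sketch of the coefficient interpretation the article discusses (simulated data, not the respiratory study; the effect size 0.7 is an arbitrary choice): a logistic regression coefficient is a log odds ratio, so exponentiating it gives the odds ratio per one-unit increase in the predictor.

```python
import numpy as np

# Simulate a binary outcome with known log odds ratio, then recover it
# by fitting logistic regression with Newton-Raphson iterations.
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])      # intercept + predictor
beta_true = np.array([-0.5, 0.7])         # slope = log odds ratio
p = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = rng.binomial(1, p)

beta = np.zeros(2)
for _ in range(25):                       # Newton-Raphson for the MLE
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = mu * (1.0 - mu)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratio = np.exp(beta[1])              # odds ratio per unit increase in x
print(beta, odds_ratio)
```

Here exp(0.7) is about 2.0, i.e. each one-unit increase in the predictor approximately doubles the odds of the outcome.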

  5. Learning Supervised Topic Models for Classification and Regression from Crowds

    DEFF Research Database (Denmark)

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete

    2017-01-01

    Annotation tasks, prone to ambiguity and noise, often with high volumes of documents, make learning under a single-annotator assumption unrealistic or impractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages...

  6. Negativity Bias in Dangerous Drivers.

    Directory of Open Access Journals (Sweden)

    Jing Chai

    Full Text Available The behavioral and cognitive characteristics of dangerous drivers differ significantly from those of safe drivers. However, differences in emotional information processing have seldom been investigated. Previous studies have revealed that drivers with higher anger/anxiety trait scores are more likely to be involved in crashes and that individuals with higher anger traits exhibit stronger negativity biases when processing emotions compared with control groups. However, researchers have not explored the relationship between emotional information processing and driving behavior. In this study, we examined the emotional information processing differences between dangerous drivers and safe drivers. Thirty-eight non-professional drivers were divided into two groups according to the penalty points that they had accrued for traffic violations: 15 drivers with 6 or more points were included in the dangerous driver group, and 23 drivers with 3 or fewer points were included in the safe driver group. The emotional Stroop task was used to measure negativity biases, and both behavioral and electroencephalograph data were recorded. The behavioral results revealed stronger negativity biases in the dangerous drivers than in the safe drivers. The bias score was correlated with self-reported dangerous driving behavior. Drivers with strong negativity biases reported having been involved in more crashes compared with the less-biased drivers. The event-related potentials (ERPs) revealed that the dangerous drivers exhibited reduced P3 components when responding to negative stimuli, suggesting decreased inhibitory control of information that is task-irrelevant but emotionally salient. The influence of negativity bias provides one possible explanation of the effects of individual differences on dangerous driving behavior and traffic crashes.

  7. Coupled bias-variance tradeoff for cross-pose face recognition.

    Science.gov (United States)

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

    Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. Then, we propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be more stable against a pose difference. Building on this idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance.
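    The two regressors the authors explore can be sketched on synthetic data (the penalty values below are arbitrary). Ridge shrinks all coefficients smoothly, trading variance for bias, while the lasso's soft-thresholding drives some coefficients exactly to zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 5
X = rng.normal(size=(n, d))
beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Ridge: closed form, shrinks every coefficient toward zero
lam_ridge = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam_ridge * np.eye(d), X.T @ y)

# Lasso: coordinate descent with soft-thresholding, zeroes weak coefficients
lam_lasso = 20.0
beta_lasso = np.zeros(d)
for _ in range(100):
    for j in range(d):
        r = y - X @ beta_lasso + X[:, j] * beta_lasso[j]  # partial residual
        rho = X[:, j] @ r
        z = X[:, j] @ X[:, j]
        beta_lasso[j] = np.sign(rho) * max(abs(rho) - lam_lasso, 0.0) / z

print(beta_ridge, beta_lasso)
```

With this penalty the lasso recovers the sparsity pattern of beta_true (the two zero coefficients come out exactly zero), at the cost of extra shrinkage bias on the nonzero ones.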

  8. Noise Induces Biased Estimation of the Correction Gain.

    Directory of Open Access Journals (Sweden)

    Jooeun Ahn

    Full Text Available The detection of an error in the motor output and the correction in the next movement are critical components of any form of motor learning. Accordingly, a variety of iterative learning models have assumed that a fraction of the error is adjusted in the next trial. This critical fraction, the correction gain, learning rate, or feedback gain, has been frequently estimated via least-squares regression of the obtained data set. Such data contain not only the inevitable noise from motor execution, but also noise from measurement. It is generally assumed that this noise averages out with large data sets and does not affect the parameter estimation. This study demonstrates that this is not the case and that in the presence of noise the conventional estimate of the correction gain has a significant bias, even with the simplest model. Furthermore, this bias does not decrease with increasing length of the data set. This study reveals this limitation of current system identification methods and proposes a new method that overcomes this limitation. We derive an analytical form of the bias from a simple regression method (Yule-Walker) and develop an improved identification method. This bias is discussed as one example of how the dynamics of noise can introduce significant distortions in data analysis.
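    The persistence of this bias can be reproduced with a minimal simulation (a sketch under assumed parameters, not the paper's model): errors evolve as e[n+1] = (1-g)·e[n] + process noise, but the naive lag-1 (Yule-Walker) regression is run on noisy measurements of e, so its estimate converges to a wrong value no matter how long the series is.

```python
import numpy as np

rng = np.random.default_rng(3)
g_true = 0.5                    # true correction gain
T = 200_000                     # long series: the bias does not average out
sig_proc, sig_meas = 1.0, 1.0   # process and measurement noise levels

proc = sig_proc * rng.normal(size=T)
e = np.zeros(T)
for n in range(T - 1):
    e[n + 1] = (1.0 - g_true) * e[n] + proc[n]
y = e + sig_meas * rng.normal(size=T)       # what is actually measured

# Naive lag-1 least-squares (Yule-Walker) estimate of (1 - g)
slope = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
g_naive = 1.0 - slope

# Theoretical limit: slope -> (1-g) * var(e) / (var(e) + var(measurement))
var_e = sig_proc**2 / (1.0 - (1.0 - g_true) ** 2)
g_limit = 1.0 - (1.0 - g_true) * var_e / (var_e + sig_meas**2)
print(g_naive, g_limit)
```

With equal process and measurement noise the naive estimate settles near 5/7 ≈ 0.71 instead of the true 0.5; lengthening the series only tightens it around that wrong value.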

  9. Using the tracer-dilution discharge method to develop streamflow records for ice-affected streams in Colorado

    Science.gov (United States)

    Capesius, Joseph P.; Sullivan, Joseph R.; O'Neill, Gregory B.; Williams, Cory A.

    2005-01-01

    Accurate ice-affected streamflow records are difficult to obtain for several reasons, which makes the management of instream-flow water rights in the wintertime a challenging endeavor. This report documents a method to improve ice-affected streamflow records for two gaging stations in Colorado. In January and February 2002, the U.S. Geological Survey, in cooperation with the Colorado Water Conservation Board, conducted an experiment using a sodium chloride tracer to measure streamflow under ice cover by the tracer-dilution discharge method. The purpose of this study was to determine the feasibility of obtaining accurate ice-affected streamflow records by using a sodium chloride tracer that was injected into the stream. The tracer was injected at two gaging stations once per day for approximately 20 minutes for 25 days. Multiple-parameter water-quality sensors at the two gaging stations monitored background and peak chloride concentrations. These data were used to determine discharge at each site. A comparison of the current-meter streamflow record to the tracer-dilution streamflow record shows different levels of accuracy and precision of the tracer-dilution streamflow record at the two sites. At the lower elevation and warmer site, Brandon Ditch near Whitewater, the tracer-dilution method overestimated flow by an average of 14 percent, but this average is strongly biased by outliers. At the higher elevation and colder site, Keystone Gulch near Dillon, the tracer-dilution method experienced problems with the tracer solution partially freezing in the injection line. The partial freezing of the tracer contributed to the tracer-dilution method underestimating flow by 52 percent at Keystone Gulch. In addition, a tracer-pump-reliability test was conducted to test how accurately the tracer pumps can discharge the tracer solution in conditions similar to those used at the gaging stations. Although the pumps were reliable and consistent throughout the 25-day study period
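    The discharge computation behind the tracer-dilution method is a single mass balance on the injected tracer (the values below are illustrative, not taken from the report):

```python
# Constant-rate tracer-dilution discharge: at plateau, the tracer mass
# injected per second equals the extra tracer mass carried downstream, so
#   Q = q_inj * (C_inj - C_plateau) / (C_plateau - C_background)
q_inj = 0.05          # injection rate, L/s (assumed)
c_inj = 100_000.0     # NaCl concentration of the injectate, mg/L (assumed)
c_bg = 20.0           # background chloride concentration, mg/L (assumed)
c_plateau = 180.0     # measured plateau concentration downstream, mg/L (assumed)

Q = q_inj * (c_inj - c_plateau) / (c_plateau - c_bg)
print(round(Q, 1))    # stream discharge, L/s
```

The method needs no channel geometry at all, which is why it remains usable under ice cover where current-meter measurements are difficult.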

  10. Numerical value biases sound localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R

    2017-12-08

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.

  11. Modeling syngas-fired gas turbine engines with two dilutants

    Science.gov (United States)

    Hawk, Mitchell E.

    2011-12-01

    Prior gas turbine engine modeling work at the University of Wyoming studied cycle performance and turbine design with air and CO2-diluted GTE cycles fired with methane and syngas fuels. Two of the cycles examined were unconventional and innovative. The work presented herein reexamines prior results and expands the modeling by including the impacts of turbine cooling and CO2 sequestration on GTE cycle performance. The simple, conventional regeneration and two alternative regeneration cycle configurations were examined. In contrast to air dilution, CO2 -diluted cycle efficiencies increased by approximately 1.0 percentage point for the three regeneration configurations examined, while the efficiency of the CO2-diluted simple cycle decreased by approximately 5.0 percentage points. For CO2-diluted cycles with a closed-exhaust recycling path, an optimum CO2-recycle pressure was determined for each configuration that was significantly lower than atmospheric pressure. Un-cooled alternative regeneration configurations with CO2 recycling achieved efficiencies near 50%, which was approximately 3.0 percentage points higher than the conventional regeneration cycle and simple cycle configurations that utilized CO2 recycling. Accounting for cooling of the first two turbine stages resulted in a 2--3 percentage point reduction in un-cooled efficiency, with air dilution corresponding to the upper extreme. Additionally, when the work required to sequester CO2 was accounted for, cooled cycle efficiency decreased by 4--6 percentage points, and was more negatively impacted when syngas fuels were used. Finally, turbine design models showed that turbine blades are shorter with CO2 dilution, resulting in fewer design restrictions.

  12. Dilution Ratios for HB Line Phase I Eductor System

    International Nuclear Information System (INIS)

    Steimke, J.L.

    2002-01-01

    HB Line Phase I product transfer includes an eductor which transfers liquid from Product Hold Tank (PHT) RT-33 or RT-34 to Tank 11.1. The eductor also dilutes the liquid from the PHT with eductant. Dilution must be reliably controlled because of criticality concerns with H Canyon Tanks. The eductor system, which contains a 1 inch Model 264 Schutte and Koerting eductor, was previously modeled [1] in 1998 and dilution ratios were calculated for different flow restrictors, eductant pressures and densities for the eductant and the contents of the PHT. The previous calculation was performed using spreadsheet software no longer supported at SRS. For the previous work dilution ratio was defined as the volume of eductant consumed divided by volume of PHT contents transferred. Since 1998 HB Line Engineering has changed the definition of dilution ratio to the total volume of liquid, eductant consumed plus the volume of PHT liquid transferred, divided by the volume of PHT liquid transferred. The 1998 base case calculation was for a restrictor diameter of 0.334 inches, an eductant supply pressure of 15 psig, full PHT, an eductant specific gravity of 1.385 and a PHT density of 1.015. The base case dilution ratio calculated in 1998 using the current definition was 3.52. After accounting for uncertainty the minimum dilution ratio decreased to 3.23. In 2001 HB Line Engineering requested that the calculation be repeated for a manganous nitrate solution eductant and also a process water eductant. The other conditions were the same as for the 1998 calculation. The objective of this report is to document the calculations and the results
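    The two definitions of dilution ratio can be made concrete with assumed volumes (chosen here so the current-definition ratio matches the reported base-case value of 3.52):

```python
# Two definitions of the HB Line dilution ratio; the volumes are
# illustrative, not taken from the report.
v_pht = 100.0                                  # PHT liquid transferred
v_eductant = 252.0                             # eductant consumed

ratio_1998 = v_eductant / v_pht                # old: eductant / transferred
ratio_current = (v_eductant + v_pht) / v_pht   # current: total liquid / transferred

print(ratio_1998, ratio_current)
```

Since the current definition's numerator adds the transferred PHT volume itself, the current ratio always exceeds the 1998 ratio by exactly 1.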

  13. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  14. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF-weights). This procedure directly shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF-weights) and call the combination of weights the doubly weighted SVR. We illustrate the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...

  15. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams

    Directory of Open Access Journals (Sweden)

    Yuanyuan Yu

    2017-12-01

    Full Text Available Abstract Background Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders using a logistic regression model is the habitual method, though it has some problems in accuracy and precision. It is, therefore, important to highlight the problems of logistic regression and search for an alternative method. Methods Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjusting strategy, in which the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The “do-calculus” was used to calculate the true causal effect of exposure on outcome, and the bias and standard error were used to evaluate the performances of different strategies. Results Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the equivalent bias to the one containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM. Compared with logistic regression, the IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility and the optimal

  16. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams.

    Science.gov (United States)

    Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong

    2017-12-28

    Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders using a logistic regression model is the habitual method, though it has some problems in accuracy and precision. It is, therefore, important to highlight the problems of logistic regression and search for an alternative method. Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjusting strategy, in which the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The "do-calculus" was used to calculate the true causal effect of exposure on outcome, and the bias and standard error were used to evaluate the performances of different strategies. Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the equivalent bias to the one containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM. Compared with logistic regression, the IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility and the optimal strategy was to adjust for the parent nodes of outcome, which
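    The IPW idea the authors favor can be sketched on simulated data (a toy setup with a single binary confounder and known treatment probabilities, not the paper's simulation design): weighting each subject by the inverse probability of the treatment actually received removes the confounding that biases the naive comparison.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
c = rng.binomial(1, 0.5, n)              # binary confounder
p_a = np.where(c == 1, 0.8, 0.2)         # treatment depends on the confounder
a = rng.binomial(1, p_a)
p_y = 0.1 + 0.2 * a + 0.3 * c            # true causal risk difference = 0.2
y = rng.binomial(1, p_y)

# Naive (confounded) risk difference
naive = y[a == 1].mean() - y[a == 0].mean()

# IPW: weight each subject by 1 / P(observed treatment | confounder)
ps = np.where(a == 1, p_a, 1.0 - p_a)
w = 1.0 / ps
treated = np.sum(w * y * a) / np.sum(w * a)
control = np.sum(w * y * (1 - a)) / np.sum(w * (1 - a))
ipw = treated - control
print(naive, ipw)
```

The naive risk difference comes out near 0.38, while the IPW estimate recovers the true causal risk difference of 0.2; in practice the propensities would be fitted from the data rather than known.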

  17. Validation of multi-element isotope dilution ICPMS for the analysis of basalts

    Energy Technology Data Exchange (ETDEWEB)

    Willbold, M.; Jochum, K.P.; Raczek, I.; Amini, M.A.; Stoll, B.; Hofmann, A.W. [Max-Planck-Institut fuer Chemie, Mainz (Germany)

    2003-09-01

    In this study we have validated a newly developed multi-element isotope dilution (ID) ICPMS method for the simultaneous analysis of up to 12 trace elements in geological samples. By evaluating the analytical uncertainty of individual components using certified reference materials, we have quantified the overall analytical uncertainty of the multi-element ID ICPMS method at 1-2%. Individual components include sampling/weighing, purity of reagents, purity of spike solutions, calibration of spikes, determination of isotopic ratios, instrumental sources of error, correction of the mass discrimination effect, values of constants, and operator bias. We have used the ID-determined trace elements for internal standardization to indirectly improve the analysis of 14 other, mainly mono-isotopic, trace elements by external calibration. The overall analytical uncertainty for those data is about 2-3%. In addition, we have analyzed USGS and MPI-DING geological reference materials (BHVO-1, BHVO-2, KL2-G, ML3B-G) to quantify the overall bias of the measurement procedure. Trace element analysis of geological reference materials yielded results that agree mostly within about 2-3% relative to the reference values. Since these results match the conclusions obtained by the investigation of the overall analytical uncertainty, we take this as a measure of the validity of multi-element ID ICPMS. (orig.)
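    The core isotope-dilution calculation can be sketched generically (schematic abundances for a hypothetical two-isotope pair; none of the numbers below are from the study): the amount of analyte follows from the measured isotope ratio of the sample-spike blend.

```python
# Isotope dilution for a generic isotope pair (a, b), with R = a/b ratios:
#   n_x = n_sp * (a_sp - R_m * b_sp) / (R_m * b_x - a_x)
# where n_x is moles of analyte, n_sp moles of spike, a_*/b_* the isotope
# abundance fractions, and R_m the measured ratio of the blend.
def isotope_dilution(n_sp, a_sp, b_sp, a_x, b_x, R_m):
    return n_sp * (a_sp - R_m * b_sp) / (R_m * b_x - a_x)

# Forward check with made-up abundances: "natural-like" sample (a=0.6, b=0.4)
# spiked with an enriched spike (a=0.05, b=0.95).
n_x_true, n_sp = 2.0, 1.0
a_x, b_x = 0.6, 0.4
a_sp, b_sp = 0.05, 0.95
R_m = (n_x_true * a_x + n_sp * a_sp) / (n_x_true * b_x + n_sp * b_sp)

print(isotope_dilution(n_sp, a_sp, b_sp, a_x, b_x, R_m))  # recovers n_x_true
```

Because only a ratio is measured, the result is insensitive to signal drift and partial analyte loss after spike equilibration, which is what makes ID a high-accuracy reference method.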

  18. The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies

    Science.gov (United States)

    O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.

    2011-01-01

    The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R[superscript 2] = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…
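    The source of the bias can be reproduced in a few lines (simulated data with R² around 0.6, loosely mirroring the study's second equation, not the actual data): when one "method" is a regression estimate of the other, the BA difference is necessarily correlated with the mean, distorting the limits of agreement even though the regression itself is unbiased.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)
y = 40.0 + 5.0 * x + 4.0 * rng.normal(size=n)  # "measured" values, R^2 ~ 0.6
yhat = 40.0 + 5.0 * x                          # regression estimate of y

diff, mid = y - yhat, (y + yhat) / 2.0
bias = diff.mean()                             # ~0: the regression is unbiased
loa = 1.96 * diff.std(ddof=1)                  # BA half-width of agreement
r = np.corrcoef(diff, mid)[0, 1]               # nonzero: the BA artifact
print(round(bias, 2), round(loa, 2), round(r, 2))
```

The correlation r is substantially positive by construction (the residual appears in both the difference and the mean), which is why BA plots of regression cross-validations show a spurious trend.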

  19. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...

  20. Mobile Melt-Dilute Treatment for Russian Spent Nuclear Fuel

    International Nuclear Information System (INIS)

    Peacock, H.

    2002-01-01

    Treatment of spent Russian fuel using a Melt-Dilute (MD) process is proposed to consolidate fuel assemblies into a form that is proliferation resistant and provides criticality safety under storage and disposal configurations. Russian fuel elements contain a variety of fuel meat and cladding materials. The Melt-Dilute treatment process was initially developed for aluminum-based fuels, so additional development is needed for several cladding and fuel meat combinations in the Russian fuel inventory (e.g. zirconium-clad, uranium-zirconium alloy fuel). A Mobile Melt-Dilute facility (MMD) is being proposed for treatment of spent fuels at reactor-site storage locations in Russia, thereby avoiding the costs of building separate treatment facilities at each site and avoiding shipment of enriched fuel assemblies over the road. The MMD facility concept is based on laboratory tests conducted at the Savannah River Technology Center (SRTC), and modular pilot-scale facilities constructed at the Savannah River Site for treatment of US spent fuel. SRTC laboratory tests have shown the feasibility of operating a Melt-Dilute treatment process with either a closed system or a filtered off-gas system. The proposed Mobile Melt-Dilute process is presented in this paper

  1. Quantifying the dilution effect for models in ecological epidemiology.

    Science.gov (United States)

    Roberts, M G; Heesterbeek, J A P

    2018-03-01

    The dilution effect, where an increase in biodiversity results in a reduction in the prevalence of an infectious disease, has been the subject of speculation and controversy. Conversely, an amplification effect occurs when increased biodiversity is related to an increase in prevalence. We explore the conditions under which these effects arise, using multi-species compartmental models that integrate ecological and epidemiological interactions. We introduce three potential metrics for quantifying dilution and amplification: one based on infection prevalence in a focal host species, one based on the size of the infected subpopulation of that species, and one based on the basic reproduction number. We introduce our approach in the simplest epidemiological setting with two species, and show that the existence and strength of a dilution effect is influenced strongly by the choices made to describe the system and the metric used to gauge the effect. We show that our method can be generalized to any number of species and to more complicated ecological and epidemiological dynamics. Our method allows a rigorous analysis of ecological systems where dilution effects have been postulated, and contributes to future progress in understanding the phenomenon of dilution in the context of infectious disease dynamics and infection risk. © 2018 The Author(s).
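    The basic-reproduction-number metric can be sketched with a next-generation matrix (all entries below are illustrative, not from the paper): R0 for the multi-host system is the spectral radius of K, where K[i][j] is the expected number of new infections in host species i caused by one infected individual of host species j. If adding a second, less competent host diverts transmission away from the focal host (frequency-dependent mixing), the focal-focal entry drops and so does R0, i.e. a dilution effect under this metric.

```python
import numpy as np

# Next-generation-matrix sketch of an R0-based dilution metric.
def r0(K):
    return float(max(abs(np.linalg.eigvals(K))))  # spectral radius

K_focal_alone = np.array([[2.0]])                 # focal host on its own
# Second host diverts contacts from the focal host and transmits poorly:
K_two_hosts = np.array([[1.5, 0.3],
                        [0.2, 0.4]])

print(r0(K_focal_alone), r0(K_two_hosts))         # R0 falls: dilution
```

Had the second host been added without diverting contacts (focal-focal entry kept at 2.0), the same calculation would give R0 slightly above 2, an amplification effect, illustrating the paper's point that the outcome hinges on how the system is described.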

  2. A Study on the Stability of Diluted Bee Venom Solution

    Directory of Open Access Journals (Sweden)

    Mi-Suk Kang

    2003-06-01

    Full Text Available Objective : The purpose of this study was to investigate the stability of bee venom according to the storage method and period. Method : The author observed microbial contamination of bee venom on nutrient agar, nutrient broth, YPD agar and YPD media, and its antibacterial activity against S. aureus and E. coli, for solutions manufactured 12, 6 and 3 months earlier and kept under two storage conditions: room temperature and 4℃ cold storage. Result : 1. The 1:3,000 and 1:4,000 diluted bee venom solutions showed no microbial contamination at either room temperature or cold storage within twelve months. 2. Diluted bee venom stored cold retained antibacterial activity against S. aureus within twelve months, whereas bee venom stored at room temperature for twelve months showed no antibacterial activity against S. aureus. 3. No zone of inhibition was observed around the paper discs for E. coli at any of the 1:3,000, 1:30,000 and 1:3,000,000 dilutions. According to these results, we expect that diluted bee venom solution is stable under both cold and room-temperature storage within twelve months.

  3. Initial magnetic susceptibility of the diluted magnetopolymer elastic composites

    International Nuclear Information System (INIS)

    Borin, D.Yu.; Odenbach, S.

    2017-01-01

    In this work diluted magnetopolymer elastic composites based on magnetic microparticles are experimentally studied. The samples considered have varied concentrations of the magnetic powder and different structural anisotropy. The experimental data on magnetic properties are complemented by microstructural observations performed using X-ray tomography. The influence of particle content and structuring effects on the initial magnetic susceptibility of the composites, as well as the applicability of the Maxwell-Garnett approximation, which is widely used in treatments of magnetopolymer elastic composites, are evaluated. It is demonstrated that the approximation works well for diluted samples containing randomly distributed magnetic particles and for diluted samples with chain-like structures oriented perpendicular to an externally applied field, while it fails to predict the susceptibility of samples with structures oriented parallel to the field. Moreover, it is shown that variation of the chain morphology does not significantly change the composite's initial magnetic susceptibility. - Highlights: • The Maxwell-Garnett prediction works well for diluted isotropic composites. • The Maxwell-Garnett prediction can be used for composites with structures oriented perpendicular to an applied field. • Chains oriented parallel to an applied field significantly increase the composite's initial magnetic susceptibility. • The number and thickness of chains are not of the highest importance for diluted composites. • The crucial reason for the observed effect is expected to be the demagnetisation factor of the chains.
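    The Maxwell-Garnett approximation evaluated in the paper has a simple closed form for magnetizable spheres of relative permeability mu_p at volume fraction phi in a nonmagnetic matrix (the numerical values below are illustrative, not from the study):

```python
# Maxwell-Garnett mixing rule for spheres in a nonmagnetic matrix (mu_m = 1):
#   (mu_eff - 1) / (mu_eff + 2) = phi * (mu_p - 1) / (mu_p + 2)
def maxwell_garnett(mu_p, phi):
    b = phi * (mu_p - 1.0) / (mu_p + 2.0)
    return (1.0 + 2.0 * b) / (1.0 - b)       # effective relative permeability

chi_eff = maxwell_garnett(mu_p=5.0, phi=0.05) - 1.0  # initial susceptibility
print(round(chi_eff, 4))
```

The rule assumes non-interacting, randomly placed spheres, which is why it matches the isotropic diluted samples but cannot capture the enhanced susceptibility of field-parallel chains, where particle interactions dominate.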

  4. A probabilistic analysis of rapid boron dilution scenarios

    International Nuclear Information System (INIS)

    Kohut, P.; Diamond, D.J.

    1993-01-01

    A probabilistic and deterministic analysis of a rapid boron dilution scenario related to reactor restart was performed. The event is initiated by a loss of off-site power during the startup dilution process. The automatic restart of the charging pump in such cases may lead to the accumulation of a diluted slug of water in the lower plenum. The restart of the reactor coolant pumps may send the diluted slug through the core, adding sufficient reactivity to overcome the shutdown margin and cause a power excursion. The concern is that the power excursion is sufficient in certain circumstances to cause fuel damage. The estimated core damage frequency based on the scoping analysis is 1.0--3.0E-05/yr for the plants analyzed. These are relatively significant values when compared to desirable goals. The analysis contained assumptions related to plant specific design characteristics which may lead to non-conservative estimates. The most important conservative assumptions were that mixing of the injected diluted water is insignificant and that fuel damage occurs when the slug passes through the core

  5. News Consumption and Media Bias

    OpenAIRE

    Yi Xiang; Miklos Sarvary

    2007-01-01

    Bias in the market for news is well-documented. Recent research in economics explains the phenomenon by assuming that consumers want to read (watch) news that is consistent with their tastes or prior beliefs rather than the truth. The present paper builds on this idea but recognizes that (i) besides “biased” consumers, there are also “conscientious” consumers whose sole interest is in discovering the truth, and (ii) consistent with reality, media bias is constrained by the truth. These two fa...

  6. Biased limiter experiments on text

    International Nuclear Information System (INIS)

    Phillips, P.E.; Wootton, A.J.; Rowan, W.L.; Ritz, C.P.; Rhodes, T.L.; Bengtson, R.D.; Hodge, W.L.; Durst, R.D.; McCool, S.C.; Richards, B.; Gentle, K.W.; Schoch, P.; Forster, J.C.; Hickok, R.L.; Evans, T.E.

    1987-01-01

    Experiments using an electrically biased limiter have been performed on the Texas Experimental Tokamak (TEXT). A small movable limiter is inserted past the main poloidal ring limiter (which is electrically connected to the vacuum vessel) and biased at V_Lim with respect to it. The floating potential, plasma potential and shear layer position can be controlled. With |V_Lim| ≥ 50 V the plasma density increases. For V_Lim > 0 the results obtained are inconclusive. Variation of V_Lim changes the electrostatic turbulence, which may explain the observed total flux changes. (orig.)

  7. The coalitional value theory of antigay bias

    NARCIS (Netherlands)

    Winegard, Bo; Reynolds, Tania; Baumeister, Roy F.; Plant, E. Ashby

    2016-01-01

    Research indicates that antigay bias follows a specific pattern (and probably has throughout written history, at least in the West): (a) men evince more antigay bias than women; (b) men who belong to traditionally male coalitions evince more antigay bias than those who do not; (c) antigay bias is

  8. Non‐diluted seawater enhances nasal ciliary beat frequency and wound repair speed compared to diluted seawater and normal saline

    Science.gov (United States)

    Bonnomet, Arnaud; Luczka, Emilie; Coraux, Christelle

    2016-01-01

    Background The regulation of mucociliary clearance is a key part of the defense mechanisms developed by the airway epithelium. Although a high aggregate quality of evidence shows the clinical effectiveness of nasal irrigation, there is a lack of studies showing the intrinsic role of the different irrigation solutions allowing such results. This study investigated the impact of solutions with different pH and ionic compositions, e.g., normal saline, non‐diluted seawater and diluted seawater, on nasal mucosa functional parameters. Methods For this randomized, controlled, blinded, in vitro study, we used airway epithelial cells obtained from 13 nasal polyp explants to measure ciliary beat frequency (CBF) and epithelial wound repair speed (WRS) in response to 3 isotonic nasal irrigation solutions: (1) normal saline 0.9%; (2) non‐diluted seawater (Physiomer®); and (3) 30% diluted seawater (Stérimar). The results were compared to control (cell culture medium). Results Non‐diluted seawater enhanced the CBF and the WRS when compared to diluted seawater and to normal saline. When compared to the control, it significantly enhanced CBF and slightly, though nonsignificantly, improved the WRS. Interestingly, normal saline markedly reduced the number of epithelial cells and ciliated cells when compared to the control condition. Conclusion Our results suggest that the physicochemical features of the nasal wash solution are important because they determine the optimal conditions to enhance CBF and epithelial WRS, thus preserving the respiratory mucosa in pathological conditions. Non‐diluted seawater obtained the best results on CBF and WRS, whereas normal saline showed a deleterious effect on epithelial cell function. PMID:27101776

  9. Non-diluted seawater enhances nasal ciliary beat frequency and wound repair speed compared to diluted seawater and normal saline.

    Science.gov (United States)

    Bonnomet, Arnaud; Luczka, Emilie; Coraux, Christelle; de Gabory, Ludovic

    2016-10-01

    The regulation of mucociliary clearance is a key part of the defense mechanisms developed by the airway epithelium. Although a high aggregate quality of evidence shows the clinical effectiveness of nasal irrigation, there is a lack of studies showing the intrinsic role of the different irrigation solutions allowing such results. This study investigated the impact of solutions with different pH and ionic compositions, e.g., normal saline, non-diluted seawater and diluted seawater, on nasal mucosa functional parameters. For this randomized, controlled, blinded, in vitro study, we used airway epithelial cells obtained from 13 nasal polyp explants to measure ciliary beat frequency (CBF) and epithelial wound repair speed (WRS) in response to 3 isotonic nasal irrigation solutions: (1) normal saline 0.9%; (2) non-diluted seawater (Physiomer®); and (3) 30% diluted seawater (Stérimar). The results were compared to control (cell culture medium). Non-diluted seawater enhanced the CBF and the WRS when compared to diluted seawater and to normal saline. When compared to the control, it significantly enhanced CBF and slightly, though nonsignificantly, improved the WRS. Interestingly, normal saline markedly reduced the number of epithelial cells and ciliated cells when compared to the control condition. Our results suggest that the physicochemical features of the nasal wash solution are important because they determine the optimal conditions to enhance CBF and epithelial WRS, thus preserving the respiratory mucosa in pathological conditions. Non-diluted seawater obtained the best results on CBF and WRS, whereas normal saline showed a deleterious effect on epithelial cell function. © 2016 The Authors International Forum of Allergy & Rhinology, published by ARSAAOA, LLC.

  10. Reality checks on microbial food web interactions in dilution experiments: responses to the comments of Dolan and McKeon

    Directory of Open Access Journals (Sweden)

    M. R. Landry

    2005-01-01

    Full Text Available Dolan and McKeon (2005) have recently criticized microzooplankton grazing rate estimates by the dilution approach as being systematically biased and significantly overestimated. Their argument is based on observed mortality responses of ciliated protozoa to reduced food in several coastal experiments and a global extrapolation which assumes that all grazing in all ocean systems scales to the abundance of ciliates. We suggest that these conclusions are unrealistic on several counts: they do not account for community differences between open ocean and coastal systems; they ignore direct experimental evidence supporting dilution rate estimates in the open oceans; and they discount dilution effects on mortality rate as well as growth in multi-layered, open-ocean food webs. High microzooplankton grazing rates in open-ocean systems are consistent with current views on export fluxes and trophic transfers. More importantly, significantly lower rates would fail to account for the efficient nutrient recycling requirements of these resource-limited and rapid-turnover communities.
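    The dilution technique whose rate estimates are debated above can be sketched numerically. In the classic Landry–Hassett design, apparent phytoplankton growth k measured at dilution level D follows k = μ − g·D, so a linear regression of k on D yields the intrinsic growth rate μ (intercept) and the microzooplankton grazing rate g (negative of the slope). The numbers below are invented for illustration, not data from the study:

```python
# Sketch of the Landry-Hassett dilution technique: regress apparent growth
# rate k on the fraction D of unfiltered seawater; intercept = intrinsic
# growth mu, negative slope = grazing mortality g. Values are hypothetical.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

D = [0.2, 0.4, 0.6, 0.8, 1.0]        # dilution levels (fraction whole seawater)
k = [0.52, 0.44, 0.36, 0.28, 0.20]   # apparent growth rates, per day (invented)

slope, intercept = linear_fit(D, k)
mu = intercept   # intrinsic growth rate, d^-1
g = -slope       # microzooplankton grazing rate, d^-1
print(f"mu = {mu:.2f} d^-1, g = {g:.2f} d^-1")
```

    With real data the points scatter about the line, and nonlinear feeding responses can bend it; critiques such as the one answered above turn in part on how such deviations are interpreted.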

  11. Credit Scoring Problem Based on Regression Analysis

    OpenAIRE

    Khassawneh, Bashar Suhil Jad Allah

    2014-01-01

    ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. Meanwhile, the aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models and also to analyze the found model functions by statistical tools. Keywords: Data mining, linear regression, logistic regression....

  12. Evaluation of bilirubin interference and accuracy of six creatinine assays compared with isotope dilution-liquid chromatography mass spectrometry.

    Science.gov (United States)

    Nah, Hyunjin; Lee, Sang-Guk; Lee, Kyeong-Seob; Won, Jae-Hee; Kim, Hyun Ok; Kim, Jeong-Ho

    2016-02-01

    The aim of this study was to estimate bilirubin interference and accuracy of six routine methods for measuring creatinine compared with isotope dilution-liquid chromatography mass spectrometry (ID-LC/MS). A total of 40 clinical serum samples from 31 patients with serum total bilirubin concentration >68.4 μmol/L were collected. Serum creatinine was measured using two enzymatic reagents and four Jaffe reagents as well as ID-LC/MS. Correlations between bilirubin concentration and percent difference in creatinine compared with ID-LC/MS were analyzed to investigate bilirubin interference. Bias estimations between the six reagents and ID-LC/MS were performed. Recovery tests using National Institute of Standards and Technology (NIST) Standard Reference Material (SRM) 967a were also performed. Both the enzymatic methods showed no bilirubin interference. However, three of the four Jaffe methods demonstrated significant bilirubin concentration-dependent interference in samples with creatinine levels ranging from 53.0 to 97.2 μmol/L. Comparison of these methods with ID-LC/MS using patients' samples with elevated bilirubin revealed that the tested methods failed to achieve the bias goal, especially at low levels of creatinine. In addition, the recovery test using NIST SRM 967a showed that one Jaffe method and two enzymatic methods did not achieve the bias goal at either low or high levels of creatinine, indicating they had calibration bias. One enzymatic method failed to achieve all the bias goals in both the comparison experiment and the recovery test. It is important to understand that both bilirubin interference and calibration traceability to ID-LC/MS should be considered to improve the accuracy of creatinine measurement. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
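    As a rough sketch of the bias estimation performed in studies like this one, the percent difference of a candidate assay against the ID-LC/MS reference can be computed per sample and averaged. All values below are hypothetical, chosen only to illustrate the larger relative bias at low creatinine that the abstract describes:

```python
# Hedged sketch of method-comparison bias estimation: per-sample percent
# difference of a routine assay against the ID-LC/MS reference, then the
# mean bias. Paired values are invented for illustration.

reference = [60.0, 80.0, 120.0, 250.0]   # ID-LC/MS creatinine, umol/L
routine   = [52.0, 74.0, 118.0, 255.0]   # hypothetical Jaffe method, umol/L

pct_diff = [100.0 * (r - ref) / ref for r, ref in zip(routine, reference)]
mean_bias = sum(pct_diff) / len(pct_diff)

print([round(p, 1) for p in pct_diff])   # largest negative bias at low levels
print(round(mean_bias, 1))
```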

  13. On concurvity in nonlinear and nonparametric regression models

    Directory of Open Access Journals (Sweden)

    Sonia Amodio

    2014-12-01

    Full Text Available When data are affected by multicollinearity in the linear regression framework, then concurvity will be present in fitting a generalized additive model (GAM). The term concurvity describes nonlinear dependencies among the predictor variables. Just as collinearity results in inflated variance of the estimated regression coefficients in the linear regression model, the presence of concurvity leads to instability of the estimated coefficients in GAMs. Even if the backfitting algorithm will always converge to a solution, in case of concurvity the final solution of the backfitting procedure in fitting a GAM is influenced by the starting functions. While exact concurvity is highly unlikely, approximate concurvity, the analogue of multicollinearity, is of practical concern as it can lead to upwardly biased estimates of the parameters and to underestimation of their standard errors, increasing the risk of committing type I error. We compare the existing approaches to detect concurvity, pointing out their advantages and drawbacks, using simulated and real data sets. As a result, this paper will provide a general criterion to detect concurvity in nonlinear and nonparametric regression models.

  14. Regularized Label Relaxation Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

    Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.

  15. ERM booster vaccination of Rainbow trout using diluted bacterin

    DEFF Research Database (Denmark)

    Schmidt, Jacob Günther; Henriksen, Niels H.; Buchmann, Kurt

    2016-01-01

    Enteric Red Mouth Disease (ERM) caused by Yersinia ruckeri infection is associated with morbidity and mortality in salmonid farming, but immersion vaccination of fry may confer some protection for a number of months. Revaccination of rainbow trout, even by use of diluted ERM immersion vaccine, can under laboratory conditions extend the protection period. The present field study investigated the applicability of the method under practical farming conditions (freshwater earth ponds supplied by stream water). Primary immersion vaccination of trout (3–4 g) for 30 s in Y. ruckeri bacterin (diluted 1:10) in April 2015 was followed 3 months later (July 2015) by 1 h bathing of rainbow trout in bacterin (diluted 1:650 or 1:1700) in order to evaluate if this time-saving vaccination methodology can improve immunity and protection. Trout were subjected in farms to natural Y. ruckeri exposure in June and July…

  16. Moderate Dilution of Copper Slag by Natural Gas

    Science.gov (United States)

    Zhang, Bao-jing; Zhang, Ting-an; Niu, Li-ping; Liu, Nan-song; Dou, Zhi-he; Li, Zhi-qiang

    2018-01-01

    To enable use of copper slag and extract the maximum value from the contained copper, an innovative method of reducing moderately diluted slag to smelt copper-containing antibacterial stainless steel is proposed. This work focused on moderate dilution of copper slag using natural gas. The thermodynamics of copper slag dilution and ternary phase diagrams of the slag system were calculated. The effects of blowing time, temperature, matte settling time, and calcium oxide addition were investigated. The optimum reaction conditions were identified to be blowing time of 20 min, reaction temperature of 1250°C, settling time of 60 min, CaO addition of 4% of mass of slag, natural gas flow rate of 80 mL/min, and outlet pressure of 0.1 MPa. Under these conditions, the Fe3O4 and copper contents of the residue were 7.36% and 0.50%, respectively.

  17. Thermal diffusion in dilute nanofluids investigated by photothermal interferometry

    International Nuclear Information System (INIS)

    Philip, J; Nisha, M R

    2010-01-01

    We have carried out a theoretical analysis of the dependence of the particle mass fraction on the thermal diffusivity of dilute suspensions of nanoparticles in liquids (dilute nanofluids). The analysis takes into account adsorption of an ordered layer of solvent molecules around the nanoparticles. It is found that thermal diffusivity decreases with mass fraction for sufficiently small particle sizes. Beyond a critical particle size thermal diffusivity begins to increase with mass fraction for the same system. The results have been verified experimentally by measuring the thermal diffusivity of dilute suspensions of TiO2 nanoparticles dispersed in polyvinyl alcohol (PVA) medium. The effect is attributed to Kapitza resistance of thermal waves in the medium.

  18. Biased Brownian dynamics for rate constant calculation.

    OpenAIRE

    Zou, G; Skeel, R D; Subramaniam, S

    2000-01-01

    An enhanced sampling method-biased Brownian dynamics-is developed for the calculation of diffusion-limited biomolecular association reaction rates with high energy or entropy barriers. Biased Brownian dynamics introduces a biasing force in addition to the electrostatic force between the reactants, and it associates a probability weight with each trajectory. A simulation loses weight when movement is along the biasing force and gains weight when movement is against the biasing force. The sampl...

  19. Exploring Attribution Theory and Bias

    Science.gov (United States)

    Robinson, Jessica A.

    2017-01-01

    Courses: This activity can be used in a wide range of classes, including interpersonal communication, introduction to communication, and small group communication. Objectives: After completing this activity, students should be able to: (1) define attribution theory, personality attribution, situational attribution, and attribution bias; (2)…

  20. Ratio Bias and Policy Preferences

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Tue

    2016-01-01

    Numbers permeate modern political communication. While current scholarship on framing effects has focused on the persuasive effects of words and arguments, this article shows that framing of numbers can also substantially affect policy preferences. Such effects are caused by ratio bias, which...

  1. Bias in Peripheral Depression Biomarkers

    DEFF Research Database (Denmark)

    Carvalho, André F; Köhler, Cristiano A; Brunoni, André R

    2016-01-01

    BACKGROUND: To aid in the differentiation of individuals with major depressive disorder (MDD) from healthy controls, numerous peripheral biomarkers have been proposed. To date, no comprehensive evaluation of the existence of bias favoring the publication of significant results or inflating effect...

  2. Minimum Bias Trigger in ATLAS

    International Nuclear Information System (INIS)

    Kwee, Regina

    2010-01-01

    Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp collisions to perform first measurements on charged particle densities. These measurements will help to constrain various models describing phenomenologically soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.2 < |η| < 3.8 has been proven to select pp-collisions very efficiently, the Inner-Detector-based minimum bias trigger uses a random seed on filled bunches and central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presented. We also highlight the advantage of these triggers for particle correlation analyses. (author)

  3. Gender bias in teaching evaluations

    NARCIS (Netherlands)

    Mengel, Friederike; Sauermann, Jan; Zölitz, Ulf Zoelitz

    2017-01-01

    This paper provides new evidence on gender bias in teaching evaluations. We exploit a quasi-experimental dataset of 19,952 student evaluations of university faculty in a context where students are randomly allocated to female or male instructors. Despite the fact that neither students’ grades nor

  4. Attentional Bias in Math Anxiety

    Directory of Open Access Journals (Sweden)

    Orly eRubinsten

    2015-10-01

    Full Text Available Cognitive theory from the field of general anxiety suggests that the tendency to display attentional bias toward negative information results in anxiety. Accordingly, the current study aims to investigate whether attentional bias is involved in math anxiety as well (i.e., a persistent negative reaction to math). Twenty-seven participants (14 with high levels of math anxiety and 13 with low levels of math anxiety) were presented with a novel computerized numerical version of the well established dot probe task. One of 6 types of prime stimuli, either math related or typically neutral, was presented on one side of a computer screen. The prime was followed by a probe (either one or two asterisks) that appeared in either the prime or the opposite location. Participants had to discriminate probe identity (one or two asterisks). Math anxious individuals reacted faster when the probe was at the location of the numerical related stimuli. This suggests the existence of attentional bias in math anxiety. That is, for math anxious individuals, the cognitive system selectively favored the processing of emotionally negative information (i.e., math related words). These findings suggest that attentional bias is linked to unduly intense math anxiety symptoms.

  5. Attentional bias in math anxiety.

    Science.gov (United States)

    Rubinsten, Orly; Eidlin, Hili; Wohl, Hadas; Akibli, Orly

    2015-01-01

    Cognitive theory from the field of general anxiety suggests that the tendency to display attentional bias toward negative information results in anxiety. Accordingly, the current study aims to investigate whether attentional bias is involved in math anxiety (MA) as well (i.e., a persistent negative reaction to math). Twenty-seven participants (14 with high levels of MA and 13 with low levels of MA) were presented with a novel computerized numerical version of the well established dot probe task. One of six types of prime stimuli, either math related or typically neutral, was presented on one side of a computer screen. The prime was followed by a probe (either one or two asterisks) that appeared in either the prime or the opposite location. Participants had to discriminate probe identity (one or two asterisks). Math anxious individuals reacted faster when the probe was at the location of the numerical related stimuli. This suggests the existence of attentional bias in MA. That is, for math anxious individuals, the cognitive system selectively favored the processing of emotionally negative information (i.e., math related words). These findings suggest that attentional bias is linked to unduly intense MA symptoms.
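    For dot-probe studies such as the two above, an attentional bias index is conventionally computed as mean reaction time on incongruent trials (probe replaces the neutral stimulus) minus mean reaction time on congruent trials (probe replaces the math-related stimulus); a positive value indicates attention drawn toward the math-related material. A minimal sketch with invented reaction times:

```python
# Illustrative dot-probe attentional bias index (reaction times are made up):
# bias = mean RT, probe at neutral location (incongruent)
#      - mean RT, probe at math-related location (congruent).
# Positive bias = faster responses where the math stimulus was, i.e.,
# attention was drawn toward the math-related material.

congruent_rts   = [512, 498, 505, 521]   # probe at math-word location, ms
incongruent_rts = [540, 533, 529, 546]   # probe at neutral-word location, ms

bias_index = (sum(incongruent_rts) / len(incongruent_rts)
              - sum(congruent_rts) / len(congruent_rts))
print(round(bias_index, 1))   # positive -> bias toward math stimuli
```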

  6. Perception bias in route choice

    NARCIS (Netherlands)

    Vreeswijk, Jacob Dirk; Thomas, Tom; van Berkum, Eric C.; van Arem, Bart

    2014-01-01

    Travel time is probably one of the most studied attributes in route choice. Recently, perception of travel time received more attention as several studies have shown its importance in explaining route choice behavior. In particular, travel time estimates by travelers appear to be biased against

  7. How Consumers’ Styles of Thinking Can Control Brand Dilution

    Directory of Open Access Journals (Sweden)

    Monga Alokparna Basu

    2018-05-01

    Full Text Available Understanding consumers’ ways of thinking can help identify strategies to limit brand damage and elicit more favorable reactions from disapproving consumers. Analytic thinkers’ beliefs about a brand are diluted when they see negative information; those of holistic thinkers remain unaffected. While both analytic and holistic thinkers blame the brand equally for quality and manufacturing problems, holistic thinkers are more likely to blame contextual factors outside of the brand than analytic thinkers. This ability of holistic thinkers to focus on the outside context is the reason why their brand beliefs are not diluted.

  8. Computer registration of radioactive indicator-dilution curves.

    Science.gov (United States)

    Shepherd, A P; Perry, M A; Alexander, G M; Granger, D N; Riedel, G L; Kvietys, P R; Franke, C P

    1983-12-01

    A system is described for recording indicator-dilution curves produced by gamma radiation-emitting tracers. The system consists of a flow-through cuvette in a well counter, appropriate commercially available gamma radiation-detecting equipment, an Apple II computer, and a two-channel pulse-counting interface of our own design. With the counting interface and the software described here, an investigator can simultaneously record two indicator-dilution curves produced by gamma emitters. Instead of having to wait hours or days for results, the investigator can watch the data being recorded and display the results in graphic form almost immediately after each injection.
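    Once such a curve is recorded, it is typically reduced to a flow estimate via the Stewart–Hamilton relation: flow = injected dose / area under the concentration–time curve. The sketch below uses an invented curve and trapezoidal integration; it does not reproduce the article's own Apple II software:

```python
# Minimal Stewart-Hamilton reduction of an indicator-dilution curve
# (flow = dose / AUC). The curve and dose below are invented.

def trapezoid_area(times, conc):
    """Area under the concentration-time curve by the trapezoidal rule."""
    area = 0.0
    for i in range(1, len(times)):
        area += 0.5 * (conc[i] + conc[i - 1]) * (times[i] - times[i - 1])
    return area

t = [0, 1, 2, 3, 4, 5, 6]                 # time, s
c = [0.0, 2.0, 5.0, 4.0, 2.0, 1.0, 0.0]  # tracer concentration, counts/mL

dose = 70.0                  # injected activity, counts
auc = trapezoid_area(t, c)   # counts*s/mL
flow = dose / auc            # mL/s
print(auc, round(flow, 2))
```

    Real curves also require correction for recirculation (e.g., extrapolating the exponential tail), which is omitted here.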

  9. GRAIN-BOUNDARY PRECIPITATION UNDER IRRADIATION IN DILUTE BINARY ALLOYS

    Institute of Scientific and Technical Information of China (English)

    S.H. Song; Z.X. Yuan; J. Liu; R.G.Faulkner

    2003-01-01

    Irradiation-induced grain boundary segregation of solute atoms frequently brings about grain boundary precipitation of a second phase because it causes the solubility limit of the solute to be exceeded at grain boundaries. Until now, kinetic models for irradiation-induced grain boundary precipitation have been sparse. For this reason, we have theoretically treated grain boundary precipitation under irradiation in dilute binary alloys. Predictions of γ'-Ni3Si precipitation at grain boundaries are made for a dilute Ni-Si alloy subjected to irradiation. It is demonstrated that grain boundary silicon segregation under irradiation may lead to grain boundary γ'-Ni3Si precipitation over a certain temperature range.

  10. Dilution and volatilization of groundwater contaminant discharges in streams

    DEFF Research Database (Denmark)

    Aisopou, Angeliki; Bjerg, Poul Løgstrup; Sonne, Anne Thobo

    2015-01-01

    An analytical solution to describe dilution and volatilization of a continuous groundwater contaminant plume into streams is developed for risk assessment. The location of groundwater plume discharge into the stream (discharge through the side versus bottom of the stream) and different … measurement. The solution was successfully applied to published field data obtained in a large and a small Danish stream and provided valuable information on the risk posed by the groundwater contaminant plumes. The results provided by the dilution and volatilization model are very different to those obtained …
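    For orientation, the zeroth-order version of such a model is a simple mass balance for a fully mixed stream with no volatilization; the paper's analytical solution additionally resolves the discharge location and gradual in-stream mixing, which this sketch omits:

```python
# Fully-mixed mass-balance sketch of groundwater plume dilution in a stream.
# This is a simplification of the abstract's analytical solution: it ignores
# volatilization, discharge location, and partial mixing. Values are invented.

def fully_mixed_stream_conc(c_gw, q_gw, c_up, q_stream):
    """Downstream concentration after complete mixing (mass balance)."""
    return (c_gw * q_gw + c_up * q_stream) / (q_gw + q_stream)

c = fully_mixed_stream_conc(c_gw=500.0,      # ug/L in discharging groundwater
                            q_gw=2.0,        # L/s plume discharge
                            c_up=0.0,        # ug/L upstream of the plume
                            q_stream=198.0)  # L/s stream flow
print(c)   # fully mixed downstream concentration, ug/L
```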

  11. Test plan for tank 241-AN-104 dilution studies

    International Nuclear Information System (INIS)

    Herting, D.L.

    1998-01-01

    Tank 241-AN-104 (104-AN) has been identified as one of the first tanks to be retrieved for low level waste pretreatment and immobilization. Retrieval of the tank waste will require dilution. Laboratory tests are needed to determine the amount and type of dilution required for safe retrieval and transfer of feed and to re-dissolve major soluble sodium salts while not precipitating out other salts. The proposed laboratory tests are described in this document. Tank 241-AN-104 is on the Hydrogen Watch List.

  12. Dynamics of dilute disordered models: A solvable case

    International Nuclear Information System (INIS)

    Semerjian, Guilhem; Cugliandolo, Leticia F.

    2003-09-01

    We study the dynamics of a dilute spherical model with two body interactions and random exchanges. We analyze the Langevin equations and we introduce a functional variational method to study generic dilute disordered models. A crossover temperature replaces the dynamic transition of the fully-connected limit. There are two asymptotic regimes, one determined by the central band of the spectral density of the interactions and a slower one determined by localized configurations on sites with high connectivity. We compare the behavior of this model with that of real glasses. (author)

  13. High field Moessbauer study of dilute Ir-(Fe) alloys

    International Nuclear Information System (INIS)

    Takabatake, Toshiro; Mazaki, Hiromasa; Shinjo, Teruya.

    1981-01-01

    The magnetic behavior of very dilute Fe impurities in Ir has been studied by means of Moessbauer measurement in external fields up to 80 kOe at 4.2 K. The saturation hyperfine field increases in proportion to the external field up to the maximum magnetic field available. This means that for a localized spin fluctuation system IrFe, the effective magnetic moment associated with Fe impurities is induced in proportion to the external field. No anomalous spectrum was observed with a very dilute sample (~10 ppm 57Co), indicating that the interaction between impurities is responsible for the anomalous spectrum previously observed with a less homogeneous sample. (author)

  14. Introduction to the Physics of Diluted Magnetic Semiconductors

    CERN Document Server

    Gaj, Jan A

    2010-01-01

    The book deals with diluted magnetic semiconductors, a class of materials important to the emerging field of spintronics. In these materials semiconducting properties, both transport and optical, are influenced by the presence of magnetic ions. It concentrates on basic physical mechanisms (e.g. carrier-ion and ion-ion interactions) and resulting phenomena (e.g. magnetic polaron formation and spin relaxation). Introduction to the Physics of Diluted Magnetic Semiconductors is addressed to graduate-level and doctoral students and young researchers entering the field. The authors have been actively involved in the creation of this branch of semiconductor physics.

  15. Emotional Issues and Peer Relations in Gifted Elementary Students: Regression Analysis of National Data

    Science.gov (United States)

    Wiley, Kristofor R.

    2013-01-01

    Many of the social and emotional needs that have historically been associated with gifted students have been questioned on the basis of recent empirical evidence. Research on the topic, however, is often limited by sample size, selection bias, or definition. This study addressed these limitations by applying linear regression methodology to data…

  16. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

    NARCIS (Netherlands)

    Kromhout, D.

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the

  17. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling design and to whether the measurement error variance is constant, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
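    The attenuation that motivates all of these corrections is easy to reproduce in a toy simulation. The sketch below uses simple linear regression and the classical reliability-ratio correction (corrected slope = naive slope / λ), not the authors' model-based bootstrap for logistic regression; all quantities are simulated:

```python
# Toy demonstration of regression dilution and its correction. A predictor
# measured with error attenuates the OLS slope by the reliability ratio
# lambda = var(x) / (var(x) + var(u)); dividing the naive slope by lambda
# recovers (approximately) the true coefficient. This is a reliability-ratio
# sketch, not the model-based bootstrap of the article.
import random

random.seed(1)
n = 20000
beta = 1.0
var_x, var_u = 1.0, 1.0           # true-predictor and measurement-error variance
lam = var_x / (var_x + var_u)     # reliability ratio: expected attenuation 0.5

x = [random.gauss(0, var_x ** 0.5) for _ in range(n)]            # true values
w = [xi + random.gauss(0, var_u ** 0.5) for xi in x]             # error-prone
y = [beta * xi + random.gauss(0, 0.5) for xi in x]               # outcome

def ols_slope(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

naive = ols_slope(w, y)           # attenuated toward 0, roughly beta * lam
corrected = naive / lam           # regression-calibration-style correction
print(round(naive, 2), round(corrected, 2))
```

    In practice λ is unknown and must itself be estimated (from a reliability or validation study), which is exactly where bootstrap-based standard errors become relevant.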

  18. Propensity Score Estimation with Data Mining Techniques: Alternatives to Logistic Regression

    Science.gov (United States)

    Keller, Bryan S. B.; Kim, Jee-Seon; Steiner, Peter M.

    2013-01-01

    Propensity score analysis (PSA) is a methodological technique which may correct for selection bias in a quasi-experiment by modeling the selection process using observed covariates. Because logistic regression is well understood by researchers in a variety of fields and easy to implement in a number of popular software packages, it has…

  19. Social desirability bias in dietary self-report may compromise the validity of dietary intake measures.

    Science.gov (United States)

    Hebert, J R; Clemow, L; Pbert, L; Ockene, I S; Ockene, J K

    1995-04-01

    Self-report of dietary intake could be biased by social desirability or social approval thus affecting risk estimates in epidemiological studies. These constructs produce response set biases, which are evident when testing in domains characterized by easily recognizable correct or desirable responses. Given the social and psychological value ascribed to diet, assessment methodologies used most commonly in epidemiological studies are particularly vulnerable to these biases. Social desirability and social approval biases were tested by comparing nutrient scores derived from multiple 24-hour diet recalls (24HR) on seven randomly assigned days with those from two 7-day diet recalls (7DDR) (similar in some respects to commonly used food frequency questionnaires), one administered at the beginning of the test period (pre) and one at the end (post). Statistical analysis included correlation and multiple linear regression. Cross-sectionally, no relationships between social approval score and the nutritional variables existed. Social desirability score was negatively correlated with most nutritional variables. In linear regression analysis, social desirability score produced a large downward bias in nutrient estimation in the 7DDR relative to the 24HR. For total energy, this bias equalled about 50 kcal/point on the social desirability scale or about 450 kcal over its interquartile range. The bias was approximately twice as large for women as for men and only about half as large in the post measures. Individuals having the highest 24HR-derived fat and total energy intake scores had the largest downward bias due to social desirability. We observed a large downward bias in reporting food intake related to social desirability score. These results are consistent with the theoretical constructs on which the hypothesis is based. The effect of social desirability bias is discussed in terms of its influence on epidemiological estimates of effect. 
Suggestions are made for future work.

  20. Estimation of Optimum Dilution in the GMAW Process Using Integrated ANN-GA

    Directory of Open Access Journals (Sweden)

    P. Sreeraj

    2013-01-01

    Full Text Available To improve the corrosion-resistant properties of carbon steel, a cladding process is usually used. It is a process of depositing a thick layer of corrosion-resistant material over a carbon steel plate. Most engineering applications require high-strength, corrosion-resistant materials for long-term reliability and performance. By cladding, these properties can be achieved at minimum cost. The main problem faced in cladding is the selection of the optimum combination of process parameters for achieving a quality clad and hence good clad bead geometry. This paper highlights an experimental study to optimize various input process parameters (welding current, welding speed, gun angle, contact tip to work distance, and pinch) to get optimum dilution in stainless steel cladding of low carbon structural steel plates using gas metal arc welding (GMAW). Experiments were conducted based on a central composite rotatable design with full replication technique, and mathematical models were developed using the multiple regression method. The developed models have been checked for adequacy and significance. In this study, artificial neural network (ANN) and genetic algorithm (GA) techniques were integrated, labeled as integrated ANN-GA, to estimate the optimal process parameters in GMAW for optimum dilution.
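The GA stage of the integrated ANN-GA approach above can be sketched as follows. The paper's fitted models are not reproduced here, so a hypothetical quadratic surrogate stands in for the ANN/regression model, and a minimal real-coded genetic algorithm searches it for parameters giving the target dilution (all coefficients, bounds and the target value are illustrative assumptions):

```python
import random

random.seed(42)

# Hypothetical quadratic surrogate for % dilution as a function of
# welding current (A) and welding speed (cm/min); stands in for the
# paper's fitted regression/ANN model, which is not reproduced here.
def predicted_dilution(current, speed):
    return 25.0 - (current - 200.0) ** 2 / 1000.0 - (speed - 30.0) ** 2 / 10.0

TARGET = 25.0                                   # desired optimum dilution (%)
BOUNDS = [(150.0, 250.0), (20.0, 40.0)]         # search ranges (illustrative)

def fitness(ind):
    # GA maximizes fitness = negative squared deviation from the target
    return -(predicted_dilution(*ind) - TARGET) ** 2

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    # Gaussian perturbation, clipped back into the parameter bounds
    return [min(hi, max(lo, g + random.gauss(0, (hi - lo) * 0.05)))
            if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    # uniform crossover: each gene taken from either parent
    return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

def run_ga(pop_size=40, generations=60):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the better half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
```

The elitist selection keeps the best candidates each generation, so the search converges toward the surrogate's optimum while staying inside the stated parameter bounds.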

  1. Quantification of fentanyl in serum by isotope dilution analysis using capillary gas chromatography

    Energy Technology Data Exchange (ETDEWEB)

    Sera, Shoji; Goromaru, Tsuyoshi [Fukuyama Univ., Hiroshima (Japan); Sameshima, Teruko; Kawasaki, Koichi; Oda, Toshiyuki

    1998-06-01

    The quantitative determination of fentanyl (FT) in serum was examined by isotope dilution analysis using a capillary gas chromatograph equipped with a surface ionization detector. The separation of FT and its deuterated analogue, FT-{sup 2}H{sub 19}, was achieved within 15 min at a column temperature of 260 °C by using a 25 m column. Measurement of samples prepared by the addition of a known amount of FT in the range of 0.2 to 40 ng/ml, together with 20 ng/ml of FT-{sup 2}H{sub 19}, to human control serum showed a linear relationship between the peak-area ratio and the added-amount ratio. The correlation coefficient obtained by regression analysis was 0.999. The advantage of the present isotope dilution method was demonstrated by comparison with other FT analogues, in which a propionyl group was substituted by an acetyl group or a phenethyl group by a benzyl group, as the internal standard. The present method was used to determine the serum level of FT in surgical patients after i.v. administration. No endogenous compounds or concomitant drugs interfered with the detection of FT or FT-{sup 2}H{sub 19}. This method was considered to be useful for pharmacokinetic studies of FT in patients. (author)
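The calibration described above (peak-area ratio linear in amount ratio, with the labeled analogue as internal standard) can be sketched numerically. The calibrator readings below are synthetic, not the paper's data, and a plain least-squares fit stands in for the instrument software:

```python
# Isotope dilution sketch: a fixed amount of labeled internal standard
# (here 20 ng/ml, as in the abstract) is added to calibrators and samples;
# the analyte amount is read back from the peak-area ratio via a linear fit.
SPIKE = 20.0  # ng/ml of labeled internal standard

# (added FT ng/ml, observed peak-area ratio FT / labeled FT) -- synthetic
calibrators = [(0.2, 0.011), (1.0, 0.052), (5.0, 0.249),
               (20.0, 1.001), (40.0, 1.998)]

def fit_line(points):
    # ordinary least squares: y = a*x + b
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# calibrate area ratio against amount ratio (added / spike)
a, b = fit_line([(amt / SPIKE, ratio) for amt, ratio in calibrators])

def serum_concentration(area_ratio):
    # invert the calibration: area ratio -> amount ratio -> ng/ml
    return (area_ratio - b) / a * SPIKE

conc = serum_concentration(0.5)  # an unknown sample's measured area ratio
```

Because analyte and internal standard are chemically near-identical, losses during workup cancel in the ratio, which is the core advantage of isotope dilution the record describes.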

  2. Quantification of fentanyl in serum by isotope dilution analysis using capillary gas chromatography

    International Nuclear Information System (INIS)

    Sera, Shoji; Goromaru, Tsuyoshi; Sameshima, Teruko; Kawasaki, Koichi; Oda, Toshiyuki

    1998-01-01

    The quantitative determination of fentanyl (FT) in serum was examined by isotope dilution analysis using a capillary gas chromatograph equipped with a surface ionization detector. The separation of FT and its deuterated analogue, FT-²H₁₉, was achieved within 15 min at a column temperature of 260 °C by using a 25 m column. Measurement of samples prepared by the addition of a known amount of FT in the range of 0.2 to 40 ng/ml, together with 20 ng/ml of FT-²H₁₉, to human control serum showed a linear relationship between the peak-area ratio and the added-amount ratio. The correlation coefficient obtained by regression analysis was 0.999. The advantage of the present isotope dilution method was demonstrated by comparison with other FT analogues, in which a propionyl group was substituted by an acetyl group or a phenethyl group by a benzyl group, as the internal standard. The present method was used to determine the serum level of FT in surgical patients after i.v. administration. No endogenous compounds or concomitant drugs interfered with the detection of FT or FT-²H₁₉. This method was considered to be useful for pharmacokinetic studies of FT in patients. (author)

  3. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and a method for determining the 'best' equation. An example describes how to carry out principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity, giving a simpler, faster and more accurate statistical analysis.
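As a rough illustration of the procedure (standardize, extract components, regress on the retained components, transform back), here is a numpy sketch on synthetic collinear data; it mirrors the steps, not SPSS's exact output, and the threshold for dropping components is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with strong multicollinearity: x2 is nearly a copy of x1.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 1.0 + 2.0 * x1 + 2.0 * x2 + 0.5 * x3 + rng.normal(scale=0.1, size=n)

# 1. standardize the predictors
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. principal components from the correlation matrix
corr = np.corrcoef(Xs, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# 3. drop near-zero eigenvalues (the collinear direction)
k = int(np.sum(eigval > 1e-3))
Z = Xs @ eigvec[:, :k]

# 4. ordinary least squares on the retained components
Zc = np.column_stack([np.ones(n), Z])
gamma, *_ = np.linalg.lstsq(Zc, y, rcond=None)

# 5. map component coefficients back to standardized predictors
beta_std = eigvec[:, :k] @ gamma[1:]
```

Dropping the tiny-eigenvalue component removes the unstable direction created by x1 ≈ x2 while keeping essentially all of the predictive information.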

  4. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    Science.gov (United States)

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
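A bare-bones numpy sketch of the modified Poisson idea for independent data (log-link Poisson fit by Newton's method, then a robust sandwich variance); the clustered/GEE extension evaluated in the article is not reproduced here, and the data are synthetic with a true relative risk of 2:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort: binary exposure doubling the risk (true RR = 2).
n = 5000
exposed = rng.integers(0, 2, size=n)
risk = np.where(exposed == 1, 0.30, 0.15)
y = rng.binomial(1, risk)
X = np.column_stack([np.ones(n), exposed])   # intercept + exposure

# Poisson regression with log link on the binary outcome (Newton's method)
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)
    info = X.T @ (X * mu[:, None])           # expected information
    beta += np.linalg.solve(info, score)

# Robust (sandwich) variance A^-1 B A^-1, correcting the misspecified
# Poisson variance for the binary outcome
A = X.T @ (X * mu[:, None])
B = X.T @ (X * ((y - mu) ** 2)[:, None])
cov = np.linalg.inv(A) @ B @ np.linalg.inv(A)

rr = np.exp(beta[1])                          # estimated relative risk
se_log_rr = np.sqrt(cov[1, 1])
```

With a single binary exposure the fit reduces to the ratio of the two group risks, which gives a quick sanity check on the Newton solver.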

  5. EDITORIAL: Focus on Dilute Magnetic Semiconductors FOCUS ON DILUTE MAGNETIC SEMICONDUCTORS

    Science.gov (United States)

    Chambers, Scott A.; Gallagher, Bryan

    2008-05-01

    This focus issue of New Journal of Physics is devoted to the materials science of dilute magnetic semiconductors (DMS). A DMS is traditionally defined as a diamagnetic semiconductor doped with a few to several atomic per cent of some transition metal with unpaired d electrons. Several kinds of dopant-dopant interactions can in principle couple the dopant spins, leading to a ferromagnetic ground state in a dilute magnetic system. These include superexchange, which occurs principally in oxides and only between dopants with one intervening oxygen, and double exchange, in which dopants of different formal charges exchange an electron. In both of these mechanisms, the ferromagnetic alignment is not critically dependent on free carriers in the host semiconductor because exchange occurs via bonds. A third mechanism, discovered in the last few years, involves electrons associated with lattice defects that can apparently couple dopant spins. This mechanism is not well understood. Finally, the most desirable mechanism is carrier-mediated exchange interaction, in which the dopant spins are coupled by itinerant electrons or holes in the host semiconductor. This mechanism introduces a fundamental link between magnetic and electrical transport properties and offers the possibility of new spintronic functionalities, in particular electrical gate control of ferromagnetism and the use of spin-polarized currents to carry signals for analog and digital applications. The spin light-emitting diode is a prototypical device of this kind that has been extensively used to characterize the extent of spin polarization in the active light-emitting semiconductor heterostructure. The prototypical carrier-mediated ferromagnetic DMS is Mn-doped GaAs. This and closely related narrow-gap III-V materials have been very extensively studied.
Their properties are generally quite well understood and they have led to important insights into fundamental properties of ferromagnetic systems with strong spin

  6. post-jomtien policy dilutions: infrastructural & quality norms

    Indian Academy of Sciences (India)

    Operation Blackboard norms diluted – from 3 teachers-3 rooms per primary school to 2 teachers-2 rooms per primary school. Regular teacher replaced by under-qualified, untrained, under-paid Para-teachers appointed on short-term contracts. EGS – No provision for school buildings or teaching aids. Multi-grade Teaching ...

  7. Coherence and stiffness of spin waves in diluted ferromagnets

    Czech Academy of Sciences Publication Activity Database

    Turek, Ilja; Kudrnovský, Josef; Drchal, Václav

    2016-01-01

    Vol. 94, No. 17 (2016), article No. 174447. ISSN 2469-9950. R&D Projects: GA ČR GA15-13436S. Institutional support: RVO:68081723; RVO:68378271. Keywords: spin waves * diluted ferromagnets * disordered systems. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 3.836, year: 2016

  8. Procedures for accurately diluting and dispensing radioactive solutions

    International Nuclear Information System (INIS)

    1975-01-01

    The techniques currently used by the various laboratories participating in international comparisons of radioactivity measurements are surveyed and recommendations for good laboratory practice are established. The report describes, for instance, the preparation of solutions, dilution techniques, the use of 'pycnometers', weighing procedures (including buoyancy correction), etc. With these procedures it should be possible to keep random and systematic uncertainties below 0.1% of the final result
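For the weighing step, the buoyancy correction mentioned above matters at the 0.1% level the report targets. A minimal sketch, assuming conventional values (air density ≈ 1.2 kg/m³, stainless-steel balance weights ≈ 8000 kg/m³; the densities and masses are illustrative, not the report's):

```python
# Buoyancy correction for gravimetric dilution: a balance calibrated with
# dense steel weights under-reads the mass of a less dense sample because
# the sample displaces more air.
def buoyancy_corrected_mass(reading_g, rho_sample, rho_air=1.2, rho_weights=8000.0):
    """Convert a balance reading (g) to true mass (g) for a sample of
    density rho_sample (kg/m^3)."""
    return reading_g * (1 - rho_air / rho_weights) / (1 - rho_air / rho_sample)

def dilution_factor(m_aliquot_g, m_diluent_g, rho_solution=1000.0):
    # gravimetric dilution factor = total solution mass / aliquot mass,
    # both buoyancy-corrected with the same solution density
    ma = buoyancy_corrected_mass(m_aliquot_g, rho_solution)
    md = buoyancy_corrected_mass(m_diluent_g, rho_solution)
    return (ma + md) / ma

f = dilution_factor(1.0, 99.0)
```

For an aqueous sample the correction is about +0.1%, i.e. the same order as the total uncertainty budget; note that it cancels in a dilution factor formed from two weighings of the same solution, which is one reason gravimetric dilution is so accurate.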

  9. A simple approximation method for dilute Ising systems

    International Nuclear Information System (INIS)

    Saber, M.

    1996-10-01

    We describe a simple approximate method to analyze dilute Ising systems. The method takes into consideration the fluctuations of the effective field, and is based on a probability distribution of random variables which correctly accounts for all the single site kinematic relations. It is shown that the simplest approximation gives satisfactory results when compared with other methods. (author). 12 refs, 2 tabs

  10. Determination of dilution and quality control of total and anti ...

    African Journals Online (AJOL)

    Objective: To determine the correct dilution and quality control of a commercial ELISA for total and anti-measles antibodies in HIV-infected pregnant women. Design: A laboratory-based study. Setting: The University of Nairobi, Department of Paediatrics laboratory. Subjects: HIV-infected pregnant women enrolled and exposed to ...

  11. Quality of potential harmonics expansion method for dilute Bose ...

    Indian Academy of Sciences (India)

    Abstract. We present and examine an approximate but ab initio many-body approach, viz., the potential harmonics expansion method (PHEM), which includes two-body correlations for dilute Bose–Einstein condensates. Comparing the total ground state energy for three trapped interacting bosons calculated in PHEM with the ...

  12. Bioethanol productions from rice polish by optimization of dilute acid ...

    African Journals Online (AJOL)

    Lignocellulose materials are an abundant renewable resource for the production of biofuel by fermentative organisms (Saccharomyces cerevisiae). Rice polish is the cheapest and most abundant lignocellulose resource and has the potential to produce bioethanol. The main steps for the conversion of biomass into glucose require dilute ...

  13. Inhibition Effect of Deanol on Mild Steel Corrosion in Dilute ...

    African Journals Online (AJOL)

    NICOLAAS

    2014-06-23

    Jun 23, 2014 ... The influence of deanol on the corrosion behaviour of mild steel in dilute sulphuric acid with sodium ... the formation of a complex precipitate of protective film, which ... silicon carbide abrasive papers of 80, 120, 220, 800 and 1000 grit ... ions in sulphuric acid on the corrosion behaviour of stainless steel ...

  14. Kinetic-sound propagation in dilute gas mixtures

    International Nuclear Information System (INIS)

    Campa, A.; Cohen, E.G.D.

    1989-01-01

    Kinetic sound is predicted in dilute disparate-mass binary gas mixtures, propagating exclusively in the light component and much faster than ordinary sound. It should be detectable by light-scattering experiments, as an extended shoulder in the scattering cross section for large frequencies. As an example, H₂-Ar mixtures are discussed

  15. Thermodynamics of a dilute XX chain in a field

    Energy Technology Data Exchange (ETDEWEB)

    Timonin, P. N., E-mail: pntim@live.ru [Southern Federal University, Physics Research Institute (Russian Federation)

    2016-06-15

    Gapless phases in ground states of low-dimensional quantum spin systems are rather ubiquitous. Their peculiarity is a remarkable sensitivity to external perturbations due to the permanent criticality of such phases, manifested by a slow (power-law) decay of pair correlations and the divergence of the corresponding susceptibility. A strong influence of various defects on the properties of the system in such a phase can then be expected. Here, we consider the influence of vacancies on the thermodynamics of the simplest quantum model with a gapless phase, the isotropic spin-1/2 XX chain. The existence of the exact solution of this model gives a unique opportunity to describe in detail the dramatic effect of dilution on the gapless phase: the appearance of an infinite series of quantum phase transitions resulting from level crossing under the variation of a longitudinal magnetic field. We calculate the jumps in the field dependences of the ground-state longitudinal magnetization, susceptibility, entropy, and specific heat appearing at these transitions and show that they result in a highly nonlinear temperature dependence of these parameters at low T. Also, the effect of enhancement of the magnetization and longitudinal correlations in the dilute chain is established. The changes of the pair spin correlators under dilution are also analyzed. The universality of the mechanism of the quantum transition generation suggests that similar effects of dilution can also be expected in gapless phases of other low-dimensional quantum spin systems.

  16. 21 CFR 172.710 - Adjuvants for pesticide use dilutions.

    Science.gov (United States)

    2010-04-01

    ... Section 172.710 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION (CONTINUED) FOOD ADDITIVES PERMITTED FOR DIRECT ADDITION TO FOOD FOR HUMAN CONSUMPTION Other Specific Usage Additives § 172.710 Adjuvants for pesticide use dilutions. The...

  17. Electrochemical reduction of metal ions in dilute solution using hydrogen

    NARCIS (Netherlands)

    Portegies Zwart, I.; Wijnbelt, E.C.W.; Janssen, L.J.J.

    1995-01-01

    Reduction of metal ions in dilute solutions is of great interest for the purification of waste waters and process liquids. A new electrochemical cell has been introduced. This cell, a GBC-cell, is a combination of a gas-diffusion electrode in direct contact with a packed bed of carbon particles.

  18. Electrochemical reduction of dilute chromate solutions on carbon felt electrodes

    NARCIS (Netherlands)

    Frenzel, Ines; Frenzel, I.; Holdik, Hans; Barmashenko, Vladimir; Stamatialis, Dimitrios; Wessling, Matthias

    2006-01-01

    Carbon felt is a potential material for electrochemical reduction of chromates. Very dilute solutions may be efficiently treated due to its large specific surface area and high porosity. In this work, the up-scaling of this technology is investigated using a new type of separated cell and

  19. Electrochemical reduction of nickel ions from dilute solutions

    NARCIS (Netherlands)

    Njau, K.N.; Janssen, L.J.J.

    1995-01-01

    Electrochemical reduction of nickel ions in dilute solution using a divided GBC-cell is of interest for the purification of waste waters. A typical solution to be treated is the effluent from steel etching processes, which contains low quantities of nickel, chromate and chromium ions. Reduction of

  20. 21 CFR 864.5240 - Automated blood cell diluting apparatus.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated blood cell diluting apparatus. 864.5240 Section 864.5240 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES HEMATOLOGY AND PATHOLOGY DEVICES Automated and Semi-Automated Hematology Devices...

  1. Phase diagrams of site diluted ferromagnetic thin film

    International Nuclear Information System (INIS)

    Hamedoun, M.; Bouslykhane, K.; Bakrim, H.; Hourmatallah, A.; Benzakour, N.; Masrour, R.

    2006-01-01

    The phase transition properties of the Ising, classical XY and Heisenberg models of diluted ferromagnetic thin films are studied by the method of exact high-temperature series expansions extrapolated with the Pade approximants method. The reduced critical temperature τ_c of the diluted ferromagnetic thin films is studied as a function of film thickness L and the exchange interactions in the bulk J_b, in the surface J_s and between the surface and nearest-neighbour layer J̄. It is found that τ_c increases with the surface exchange interactions and with L. The magnetic phase diagram (τ_c versus dilution x) is obtained. A critical value of the surface exchange interaction above which surface magnetism appears is found. The dependence of the critical parameter of surface reduced coupling R_2^c on the dilution x and on the ratio R_1 of the exchange interaction between the surface and nearest-neighbour layer to the bulk one has been investigated for the three studied models. The percolation threshold is defined as the concentration x_p at which τ_c = 0. The obtained values are x_p ∼ 0.2 in the bulk and x_p ∼ 0.4 at the surface

  2. Determination of photooxygenation products of rotenone with isotope dilution method

    International Nuclear Information System (INIS)

    Chubachi, Mitsuo; Hamada, Masayuki

    1975-01-01

    When rotenone dissolved in certain solvent was photochemically oxidized, rotenolones, dehydrorotenone and rotenonone were obtained as main products. In order to determine the quantitative yields of these compounds in photooxygenation products, four compounds mentioned above were labeled with carbon-14 and the isotope dilution method by these labeled compounds was applied to the product analysis. (auth.)

  3. Analysis of boron dilution in a four-loop PWR

    International Nuclear Information System (INIS)

    Sun, J.G.; Sha, W.T.

    1995-03-01

    Thermal mixing and boron dilution in a pressurized water reactor were analyzed with COMMIX codes. The reactor system was the four-loop Zion reactor. Two boron dilution scenarios were analyzed. In the first scenario, the plant is in cold shutdown and the reactor coolant system has just been filled after maintenance on the steam generators. To flush the air out of the steam generator tubes, a reactor coolant pump (RCP) is started, with the water in the pump suction line devoid of boron and at the same temperature as the coolant in the system. In the second scenario, the plant is at hot standby and the reactor coolant system has been heated to operating temperature after a long outage. It is assumed that an RCP is started, with the pump suction line filled with cold unborated water, forcing a slug of diluted coolant down the downcomer and subsequently through the reactor core. The subsequent transient thermal mixing and boron dilution that would occur in the reactor system is simulated for these two scenarios. The reactivity insertion rate and the total reactivity are evaluated and a sensitivity study is performed to assess the accuracy of the numerical modeling of the geometry of the reactor coolant system

  4. Simplified Method for Groundwater Treatment Using Dilution and Ceramic Filter

    Science.gov (United States)

    Musa, S.; Ariff, N. A.; Kadir, M. N. Abdul; Denan, F.

    2016-07-01

    Groundwater is a natural resource that is not immune to pollutants. Increasing municipal, industrial, agricultural or extreme land-use activities have resulted in groundwater contamination, as occurred at the Research Centre for Soft Soil Malaysia (RECESS), Universiti Tun Hussein Onn Malaysia (UTHM). Thus, the aim of this study is to treat groundwater using rainwater and a simple ceramic filter as treatment agents. The treatment uses rainwater dilution, ceramic filters and a combined dilute-and-filter method as alternatives that are simpler and more practical than modern or chemical methods. The water that went through the dilution treatment achieved a 57% reduction relative to its initial condition. The water that passed through the filtering process removed as much as 86% of the measured groundwater parameters, with only chloride failing the standard. Favorable results were obtained for the combined dilution and filtration method, which brought 100% of the failing parameters found in the groundwater at RECESS, UTHM, especially sulfate and chloride, within the standards of the Ministry of Health and the Interim National Drinking Water Quality Standard. As a result, the raw water can be used as clean and safe drinking water. This also proves that the method used in this study is very effective in improving the quality of groundwater.

  5. In vitro dilutions of thioridaxine with potential to enhance antibiotic ...

    African Journals Online (AJOL)

    Gram staining, catalase test and coagulase test were done on the resulting colonies to further confirm the strains as S. aureus. Antibiotic susceptibility test was done by agar disc diffusion method using sterile Mueller- Hinton agar plates before and after treatment with laboratory dilutions of thioridaxine. S. aureus strains 1, ...

  6. Time correlation functions and transport coefficients in a dilute superfluid

    International Nuclear Information System (INIS)

    Kirkpatrick, T.R.; Dorfman, J.R.

    1985-01-01

    Time correlation functions for the transport coefficients in the linear Landau-Khalatnikov equations are derived on the basis of a formal theory. These Green-Kubo expressions are then explicitly evaluated for a dilute superfluid, and the resulting transport coefficients are shown to be identical to those obtained previously by using a distribution function method

  7. Optimisation of Dilute Sulphuric Acid Hydrolysis of Waste ...

    African Journals Online (AJOL)

    Dilute sulphuric acid hydrolysis of waste paper was investigated in this study. The effects of acid concentration, time, temperature and liquid to solid ratio on the total reducing sugar concentration were studied over three levels using a four variable Box-Behnken design (BBD). A statistical model was developed for the ...

  8. Atomic displacements in dilute alloys of Cr, Nb and Mo

    Indian Academy of Sciences (India)

    physics pp. 497–514. Atomic displacements in dilute alloys of Cr, Nb and Mo ... used to calculate dynamical matrix and the impurity-induced forces up to second nearest ... origin, the lattice is strained, and the host atoms get displaced to new ...

  9. Color dilution alopecia in a blue Doberman pinscher crossbreed

    OpenAIRE

    Perego, Roberta; Proverbio, Daniela; Roccabianca, Paola; Spada, Eva

    2009-01-01

    A 6-year-old male, blue Doberman pinscher crossbreed was presented with coat abnormalities, in particular flank alopecia and pruritus. Based on the medical history, clinical evidence, and histopathological examination, color dilution alopecia was diagnosed. The dog was treated with oral melatonin for 3 months without success.

  10. Color dilution alopecia in a blue Doberman pinscher crossbreed.

    Science.gov (United States)

    Perego, Roberta; Proverbio, Daniela; Roccabianca, Paola; Spada, Eva

    2009-05-01

    A 6-year-old male, blue Doberman pinscher crossbreed was presented with coat abnormalities, in particular flank alopecia and pruritus. Based on the medical history, clinical evidence, and histopathological examination, color dilution alopecia was diagnosed. The dog was treated with oral melatonin for 3 months without success.

  11. Does the dilution effect generally occur in animal diseases?

    NARCIS (Netherlands)

    Huang, Zheng Y.X.; Yu, Yang; Langevelde, Van Frank; Boer, De Willem F.

    2017-01-01

    The dilution effect (DE) has been reported in many diseases, but its generality is still highly disputed. Most current criticisms of DE are related to animal diseases. Particularly, some critical studies argued that DE is less likely to occur in complex environments. Here our meta-analyses

  12. Inhibition Effect of Deanol on Mild Steel Corrosion in Dilute ...

    African Journals Online (AJOL)

    The influence of deanol on the corrosion behaviour of mild steel in dilute sulphuric acid with sodium chloride addition was studied by means of mass-loss, potentiodynamic polarization, electrode potential monitoring, scanning electron microscopy and statistical analysis. Results show that deanol performed excellently with ...

  13. Novel understanding of calcium silicate hydrate from dilute hydration

    KAUST Repository

    Zhang, Lina; Yamauchi, Kazuo; Li, Zongjin; Zhang, Xixiang; Ma, Hongyan; Ge, Shenguang

    2017-01-01

    The perspective of calcium silicate hydrate (C-S-H) is still confronting various debates due to its intrinsically complicated structure and properties after decades of studies. In this study, hydration in a dilute suspension with w/s equal to 10

  14. Estimation and correction of visibility bias in aerial surveys of wintering ducks

    Science.gov (United States)

    Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.

    2008-01-01

    Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1–100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36–42%, and associated standard errors increased 38–55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
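The two-part correction described above (group detection probability from a logistic model, times a separate counting-bias factor) can be sketched as a Horvitz-Thompson-style estimator. The logistic coefficients and the observed groups below are hypothetical placeholders, not the paper's fitted values; only the 78% counting-bias figure comes from the abstract:

```python
import math

# Hypothetical sightability model: detection probability of a duck group
# as a function of wetland cover (forested = 1) and log group size.
B0, B_FOREST, B_LOGSIZE = -0.5, -1.2, 0.9    # illustrative coefficients
COUNT_BIAS = 0.78   # observers count ~78% of birds present (from the abstract)

def detection_prob(forested, group_size):
    eta = B0 + B_FOREST * forested + B_LOGSIZE * math.log(group_size)
    return 1 / (1 + math.exp(-eta))

def corrected_abundance(observed_groups):
    """observed_groups: list of (forested, group_size, counted_birds).
    Each count is divided by its estimated detection probability and by
    the counting-bias factor, then summed."""
    return sum(count / (detection_prob(f, size) * COUNT_BIAS)
               for f, size, count in observed_groups)

# three hypothetical observed groups: (forested?, true group size, count)
est = corrected_abundance([(0, 50, 40), (1, 10, 8), (0, 5, 4)])
```

Small groups in forested cover get the largest upward adjustment, which is exactly the pattern the decoy experiment was designed to quantify.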

  15. Variable-bias coin tossing

    International Nuclear Information System (INIS)

    Colbeck, Roger; Kent, Adrian

    2006-01-01

    Alice is a charismatic quantum cryptographer who believes her parties are unmissable; Bob is a (relatively) glamorous string theorist who believes he is an indispensable guest. To prevent possibly traumatic collisions of self-perception and reality, their social code requires that decisions about invitation or acceptance be made via a cryptographically secure variable-bias coin toss (VBCT). This generates a shared random bit by the toss of a coin whose bias is secretly chosen, within a stipulated range, by one of the parties; the other party learns only the random bit. Thus one party can secretly influence the outcome, while both can save face by blaming any negative decisions on bad luck. We describe here some cryptographic VBCT protocols whose security is guaranteed by quantum theory and the impossibility of superluminal signaling, setting our results in the context of a general discussion of secure two-party computation. We also briefly discuss other cryptographic applications of VBCT

  16. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre; C. R. Martins

    2006-11-01

    Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated to them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.

  17. Variable-bias coin tossing

    Science.gov (United States)

    Colbeck, Roger; Kent, Adrian

    2006-03-01

    Alice is a charismatic quantum cryptographer who believes her parties are unmissable; Bob is a (relatively) glamorous string theorist who believes he is an indispensable guest. To prevent possibly traumatic collisions of self-perception and reality, their social code requires that decisions about invitation or acceptance be made via a cryptographically secure variable-bias coin toss (VBCT). This generates a shared random bit by the toss of a coin whose bias is secretly chosen, within a stipulated range, by one of the parties; the other party learns only the random bit. Thus one party can secretly influence the outcome, while both can save face by blaming any negative decisions on bad luck. We describe here some cryptographic VBCT protocols whose security is guaranteed by quantum theory and the impossibility of superluminal signaling, setting our results in the context of a general discussion of secure two-party computation. We also briefly discuss other cryptographic applications of VBCT.

  18. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

  19. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime
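    The core computation such methods build on, the posterior mean and variance of a Gaussian process regression, fits in a few lines. A sketch with a squared-exponential kernel and illustrative hyperparameters (not taken from the book):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance k(a, b) = v * exp(-(a-b)^2 / (2 l^2))."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and pointwise variance of a zero-mean GP prior."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x)
mean, var = gp_posterior(x, y, x)  # predict back at the training points
```

    With a small noise term the posterior mean interpolates the training data closely; for functional data one would extend the kernel to operate on whole curves, as the book discusses.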

  20. Development of isotope dilution-liquid chromatography/mass spectrometry combined with standard addition techniques for the accurate determination of tocopherols in infant formula

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joonhee; Jang, Eun-Sil; Kim, Byungjoo, E-mail: byungjoo@kriss.re.kr

    2013-07-17

    Graphical abstract: -- Highlights: •ID-LC/MS method showed biased results for tocopherols analysis in infant formula. •H/D exchange of deuterated tocopherols in sample preparation was the source of bias. •Standard addition (SA)-ID-LC/MS was developed as an alternative to ID-LC/MS. •Details of calculation and uncertainty evaluation of the SA-IDMS were described. •SA-ID-LC/MS showed a higher-order metrological quality as a reference method. -- Abstract: During the development of isotope dilution-liquid chromatography/mass spectrometry (ID-LC/MS) for tocopherol analysis in infant formula, biased measurement results were observed when deuterium-labeled tocopherols were used as internal standards. It turned out that the biases came from intermolecular H/D exchange and intramolecular H/D scrambling of internal standards in sample preparation processes. Degrees of H/D exchange and scrambling showed considerable dependence on sample matrix. Standard addition-isotope dilution mass spectrometry (SA-IDMS) based on LC/MS was developed in this study to overcome the shortcomings of using deuterium-labeled internal standards while the inherent advantage of isotope dilution techniques is utilized for the accurate recovery correction in sample preparation processes. Details of experimental scheme, calculation equation, and uncertainty evaluation scheme are described in this article. The proposed SA-IDMS method was applied to several infant formula samples to test its validity. The method was proven to have a higher-order metrological quality with providing very accurate and precise measurement results.
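    The article's SA-IDMS combines isotope-ratio measurement with standard addition, and its exact calculation and uncertainty equations are given in the paper. The bare standard-addition idea, spiking the sample with known increments and extrapolating the response line back to zero signal, can be sketched generically (the numbers below are illustrative, not from the paper):

```python
import numpy as np

def standard_addition_concentration(added_conc, signals):
    """Fit signal = m * added + b and return the sample concentration
    as the magnitude of the x-intercept, i.e. b / m (assumes a linear
    response across the spiked range)."""
    m, b = np.polyfit(added_conc, signals, 1)
    return b / m

# Sample at 2.0 units with a linear detector response (sensitivity 1.5):
added = [0.0, 1.0, 2.0, 3.0]
signal = [1.5 * (2.0 + a) for a in added]
conc = standard_addition_concentration(added, signal)
```

    Because the calibration is performed in the sample's own matrix, matrix effects cancel to first order, which is why the authors pair it with isotope dilution for recovery correction.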

  1. Girl child and gender bias.

    Science.gov (United States)

    Chowdhry, D P

    1995-01-01

    This article identifies gender bias against female children and youth in India. Gender bias is based on centuries-old religious beliefs and sayings from ancient times. Discrimination is reflected in denial or ignorance of female children's educational, health, nutrition, and recreational needs. Female infanticide and selective abortion of female fetuses are other forms of discrimination. The task of eliminating or reducing gender bias will involve legal, developmental, political, and administrative measures. Public awareness needs to be created. There is a need to reorient the education and health systems and to advocate for gender equality. The government of India set the following goals for the 1990s: to protect the survival of the girl child and practice safe motherhood; to develop the girl child in general; and to protect vulnerable girl children in different circumstances and in special groups. The Health Authorities should monitor the laws carefully to assure marriage after the minimum age, ban sex determination of the fetus, and monitor the health and nutrition of pre-school girls and nursing and pregnant mothers. Mothers need to be encouraged to breast feed, and to breast feed equally between genders. Every village and slum area needs a mini health center. Maternal mortality must decline. Primary health centers and hospitals need more women's wards. Education must be universally accessible. Enrollments should be increased by educating rural tribal and slum parents, reducing distances between home and school, making curriculum more relevant to girls, creating more female teachers, and providing facilities and incentives for meeting the needs of girl students. Supplementary income could be provided to families for sending girls to school. Recreational activities must be free of gender bias. Dowry, sati, and devdasi systems should be banned.

  2. Competition and Commercial Media Bias

    OpenAIRE

    Blasco, Andrea; Sobbrio, Francesco

    2011-01-01

    This paper reviews the empirical evidence on commercial media bias (i.e., advertisers' influence over media accuracy) and then introduces a simple model to summarize the main elements of the theoretical literature. The analysis provides three main policy insights for media regulators: i) Media regulators should target their monitoring efforts towards news contents upon which advertisers are likely to share similar preferences; ii) In advertising industries characterized by high correlation in ...

  3. BEHAVIORAL BIASES IN TRADING SECURITIES

    Directory of Open Access Journals (Sweden)

    Turcan Ciprian Sebastian

    2010-12-01

    Full Text Available The main thesis of this paper is the importance and the effects that human behavior has on capital markets. It is important to see the link between asset valuation and the investor sentiment that motivates paying a certain price above or below an asset's intrinsic value. The main behavioral aspects discussed are emotional factors such as fear of regret, overconfidence, perseverance, loss aversion, heuristic biases, misinformation and thinking errors, herding, and their consequences.

  4. Regression Analysis by Example. 5th Edition

    Science.gov (United States)

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  5. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  6. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  7. Attention bias for chocolate increases chocolate consumption--an attention bias modification study.

    Science.gov (United States)

    Werthmann, Jessica; Field, Matt; Roefs, Anne; Nederkoorn, Chantal; Jansen, Anita

    2014-03-01

    The current study examined experimentally whether a manipulated attention bias for food cues increases craving, chocolate intake and motivation to search for hidden chocolates. To test the effect of attention for food on subsequent chocolate intake, attention for chocolate was experimentally modified by instructing participants to look at chocolate stimuli ("attend chocolate" group) or at non-food stimuli ("attend shoes" group) during a novel attention bias modification task (antisaccade task). Chocolate consumption, changes in craving and search time for hidden chocolates were assessed. Eye-movement recordings were used to monitor accuracy during the experimental attention modification task as a possible moderator of effects. Regression analyses were conducted to test the effect of attention modification and modification accuracy on chocolate intake, craving and motivation to search for hidden chocolates. Results showed that participants with higher accuracy (+1 SD) ate more chocolate when they had to attend to chocolate and ate less chocolate when they had to attend to non-food stimuli. In contrast, for participants with lower accuracy (-1 SD), the results were exactly reversed. We used chocolate as the food stimulus, so it remains unclear how our findings generalize to other types of food. These findings provide further evidence for a link between attention for food and food intake, and give an indication of the direction of this relationship. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Significant biases affecting abundance determinations

    Science.gov (United States)

    Wesson, Roger

    2015-08-01

    I have developed two highly efficient codes to automate analyses of emission line nebulae. The tools place particular emphasis on the propagation of uncertainties. The first tool, ALFA, uses a genetic algorithm to rapidly optimise the parameters of Gaussian fits to line profiles. It can fit emission line spectra of arbitrary resolution, wavelength range and depth, with no user input at all. It is well suited to highly multiplexed spectroscopy such as that now being carried out with instruments such as MUSE at the VLT. The second tool, NEAT, carries out a full analysis of emission line fluxes, robustly propagating uncertainties using a Monte Carlo technique. Using these tools, I have found that considerable biases can be introduced into abundance determinations if the uncertainty distribution of emission lines is not well characterised. For weak lines, normally distributed uncertainties are generally assumed, though it is incorrect to do so, and significant biases can result. I discuss observational evidence of these biases. The two new codes contain routines to correctly characterise the probability distributions, giving more reliable results in analyses of emission line nebulae.
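    The Monte Carlo propagation NEAT performs can be illustrated in miniature: draw flux realizations from each line's uncertainty distribution, push them through the derived quantity, and read off percentiles. The sketch below uses a simple flux ratio and clipped-normal draws purely for illustration; NEAT's actual distributions and derived quantities differ:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_ratio(flux_a, sigma_a, flux_b, sigma_b, n=100_000):
    """Monte Carlo propagation of flux uncertainties through a line ratio.

    Draws flux realizations (truncated at zero, since a measured flux
    cannot be negative -- one reason weak-line uncertainties are not
    Gaussian) and returns the 16th/50th/84th percentiles of the ratio."""
    a = np.clip(rng.normal(flux_a, sigma_a, n), 1e-12, None)
    b = np.clip(rng.normal(flux_b, sigma_b, n), 1e-12, None)
    r = a / b
    return np.percentile(r, [16, 50, 84])

lo, med, hi = mc_ratio(5.0, 0.5, 10.0, 0.3)
```

    For strong lines the interval is nearly symmetric; for weak lines the truncation skews it, which is exactly the mis-characterisation the abstract warns will bias abundances.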

  9. Galaxy formation and physical bias

    Science.gov (United States)

    Cen, Renyue; Ostriker, Jeremiah P.

    1992-01-01

    We have supplemented our code, which computes the evolution of the physical state of a representative piece of the universe, to include not only the dynamics of dark matter (with a standard PM code) and the hydrodynamics of the gaseous component (including detailed collisional and radiative processes), but also galaxy formation on a heuristic but plausible basis. If, within a cell, the gas is Jeans unstable, collapsing, and cooling rapidly, it is transformed into galaxy subunits, which are then followed with a collisionless code. After grouping them into galaxies, we estimate the relative distributions of galaxies and dark matter and the relative velocities of galaxies and dark matter. In a large-scale CDM run of 80/h Mpc size with 8 x 10^6 cells and dark matter particles, we find that the physical bias b on the 8/h Mpc scale is about 1.6 and increases towards smaller scales, and that the velocity bias is about 0.8 on the same scale. The comparable HDM simulation is highly biased, with b = 2.7 on the 8/h Mpc scale. Implications of these results are discussed in the light of the COBE observations, which provide an accurate normalization for the initial power spectrum. CDM can be ruled out on the basis of too large a predicted small-scale velocity dispersion at greater than the 95 percent confidence level.

  10. Opinion dynamics with confirmation bias.

    Directory of Open Access Journals (Sweden)

    Armen E Allahverdyan

    Full Text Available Confirmation bias is the tendency to acquire or evaluate new information in a way that is consistent with one's preexisting beliefs. It is omnipresent in psychology, economics, and even scientific practices. Prior theoretical research on this phenomenon has mainly focused on its economic implications, possibly missing its potential connections with broader notions of cognitive science. We formulate a (non-Bayesian) model for revising the subjective probabilistic opinion of a confirmationally biased agent in the light of a persuasive opinion. The revision rule ensures that the agent does not react to persuasion that is either far from his current opinion or coincides with it. We demonstrate that the model accounts for the basic phenomenology of social judgment theory, and allows one to study various phenomena such as cognitive dissonance and the boomerang effect. The model also displays the order-of-presentation effect, in which, on consecutive exposure to two opinions, preference is given to the last opinion (recency) or the first opinion (primacy), and relates recency to confirmation bias. Finally, we study the model in the case of repeated persuasion and analyze its convergence properties. The standard Bayesian approach to probabilistic opinion revision is inadequate for describing the observed phenomenology of the persuasion process. The simple non-Bayesian model proposed here does agree with this phenomenology and is capable of reproducing a spectrum of effects observed in psychology: the primacy-recency phenomenon, the boomerang effect, and cognitive dissonance. We point out several limitations of the model that should motivate its future development.

  11. Meta-analytical synthesis of regression coefficients under different categorization scheme of continuous covariates.

    Science.gov (United States)

    Yoneoka, Daisuke; Henmi, Masayuki

    2017-11-30

    Recently, the number of clinical prediction models sharing the same regression task has increased in the medical literature. However, evidence synthesis methodologies that use the results of these regression models have not been sufficiently studied, particularly in meta-analysis settings where only regression coefficients are available. One of the difficulties lies in the differences between the categorization schemes of continuous covariates across different studies. In general, categorization methods using cutoff values are study specific across available models, even if they focus on the same covariates of interest. Differences in the categorization of covariates could lead to serious bias in the estimated regression coefficients and thus in subsequent syntheses. To tackle this issue, we developed synthesis methods for linear regression models with different categorization schemes of covariates. A 2-step approach to aggregate the regression coefficient estimates is proposed. The first step is to estimate the joint distribution of covariates by introducing a latent sampling distribution, which uses one set of individual participant data to estimate the marginal distribution of covariates with categorization. The second step is to use a nonlinear mixed-effects model with correction terms for the bias due to categorization to estimate the overall regression coefficients. Especially in terms of precision, numerical simulations show that our approach outperforms conventional methods, which only use studies with common covariates or ignore the differences between categorization schemes. The method developed in this study is also applied to a series of WHO epidemiologic studies on white blood cell counts. Copyright © 2017 John Wiley & Sons, Ltd.

  12. An isotope-dilution standard GC/MS/MS method for steroid hormones in water

    Science.gov (United States)

    Foreman, William T.; Gray, James L.; ReVello, Rhiannon C.; Lindley, Chris E.; Losche, Scott A.

    2013-01-01

    An isotope-dilution quantification method was developed for 20 natural and synthetic steroid hormones and additional compounds in filtered and unfiltered water. Deuterium- or carbon-13-labeled isotope-dilution standards (IDSs) are added to the water sample, which is passed through an octadecylsilyl solid-phase extraction (SPE) disk. Following extract cleanup using Florisil SPE, method compounds are converted to trimethylsilyl derivatives and analyzed by gas chromatography with tandem mass spectrometry. Validation matrices included reagent water, wastewater-affected surface water, and primary (no biological treatment) and secondary wastewater effluent. Overall method recovery for all analytes in these matrices averaged 100%, with an overall relative standard deviation of 28%. Mean recoveries of the 20 individual analytes for spiked reagent-water samples prepared along with field samples analyzed in 2009–2010 ranged from 84% to 104%, with relative standard deviations of 6–36%. Detection levels estimated using ASTM International's D6091–07 procedure range from 0.4 to 4 ng/L for 17 analytes. Higher censoring levels of 100 ng/L for bisphenol A and 200 ng/L for cholesterol and 3-beta-coprostanol are used to prevent bias and false positives associated with the presence of these analytes in blanks. Absolute method recoveries of the IDSs provide sample-specific performance information and guide data reporting. Careful selection of labeled compounds for use as IDSs is important because both inexact IDS-analyte matches and deuterium label loss affect an IDS's ability to emulate analyte performance. Six IDS compounds initially tested and applied in this method exhibited deuterium loss and are not used in the final method.

  13. Determination of serum calcium levels by 42Ca isotope dilution inductively coupled plasma mass spectrometry.

    Science.gov (United States)

    Han, Bingqing; Ge, Menglei; Zhao, Haijian; Yan, Ying; Zeng, Jie; Zhang, Tianjiao; Zhou, Weiyan; Zhang, Jiangtao; Wang, Jing; Zhang, Chuanbao

    2017-11-27

    Serum calcium level is an important clinical index that reflects pathophysiological states. However, detection accuracy in laboratory tests is not ideal; as such, a high-accuracy method is needed. We developed a reference method for measuring serum calcium levels by isotope dilution inductively coupled plasma mass spectrometry (ID ICP-MS), using 42Ca as the enriched isotope. Serum was digested with 69% ultrapure nitric acid and diluted to a suitable concentration. The 44Ca/42Ca ratio was detected in H2 mode; the spike concentration was calibrated by reverse IDMS using standard reference material (SRM) 3109a, and the sample concentration was measured by a bracketing procedure. We compared the performance of ID ICP-MS with those of three other reference methods in China using the same serum and aqueous samples. The relative expanded uncertainty of the sample concentration was 0.414% (k=2). The ranges of repeatability (within-run imprecision), intermediate imprecision (between-run imprecision), and intra-laboratory imprecision were 0.12%-0.19%, 0.07%-0.09%, and 0.16%-0.17%, respectively, for two of the serum samples. SRM909bI, SRM909bII, SRM909c, and GBW09152 were found to be within the certified value intervals, with mean relative bias values of 0.29%, -0.02%, 0.10%, and -0.19%, respectively. The range of recovery was 99.87%-100.37%. Results obtained by ID ICP-MS showed better accuracy than, and were highly correlated with, those of the other reference methods. ID ICP-MS is a simple and accurate candidate reference method for serum calcium measurement and can be used to establish and improve the serum calcium reference system in China.

  14. Sugar yields from dilute oxalic acid pretreatment of maple wood compared to those with other dilute acids and hot water.

    Science.gov (United States)

    Zhang, Taiying; Kumar, Rajeev; Wyman, Charles E

    2013-01-30

    Dilute oxalic acid pretreatment was applied to maple wood to improve compatibility with downstream operations, and its performance in pretreatment and subsequent enzymatic hydrolysis was compared to results for hydrothermal and dilute hydrochloric and sulfuric acid pretreatments. The highest total xylose yield of ∼84% of the theoretical maximum was for both 0.5% oxalic and sulfuric acid pretreatment at 160 °C, compared to ∼81% yield for hydrothermal pretreatment at 200 °C and for 0.5% hydrochloric acid pretreatment at 140 °C. The xylooligomer fraction from dilute oxalic acid pretreatment was only 6.3% of the total xylose in solution, similar to results with dilute hydrochloric and sulfuric acids but much lower than the ∼70% value for hydrothermal pretreatment. Combining any of the four pretreatments with enzymatic hydrolysis with 60 FPU cellulase/g of glucan plus xylan in the pretreated maple wood resulted in virtually the same total glucose plus xylose yields of ∼85% of the maximum possible. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Matrilateral Bias in Human Grandmothering

    Directory of Open Access Journals (Sweden)

    Martin Daly

    2017-09-01

    Full Text Available Children receive more care and resources from their maternal grandmothers than from their paternal grandmothers. This asymmetry is the “matrilateral bias” in grandmaternal investment. Here, we synopsize the evolutionary theories that predict such a bias, and review evidence of its cross-cultural generality and magnitude. Evolutionists have long maintained that investing in a daughter’s child yields greater fitness returns, on average, than investing in a son’s child because of paternity uncertainty: the son’s putative progeny may have been sired by someone else. Recent theoretical work has identified an additional natural selective basis for the matrilateral bias that may be no less important: supporting grandchildren lightens the load on their mother, increasing her capacity to pursue her fitness in other ways, and if she invests those gains either in her natal relatives or in children of a former or future partner, fitness returns accrue to the maternal, but not the paternal, grandmother. In modern democracies, where kinship is reckoned bilaterally and no postmarital residence norms restrict grandmaternal access to grandchildren, many studies have found large matrilateral biases in contact, childcare, and emotional closeness. In other societies, patrilineal ideology and postmarital residence with the husband’s kin (virilocality) might be expected to have produced a patrilateral bias instead, but the available evidence refutes this hypothesis. In hunter-gatherers, regardless of professed norms concerning kinship and residence, mothers get needed help at and after childbirth from their mothers, not their mothers-in-law. In traditional agricultural and pastoral societies, patrilineal and virilocal norms are common, but young mothers still turn to their natal families for crucial help, and several studies have documented benefits, including reduced child mortality, associated with access to maternal, but not paternal, grandmothers. Even

  16. HMO marketing and selection bias: are TEFRA HMOs skimming?

    Science.gov (United States)

    Lichtenstein, R; Thomas, J W; Watkins, B; Puto, C; Lepkowski, J; Adams-Watson, J; Simone, B; Vest, D

    1992-04-01

    The research evidence indicates that health maintenance organizations (HMOs) participating in the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA) At-Risk Program tend to experience favorable selection. Although favorable selection might result from patient decisions, a common conjecture is that it can be induced by HMOs through their marketing activities. The purpose of this study is to examine the relationship between HMO marketing strategies and selection bias in TEFRA At-Risk HMOs. A purposive sample of 22 HMOs that were actively marketing their TEFRA programs was selected and data on organizational characteristics, market area characteristics, and HMO marketing decisions were collected. To measure selection bias in these HMOs, the functional health status of approximately 300 enrollees in each HMO was compared to that of 300 non-enrolling beneficiaries in the same area. Three dependent variables, reflecting selection bias at the mean, the low health tail, and the high health tail of the health status distribution were created. Weighted least squares regressions were then used to identify relationships between marketing elements and selection bias. Subject to the statistical limitations of the study, our conclusion is that it is doubtful that HMO marketing decisions are responsible for the prevalence of favorable selection in HMO enrollment. It also appears unlikely that HMOs were differentially targeting healthy and unhealthy segments of the Medicare market.

  17. Longitudinal drop-out and weighting against its bias

    Directory of Open Access Journals (Sweden)

    Steffen C. E. Schmidt

    2017-12-01

    Full Text Available Abstract Background The bias caused by drop-out is an important factor in large population-based epidemiological studies. Many studies account for it by weighting their longitudinal data, but to date there is no detailed, definitive approach for how to construct these weights. Methods In this study we describe the observed longitudinal bias and a three-step longitudinal weighting approach used for the longitudinal data in the MoMo baseline study (N = 4528, 4–17 years) and wave 1 study, with 2807 (62%) participants between 2003 and 2012. Results The most meaningful drop-out predictors were socioeconomic status of the household, socioeconomic characteristics of the mother, and daily TV usage. Weighting reduced the bias between the longitudinal participants and the baseline sample, but also increased variance by 5% to 35%, with a final weighting efficiency of 41.67%. Conclusions We conclude that a weighting procedure is important for reducing longitudinal bias in health-oriented epidemiological studies and suggest identifying the most influential variables in the first step, then using logistic regression modeling to calculate the inverse of the probability of participation in the second step, and finally trimming and standardizing the weights in the third step.
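    The three steps described in the conclusions translate directly into code. A sketch of steps two and three, assuming the logistic regression of step two has already produced fitted participation probabilities (the trimming quantiles and function name are illustrative choices, not the study's):

```python
import numpy as np

def participation_weights(p_participate, trim=(0.01, 0.99)):
    """Invert fitted participation probabilities (inverse probability
    weighting), trim extreme weights at the given quantiles, and
    standardize the result to mean 1."""
    w = 1.0 / np.asarray(p_participate, dtype=float)  # step 2: invert
    lo, hi = np.quantile(w, trim)
    w = np.clip(w, lo, hi)                            # step 3a: trim
    return w / w.mean()                               # step 3b: standardize

rng = np.random.default_rng(7)
p = rng.uniform(0.2, 0.9, size=1000)  # fitted P(participation), illustrative
w = participation_weights(p)
```

    Participants who resemble drop-outs (low fitted probability) are up-weighted, which is what reduces the drop-out bias; the trimming step caps the variance inflation the abstract reports.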

  18. The relationship between attentional bias toward safety and driving behavior.

    Science.gov (United States)

    Zheng, Tingting; Qu, Weina; Zhang, Kan; Ge, Yan

    2016-11-01

    As implicit cognitive processes gain more and more importance, studies in the fields of health psychology and organizational safety research have focused on attentional bias, a kind of selective allocation of attentional resources in the early stage of cognitive processing. However, few studies have explored the role of attentional bias in driving behavior. This study assessed drivers' attentional bias towards safety-related words (ABS) using the dot-probe paradigm and self-reported daily driving behaviors. The results revealed significant negative correlations between attentional bias scores and several indicators of dangerous driving: drivers with fewer dangerous driving behaviors showed greater ABS. We also built a significant linear regression model between ABS and the total DDDI score, as well as between ABS and the number of accidents. Finally, we discuss the possible mechanism underlying these associations and several limitations of our study. This study opens up a new topic for the exploration of implicit processes in driving safety research. Copyright © 2016 Elsevier Ltd. All rights reserved.
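    In the dot-probe paradigm, a bias score is conventionally the mean reaction time on trials where the probe appears opposite the cue word minus the mean on trials where it replaces the cue word; positive scores indicate attention was already on the cue. A sketch with made-up reaction times (the paper's exact scoring pipeline may differ):

```python
from statistics import mean

def attentional_bias_score(congruent_rts, incongruent_rts):
    """Dot-probe bias score: mean RT when the probe appears opposite the
    safety-related word (incongruent) minus mean RT when it replaces it
    (congruent). Positive = attention drawn toward the safety word."""
    return mean(incongruent_rts) - mean(congruent_rts)

# Illustrative reaction times in milliseconds (made-up numbers):
abs_score = attentional_bias_score([510, 495, 520], [540, 530, 550])
```

    Here the driver responds faster when the probe replaces the safety word, yielding a positive ABS, the pattern the study associates with fewer dangerous driving behaviors.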

  19. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based…
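    The idea can be sketched in a few lines: sparsely approximate the query point over the dictionary of training points, then combine the corresponding regressands using the sparse coefficients as weights. The code below is a rough sketch of this idea using a plain orthogonal matching pursuit and absolute-coefficient weights; it is not the authors' exact weighting scheme:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: coefficients c with at most k
    nonzeros such that D @ c approximates x (columns of D = atoms)."""
    residual, support = x.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        c_s, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ c_s
    c = np.zeros(D.shape[1])
    c[support] = c_s
    return c

def sparrow_estimate(X_train, y_train, x, k=3):
    """Estimate the regression function at x as a weighted combination
    of the regressands selected by a sparse approximation of x."""
    c = omp(X_train.T, x, k)   # columns of the dictionary = training points
    w = np.abs(c)
    return float(w @ y_train / w.sum())
```

    Unlike k-NNR's fixed k neighbors, the support chosen by the sparse approximation adapts to where the query point sits relative to the training data.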

  20. Spontaneous regression of a congenital melanocytic nevus

    Directory of Open Access Journals (Sweden)

    Amiya Kumar Nath

    2011-01-01

    Full Text Available Congenital melanocytic nevus (CMN) may rarely regress, which may also be associated with a halo or vitiligo. We describe a 10-year-old girl who presented with a CMN on the left leg since birth, which recently started to regress spontaneously with associated depigmentation in the lesion and at a distant site. Dermoscopy performed at different sites of the regressing lesion demonstrated loss of epidermal pigments first, followed by loss of dermal pigments. Histopathology and Masson-Fontana stain demonstrated lymphocytic infiltration and loss of pigment production in the regressing area. Immunohistochemistry staining (S100 and HMB-45), however, showed that nevus cells were present in the regressing areas.

  1. Bias-correction in vector autoregressive models

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    2014-01-01

    We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find…
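    For intuition, the scalar AR(1) case admits the classical first-order analytical correction (Kendall's approximation, E[rho_hat] ~ rho - (1 + 3*rho)/T for a regression with intercept); the paper's formulas generalize this kind of correction to full VARs. A sketch, with a small simulation showing the downward OLS bias and its removal:

```python
import numpy as np

def ols_ar1(y):
    """OLS estimate of rho in y_t = c + rho * y_{t-1} + e_t."""
    return np.polyfit(y[:-1], y[1:], 1)[0]

def bias_corrected_ar1(y):
    """First-order analytical correction rho_hat + (1 + 3*rho_hat)/T,
    from Kendall's approximation of the AR(1)-with-intercept bias."""
    T = len(y) - 1
    r = ols_ar1(y)
    return r + (1.0 + 3.0 * r) / T

# Simulation: raw OLS is biased downward; the correction removes most of it.
rng = np.random.default_rng(0)
raw, corrected = [], []
for _ in range(2000):
    y = np.empty(51)
    y[0] = rng.normal()
    for t in range(1, 51):
        y[t] = 0.9 * y[t - 1] + rng.normal()
    raw.append(ols_ar1(y))
    corrected.append(bias_corrected_ar1(y))
raw_mean, corrected_mean = np.mean(raw), np.mean(corrected)
```

    Note that with rho near one the correction can push the estimate past unity, the stationary-to-non-stationary risk the abstract highlights.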

  2. The Probability Distribution for a Biased Spinner

    Science.gov (United States)

    Foster, Colin

    2012-01-01

    This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
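    The calculation a biased spinner motivates is angle-proportional probability: the chance of landing in a sector is its central angle divided by the full circle. A quick sketch (the specific sector angles are made up, not from the article):

```python
from fractions import Fraction

def spinner_probabilities(angles_deg):
    """Probability of each sector of a biased spinner, proportional to
    the central angle it subtends (exact arithmetic via Fraction)."""
    total = sum(angles_deg)
    return [Fraction(a, total) for a in angles_deg]

# Four sectors subtending 180, 90, 60 and 30 degrees:
probs = spinner_probabilities([180, 90, 60, 30])
```

    Working with exact fractions makes the connection to other areas of mathematics, such as ratios and angle measure, explicit, which is the pedagogical point of the article.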

  3. Short Communication: Gender Bias and Stigmatization against ...

    African Journals Online (AJOL)

    Short Communication: Gender Bias and Stigmatization against Women Living with ... In Ethiopia, HIV/AIDS is highly stigmatized due to the fact that sexual ... bias, socio-economic situations and traditional beliefs contribute, individually and in ...

  4. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    Science.gov (United States)

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
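    A minimal likelihood-based beta regression can be written directly from the Beta density: with a logit link for the mean and a log link for the precision, maximize the log-likelihood numerically. A sketch using scipy on simulated data with made-up coefficients (a real analysis would use a dedicated package, handle zero/one inflation as the review discusses, and check convergence):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def beta_reg_negloglik(params, X, y):
    """Negative log-likelihood of a logit-link beta regression:
    y ~ Beta(mu*phi, (1-mu)*phi), mu = expit(X @ beta), phi = exp(log_phi)."""
    beta, log_phi = params[:-1], params[-1]
    mu = expit(X @ beta)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi
    ll = (gammaln(phi) - gammaln(a) - gammaln(b)
          + (a - 1) * np.log(y) + (b - 1) * np.log1p(-y))
    return -np.sum(ll)

def fit_beta_regression(X, y):
    x0 = np.zeros(X.shape[1] + 1)  # beta coefficients plus log(phi)
    res = minimize(beta_reg_negloglik, x0, args=(X, y), method="BFGS")
    return res.x

# Simulated outcomes in (0, 1) with known coefficients (illustrative):
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = expit(X @ np.array([0.5, -1.0]))
phi = 30.0
y = rng.beta(mu * phi, (1 - mu) * phi)
est = fit_beta_regression(X, y)
```

    This is the likelihood-based route the review finds computationally faster; the same log-likelihood plus priors gives the Bayesian formulation, which the authors recommend when the likelihood approach risks non-convergence.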

  5. Is there bias in editorial choice? Yes

    OpenAIRE

    Moustafa, Khaled

    2018-01-01

    Nature has recently published a Correspondence claiming the absence of fame biases in editorial choice. The topic is interesting and deserves a deeper analysis than was presented, because the reported brief analysis and its conclusion are themselves somewhat biased for several reasons, some of which are discussed here. Since editorial assessment is a form of peer review, the biases reported for external peer review would thus apply to editorial assessment, too. The biases would be proportion...

  6. Bias-field equalizer for bubble memories

    Science.gov (United States)

    Keefe, G. E.

    1977-01-01

    Magnetoresistive Permalloy sensor monitors bias field required to maintain bubble memory. Sensor provides error signal that, in turn, corrects magnitude of bias field. Error signal from sensor can be used to control magnitude of bias field either in auxiliary set of bias-field coils around permanent magnet field, or via current in small coils used to remagnetize permanent magnet by infrequent, short, high-current pulse or short sequence of pulses.

  7. The Accuracy Enhancing Effect of Biasing Cues

    NARCIS (Netherlands)

    W. Vanhouche (Wouter); S.M.J. van Osselaer (Stijn)

    2009-01-01

    Extrinsic cues such as price and irrelevant attributes have been shown to bias consumers’ product judgments. Results in this article replicate those findings in pretrial judgments but show that such biasing cues can improve quality judgments at a later point in time. Initially biasing ...

  8. Biased managers, organizational design, and incentive provision

    OpenAIRE

    Moreira, Humberto Ataíde; Costa, Cristiano Machado; Ferreira, Daniel Bernardo Soares

    2004-01-01

    We model the tradeoff between the balance and the strength of incentives implicit in the choice between hierarchical and matrix organizational structures. We show that managerial biases determine which structure is optimal: hierarchical forms are preferred when biases are low, while matrix structures are preferred when biases are high.

  9. Dilution and Ferrite Number Prediction in Pulsed Current Cladding of Super-Duplex Stainless Steel Using RSM

    Science.gov (United States)

    Eghlimi, Abbas; Shamanian, Morteza; Raeissi, Keyvan

    2013-12-01

    Super-duplex stainless steels have an excellent combination of mechanical properties and corrosion resistance at relatively low temperatures and can be used as a coating to improve the corrosion and wear resistance of low carbon and low alloy steels. Such coatings can be produced using weld cladding. In this study, pulsed current gas tungsten arc cladding process was utilized to deposit super-duplex stainless steel on high strength low alloy steel substrates. In such claddings, it is essential to understand how the dilution affects the composition and ferrite number of super-duplex stainless steel layer in order to be able to estimate its corrosion resistance and mechanical properties. In the current study, the effect of pulsed current gas tungsten arc cladding process parameters on the dilution and ferrite number of super-duplex stainless steel clad layer was investigated by applying response surface methodology. The validity of the proposed models was investigated by using quadratic regression models and analysis of variance. The results showed an inverse relationship between dilution and ferrite number. They also showed that increasing the heat input decreases the ferrite number. The proposed mathematical models are useful for predicting and controlling the ferrite number within an acceptable range for super-duplex stainless steel cladding.
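
The response-surface step described above can be illustrated with a generic full-quadratic fit by least squares; the predictors and coefficients below are synthetic stand-ins, not the actual cladding parameters or data from this study.

```python
# Hedged sketch of response surface methodology: fit the full quadratic
# model y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2 by least
# squares on synthetic data (x1, x2 stand in for coded process factors).
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 60)    # e.g. coded pulse current (illustrative)
x2 = rng.uniform(-1, 1, 60)    # e.g. coded welding speed (illustrative)
coef_true = np.array([10.0, 2.0, -1.5, 0.8, 0.5, -1.0])
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
y = A @ coef_true + rng.normal(scale=0.05, size=x1.size)

coef_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

In practice the fitted quadratic surface is then checked by analysis of variance, as the abstract describes, before being used for prediction within the tested factor ranges.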

  10. Individual Tracer Atoms in an Ultracold Dilute Gas

    Science.gov (United States)

    Hohmann, Michael; Kindermann, Farina; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Lutz, Eric; Widera, Artur

    2017-06-01

    We report on the experimental investigation of individual Cs atoms impinging on a dilute cloud of ultracold Rb atoms with variable density. We study the relaxation of the initial nonthermal state and detect the effect of single collisions which has so far eluded observation. We show that, after few collisions, the measured spatial distribution of the tracer atoms is correctly described by a Langevin equation with a velocity-dependent friction coefficient, over a large range of Knudsen numbers. Our results extend the simple and effective Langevin treatment to the realm of light particles in dilute gases. The experimental technique developed opens up the microscopic exploration of a novel regime of diffusion at the level of individual collisions.
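
The Langevin description invoked here can be sketched numerically with an Euler-Maruyama integration of dv = -gamma(v) v dt + sqrt(2 gamma(v) k_B T / m) dW. The velocity-dependent form gamma(v) below is invented purely for illustration and is not the coefficient fitted in the experiment; in the constant-friction limit the stationary velocity variance should approach k_B T / m.

```python
# Illustrative Euler-Maruyama integration of a 1-D Langevin equation
# with a (possibly velocity-dependent) friction coefficient gamma(v).
# Units chosen so k_B T = m = 1; all parameter values are illustrative.
import numpy as np

def simulate(gamma, T=1.0, m=1.0, dt=0.01, n_steps=200_000, seed=2):
    rng = np.random.default_rng(seed)
    v = np.empty(n_steps)
    v[0] = 0.0
    for i in range(1, n_steps):
        g = gamma(v[i - 1])
        v[i] = (v[i - 1]
                - g * v[i - 1] * dt
                + np.sqrt(2.0 * g * T / m * dt) * rng.normal())
    return v

v_const = simulate(lambda v: 1.0)                   # constant friction
v_vdep = simulate(lambda v: 1.0 / (1.0 + v**2))    # toy v-dependence
```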

  11. Husimi-cactus approximation study on the diluted spin ice

    Science.gov (United States)

    Otsuka, Hiromi; Okabe, Yutaka; Nefedev, Konstantin

    2018-04-01

    We investigate dilution effects on classical spin-ice materials such as Ho2Ti2O7 and Dy2Ti2O7. In particular, we derive a formula for the thermodynamic quantities as functions of the temperature and a nonmagnetic ion concentration based on a Husimi-cactus approximation. We find that the formula predicts a dilution-induced crossover from the cooperative to the conventional paramagnet in the ground state, and that it also reproduces the "generalized Pauling's entropy" given by Ke et al. To verify the formula numerically, we compare these results with Monte Carlo simulation data and find good agreement for all parameter values.

  12. Mössbauer Studies of dilute Magnetic Semiconductors

    CERN Multimedia

    Gislason, H P; Debernardi, A; Dlamini, W B

    2002-01-01

    The recent discovery of (dilute) magnetic semiconductors with wide band gaps, e.g. GaN, ZnO and other oxides, having Curie temperatures, Tc, well above room temperature, has prompted extraordinary experimental and theoretical efforts to understand, control and exploit this unexpected finding not least in view of the obvious potential of such materials for the fabrication of "spin-(elec)tronic" or magneto-optic devices. Ferromagnetism (FM) was achieved mostly by doping with dilute 3d transition metal impurities, notably Mn, Fe, and Co (in % concentrations), during growth or by subsequent ion implantation. However, it is fair to state that experimentally the conditions for the occurrence of ferro-, antiferro- or paramagnetism with these impurities are not yet controlled as generally at least two conflicting forms of magnetism or none have been reported for each system - albeit often produced by different techniques. Theory is challenged as "conventional" models seem to fail and no generally accep...

  13. Further development of IDGS: Isotope dilution gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Li, T.K.; Parker, J.L.; Kuno, Y.; Sato, S.; Kamata, M.; Akiyama, T.

    1991-01-01

    The isotope dilution gamma-ray spectrometry (IDGS) technique for determining the plutonium concentration and isotopic composition of highly radioactive spent-fuel dissolver solutions has been further developed. Both the sample preparation and the analysis have been improved. The plutonium isotopic analysis is based on high-resolution, low-energy gamma-ray spectrometry. The plutonium concentration in the dissolver solutions then is calculated from the measured isotopic differences among the spike, the dissolver solution, and the spiked dissolver solution. Plutonium concentrations and isotopic compositions of dissolver solutions analyzed in this study agree well with those obtained by traditional isotope dilution mass spectrometry (IDMS) and are consistent with the first IDGS experimental result. With the current detector efficiency, sample size, and a 100-min count time, the estimated precision is ∼0.5% for 239Pu and 240Pu isotopic analyses and ∼1% for the plutonium concentration analysis. 5 refs., 2 figs., 7 tabs.
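
The isotope-dilution bookkeeping behind such measurements can be shown with a generic two-isotope mass balance (a simplified illustration, not the IDGS analysis itself; all symbols here are hypothetical). For a reference isotope b and a second isotope a, with ratios R = n_a/n_b in the sample (Rx), the spike (Ry) and the measured blend (Rm), mass balance gives Rm = (Rx*bx + Ry*by)/(bx + by), which can be solved for the sample amount bx.

```python
# Generic isotope-dilution relation (simplified sketch, hypothetical
# symbols): solve the blend mass balance for the amount of reference
# isotope contributed by the sample.
def sample_amount(by, Rx, Ry, Rm):
    """Amount of reference isotope in the sample, from the blend ratio Rm."""
    return by * (Ry - Rm) / (Rm - Rx)

# Consistency check with constructed numbers: by=1, Rx=0.1, Ry=10 and a
# true sample amount bx=2 give Rm = (0.1*2 + 10*1) / (2 + 1) = 3.4.
bx = sample_amount(by=1.0, Rx=0.1, Ry=10.0, Rm=3.4)  # -> 2.0
```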

  14. Properties of magnetically diluted nanocrystals prepared by mechanochemical route

    International Nuclear Information System (INIS)

    Balaz, P.; Skorvanek, I.; Fabian, M.; Kovac, J.; Steinbach, F.; Feldhoff, A.; Sepelak, V.; Jiang, J.; Satka, A.; Kovac, J.

    2010-01-01

    The bulk and surface properties of magnetically diluted Cd0.6Mn0.4S nanocrystals synthesized by a solid-state route in a planetary mill were studied. XRD, SEM, TEM (HRTEM), low-temperature N2 sorption, nanoparticle size distribution as well as SQUID magnetometry methods have been applied. The measurements identified aggregates of small nanocrystals, 5-10 nm in size. The homogeneity of the produced particles with well-developed specific surface area (15-66 m²/g) was documented. The transition from the paramagnetic to the spin-glass-like phase has been observed below ∼40 K. The changes in the magnetic behaviour at low temperatures seem to be correlated with the formation of new surface area as a consequence of milling. The magnetically diluted Cd0.6Mn0.4S nanocrystals are obtained in a single synthesis step, making the process attractive for industrial applications.

  15. Sibship Size and Gendered Resource Dilution in Different Societal Contexts.

    Directory of Open Access Journals (Sweden)

    Matthijs Kalmijn

    Resource dilution theory hypothesizes that children's educational attainment suffers from being raised with many siblings, as the parental resources have to be shared with more children. Based on economic and cultural theories, we hypothesize that resource dilution is gendered: especially a larger number of brothers is harmful to a person's educational attainment. Using the Survey of Health, Ageing and Retirement in Europe, covering 18 European countries, we show that the number of brothers is more negatively related with the odds of obtaining a college degree than the number of sisters. This holds particularly for women. However, this pattern is weaker in countries that are known to have a more gender-egalitarian climate.

  16. The development and site investigation of fume diluter

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bok Youn; Kang, Chang Hee; Jo, Young Do; Lim, Sang Taek [Korea Institute of Geology Mining and Materials, Taejon (Korea, Republic of)

    1996-12-01

    This is the third project year of 'Application of mobile diesel equipment in underground mines', aimed at providing appropriate measures to improve underground working environments contaminated by diesel exhaust pollutants. For reducing the exhaust temperature below 70 deg. C to prevent production of the governing pollutant (NO2), the fume diluter was verified as the most effective device through the site investigation. Therefore, the fume diluter is strongly recommended instead of the catalytic converter presently employed. The performances derived from the tests are as follows: 1) the device increased air flow to 6.7-8.4 times the original exhaust; 2) exhaust temperature was reduced from 161 deg. C to 66 deg. C; 3) all pollutants were reduced to below 30% of their exhaust concentrations; 4) the device requires less cost and no maintenance. (author). 4 tabs., 4 figs.

  17. Critical mass variation of 239Pu with water dilution

    International Nuclear Information System (INIS)

    Pearlstein, S.

    1996-01-01

    The critical mass of an unreflected solid sphere of 239Pu is ∼10 kg. The increase in critical mass observed for small water dilutions of unreflected 239Pu spheres is paradoxical. Introducing small amounts of water uniformly throughout the sphere increases the spherical volume containing the same amount of 239Pu as the critical solid sphere. The increase in radius decreases the surface-to-volume ratio of the sphere, which has the effect to first order of decreasing the neutron leakage, which is proportional to the surface, relative to the fissions, which are proportional to the volume. The reduction in neutron leakage is expected to reduce the critical mass, but instead, the critical mass is observed to increase. It is discussed how changes in the fast neutron spectrum with corresponding changes in the nuclear parameters result in an increase in critical mass for small water dilutions.
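
The surface-to-volume argument is easy to check numerically. The sketch below uses illustrative values only (an approximate alpha-phase Pu density and the ∼10 kg mass quoted in the abstract) to show that adding water volume while keeping the Pu mass fixed enlarges the sphere and lowers S/V = 3/r:

```python
# Numeric check of the geometric argument (illustrative values only):
# a fixed mass of 239Pu spread over a larger, water-diluted sphere has
# a smaller surface-to-volume ratio, hence less relative leakage.
import math

def radius_from_volume(V):
    """Radius of a sphere of volume V."""
    return (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)

rho_pu = 19.8          # g/cm^3, approximate density of alpha-phase Pu
mass_g = 10_000.0      # ~10 kg, the critical mass quoted in the abstract
V_metal = mass_g / rho_pu
r0 = radius_from_volume(V_metal)   # radius of the undiluted sphere, cm

surface_to_volume = []
for f in (0.0, 0.1, 0.2):          # added water as a volume fraction
    V = V_metal / (1.0 - f)        # same Pu mass in a larger sphere
    r = radius_from_volume(V)
    surface_to_volume.append(3.0 / r)   # S/V = 3/r for a sphere
```

The ratio falls monotonically with dilution, which is exactly why the observed increase in critical mass is paradoxical until the spectral softening described above is taken into account.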

  18. Percolation of polyatomic species on site diluted lattices

    International Nuclear Information System (INIS)

    Cornette, V.; Ramirez-Pastor, A.J.; Nieto, F.

    2006-01-01

    In this Letter, the percolation of (a) linear segments of size k and (b) k-mers (particles occupying k adjacent sites) of different structures and forms deposited on a diluted square lattice have been studied. The diluted lattice is built by randomly selecting a fraction of sites which are considered forbidden for deposition. The analysis of the obtained results is made in the framework of the finite size scaling theory. The characteristic parameters of the percolation problem are dependent not only on the form and structure of the k-mers but also on the properties of the lattice where they are deposited. A phase diagram separating a percolating from a non-percolating region is determined and discussed
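
A minimal numerical illustration of the spanning-cluster criterion on a site-diluted square lattice follows (monomer occupation only; the k-mer shapes and deposition rules studied in the Letter are not reproduced here):

```python
# Hedged sketch: detect a top-to-bottom spanning cluster on a random
# square lattice where each site is open with probability p.
import numpy as np
from scipy.ndimage import label

def percolates(p, L=50, seed=3):
    rng = np.random.default_rng(seed)
    lattice = rng.random((L, L)) < p           # True = occupied site
    labels, _ = label(lattice)                 # 4-connected clusters
    top, bottom = np.unique(labels[0]), np.unique(labels[-1])
    spanning = (set(top) & set(bottom)) - {0}  # label 0 = empty sites
    return bool(spanning)
```

Repeating such checks over many random lattices and occupation probabilities, and applying finite-size scaling in L, is the standard route to the percolation threshold discussed in the abstract.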

  19. A model for the viscosity of dilute smectite gels

    International Nuclear Information System (INIS)

    Liu, L.

    2011-01-01

    A simple yet accurate model describing the viscosity of dilute suspensions of sodium montmorillonite in dilute homo-ionic solutions is presented. Taking the clay particle and its surrounding ion clouds together as an uncharged but soft, coin-like particle, the Huggins equation for a suspension of uncharged particles is extended in the model to account not only for the primary and secondary electro-viscous effects but also for multi-particle interactions. The agreement between predicted and measured results is excellent. The Huggins coefficient obtained compares favorably with available data, while the intrinsic viscosity reduces to Simha's equation in the limit of large ionic strength, suggesting that the model is robust. (authors)
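
For reference, the Huggins relation underlying such models can be written out directly in its textbook dilute-limit form; the parameter values below are arbitrary illustrations, not fitted montmorillonite data:

```python
# Textbook Huggins relation for a dilute suspension:
#   eta_sp / c = [eta] + k_H [eta]^2 c
# i.e. the reduced viscosity is linear in concentration, with intercept
# [eta] (intrinsic viscosity) and slope k_H [eta]^2 (Huggins coefficient
# k_H). Values below are illustrative only.
def specific_viscosity(c, intrinsic, k_H):
    """eta_sp = [eta]*c + k_H*[eta]^2*c^2 (dilute-limit Huggins form)."""
    return intrinsic * c + k_H * intrinsic**2 * c**2

eta_sp = specific_viscosity(c=0.1, intrinsic=1.0, k_H=0.3)  # -> 0.103
```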

  20. Novel Dilute Bismide, Epitaxy, Physical Properties and Device Application

    Directory of Open Access Journals (Sweden)

    Lijuan Wang

    2017-02-01

    Dilute bismide, in which a small amount of bismuth is incorporated into host III-V materials, is the least studied III-V compound semiconductor family and has received steadily increasing attention since 2000. In this paper, we review theoretical predictions of the physical properties of bismide alloys, epitaxial growth of bismide thin films and nanostructures, the surface, structural, electrical, transport and optical properties of various bismide binaries and alloys, and device applications.

  1. Electron pairing in dilute liquid metal-metal halide solutions

    Energy Technology Data Exchange (ETDEWEB)

    Selloni, A.; Car, R.; Parrinello, M.; Carnevali, P.

    1987-09-10

    Spin density functional theory is used to describe the interaction between solvated electrons in KCl in the high-dilution limit. In agreement with recent calculations based on the path integral method, our results for antiparallel spins predict a strong tendency to form localized bielectronic complexes. Unlike the numerical path-integral approach, our method can efficiently treat the case of parallel spins. For this case we find that the electrons repel each other and localize into separate F-center-like states.

  2. Electron paramagnetic resonance studies of defects in dilute magnetic alloys

    International Nuclear Information System (INIS)

    Suss, J.T.; Raizman, A.

    1980-01-01

    The EPR spectrum of erbium was used to study the effects of cold-working (rolling and mechanical polishing) in dilute gold-erbium alloys. Variations in the EPR linewidth, intensity and asymmetry parameter (A/B ratio) were investigated. Most of the results could be interpreted in terms of segregation of erbium ions to subgrain boundaries (dislocations) in a surface layer of a few thousand Angstroms. (author)

  3. Water Stress Scatters Nitrogen Dilution Curves in Wheat

    Directory of Open Access Journals (Sweden)

    Marianne Hoogmoed

    2018-04-01

    Nitrogen dilution curves relate a crop's critical nitrogen concentration (%Nc) to biomass (W) according to the allometric model %Nc = a·W^(-b). This model has a strong theoretical foundation, and parameters a and b show little variation for well-watered crops. Here we explore the robustness of this model for water-stressed crops. We established experiments to examine the combined effects of water stress, phenology, partitioning of biomass, and water-soluble carbohydrates (WSC), as driven by environment and variety, on the %Nc of wheat crops. We compared models where %Nc was plotted against biomass, growth stage and thermal time. The models were similarly scattered. Residuals of the %Nc - biomass model at anthesis were positively related to biomass, stem:biomass ratio, Δ13C and water supply, and negatively related to ear:biomass ratio and concentration of WSC. These are physiologically meaningful associations explaining the scatter of biomass-based dilution curves. Residuals of the thermal time model showed less consistent associations with these variables. The biomass dilution model developed for well-watered crops overestimates nitrogen deficiency of water-stressed crops, and a biomass-based model is conceptually more justified than developmental models. This has implications for diagnostics and modeling. As theory is lagging, a greater degree of empiricism might be useful to capture environmental, chiefly water, and genotype-dependent traits in the determination of critical nitrogen for diagnostic purposes. Sensitivity analysis would help to decide whether scaling nitrogen dilution curves for crop water status and genotype-dependent parameters is needed.
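
A hedged sketch of fitting the allometric dilution model: synthetic biomass and %Nc values are generated with parameters close to published wheat values (a ≈ 5.35, b ≈ 0.442, used here purely for illustration, not results from this study) and recovered with scipy's curve_fit.

```python
# Illustrative fit of the allometric dilution model %Nc = a * W**(-b)
# to synthetic data; parameter values are illustrative stand-ins for
# published wheat values, not estimates from the study above.
import numpy as np
from scipy.optimize import curve_fit

def dilution_curve(W, a, b):
    return a * W ** (-b)

rng = np.random.default_rng(4)
W = np.linspace(1.5, 12.0, 40)                 # shoot biomass, t/ha
Nc = dilution_curve(W, 5.35, 0.442) + rng.normal(scale=0.05, size=W.size)

(a_hat, b_hat), _ = curve_fit(dilution_curve, W, Nc, p0=(5.0, 0.4))
```

Residuals of such a fit against biomass, as the abstract describes, are what reveal the water-stress signal that scatters the curve.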

  4. Determination of microquantities of silver in platinum by isotope dilution

    International Nuclear Information System (INIS)

    Yedinakova, V.; Sladkovska, Y.

    1980-01-01

    A method is described for determining microquantities of silver in platinum. It is based on isotope dilution with substoichiometric extraction of dithizonates into carbon tetrachloride. The determination of silver by this technique is not subject to interference from zinc or gold in quantities exceeding the silver content by one order of magnitude, nor from a large excess of platinum. In the presence of copper, the addition of complexon is necessary. (author)

  5. Thermomechanical Processing of Structural Steels with Dilute Niobium Additions

    Science.gov (United States)

    Cui, Z.; Patel, J.; Palmiere, E. J.

    The recrystallisation behaviour of medium carbon steels with dilute Nb addition was investigated by means of plane strain compression tests and the observation of prior austenite microstructures during different deformation conditions. It was found that complete suppression of recrystallisation did not occur in the deformation temperature range investigated. At lower deformation temperatures, partial recrystallisation occurred in the higher Nb sample. This gives the potential to obtain a full suppression of recrystallisation at lower deformation temperatures.

  6. Removal of sulfite liquor from digesters with partially diluted liquor

    Energy Technology Data Exchange (ETDEWEB)

    Leshchenko, I G; Sykol, V P

    1957-01-01

    The yield of reducing sugars was raised from 189 to 224 kg/ton of pulp by displacing the cooking liquor with diluted liquor. As the pressure during blow-off dropped to 3.5-3.0 atmospheres, weak sulfite liquor was added at a rate of 120 cu m/hr. After 5-10 minutes the liquor was pumped from the digester to the ethanol plant.

  7. Learning and forgetting on asymmetric, diluted neural networks

    International Nuclear Information System (INIS)

    Derrida, B.; Nadal, J.P.

    1987-01-01

    It is possible to construct diluted asymmetric models of neural networks for which the dynamics can be calculated exactly. The authors test several learning schemes, in particular, models for which the values of the synapses remain bounded and depend on the history. Our analytical results on the relative efficiencies of the various learning schemes are qualitatively similar to the corresponding ones obtained numerically on fully connected symmetric networks
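
The flavor of such diluted asymmetric models can be shown in a toy Hebbian network (a sketch in the spirit of the abstract, not the authors' exactly solvable model): with a single stored pattern, recall survives strong random asymmetric dilution as long as every neuron retains some inputs.

```python
# Toy diluted, asymmetric Hebbian network (illustrative sketch only).
# Synapses J_ij = c_ij * xi_i * xi_j / N with an asymmetric random
# dilution mask c_ij; a single stored pattern remains a fixed point of
# the parallel sign dynamics provided every neuron keeps some inputs.
import numpy as np

rng = np.random.default_rng(5)
N = 200
xi = rng.choice([-1, 1], size=N)          # stored binary pattern
C = rng.random((N, N)) < 0.2              # asymmetric dilution mask
np.fill_diagonal(C, False)                # no self-connections
J = C * np.outer(xi, xi) / N              # diluted Hebbian synapses

def update(s, J):
    return np.sign(J @ s + 1e-12)         # tiny tie-break at zero field

recalled = update(xi, J)                  # one parallel update step
```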

  8. Learning Supervised Topic Models for Classification and Regression from Crowds.

    Science.gov (United States)

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete; Pereira, Francisco C

    2017-12-01

    The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, deem learning under a single-annotator assumption unrealistic or unpractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed model over state-of-the-art approaches.

  9. An inclusive taxonomy of behavioral biases

    Directory of Open Access Journals (Sweden)

    David Peón

    2017-07-01

    This paper overviews the theoretical and empirical research on behavioral biases and their influence in the literature. To provide a systematic exposition, we present a unified framework that takes the reader through an original taxonomy based on the reviews of relevant authors in the field. In particular, we establish three broad categories that may be distinguished: heuristics and biases; choices, values and frames; and social factors. We then describe the main biases within each category and review the main theoretical and empirical developments, linking each bias with other biases and anomalies related to it, according to the literature.

  10. Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting

    Directory of Open Access Journals (Sweden)

    C. Wu

    2018-03-01

    Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R2 XY dataset. The importance of a properly weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.
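
As a usage sketch of the error-in-variables fitting recommended above, the following fits a straight line with scipy.odr, weighting by per-point uncertainties in both X and Y; the data are synthetic and the uncertainty values are illustrative.

```python
# Orthogonal distance regression with scipy.odr: both x and y carry
# measurement error, so ordinary least squares would bias the slope
# downward (regression dilution). Synthetic data, illustrative sigmas.
import numpy as np
from scipy import odr

rng = np.random.default_rng(6)
x_true = np.linspace(0, 10, 50)
y_true = 2.0 * x_true + 1.0
sx, sy = 0.2, 0.5                       # per-point standard deviations
x_obs = x_true + rng.normal(scale=sx, size=x_true.size)
y_obs = y_true + rng.normal(scale=sy, size=x_true.size)

model = odr.Model(lambda B, x: B[0] * x + B[1])   # B = [slope, intercept]
data = odr.RealData(x_obs, y_obs,
                    sx=np.full(x_obs.size, sx),
                    sy=np.full(y_obs.size, sy))
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
slope, intercept = fit.beta
```

When the x-errors are set to zero this reduces to a weighted least-squares fit, which is one way to see how ODR generalizes OLS.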

  11. Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting

    Science.gov (United States)

    Wu, Cheng; Zhen Yu, Jian

    2018-03-01

    Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R2 XY dataset. The importance of a properly weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.

  12. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  13. Regional lung deposition of aged and diluted sidestream tobacco smoke

    International Nuclear Information System (INIS)

    Hofmann, W; Winkler-Heil, R; McAughey, J

    2009-01-01

    Since aged and diluted smoke particles are in general smaller and more stable than mainstream tobacco smoke, it should be possible to model their deposition on the basis of their measured particle diameters. However in practice, measured deposition values are consistently greater than those predicted by deposition models. Thus the primary objective of this study was to compare theoretical predictions obtained by the Monte Carlo code IDEAL with two human deposition studies to attempt to reconcile these differences. In the first study, male and female volunteers inhaled aged and diluted sidestream tobacco smoke at two steady-state concentrations under normal tidal breathing conditions. In the second study, male volunteers inhaled aged and diluted sidestream smoke labelled with 212Pb to fixed inhalation patterns. Median particle diameters in the two studies were 125 nm (CMD) and 210 nm (AMD), respectively. Experimental data on total deposition were consistently higher than the corresponding theoretical predictions, exhibiting significant inter-subject variations. However, measured and calculated regional deposition data are quite similar to each other, except for the extra-thoracic region. This discrepancy suggests that either the initial particle diameter decreases upon inspiration and/or additional deposition mechanisms are operating in the case of tobacco smoke particles.

  14. Novel aspects of diluted and digital magnetic heterostructures

    International Nuclear Information System (INIS)

    Bonanni, A.

    1999-04-01

    In the present work, novel aspects of diluted and digital II-VI-based heterostructures containing Mn ions are investigated. All the structures under study were fabricated by means of molecular beam epitaxy. Digital magnetic heterostructures have been prepared by incorporating discrete (sub)monolayers of the purely magnetic semiconductor MnTe into otherwise non-magnetic CdTe quantum wells embedded in CdMgTe barriers. The formation and binding energy of magnetic polarons have been investigated in these structures and compared with the diluted case. Reflectance difference spectroscopy (RDS) performed ex situ made it possible to distinguish between signals due solely to crystal anisotropy and those induced by the presence of magnetic elements. The problem of p-type doping of bulk II-VI-based diluted magnetic semiconductors is also addressed. During and after growth of ZnMnTe highly doped with N, in-situ RDS was carried out to investigate intra-ion transitions within the half-filled 3d shell of Mn. Transport measurements and magnetometry at low temperature were performed to study, following recent theoretical work, the influence of free carriers on the interaction between magnetic ions. As expected, indications of ferromagnetic ordering were found for the DMS with the highest carrier concentration. Special attention was given to the formation of Mn islands on a II-VI substrate and to their change in morphology upon overgrowth with a mismatched material. A rich variety of regularly shaped nanostructures could be produced. (author)

  15. Pollutant Dilution and Diffusion in Urban Street Canyon Neighboring Streets

    Science.gov (United States)

    Sun, Z.; Fu, Zh. M.

    2011-09-01

    In the present study we investigated the airflow patterns and air quality of a series of typical street canyon combinations, developed a mass balance model to determine the local pollutant dilution rate, and discussed the impact of the upstream canyon on the air quality of the downstream canyon. The results indicated that the geometrical sizes of the upstream and downstream buildings have significant impacts on the ambient airflow patterns. The pollution distribution within the canyons varies with different building combinations and flow patterns. Within the upstream canyon, pollution always accumulates on the low-building side of a non-symmetrical canyon, while for a symmetrical canyon high levels of pollution occur on the leeward side. The heights of the middle and downstream buildings can evidently change the pollutant dispersion direction during the transport process. Within the polluted canyon, the pollutant dilution rate (PDR) also varies with different street canyon combinations. The highest PDR is observed when the upstream buildings are both low, regardless of the height of the downstream building; however, these two cases are likely to contribute pollution to the downstream canyon. The H-L-H combination is least favorable for local pollutant removal, while the L-H-L case is considered the best building combination, diluting local pollution without markedly decreasing the air quality of the downstream canyon. The current work is expected to be instructive for city designers optimizing traffic patterns under typical existing geometry or developing urban geometry modifications for air quality control.

  16. The issue of risk dilution in risk assessments

    International Nuclear Information System (INIS)

    Wilmot, R.; Robinson, P.

    2004-01-01

    This paper explores an issue that was first highlighted more than 20 years ago during an inquiry concerning the Sizewell B nuclear power station in the UK. In the probabilistic safety assessment for this plant, the proponent had apparently reduced its estimates of risk by admitting to increased uncertainty about the timing of certain events. This situation is counter-intuitive, since an increase in uncertainty about the factors contributing to safety would be expected to lead to less confidence and hence to greater risk. This paradoxical situation was termed 'risk dilution' and it has been a topic of interest to reviewers of safety cases ever since. The recent international peer review of the Yucca Mountain performance assessments concluded that there was a potential for risk dilution in the assumptions and calculations presented. The next section describes how assumptions about the timing of events and other aspects of an assessment may lead to risk dilution, and this is followed by two examples based on recent performance assessments. The final section discusses how potential problems can be identified in safety cases, and the types of response that a regulator might adopt as a result. (authors)

  17. Physical modelling of a rapid boron dilution transient

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, N.G.; Hemstroem, B.; Karlsson, R. [Vattenfall Utveckling AB, Aelvkarleby (Sweden); Jacobson, S. [Vattenfall AB, Ringhals, Vaeroebacka (Sweden)

    1995-09-01

    The analysis of boron dilution accidents in pressurised water reactors has traditionally assumed that mixing is instantaneous and complete everywhere, eliminating in this way the possibility of concentration inhomogeneities. Situations can nevertheless arise where a volume of coolant with a low boron concentration may eventually enter the core and generate a severe reactivity transient. The work presented in this paper deals with a category of Rapid Boron Dilution Events characterised by a rapid start of a Reactor Coolant Pump (RCP) with a plug of relatively unborated water present in the RCS pipe. Model tests have been made at Vattenfall Utveckling AB in a simplified 1:5 scale model of a Westinghouse PWR. Conductivity measurements are used to determine dimensionless boron concentration. The main purpose of this experimental work is to define an experimental benchmark against which a mathematical model can be tested. The final goal is to be able to numerically predict Boron Dilution Transients. This work has been performed as a part of a Co-operative Agreement with Électricité de France (EDF).

  18. The Dilution Effect and Information Integration in Perceptual Decision Making.

    Directory of Open Access Journals (Sweden)

    Jared M Hotaling

    Full Text Available In cognitive science there is a seeming paradox: On the one hand, studies of human judgment and decision making have repeatedly shown that people systematically violate optimal behavior when integrating information from multiple sources. On the other hand, optimal models, often Bayesian, have been successful at accounting for information integration in fields such as categorization, memory, and perception. This apparent conflict could be due, in part, to different materials and designs that lead to differences in the nature of processing. Stimuli that require controlled integration of information, such as the quantitative or linguistic information (commonly found in judgment studies), may lead to suboptimal performance. In contrast, perceptual stimuli may lend themselves to automatic processing, resulting in integration that is closer to optimal. We tested this hypothesis with an experiment in which participants categorized faces based on resemblance to a family patriarch. The amount of evidence contained in the top and bottom halves of each test face was independently manipulated. These data allow us to investigate a canonical example of sub-optimal information integration from the judgment and decision making literature, the dilution effect. Splitting the top and bottom halves of a face, a manipulation meant to encourage controlled integration of information, produced farther from optimal behavior and larger dilution effects. The Multi-component Information Accumulation model, a hybrid optimal/averaging model of information integration, successfully accounts for key accuracy, response time, and dilution effects.

  19. The Dilution Effect and Information Integration in Perceptual Decision Making.

    Science.gov (United States)

    Hotaling, Jared M; Cohen, Andrew L; Shiffrin, Richard M; Busemeyer, Jerome R

    2015-01-01

    In cognitive science there is a seeming paradox: On the one hand, studies of human judgment and decision making have repeatedly shown that people systematically violate optimal behavior when integrating information from multiple sources. On the other hand, optimal models, often Bayesian, have been successful at accounting for information integration in fields such as categorization, memory, and perception. This apparent conflict could be due, in part, to different materials and designs that lead to differences in the nature of processing. Stimuli that require controlled integration of information, such as the quantitative or linguistic information (commonly found in judgment studies), may lead to suboptimal performance. In contrast, perceptual stimuli may lend themselves to automatic processing, resulting in integration that is closer to optimal. We tested this hypothesis with an experiment in which participants categorized faces based on resemblance to a family patriarch. The amount of evidence contained in the top and bottom halves of each test face was independently manipulated. These data allow us to investigate a canonical example of sub-optimal information integration from the judgment and decision making literature, the dilution effect. Splitting the top and bottom halves of a face, a manipulation meant to encourage controlled integration of information, produced farther from optimal behavior and larger dilution effects. The Multi-component Information Accumulation model, a hybrid optimal/averaging model of information integration, successfully accounts for key accuracy, response time, and dilution effects.
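    The contrast between optimal (Bayesian) and averaging integration that drives the dilution effect can be sketched numerically. This is our own illustration, not the paper's model; the evidence values are invented:

```python
# Hypothetical illustration of the dilution effect: combining a strong and a
# weak cue (e.g. top and bottom face halves), with evidence expressed as
# log-likelihood ratios (positive favours "belongs to the family").
strong, weak = 2.0, 0.2

# Optimal (Bayesian) integration of independent cues: log-odds add, so
# additional weakly-diagnostic evidence can only strengthen the belief.
optimal = strong + weak

# Averaging integration: cues are averaged, so the weak cue pulls the
# combined evidence below the strong cue alone -- the dilution effect.
averaged = (strong + weak) / 2

assert optimal > strong   # more evidence, stronger belief
assert averaged < strong  # weak evidence dilutes the strong cue
```

A hybrid model such as the Multi-component Information Accumulation model mixes these two regimes, which is how it can capture both near-optimal and diluted behaviour.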

  20. Infinitely dilute partial molar properties of proteins from computer simulation.

    Science.gov (United States)

    Ploetz, Elizabeth A; Smith, Paul E

    2014-11-13

    A detailed understanding of temperature and pressure effects on an infinitely dilute protein's conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on the method's feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble average properties can be calculated with very reasonable amounts of computational power. In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages.

  1. Regression models of reactor diagnostic signals

    International Nuclear Information System (INIS)

    Vavrin, J.

    1989-01-01

    The application of an autoregression model, the simplest regression model of diagnostic signals, is described for the experimental analysis of diagnostic systems and for in-service monitoring of normal and anomalous conditions and their diagnostics. A diagnostic method using a regression-type diagnostic data base and regression spectral diagnostics is described, and applied to the diagnostics of neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor. (author)
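    As a concrete illustration of the autoregression approach described above, here is a minimal sketch (ours, not the author's code) that fits an AR(2) model to a synthetic "diagnostic signal" by ordinary least squares; the coefficients and signal are invented:

```python
import numpy as np

# Generate a stationary AR(2) signal with assumed coefficients.
rng = np.random.default_rng(0)
true_coefs = np.array([0.6, -0.3])
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = true_coefs[0] * x[t - 1] + true_coefs[1] * x[t - 2] + rng.normal()

# Regress x[t] on its own past values x[t-1], x[t-2]: the autoregression
# model is just a linear regression on lagged copies of the signal.
X = np.column_stack([x[1:-1], x[:-2]])   # columns: x[t-1], x[t-2]
y = x[2:]
est_coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# The residual sequence (innovations) is what a regression-type diagnostic
# data base would monitor, e.g. via its spectrum, for anomalous modes.
residuals = y - X @ est_coefs
```

In a monitoring setting, a significant change in the fitted coefficients or in the residual spectrum relative to the reference data base would flag an anomalous condition.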

  2. Gender Bias Affects Forests Worldwide

    Directory of Open Access Journals (Sweden)

    Marlène Elias

    2017-04-01

    Full Text Available Gender biases persist in forestry research and practice. These biases result in reduced scientific rigor and inequitable, ineffective, and less efficient policies, programs, and interventions. Drawing from a two-volume collection of current and classic analyses on gender in forests, we outline five persistent and inter-related themes: gendered governance, tree tenure, forest spaces, division of labor, and ecological knowledge. Each emerges across geographic regions in the northern and southern hemisphere and reflects inequities in women’s and men’s ability to make decisions about and benefit from trees, forests, and their products. Women’s ability to participate in community-based forest governance is typically less than men’s, causing concern for social equity and forest stewardship. Women’s access to trees and their products is commonly more limited than men’s, and mediated by their relationship with their male counterparts. Spatial patterns of forest use reflect gender norms and taboos, and men’s greater access to transportation. The division of labor results in gender specialization in the collection of forest products, with variations in gender roles across regions. All these gender differences result in ecological knowledge that is distinct but also complementary and shifting across the genders. The ways gender plays out in relation to each theme may vary across cultures and contexts, but the influence of gender, which intersects with other factors of social differentiation in shaping forest landscapes, is global.

  3. Workplace ageism: discovering hidden bias.

    Science.gov (United States)

    Malinen, Sanna; Johnston, Lucy

    2013-01-01

    BACKGROUND/STUDY CONTEXT: Research largely shows no performance differences between older and younger employees, or that older workers even outperform younger employees, yet negative attitudes towards older workers can underpin discrimination. Unfortunately, traditional "explicit" techniques for assessing attitudes (i.e., self-report measures) have serious drawbacks. Therefore, using an approach that is novel to organizational contexts, the authors supplemented explicit with implicit (indirect) measures of attitudes towards older workers, and examined the malleability of both. This research consists of two studies. The authors measured self-report (explicit) attitudes towards older and younger workers with a survey, and implicit attitudes with a reaction-time-based measure of implicit associations. In addition, to test whether attitudes were malleable, the authors measured attitudes before and after a mental imagery intervention, where the authors asked participants in the experimental group to imagine respected and valued older workers from their surroundings. Negative, stable implicit attitudes towards older workers emerged in two studies. Conversely, explicit attitudes showed no age bias and were more susceptible to change intervention, such that attitudes became more positive towards older workers following the experimental manipulation. This research demonstrates the unconscious nature of bias against older workers, and highlights the utility of implicit attitude measures in the context of the workplace. In the current era of aging workforce and skill shortages, implicit measures may be necessary to illuminate hidden workplace ageism.

  4. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…

  5. Multivariate Regression Analysis and Slaughter Livestock,

    Science.gov (United States)

    AGRICULTURE, *ECONOMICS), (*MEAT, PRODUCTION), MULTIVARIATE ANALYSIS, REGRESSION ANALYSIS , ANIMALS, WEIGHT, COSTS, PREDICTIONS, STABILITY, MATHEMATICAL MODELS, STORAGE, BEEF, PORK, FOOD, STATISTICAL DATA, ACCURACY

  6. [From clinical judgment to linear regression model.

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
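    A minimal numerical sketch of the quantities defined above (slope b, intercept a, and the coefficient of determination R²), using invented data:

```python
import numpy as np

# Invented (x, y) pairs for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Slope b and intercept a of the least-squares line Y = a + b*x.
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()          # value of Y when X equals 0

# Coefficient of determination R^2: share of the variance in y
# explained by the regression.
y_hat = a + b * x
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Here b is the regression coefficient (change in Y per unit change in x) and r2 close to 1 indicates that x accounts for almost all of the variation in y.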

  7. Analysis of tag-position bias in MPSS technology

    Directory of Open Access Journals (Sweden)

    Rattray Magnus

    2006-04-01

    Full Text Available Abstract Background Massively Parallel Signature Sequencing (MPSS technology was recently developed as a high-throughput technology for measuring the concentration of mRNA transcripts in a sample. It has previously been observed that the position of the signature tag in a transcript (distance from 3' end can affect the measurement, but this effect has not been studied in detail. Results We quantify the effect of tag-position bias in Classic and Signature MPSS technology using published data from Arabidopsis, rice and human. We investigate the relationship between measured concentration and tag-position using nonlinear regression methods. The observed relationship is shown to be broadly consistent across different data sets. We find that there exist different and significant biases in both Classic and Signature MPSS data. For Classic MPSS data, genes with tag-position in the middle-range have highest measured abundance on average while genes with tag-position in the high-range, far from the 3' end, show a significant decrease. For Signature MPSS data, high-range tag-position genes tend to have a flatter relationship between tag-position and measured abundance. Thus, our results confirm that the Signature MPSS method fixes a substantial problem with the Classic MPSS method. For both Classic and Signature MPSS data there is a positive correlation between measured abundance and tag-position for low-range tag-position genes. Compared with the effects of mRNA length and number of exons, tag-position bias seems to be more significant in Arabidopsis. The tag-position bias is reflected both in the measured abundance of genes with a significant tag count and in the proportion of unexpressed genes identified. Conclusion Tag-position bias should be taken into consideration when measuring mRNA transcript abundance using MPSS technology, both in Classic and Signature MPSS methods.

  8. Capacitance Regression Modelling Analysis on Latex from Selected Rubber Tree Clones

    International Nuclear Information System (INIS)

    Rosli, A D; Baharudin, R; Hashim, H; Khairuzzaman, N A; Mohd Sampian, A F; Abdullah, N E; Kamaru'zzaman, M; Sulaiman, M S

    2015-01-01

    This paper investigates the capacitance regression modelling performance of latex for various rubber tree clones, namely clones 2002, 2008, 2014 and 3001. Conventionally, the identification of rubber tree clones is based on observation of tree features such as leaf shape, trunk, branching habit and seed texture pattern. This method requires experts and is very time-consuming. Currently, there is no sensing device based on electrical properties that can be employed to distinguish clones from latex samples. Hence, with the hypothesis that the dielectric constant of each clone varies, this paper discusses the development of a capacitance sensor via a Capacitance Comparison Bridge (referred to as the capacitance sensor) to measure the output voltage of different latex samples. The proposed sensor is initially tested with a 30 ml latex sample prior to the gradual addition of dilution water. The output voltage and capacitance obtained from the test are recorded and analyzed using a Simple Linear Regression (SLR) model. The outcome of this work indicates that latex from clone 2002 produced the highest and most reliable linear regression line, with a determination coefficient of 91.24%. In addition, the study also found that the capacitive elements in latex samples deteriorate when diluted with a higher volume of water. (paper)

  9. Isotopic biases for actinide-only burnup credit

    International Nuclear Information System (INIS)

    Rahimi, M.; Lancaster, D.; Hoeffer, B.; Nichols, M.

    1997-01-01

    The primary purpose of this paper is to present the new methodology for establishing bias and uncertainty associated with isotopic prediction in spent fuel assemblies for burnup credit analysis. The analysis applies to the design of criticality control systems for spent fuel casks. A total of 54 spent fuel samples were modeled and analyzed using the Shielding Analyses Sequence (SAS2H). Multiple regression analysis and a trending test were performed to develop isotopic correction factors for 10 actinide burnup credit isotopes. 5 refs., 1 tab

  10. Dilution and slow injection reduces the incidence of rocuronium-induced withdrawal movements in children

    OpenAIRE

    Shin, Young Hee; Kim, Chung Su; Lee, Jong-Hwan; Sim, Woo Seog; Ko, Justin Sangwook; Cho, Hyun Sung; Jeong, Hui Yeon; Lee, Hye Won; Kim, Sang Hyun

    2011-01-01

    Background The aim of this study was to evaluate whether slow injection of diluted rocuronium could reduce rocuronium-induced withdrawal movements effectively in children. Methods After loss of consciousness, rocuronium 0.6 mg/kg was administered to 171 children according to the pre-assigned groups as follows: Group CF, injection of non-diluted rocuronium over 5 seconds; Group CS, injection of non-diluted rocuronium over 1 minute; Group DF, injection of diluted rocuronium (10 times) over 5 ...

  11. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  12. Crystal Fields in Dilute Rare-Earth Metals Obtained from Magnetization Measurements on Dilute Rare-Earth Alloys

    DEFF Research Database (Denmark)

    Touborg, P.; Høg, J.

    1974-01-01

    Crystal field parameters of Tb, Dy, and Er in Sc, Y, and Lu are summarized. These parameters are obtained from magnetization measurements on dilute single crystals, and successfully checked by a number of different methods. The crystal field parameters vary unpredictably with the rare-earth solute. B40, B60, and B66 are similar in Y and Lu. Crystal field parameters for the pure metals Tb, Dy, and Er are estimated from the crystal fields in Y and Lu.

  13. Measurement of isotope abundance variations in nature by gravimetric spiking isotope dilution analysis (GS-IDA).

    Science.gov (United States)

    Chew, Gina; Walczyk, Thomas

    2013-04-02

    Subtle variations in the isotopic composition of elements carry unique information about physical and chemical processes in nature and are now exploited widely in diverse areas of research. Reliable measurement of natural isotope abundance variations is among the biggest challenges in inorganic mass spectrometry as they are highly sensitive to methodological bias. For decades, double spiking of the sample with a mix of two stable isotopes has been considered the reference technique for measuring such variations both by multicollector-inductively coupled plasma mass spectrometry (MC-ICPMS) and multicollector-thermal ionization mass spectrometry (MC-TIMS). However, this technique can only be applied to elements having at least four stable isotopes. Here we present a novel approach that requires measurement of three isotope signals only and which is more robust than the conventional double spiking technique. This became possible by gravimetric mixing of the sample with an isotopic spike in different proportions and by applying principles of isotope dilution for data analysis (GS-IDA). The potential and principle use of the technique is demonstrated for Mg in human urine using MC-TIMS for isotopic analysis. Mg is an element inaccessible to double spiking methods as it consists of three stable isotopes only and shows great potential for metabolically induced isotope effects waiting to be explored.
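    GS-IDA builds on the classic isotope dilution mass balance. The following sketch illustrates only the single-spike principle, not the three-isotope GS-IDA protocol itself; the ratios and amounts are invented:

```python
# Classic (single-spike) isotope dilution: mix a sample of unknown amount
# with a known amount of isotopically enriched spike, then recover the
# sample amount from the measured isotope ratio of the blend.
# R denotes the abundance ratio isotope A / isotope B (values invented).
R_sample = 7.9      # ratio in the natural-abundance sample
R_spike = 0.01      # ratio in the enriched spike
R_mix = 1.2         # ratio measured in the sample/spike blend
n_spike_B = 1.0e-6  # moles of isotope B added with the spike (known)

# From the mass balance
#   R_mix = (R_sample*n_sample_B + R_spike*n_spike_B) / (n_sample_B + n_spike_B)
# solving for the moles of isotope B contributed by the sample gives:
n_sample_B = n_spike_B * (R_mix - R_spike) / (R_sample - R_mix)
```

The GS-IDA approach described in the abstract replaces the second spike isotope with gravimetrically prepared sample/spike mixtures of different proportions, but the recovery of amounts from measured ratios follows this same dilution logic.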

  14. Phase diagrams and switching of voltage and magnetic field in dilute magnetic semiconductor nanostructures

    Energy Technology Data Exchange (ETDEWEB)

    Escobedo, R. [Departamento de Matematica Aplicada y Ciencias de la Computacion, Universidad de Cantabria, 39005 Santander (Spain); Carretero, M.; Bonilla, L.L. [G. Millan Institute, Fluid Dynamics, Nanoscience and Industrial Maths., Universidad Carlos III de Madrid, 28911 Leganes (Spain); Unidad Asociada al Instituto de Ciencia de Materiales, CSIC, 28049 Cantoblanco, Madrid (Spain); Platero, G. [Instituto de Ciencia de Materiales, CSIC, 28049 Cantoblanco, Madrid (Spain)

    2010-04-15

    The response of an n-doped dc voltage biased II-VI multi-quantum well dilute magnetic semiconductor nanostructure having its first well doped with magnetic (Mn) impurities is analyzed by sweeping wide ranges of both the voltage and the Zeeman level splitting induced by an external magnetic field. The level splitting versus voltage phase diagram shows regions of stable self-sustained current oscillations immersed in a region of stable stationary states. Transitions between stationary states and self-sustained current oscillations are systematically analyzed by abrupt switching of both voltage and level splitting. Sudden voltage and/or magnetic field changes may switch on current oscillations from an initial stationary state; reciprocally, current oscillations may disappear after sudden voltage and/or magnetic field changes into the stable stationary states region. The results show how to design such a device to operate as a spin injector and a spin oscillator by tuning the Zeeman splitting (through the applied external magnetic field), the applied voltage and the sample configuration parameters (doping density, barrier and well widths, etc.) to select the desired stationary or oscillatory behavior. Phase diagram of Zeeman level splitting δ vs. dimensionless applied voltage φ for N = 10 QWs. White region: stable stationary states; black: stable self-sustained current oscillations. (copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  15. Statistical methods for elimination of guarantee-time bias in cohort studies: a simulation study

    Directory of Open Access Journals (Sweden)

    In Sung Cho

    2017-08-01

    Full Text Available Abstract Background Aspirin has been considered to be beneficial in preventing cardiovascular diseases and cancer. Several pharmaco-epidemiology cohort studies have shown protective effects of aspirin on diseases using various statistical methods, with the Cox regression model being the most commonly used approach. However, there are some inherent limitations to the conventional Cox regression approach such as guarantee-time bias, resulting in an overestimation of the drug effect. To overcome such limitations, alternative approaches, such as the time-dependent Cox model and landmark methods have been proposed. This study aimed to compare the performance of three methods: Cox regression, time-dependent Cox model and landmark method with different landmark times in order to address the problem of guarantee-time bias. Methods Through statistical modeling and simulation studies, the performance of the above three methods were assessed in terms of type I error, bias, power, and mean squared error (MSE. In addition, the three statistical approaches were applied to a real data example from the Korean National Health Insurance Database. Effect of cumulative rosiglitazone dose on the risk of hepatocellular carcinoma was used as an example for illustration. Results In the simulated data, time-dependent Cox regression outperformed the landmark method in terms of bias and mean squared error but the type I error rates were similar. The results from real-data example showed the same patterns as the simulation findings. Conclusions While both time-dependent Cox regression model and landmark analysis are useful in resolving the problem of guarantee-time bias, time-dependent Cox regression is the most appropriate method for analyzing cumulative dose effects in pharmaco-epidemiological studies.
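    Guarantee-time bias can be illustrated with a small simulation (our own construction, not the paper's models): a treatment with no true effect appears protective when ever-exposed subjects are classified as exposed from time zero, because they are guaranteed to have survived until their exposure began:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
# True survival times: exponential with mean 5, with NO treatment effect.
event_time = rng.exponential(scale=5.0, size=n)
# Time at which a prescription would occur, if the subject is still alive.
rx_time = rng.uniform(0.0, 10.0, size=n)

# A subject only becomes exposed if still event-free at prescription time,
# so "ever-exposed" subjects have guaranteed survival up to rx_time.
ever_exposed = rx_time < event_time

# Naive fixed-covariate comparison (the source of guarantee-time bias):
naive_exposed_mean = event_time[ever_exposed].mean()
naive_unexposed_mean = event_time[~ever_exposed].mean()

# Time-dependent handling counts only person-time after exposure starts;
# by memorylessness the residual survival is again exponential(5),
# correctly revealing no benefit.
residual_after_rx = (event_time - rx_time)[ever_exposed].mean()
```

The naive group means differ sharply even though the "drug" does nothing, while the residual survival after exposure recovers the true mean of 5; a time-dependent Cox model formalizes this person-time reassignment.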

  16. Comparison of dilution factors for German wastewater treatment plant effluents in receiving streams to the fixed dilution factor from chemical risk assessment.

    Science.gov (United States)

    Link, Moritz; von der Ohe, Peter C; Voß, Katharina; Schäfer, Ralf B

    2017-11-15

    Incomplete removal during wastewater treatment leads to frequent detection of compounds such as pharmaceuticals and personal care products in municipal effluents. A fixed standard dilution factor of 10 for effluents entering receiving water bodies is used during the exposure assessment of several chemical risk assessments. However, the dilution potential of German receiving waters under low flow conditions is largely unknown and information is sparse for other European countries. We calculated dilution factors for two datasets differing in spatial extent and wastewater treatment plant (WWTP) size: a national dataset comprising 1225 large WWTPs in Central and Northern Germany and a federal dataset for 678 WWTPs of a single state in Southwest Germany. We found that the fixed factor approach overestimates the dilution potential of 60% and 40% of receiving waters in the national and the federal dataset, with median dilution factors of 5 and 14.5, respectively. Under mean flow conditions, 8% of calculated dilution factors were below 10, with a median dilution factor of 106. We also calculated regional dilution factors that accounted for effluent inputs from upstream WWTPs. For the national and the federal dataset, 70% and 60% of calculated regional dilution factors fell below 10 under mean low flow conditions, respectively. The decrease of regional dilution potential in small receiving streams was mainly driven by the next WWTP upstream, with a 2.5-fold drop of median regional dilution factors. Our results show that using the standard dilution factor of 10 would result in the underestimation of environmental concentrations for authorised chemicals by a factor of 3-5 for about 10% of WWTPs, especially during low flow conditions. Consequently, measured environmental concentrations might exceed predicted environmental concentrations and ecological risks posed by effluents could be much higher, suggesting that a revision of current risk assessment practices may be required.
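    The fixed-factor comparison described above can be sketched with the standard mixing definition of the dilution factor; the discharge values below are invented for illustration:

```python
def dilution_factor(q_stream: float, q_effluent: float) -> float:
    """Standard mixing definition: ratio of total flow downstream of the
    outfall to the effluent flow (both in the same units, e.g. m3/s)."""
    return (q_stream + q_effluent) / q_effluent

# Small receiving stream under low-flow conditions: the actual dilution
# falls below the fixed default of 10 used in chemical risk assessment,
# so predicted environmental concentrations would be underestimated.
df_low = dilution_factor(q_stream=0.18, q_effluent=0.05)
assert df_low < 10

# The same site under mean flow: comfortably above the default of 10.
df_mean = dilution_factor(q_stream=2.0, q_effluent=0.05)
assert df_mean > 10
```

A regional dilution factor, as used in the study, additionally discounts the upstream flow that is itself effluent from WWTPs further upstream, which is why it falls below the site-specific factor in effluent-dominated streams.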

  17. RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,

    Science.gov (United States)

    This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variables, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)

  18. A Simulation Investigation of Principal Component Regression.

    Science.gov (United States)

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  19. Hierarchical regression analysis in structural Equation Modeling

    NARCIS (Netherlands)

    de Jong, P.F.

    1999-01-01

    In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main

  20. Categorical regression dose-response modeling

    Science.gov (United States)

    The goal of this training is to provide participants with instruction on the use of the U.S. EPA’s Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  1. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  2. Stepwise versus Hierarchical Regression: Pros and Cons

    Science.gov (United States)

    Lewis, Mitzi

    2007-01-01

    Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested…

  3. Suppression Situations in Multiple Linear Regression

    Science.gov (United States)

    Shieh, Gwowen

    2006-01-01

    This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…

  4. Gibrat’s law and quantile regressions

    DEFF Research Database (Denmark)

    Distante, Roberta; Petrella, Ivan; Santoro, Emiliano

    2017-01-01

    The nexus between firm growth, size and age in U.S. manufacturing is examined through the lens of quantile regression models. This methodology allows us to overcome serious shortcomings entailed by linear regression models employed by much of the existing literature, unveiling a number of important...

  5. Regression Analysis and the Sociological Imagination

    Science.gov (United States)

    De Maio, Fernando

    2014-01-01

    Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.

  6. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  7. Principles of Quantile Regression and an Application

    Science.gov (United States)

    Chen, Fang; Chalhoub-Deville, Micheline

    2014-01-01

    Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…
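
    The contrast with least squares mean regression can be made concrete through the pinball (check) loss that QR minimizes. Below is a hypothetical, minimal illustration (data and grid are invented, not taken from the paper): for a location-only model, the constant minimizing mean pinball loss at level q is the q-th sample quantile, which is robust to outliers in a way the mean is not.

```python
# Hypothetical, minimal illustration of the pinball (check) loss at the core
# of quantile regression; data and grid are invented, not from the paper.

def pinball_loss(residual, q):
    """Check loss: penalizes under- and over-prediction asymmetrically."""
    return q * residual if residual >= 0 else (q - 1) * residual

def fit_location(values, q, grid):
    """Location-only quantile fit: the constant minimizing mean pinball loss
    at level q is (approximately, on a grid) the q-th sample quantile."""
    return min(grid, key=lambda c: sum(pinball_loss(v - c, q) for v in values))

data = [1, 2, 3, 4, 100]                  # heavy right tail
grid = [x / 10 for x in range(0, 1100)]   # candidate constants 0.0 .. 109.9
median = fit_location(data, 0.5, grid)    # 3.0: robust to the outlier
p90 = fit_location(data, 0.9, grid)       # 100.0: describes the upper tail
mean = sum(data) / len(data)              # 22.0: dragged up by the outlier
```

    Varying q traces out the whole conditional distribution rather than only its mean, which is the central advantage QR holds over LMR.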

  8. On Regression Representations of Stochastic Processes

    NARCIS (Netherlands)

    Rüschendorf, L.; de Valk

    We construct a.s. nonlinear regression representations of general stochastic processes (X(n))n is-an-element-of N. As a consequence we obtain in particular special regression representations of Markov chains and of certain m-dependent sequences. For m-dependent sequences we obtain a constructive

  9. Pharmacogenomics Bias - Systematic distortion of study results by genetic heterogeneity

    Directory of Open Access Journals (Sweden)

    Zietemann, Vera

    2008-04-01

    Full Text Available Background: Decision analyses of drug treatments in chronic diseases require modeling the progression of disease and treatment response beyond the time horizon of clinical or epidemiological studies. In many such models, progression and drug effect have been applied uniformly to all patients; heterogeneity in progression, including pharmacogenomic effects, has been ignored. Objective: We sought to systematically evaluate the existence, direction and relative magnitude of a pharmacogenomics bias (PGX-Bias) resulting from failure to adjust for genetic heterogeneity in both treatment response (HT) and heterogeneity in progression of disease (HP) in decision-analytic studies based on clinical study data. Methods: We performed a systematic literature search in electronic databases for studies regarding the effect of genetic heterogeneity on the validity of study results. Included studies have been summarized in evidence tables. Where published evidence was lacking, we performed our own simulation considering both HT and HP. We constructed two simple Markov models with three basic health states (early-stage disease, late-stage disease, dead), one adjusting and the other not adjusting for genetic heterogeneity. Adjustment was done by creating different disease states for presence (G+) and absence (G-) of a dichotomous genetic factor. We compared the life expectancy gains attributable to treatment resulting from both models and defined pharmacogenomics bias as percent deviation of treatment-related life expectancy gains in the unadjusted model from those in the adjusted model. We calculated the bias as a function of underlying model parameters to create generic results. We then applied our model to lipid-lowering therapy with pravastatin in patients with coronary atherosclerosis, incorporating the influence of two TaqIB polymorphism variants (B1 and B2) on progression and drug efficacy as reported in the DNA substudy of the REGRESS
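
    The adjusted-versus-unadjusted comparison described above can be sketched as a toy Markov cohort model. All parameter values below (subgroup fraction, progression rates, relative risks, death rate) are invented for illustration; they are not the REGRESS figures.

```python
# Toy PGX-Bias comparison: a three-state Markov cohort (early -> late -> dead),
# run once with genotype-specific states and once with pooled parameters.
# All numbers are invented for illustration.

def life_expectancy(p_prog, p_die=0.10, cycles=1000):
    """Discrete-time Markov cohort; returns undiscounted life expectancy
    in model cycles."""
    early, late, le = 1.0, 0.0, 0.0
    for _ in range(cycles):
        le += early + late                  # person-cycles alive this cycle
        early, late = early * (1 - p_prog), late * (1 - p_die) + early * p_prog
    return le

f = 0.3                              # assumed fraction carrying G+
p_gplus, p_gminus = 0.08, 0.02       # per-cycle progression by genotype
rr_gplus, rr_gminus = 0.5, 0.9       # treatment effect (relative risk) by genotype

# Adjusted model: separate disease states for G+ and G-
base_adj = f * life_expectancy(p_gplus) + (1 - f) * life_expectancy(p_gminus)
trt_adj = (f * life_expectancy(p_gplus * rr_gplus)
           + (1 - f) * life_expectancy(p_gminus * rr_gminus))
gain_adj = trt_adj - base_adj

# Unadjusted model: one pooled progression rate and one pooled treatment effect
p_pool = f * p_gplus + (1 - f) * p_gminus
rr_pool = f * rr_gplus + (1 - f) * rr_gminus
gain_unadj = life_expectancy(p_pool * rr_pool) - life_expectancy(p_pool)

# PGX-Bias as defined above: percent deviation of the unadjusted gain
bias_pct = 100 * (gain_unadj - gain_adj) / gain_adj
```

    Because life expectancy is a nonlinear function of the progression rate, averaging the parameters is not the same as averaging the genotype-specific outcomes, so the pooled model's treatment gain deviates from the adjusted one even in this minimal setup.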

  10. Regression of environmental noise in LIGO data

    International Nuclear Information System (INIS)

    Tiwari, V; Klimenko, S; Mitselmakher, G; Necula, V; Drago, M; Prodi, G; Frolov, V; Yakushin, I; Re, V; Salemi, F; Vedovato, G

    2015-01-01

    We address the problem of noise regression in the output of gravitational-wave (GW) interferometers, using data from the physical environmental monitors (PEM). The objective of the regression analysis is to predict environmental noise in the GW channel from the PEM measurements. One of the most promising regression methods is based on the construction of Wiener–Kolmogorov (WK) filters. Using this method, the seismic noise cancellation from the LIGO GW channel has already been performed. In the presented approach the WK method has been extended, incorporating banks of Wiener filters in the time–frequency domain, multi-channel analysis and regulation schemes, which greatly enhance the versatility of the regression analysis. Also we present the first results on regression of the bi-coherent noise in the LIGO data. (paper)
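
    The single-witness-channel core of this method can be sketched as a short FIR Wiener filter whose taps solve the normal (Wiener-Hopf) equations. The witness channel, coupling coefficients and noise floor below are synthetic stand-ins, not LIGO data.

```python
# Sketch of Wiener-filter noise regression: predict the environmental
# contribution to a target channel from a witness channel, then subtract it.
# All signals and coefficients are synthetic.
import random

random.seed(0)
n = 5000
w = [random.gauss(0, 1) for _ in range(n)]      # synthetic PEM witness channel
g = [random.gauss(0, 0.1) for _ in range(n)]    # target-channel noise floor
# Target channel: environmental coupling 0.8*w[t] - 0.3*w[t-1] plus the floor
d = [0.8 * w[t] - 0.3 * (w[t - 1] if t else 0.0) + g[t] for t in range(n)]

# 2-tap FIR Wiener filter: solve the 2x2 normal (Wiener-Hopf) equations
r00 = sum(w[t] * w[t] for t in range(n))
r01 = sum(w[t] * w[t - 1] for t in range(1, n))
r11 = sum(w[t - 1] * w[t - 1] for t in range(1, n))
p0 = sum(d[t] * w[t] for t in range(n))
p1 = sum(d[t] * w[t - 1] for t in range(1, n))
det = r00 * r11 - r01 * r01
a0 = (p0 * r11 - p1 * r01) / det        # recovers roughly 0.8
a1 = (p1 * r00 - p0 * r01) / det        # recovers roughly -0.3

# Subtract the predicted environmental noise from the target channel
resid = [d[t] - a0 * w[t] - a1 * (w[t - 1] if t else 0.0) for t in range(n)]
var_before = sum(x * x for x in d) / n
var_after = sum(x * x for x in resid) / n   # drops toward the 0.1**2 floor
```

    The paper's extensions (time-frequency filter banks, multi-channel analysis, regularization) generalize this same least-squares prediction-and-subtraction step.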

  11. Pathological assessment of liver fibrosis regression

    Directory of Open Access Journals (Sweden)

    WANG Bingqiong

    2017-03-01

    Full Text Available Hepatic fibrosis is the common pathological outcome of chronic hepatic diseases. An accurate assessment of fibrosis degree provides an important reference for a definite diagnosis of diseases, treatment decision-making, treatment outcome monitoring, and prognostic evaluation. At present, many clinical studies have proven that regression of hepatic fibrosis and early-stage liver cirrhosis can be achieved by effective treatment, and a correct evaluation of fibrosis regression has become a hot topic in clinical research. Liver biopsy has long been regarded as the gold standard for the assessment of hepatic fibrosis, and thus it plays an important role in the evaluation of fibrosis regression. This article reviews the clinical application of current pathological staging systems in the evaluation of fibrosis regression from the perspectives of semi-quantitative scoring system, quantitative approach, and qualitative approach, in order to propose a better pathological evaluation system for the assessment of fibrosis regression.

  12. Should metacognition be measured by logistic regression?

    Science.gov (United States)

    Rausch, Manuel; Zehetleitner, Michael

    2017-03-01

    Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent of rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as a measure of metacognitive sensitivity need to control the primary task criterion and rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.
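
    The quantity under discussion, a logistic regression slope relating confidence ratings to response accuracy, can be sketched in a few lines. The simulated trials and all generative numbers below are invented for illustration.

```python
# Sketch: fit P(correct | rating) = sigmoid(b0 + b1*rating) to simulated
# confidence-rating data; b1 is the slope discussed above. Generative
# numbers are invented.
import math
import random

random.seed(1)

# Simulated trials: ratings 1-4; correct responses tend to draw higher ratings
trials = []
for _ in range(1000):
    correct = random.random() < 0.7
    rating = min(4, max(1, round(random.gauss(3.0 if correct else 2.0, 1.0))))
    trials.append((rating, 1 if correct else 0))

# Plain gradient ascent on the (concave) log-likelihood
b0 = b1 = 0.0
lr = 0.1
for _ in range(2000):
    g0 = g1 = 0.0
    for x, y in trials:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p
        g1 += (y - p) * x
    b0 += lr * g0 / len(trials)
    b1 += lr * g1 / len(trials)
# Per the paper, interpreting b1 as a pure measure of metacognitive
# sensitivity requires controlling the primary task and rating criteria.
```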

  13. Social reward shapes attentional biases.

    Science.gov (United States)

    Anderson, Brian A

    2016-01-01

    Paying attention to stimuli that predict a reward outcome is important for an organism to survive and thrive. When visual stimuli are associated with tangible, extrinsic rewards such as money or food, these stimuli acquire high attentional priority and come to automatically capture attention. In humans and other primates, however, many behaviors are not motivated directly by such extrinsic rewards, but rather by the social feedback that results from performing those behaviors. In the present study, I examine whether positive social feedback can similarly influence attentional bias. The results show that stimuli previously associated with a high probability of positive social feedback elicit value-driven attentional capture, much like stimuli associated with extrinsic rewards. Unlike with extrinsic rewards, however, such stimuli also influence task-specific motivation. My findings offer a potential mechanism by which social reward shapes the information that we prioritize when perceiving the world around us.

  14. Ratio Bias and Policy Preferences

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Tue

    2017-01-01

    Numbers permeate modern political communication. While current scholarship on framing effects has focused on the persuasive effects of words and arguments, this article shows that framing of numbers can also substantially affect policy preferences. Such effects are caused by ratio bias, which...... is a general tendency to focus on numerators and pay insufficient attention to denominators in ratios. Using a population-based survey experiment, I demonstrate how differently framed but logically equivalent representations of the exact same numerical value can have large effects on citizens’ preferences...... regarding salient political issues such as education and taxes. Furthermore, the effects of numerical framing are found across most groups of the population, largely regardless of their political predisposition and their general ability to understand and use numerical information. These findings have...

  15. Bias in the absorption coefficient determination of a fluorescent dye, standard reference material 1932 fluorescein solution

    International Nuclear Information System (INIS)

    DeRose, Paul C.; Kramer, Gary W.

    2005-01-01

    The absorption coefficient of Standard Reference Material® (SRM®) 1932, fluorescein in a borate buffer solution (pH=9.5), has been determined at λ=488.0, 490.0, 490.5 and 491.0 nm using the US national reference UV/visible spectrophotometer. The purity of the fluorescein was determined to be 97.6% as part of the certification of SRM 1932. The solution measured was prepared gravimetrically by diluting SRM 1932 with additional borate buffer. The value of the absorption coefficient was corrected for bias due to fluorescence that reaches the detector and for dye purity. Bias due to fluorescence was found to be on the order of -1% for both monochromatic and polychromatic (e.g., diode-array based) spectrophotometers

  16. Some Cochrane risk of bias items are not important in osteoarthritis trials

    DEFF Research Database (Denmark)

    Bolvig, Julie; Juhl, Carsten B; Boutron, Isabelle

    2018-01-01

    of the risk of bias tool (RoB), trial size, single vs multi-site, and source of funding. Effect sizes were calculated as standardized mean differences (SMDs). Meta-regression was performed to identify "relevant study-level covariates" that decrease the between-study variance (τ̂2). RESULTS: Twenty reviews...

  17. Regression Model to Predict Global Solar Irradiance in Malaysia

    Directory of Open Access Journals (Sweden)

    Hairuniza Ahmed Kutty

    2015-01-01

    Full Text Available A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed based on different available meteorological parameters, including temperature, cloud cover, rain precipitate, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports on the details of the analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and the coefficient of determination (R2) with other models available from literature studies. Seven models based on single parameters (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R2 ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia lacks sufficient influence when included in multiple-parameter models, although it performs fairly well in single-parameter prediction models.
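
    The three comparison metrics named above are simple to compute; a minimal sketch (note the RMSE and MBE ranges quoted are percentages, suggesting normalization by the mean observed irradiance, whereas these functions return unnormalized values):

```python
# Minimal implementations of the three model-comparison metrics: RMSE, MBE, R2
import math

def rmse(obs, pred):
    """Root mean square error, in the units of the observations."""
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mbe(obs, pred):
    """Mean bias error: positive when the model over-predicts on average."""
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

def r2(obs, pred):
    """Coefficient of determination."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

    Unlike RMSE, MBE is signed, so positive and negative errors can cancel; reporting both, as the paper does, separates systematic bias from overall scatter.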

  18. Good practices for quantitative bias analysis.

    Science.gov (United States)

    Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander

    2014-12-01

    Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analyses have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage
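
    One of the simplest quantitative bias analyses of the kind discussed here is a correction of a 2x2 table for nondifferential exposure misclassification. The counts, sensitivity and specificity below are invented for illustration.

```python
# Simple quantitative bias analysis: back-correct observed exposure counts
# for misclassification, given assumed sensitivity (se) and specificity (sp).
# All numbers are invented for illustration.

def correct_counts(exposed_obs, unexposed_obs, se, sp):
    """Back-calculate true exposed/unexposed counts from observed counts."""
    n = exposed_obs + unexposed_obs
    true_exposed = (exposed_obs - (1 - sp) * n) / (se + sp - 1)
    return true_exposed, n - true_exposed

a, b = correct_counts(60, 40, se=0.9, sp=0.8)   # cases
c, d = correct_counts(40, 60, se=0.9, sp=0.8)   # controls

or_observed = (60 * 60) / (40 * 40)    # 2.25
or_corrected = (a * d) / (b * c)       # ~3.33: the observed OR was biased toward the null
```

    Varying se and sp over plausible ranges, rather than fixing them, is what turns this point correction into the fuller bias analysis the paper advocates.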

  19. Body composition measures of obese adolescents by the deuterium oxide dilution method and by bioelectrical impedance

    Directory of Open Access Journals (Sweden)

    C.M.M. Resende

    2011-11-01

    Full Text Available The objectives of the present study were to describe and compare the body composition variables determined by bioelectrical impedance (BIA) and the deuterium dilution method (DDM), to identify possible correlations and agreement between the two methods, and to construct a linear regression model including anthropometric measures. Obese adolescents were evaluated by anthropometric measures, and body composition was assessed by BIA and DDM. Forty obese adolescents were included in the study. Comparison of the mean values for the following variables: fat body mass (FM; kg), fat-free mass (FFM; kg), and total body water (TBW; %) determined by DDM and by BIA revealed significant differences. BIA overestimated FFM and TBW and underestimated FM. When compared with data provided by DDM, the BIA data presented a significant correlation with FFM (r = 0.89; P < 0.001), FM (r = 0.93; P < 0.001) and TBW (r = 0.62; P < 0.001). The Bland-Altman plot showed no agreement for FFM, FM or TBW between data provided by BIA and DDM. The linear regression models proposed in our study with respect to FFM, FM, and TBW were well adjusted. FFM obtained by DDM = 0.842 x FFM obtained by BIA. FM obtained by DDM = 0.855 x FM obtained by BIA + 0.152 x weight (kg). TBW obtained by DDM = 0.813 x TBW obtained by BIA. The body composition results of obese adolescents determined by DDM can be predicted by using the measures provided by BIA through a regression equation.
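
    The three regression equations reported in the abstract translate directly into code; the coefficients are from the study, while the function names are ours.

```python
# The study's three BIA-to-DDM conversion equations, transcribed directly.
# Coefficients are from the abstract; function names are illustrative.

def ffm_ddm(ffm_bia_kg):
    """Fat-free mass (kg) by DDM predicted from the BIA value."""
    return 0.842 * ffm_bia_kg

def fm_ddm(fm_bia_kg, weight_kg):
    """Fat mass (kg) by DDM predicted from the BIA value and body weight."""
    return 0.855 * fm_bia_kg + 0.152 * weight_kg

def tbw_ddm(tbw_bia_pct):
    """Total body water (%) by DDM predicted from the BIA value."""
    return 0.813 * tbw_bia_pct
```

    Consistent with the abstract, the sub-unity slopes for FFM and TBW shrink the BIA values downward, compensating for BIA's overestimation of those two variables.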

  20. Probing Biased Signaling in Chemokine Receptors

    DEFF Research Database (Denmark)

    Amarandi, Roxana Maria; Hjortø, Gertrud Malene; Rosenkilde, Mette Marie

    2016-01-01

    The chemokine system mediates leukocyte migration during homeostatic and inflammatory processes. Traditionally, it is described as redundant and promiscuous, with a single chemokine ligand binding to different receptors and a single receptor having several ligands. Signaling of chemokine receptors...... of others has been termed signaling bias and can accordingly be grouped into ligand bias, receptor bias, and tissue bias. Bias has so far been broadly overlooked in the process of drug development. The low number of currently approved drugs targeting the chemokine system, as well as the broad range...... of failed clinical trials, reflects the need for a better understanding of the chemokine system. Thus, understanding the character, direction, and consequence of biased signaling in the chemokine system may aid the development of new therapeutics. This review describes experiments to assess G protein...