WorldWideScience

Sample records for confidence interval tests

  1. Parametric change point estimation, testing and confidence interval ...

    African Journals Online (AJOL)

    In many applications, such as finance, industry and medicine, it is important to consider that the model parameters may undergo changes at an unknown moment in time. This paper deals with estimation, testing and confidence interval construction for a change point in a univariate variable which is assumed to be normally distributed. To detect ...

  2. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and implausible. Consequently, alternatives to NHT such as confidence intervals (CIs) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to an NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power with an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic indetermination). The demonstration includes a complete example.

  3. Binomial confidence intervals for testing non-inferiority or superiority: a practitioner's dilemma.

    Science.gov (United States)

    Pradhan, Vivek; Evans, John C; Banerjee, Tathagata

    2016-08-01

    In testing for non-inferiority or superiority in a single-arm study, the confidence interval of a single binomial proportion is frequently used. A number of such intervals are proposed in the literature and implemented in standard software packages. Unfortunately, use of different intervals leads to conflicting conclusions. Practitioners thus face a serious dilemma in deciding which one to depend on. Is there a way to resolve this dilemma? We address this question by investigating the performances of ten commonly used intervals of a single binomial proportion, in the light of two criteria, viz., coverage and expected length of the interval.
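
A minimal sketch (not from the article) of how two of the commonly used intervals can disagree on the same data; the Wald and Wilson score forms below are standard textbook constructions:

```python
import math

def wald_ci(k, n, z=1.96):
    """Wald interval: simple, but known to under-cover for small n or extreme p."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(k, n, z=1.96):
    """Wilson score interval: much better coverage near the boundaries."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Example: 8 successes out of 10 -- the two intervals disagree noticeably.
print(wald_ci(8, 10))    # approximately (0.552, 1.000)
print(wilson_ci(8, 10))  # approximately (0.490, 0.943)
```

In a non-inferiority setting the conclusion hinges on whether the lower bound clears the margin, so two valid intervals can point to opposite decisions on the same data.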

  4. Robust misinterpretation of confidence intervals

    NARCIS (Netherlands)

    Hoekstra, Rink; Morey, Richard; Rouder, Jeffrey N.; Wagenmakers, Eric-Jan

    2014-01-01

    Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more

  5. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    Science.gov (United States)

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of the crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point.
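
Under the interaction model y = b0 + b1*x + b2*z + b3*x*z with a binary moderator z, the two simple regression lines cross at x* = -b2/b3. A hedged sketch of one of the six methods compared (the percentile bootstrap), on simulated data with a known crossover at x* = 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated disordinal interaction: y = 1 + 0.5*x + 2*z - 1*x*z + noise,
# so the z = 0 and z = 1 lines cross at x* = -b2/b3 = 2.
n = 200
x = rng.normal(2.0, 1.0, n)
z = rng.integers(0, 2, n).astype(float)
y = 1 + 0.5 * x + 2 * z - 1 * x * z + rng.normal(0, 0.5, n)

def crossover(x, z, y):
    """OLS fit of y ~ x + z + x*z; crossover of the two simple lines is -b2/b3."""
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return -b[2] / b[3]

# Percentile bootstrap CI for the crossover point.
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    boot.append(crossover(x[i], z[i], y[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"crossover estimate: {crossover(x, z, y):.2f}, 95% CI: ({lo:.2f}, {hi:.2f})")
```

When b3 is small relative to its standard error, the ratio -b2/b3 becomes unstable and the bootstrap distribution spreads out, which is exactly the abnormally-wide-interval problem the study examines.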

  6. Confidence intervals for correlations when data are not normal.

    Science.gov (United States)

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval, for example leading to a 95% confidence interval that had actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
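
For reference, the classic Fisher z' interval that the study finds fragile under nonnormality takes only a few lines (a standard textbook formula, not the article's R code):

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """Classic Fisher z' interval for a correlation; assumes bivariate normality."""
    zr = math.atanh(r)                   # Fisher transformation
    se = 1 / math.sqrt(n - 3)            # approximate SE on the z scale
    lo, hi = zr - z_crit * se, zr + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

print(fisher_z_ci(0.5, 50))  # approximately (0.257, 0.683)
```

A common robust variant is to rank-transform the data first and apply a similar interval to the Spearman correlation, which is one of the alternatives the simulation found universally robust.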

  7. A Note on Confidence Interval for the Power of the One Sample Test

    OpenAIRE

    A. Wong

    2010-01-01

    In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for...

  8. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  9. Interpretation of Confidence Interval Facing the Conflict

    Science.gov (United States)

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  10. Robust Confidence Interval for a Ratio of Standard Deviations

    Science.gov (United States)

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  11. Confidence Intervals from Normalized Data: A correction to Cousineau (2005)

    Directory of Open Access Journals (Sweden)

    Richard D. Morey

    2008-09-01

    Presenting confidence intervals around means is a common method of expressing uncertainty in data. Loftus and Masson (1994) describe confidence intervals for means in within-subjects designs. These confidence intervals are based on the ANOVA mean squared error. Cousineau (2005) presents an alternative to the Loftus and Masson method, but his method produces confidence intervals that are smaller than those of Loftus and Masson. I show why this is the case and offer a simple correction that makes the expected size of Cousineau confidence intervals the same as that of Loftus and Masson confidence intervals.
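
A small sketch of the procedure described here, on assumed toy data: Cousineau's subject-mean normalization followed by Morey's sqrt(J/(J-1)) widening for J conditions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_cond = 20, 4
subject_effect = rng.normal(0, 5, (n_subj, 1))  # large between-subject noise
data = subject_effect + rng.normal(0, 1, (n_subj, n_cond)) + np.arange(n_cond)

# Cousineau (2005): remove each subject's mean, add back the grand mean.
normalized = data - data.mean(axis=1, keepdims=True) + data.mean()

# Morey (2008) correction: widen the intervals by sqrt(J/(J-1)),
# compensating for the variance lost in the normalization step.
J = n_cond
se = normalized.std(axis=0, ddof=1) / np.sqrt(n_subj)
se_corrected = se * np.sqrt(J / (J - 1))

print("uncorrected half-widths:", np.round(1.96 * se, 3))
print("corrected half-widths:  ", np.round(1.96 * se_corrected, 3))
```

Without the correction factor, the normalized-data intervals are systematically too narrow, which is precisely the issue the article addresses.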

  12. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  13. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

  14. A Note on Confidence Interval for the Power of the One Sample Test

    Directory of Open Access Journals (Sweden)

    A. Wong

    2010-01-01

    In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for the power of the Student's t-test that detects the difference (μ − μ0). The calculations require only the density and the cumulative distribution functions of the standard normal distribution. In addition, the methodology presented can also be applied to determine the required sample size when the effect size and the power of a size α test of the mean are given.
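
The power being discussed can also be approximated without any special functions; the following Monte Carlo sketch (an illustration, not the note's analytic method) estimates the power of the two-sided one-sample t-test when the variance is unknown:

```python
import numpy as np

def t_test_power_mc(effect_size, n, alpha=0.05, reps=20000, seed=0):
    """Monte Carlo power of the one-sample two-sided Student's t-test.
    effect_size = (mu - mu0) / sigma; the variance is treated as unknown,
    so the test statistic uses the sample standard deviation."""
    rng = np.random.default_rng(seed)
    x = rng.normal(effect_size, 1.0, (reps, n))
    t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))
    # numpy has no t-distribution quantile, so approximate the critical
    # value from a large Monte Carlo sample under the null.
    null = rng.normal(0.0, 1.0, (reps, n))
    t0 = null.mean(axis=1) / (null.std(axis=1, ddof=1) / np.sqrt(n))
    t_crit = np.quantile(np.abs(t0), 1 - alpha)
    return np.mean(np.abs(t) > t_crit)

# Known benchmark: exact power is about 0.75 for d = 0.5, n = 30.
print(round(t_test_power_mc(0.5, 30), 2))
```

The note's contribution is a confidence interval around such a power value, reflecting that the unknown variance makes the power itself an estimated quantity.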

  15. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    Science.gov (United States)

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the estimate of the variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.

  16. Using the confidence interval confidently.

    Science.gov (United States)

    Hazra, Avijit

    2017-10-01

    Biomedical research is seldom done with entire populations but rather with samples drawn from a population. Although we work with samples, our goal is to describe and draw inferences regarding the underlying population. It is possible to use a sample statistic and estimates of error in the sample to get a fair idea of the population parameter, not as a single value, but as a range of values. This range is the confidence interval (CI) which is estimated on the basis of a desired confidence level. Calculation of the CI of a sample statistic takes the general form: CI = Point estimate ± Margin of error, where the margin of error is given by the product of a critical value (z) derived from the standard normal curve and the standard error of the point estimate. Calculation of the standard error varies depending on whether the sample statistic of interest is a mean, proportion, odds ratio (OR), and so on. The factors affecting the width of the CI include the desired confidence level, the sample size and the variability in the sample. Although the 95% CI is most often used in biomedical research, a CI can be calculated for any level of confidence. A 99% CI will be wider than a 95% CI for the same sample. Conflict between clinical importance and statistical significance is an important issue in biomedical research. Clinical importance is best inferred by looking at the effect size, that is, how large the actual change or difference is. However, statistical significance in terms of P only suggests whether there is any difference in probability terms. Use of the CI supplements the P value by providing an estimate of the actual clinical effect. Of late, clinical trials are being designed specifically as superiority, non-inferiority or equivalence studies. The conclusions from these alternative trial designs are based on CI values rather than the P value from intergroup comparison.
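
The general form CI = point estimate ± margin of error can be made concrete in a few lines; only the standard error formula changes between a mean and a proportion (an illustrative sketch using the large-sample normal approximation):

```python
import math

def mean_ci(xs, z=1.96):
    """95% CI for a mean: point estimate ± z * SE (normal approximation)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    se = sd / math.sqrt(n)
    return mean - z * se, mean + z * se

def proportion_ci(k, n, z=1.96):
    """95% CI for a proportion: the SE changes, the general form does not."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

print(mean_ci([4.1, 5.0, 4.6, 4.8, 5.2, 4.4, 4.9, 5.1]))
print(proportion_ci(30, 100))  # approximately (0.210, 0.390)
```

Replacing z = 1.96 with 2.576 gives the wider 99% interval for the same sample, matching the point made in the abstract.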

  17. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    Science.gov (United States)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allows temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95% confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
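
A simplified sketch of the idea on fabricated data: because a load is a sum of products of skewed quantities, its bootstrap distribution is typically right-skewed, giving asymmetric intervals like those reported here. This uses a plain iid percentile bootstrap, not the paper's mixed-model scheme that preserves temporal correlation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired daily observations: discharge and sediment concentration.
discharge = rng.lognormal(1.0, 0.6, 365)
conc = 0.05 * discharge ** 0.8 * rng.lognormal(0, 0.4, 365)

def annual_load(q, c):
    """Load = sum over time of discharge * concentration (unit conversion omitted)."""
    return np.sum(q * c)

# Nonparametric percentile bootstrap of the load.
n = len(discharge)
boot = []
for _ in range(4000):
    i = rng.integers(0, n, n)
    boot.append(annual_load(discharge[i], conc[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
est = annual_load(discharge, conc)
print(f"load {est:.0f}, 95% CI ({lo:.0f}, {hi:.0f})")
```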

  18. Confidence interval procedures for Monte Carlo transport simulations

    International Nuclear Information System (INIS)

    Pederson, S.P.

    1997-01-01

    The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s² are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP.
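
One common form of the "relative variance of the variance" diagnostic mentioned here can be sketched as follows (the exact MCNP estimator differs in detail; this is an illustration of the idea that heavy right tails make the sample variance itself unreliable):

```python
import numpy as np

def vov(x):
    """Relative variance of the sample variance, in the common form
    VOV = sum((x - mean)^4) / (sum((x - mean)^2))^2 - 1/N.
    Large values suggest the variance estimate is too noisy to trust."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.sum(d ** 4) / np.sum(d ** 2) ** 2 - 1.0 / len(x)

rng = np.random.default_rng(3)
v_normal = vov(rng.normal(size=10000))      # ~2/N: variance estimate reliable
v_heavy = vov(rng.pareto(1.5, size=10000))  # large: heavy right tail, CI suspect
print(v_normal, v_heavy)
```

For well-behaved tallies the statistic decays like 2/N, while a tail too heavy for the central limit theorem keeps it large no matter how many histories are run.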

  19. Graphing within-subjects confidence intervals using SPSS and S-Plus.

    Science.gov (United States)

    Wright, Daniel B

    2007-02-01

    Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.

  20. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    Directory of Open Access Journals (Sweden)

    Ionut Bebu

    2016-06-01

    For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.

  1. Differentially Private Confidence Intervals for Empirical Risk Minimization

    OpenAIRE

    Wang, Yue; Kifer, Daniel; Lee, Jaewoo

    2018-01-01

    The process of data mining with differential privacy produces results that are affected by two types of noise: sampling noise due to data collection and privacy noise that is designed to prevent the reconstruction of sensitive information. In this paper, we consider the problem of designing confidence intervals for the parameters of a variety of differentially private machine learning models. The algorithms can provide confidence intervals that satisfy differential privacy (as well as the mor...

  2. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    Science.gov (United States)

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
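
As a simple baseline for comparison with the empirical likelihood intervals, a percentile bootstrap for the sensitivity at a fixed specificity (not the article's method) looks like this on simulated test scores:

```python
import numpy as np

rng = np.random.default_rng(4)
healthy = rng.normal(0.0, 1.0, 300)   # scores for non-diseased subjects
diseased = rng.normal(1.5, 1.0, 200)  # scores for diseased subjects

def sens_at_spec(healthy, diseased, spec=0.90):
    """Cut-off = spec-quantile of healthy scores; sensitivity = P(diseased > cut-off)."""
    cutoff = np.quantile(healthy, spec)
    return np.mean(diseased > cutoff)

# Resample both groups independently and take percentile limits.
boot = []
for _ in range(2000):
    h = rng.choice(healthy, healthy.size)
    d = rng.choice(diseased, diseased.size)
    boot.append(sens_at_spec(h, d))
lo, hi = np.percentile(boot, [2.5, 97.5])
est = sens_at_spec(healthy, diseased)
print(f"sensitivity at 90% specificity: {est:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The empirical likelihood intervals of the article aim for better small-sample coverage than such a plain bootstrap, since the estimated cut-off adds variability that the percentile method handles only crudely.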

  3. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
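
The asymmetric distribution of the product is easy to see by simulation; the sketch below (with made-up path estimates, not the article's data) contrasts a Monte Carlo product interval for the indirect effect a*b with the symmetric normal-theory interval:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative path estimates and standard errors from the two
# mediation regressions (hypothetical numbers).
a_hat, se_a = 0.40, 0.10
b_hat, se_b = 0.30, 0.12

# Monte Carlo approximation to the (asymmetric) distribution of the product.
draws = rng.normal(a_hat, se_a, 100000) * rng.normal(b_hat, se_b, 100000)
lo, hi = np.percentile(draws, [2.5, 97.5])

# Normal-theory interval for comparison: symmetric by construction.
ab = a_hat * b_hat
se_ab = np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)  # first-order delta method
print(f"product CI: ({lo:.3f}, {hi:.3f}); "
      f"normal-theory: ({ab - 1.96 * se_ab:.3f}, {ab + 1.96 * se_ab:.3f})")
```

The skewness and kurtosis of the product distribution grow as either critical ratio (a/se_a or b/se_b) shrinks, which is why normal-theory coverage degrades exactly where the article's simulations say it does.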

  4. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2003-01-01

    Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)
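
A "central" interval for a Poisson count puts probability alpha/2 in each tail. A self-contained sketch using only bisection on the Poisson CDF (one standard construction; the report also weighs highest-probability-density alternatives):

```python
import math

def pois_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), by direct summation."""
    term, total = math.exp(-mu), math.exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def central_ci(k, alpha=0.05):
    """Central interval: equal alpha/2 tail probability on each side."""
    def solve(too_small, lo, hi):
        for _ in range(100):  # simple bisection
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if too_small(mid) else (lo, mid)
        return 0.5 * (lo + hi)
    # upper limit U: P(X <= k; U) = alpha/2
    upper = solve(lambda mu: pois_cdf(k, mu) > alpha / 2, 0.0, k + 50.0)
    # lower limit L: P(X >= k; L) = alpha/2 (zero when k = 0)
    lower = 0.0 if k == 0 else solve(
        lambda mu: 1 - pois_cdf(k - 1, mu) < alpha / 2, 0.0, k + 50.0)
    return lower, upper

print(central_ci(3))  # approximately (0.619, 8.767): wide, as expected with poor statistics
```

For counts with background, the report's point is that one cannot simply subtract the background from both limits; the interval for the signal must be built from the joint count model.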

  5. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2002-07-01

    Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)

  6. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs

    Directory of Open Access Journals (Sweden)

    Xue Fuzhong

    2010-01-01

    Background: Genetic association studies are currently the primary vehicle for the identification and characterization of disease-predisposing variant(s), which usually involve multiple single-nucleotide polymorphisms (SNPs). However, SNP-wise association tests raise concerns over multiple testing. Haplotype-based methods have the advantage of being able to account for correlations between neighbouring SNPs, yet assuming Hardy-Weinberg equilibrium (HWE) and the potentially large number of degrees of freedom can harm their statistical power and robustness. Approaches based on principal component analysis (PCA) are preferable in this regard, but their performance varies with the method of extracting principal components (PCs). Results: A PCA-based bootstrap confidence interval test (PCA-BCIT), which directly uses the PC scores to assess gene-disease association, was developed and evaluated for three ways of extracting PCs, i.e., cases only (CAES), controls only (COES), and cases and controls combined (CES). Extraction of PCs with COES is preferred to that with CAES and CES. Performance of the test was examined via simulations as well as analyses of data on rheumatoid arthritis and heroin addiction; the test maintains the nominal level under the null hypothesis and shows performance comparable to a permutation test. Conclusions: PCA-BCIT is a valid and powerful method for assessing gene-disease association involving multiple SNPs.

  7. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    Science.gov (United States)

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  8. Effect size, confidence intervals and statistical power in psychological research.

    Directory of Open Access Journals (Sweden)

    Téllez A.

    2015-07-01

    Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST) and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, we must account for the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.

  9. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    Science.gov (United States)

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
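
The model-based bootstrap idea, resampling from the fitted model to estimate and then remove bias, can be sketched in a toy setting (an illustration with an exponential rate, not the HIV incidence model):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy setting: estimate rate lambda from exponential data with a biased estimator.
true_lam = 2.0
x = rng.exponential(1 / true_lam, 25)
lam_hat = 1 / x.mean()  # MLE, biased upward for small n (bias ~ lambda/(n-1))

# Model-based (parametric) bootstrap: resample from the *fitted* model,
# analogous to simulating the specified infection/surveillance process.
boot = np.array([1 / rng.exponential(1 / lam_hat, x.size).mean()
                 for _ in range(4000)])
bias = boot.mean() - lam_hat            # estimated bias of the estimator
lam_corrected = lam_hat - bias          # bias-corrected point estimate
lo, hi = np.percentile(boot - bias, [2.5, 97.5])
print(f"raw {lam_hat:.2f}, corrected {lam_corrected:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Because the resamples are generated from the specified model rather than the raw data, the bootstrap variability reflects the full infection-and-surveillance process, which is what improves coverage relative to the incomplete delta-method variance.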

  10. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Estimating parameters with confidence intervals and testing statistical hypotheses are both used in statistical analysis to draw conclusions about a population from a sample. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. While statistical hypothesis testing gives only a "yes" or "no" answer to certain questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings in the "marginally significant" or "almost significant" range (p very close to 0.05).

  11. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

    Science.gov (United States)

    Williams, Immanuel James; Williams, Kelley Kim

    2018-01-01

    Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…

  12. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

Full Text Available With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by this price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (genetically modified organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method obtains higher accuracy and better interval estimation than the non-hierarchical method in a stable system.

  13. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    Science.gov (United States)

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…

  14. Robust misinterpretation of confidence intervals.

    Science.gov (United States)

    Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan

    2014-10-01

    Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students-all in the field of psychology-were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.

  15. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three keff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best keff estimator. The method of combining estimators is based on a solid theoretical foundation, namely, the Gauss-Markov Theorem in regard to the least squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered.
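As a hedged illustration of the combination this record describes: when correlations between estimators are ignored, the Gauss-Markov least-squares combination reduces to inverse-variance weighting. The keff values and standard deviations below are invented for illustration, not MCNP output, and MCNP's actual combination also accounts for covariances among its three estimators.

```python
import math

# Hypothetical keff estimates from three estimators (e.g. collision,
# absorption, track length) with their standard deviations; the numbers
# are illustrative only, not MCNP output.
k = [1.0012, 0.9998, 1.0005]
sigma = [0.0010, 0.0012, 0.0008]

# Inverse-variance weights: the Gauss-Markov least-squares solution when
# the estimators are treated as uncorrelated.
inv_var = [1.0 / s ** 2 for s in sigma]
weights = [iv / sum(inv_var) for iv in inv_var]

k_comb = sum(w * ki for w, ki in zip(weights, k))
sigma_comb = math.sqrt(1.0 / sum(inv_var))

# Approximate 95% confidence interval for the combined estimator
ci = (k_comb - 1.96 * sigma_comb, k_comb + 1.96 * sigma_comb)
```

The combined standard deviation is never larger than the smallest individual one, which is one reason a combined estimator is preferred.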

  16. Understanding Confidence Intervals With Visual Representations

    OpenAIRE

    Navruz, Bilgin; Delen, Erhan

    2014-01-01

    In the present paper, we showed how confidence intervals (CIs) are valuable and useful in research studies when they are used in the correct form with correct interpretations. The sixth edition of the APA (2010) Publication Manual strongly recommended reporting CIs in research studies, and it was described as “the best reporting strategy” (p. 34). Misconceptions and correct interpretations of CIs were presented from several textbooks. In addition, limitations of the null hypothesis statistica...

  17. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

    Science.gov (United States)

    Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

    2017-06-01

    There is a lack of reporting effect sizes and confidence intervals in the current biomedical literature. The objective of this article is to present a discussion of the recent paradigm shift encouraging the use of reporting effect sizes and confidence intervals. Although P values help to inform us about whether an effect exists due to chance, effect sizes inform us about the magnitude of the effect (clinical significance), and confidence intervals inform us about the range of plausible estimates for the general population mean (precision). Reporting effect sizes and confidence intervals is a necessary addition to the biomedical literature, and these concepts are reviewed in this article.
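To make the recommended shift concrete, here is a minimal sketch of reporting an effect size with a confidence interval: Cohen's d for two independent groups, using one common normal-approximation variance formula. The numbers and the helper name `cohens_d_ci` are invented for illustration.

```python
import math

def cohens_d_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Cohen's d with an approximate 95% CI, using the common
    normal approximation Var(d) = (n1+n2)/(n1*n2) + d^2 / (2*(n1+n2))."""
    # Pooled standard deviation
    sp = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, d - z * se, d + z * se
```

For example, two groups of 50 with means 10 and 9 and a common SD of 2 give d = 0.5 with an approximate 95% CI of about (0.10, 0.90): a medium effect estimated rather imprecisely.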

  18. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    Science.gov (United States)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

The generalized logistic (GL) distribution has been widely used for frequency analysis. However, little work has been done on the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are little differences in the estimated quantiles between ML and PWM, while MOM shows distinct differences.

  19. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.

  20. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
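The first method, planning sample size so that the expected CI width is sufficiently narrow, can be illustrated in its simplest form for a normal mean with known sigma (the reliability-coefficient case in this record is more involved; this sketch only shows the underlying accuracy-in-parameter-estimation idea, and the function name is invented):

```python
import math
from statistics import NormalDist

def n_for_ci_width(sigma, width, conf=0.95):
    """Smallest n such that the z-based CI for a normal mean with known
    sigma, which has full width 2*z*sigma/sqrt(n), is at most `width` wide."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.ceil((2 * z * sigma / width) ** 2)
```

For sigma = 1 and a target full width of 0.2, this gives n = 385; halving the target width to 0.1 roughly quadruples the requirement, to 1537.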

  1. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  2. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    Science.gov (United States)

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  3. 用Delta法估计多维测验合成信度的置信区间%Estimating the Confidence Interval of Composite Reliability of a Multidimensional Test With the Delta Method

    Institute of Scientific and Technical Information of China (English)

    叶宝娟; 温忠麟

    2012-01-01

Reliability is very important in evaluating the quality of a test. Based on the confirmatory factor analysis, composite reliability is a good index to estimate the test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error of a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the

  4. The confidence-accuracy relationship for eyewitness identification decisions: Effects of exposure duration, retention interval, and divided attention.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-03-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three, forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relation for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.
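The aspects of the CA relationship the study separates can be computed from raw data. A sketch using the usual definitions from the calibration literature: over/underconfidence as mean confidence minus mean accuracy, and calibration as the size-weighted squared gap between stated confidence and achieved accuracy within each confidence category. The function name and data are invented for illustration.

```python
def ca_stats(confidences, correct):
    """Over/underconfidence and calibration for confidence judgments.
    `confidences` are stated confidences in [0, 1]; `correct` are 0/1."""
    n = len(confidences)
    over_under = sum(confidences) / n - sum(correct) / n
    # Group outcomes by stated confidence category
    cats = {}
    for c, a in zip(confidences, correct):
        cats.setdefault(c, []).append(a)
    # Size-weighted squared gap between confidence and accuracy per category
    calibration = sum(
        len(v) * (c - sum(v) / len(v)) ** 2 for c, v in cats.items()
    ) / n
    return over_under, calibration
```

Ten 90%-confident decisions of which 8 are correct, plus ten 50%-confident decisions of which 5 are correct, give overconfidence 0.05 and calibration 0.005 (perfect calibration is 0).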

  5. Confidence Intervals from Realizations of Simulated Nuclear Data

    Energy Technology Data Exchange (ETDEWEB)

    Younes, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ratkiewicz, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ressler, J. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-28

    Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n, f) and 239Pu (n, f) cross sections.
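Techniques 1 and 2 amount to propagating input uncertainty through the model and reading percentiles off the output realizations. A toy sketch in which the "model" y = x**2 and the input distribution are invented stand-ins for a cross-section-driven calculation:

```python
import random

random.seed(12345)

# Toy input with known uncertainty: x ~ Normal(2.0, 0.1)
# Toy model: y = x**2 (a stand-in for, e.g., a keff calculation)
realizations = sorted(random.gauss(2.0, 0.1) ** 2 for _ in range(10_000))

# Central 95% interval from the empirical 2.5th and 97.5th percentiles
lo = realizations[int(0.025 * len(realizations))]
hi = realizations[int(0.975 * len(realizations))]
```

With these inputs the interval is roughly (3.25, 4.82), visibly asymmetric about the nominal value 4.0 because the model is nonlinear.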

  6. On Bayesian treatment of systematic uncertainties in confidence interval calculation

    CERN Document Server

    Tegenfeldt, Fredrik

    2005-01-01

In high energy physics, a widely used method to treat systematic uncertainties in confidence interval calculations is based on combining a frequentist construction of confidence belts with a Bayesian treatment of systematic uncertainties. In this note we present a study of the coverage of this method for the standard Likelihood Ratio (aka Feldman & Cousins) construction for a Poisson process with known background and Gaussian or log-Normal distributed uncertainties in the background or signal efficiency. For uncertainties in the signal efficiency of up to 40% we find over-coverage on the level of 2 to 4% depending on the size of the uncertainties and the region in signal space. Uncertainties in the background generally have a smaller effect on the coverage. A considerable smoothing of the coverage curves is observed. A software package is presented which allows fast calculation of the confidence intervals for a variety of assumptions on the shape and size of systematic uncertainties for different nuisance paramete...

  7. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication
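The anomaly described here is easy to reproduce: for a lognormal with a large shape parameter, "mean ± standard deviation" yields a negative lower bound for a strictly positive variable, whereas a symmetric-in-probability interval built from quantiles stays positive. The parameter values below are invented for illustration.

```python
import math
from statistics import NormalDist

mu, sigma = 0.0, 1.5   # parameters of the underlying normal (illustrative)

# Moments of the lognormal distribution
mean = math.exp(mu + sigma ** 2 / 2)
sd = math.sqrt((math.exp(sigma ** 2) - 1.0) * math.exp(2 * mu + sigma ** 2))

naive_lo = mean - sd   # negative: the "best value +/- error" anomaly

# A central 95% probability interval from the lognormal quantiles instead
z = NormalDist().inv_cdf(0.975)
lo = math.exp(mu - z * sigma)   # always strictly positive
hi = math.exp(mu + z * sigma)
```

Note that the median exp(mu) lies inside the quantile-based interval, while the naive lower limit is not even an attainable value of the variable.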

  8. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    Science.gov (United States)

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
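The profile-likelihood idea can be shown in its simplest one-parameter form, a binomial proportion: the 95% PL CI keeps every p whose log-likelihood lies within chi-square_1(0.95)/2 (about 1.92) of the maximum. IRT models additionally require profiling over the remaining parameters, which this sketch skips; the function name and grid search are illustrative only.

```python
import math

def pl_ci_binomial(x, n, grid=10_000):
    """95% profile-likelihood CI for a binomial proportion (0 < x < n),
    found by a grid search over p in (0, 1)."""
    crit = 3.8414588 / 2.0   # chi-square(1) 95% critical value, halved
    def loglik(p):
        return x * math.log(p) + (n - x) * math.log(1.0 - p)
    ll_max = loglik(x / n)   # the MLE p_hat = x/n maximizes the likelihood
    inside = [i / grid for i in range(1, grid)
              if ll_max - loglik(i / grid) <= crit]
    return min(inside), max(inside)
```

For 7 successes in 20 trials this gives roughly (0.17, 0.57), an interval that, unlike the Wald interval, is not forced to be symmetric about p_hat = 0.35.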

  9. Energy Performance Certificate of building and confidence interval in assessment: An Italian case study

    International Nuclear Information System (INIS)

    Tronchin, Lamberto; Fabbri, Kristian

    2012-01-01

The Directive 2002/91/CE introduced the Energy Performance Certificate (EPC), an energy policy tool. The aim of the EPC is to inform building buyers about the energy performance and energy costs of buildings. The EPCs represent a specific energy policy tool to orient the building sector and real-estate markets toward higher energy efficiency buildings. The effectiveness of the EPC depends on two factors: •The accuracy of the energy performance evaluation made by independent experts. •The capability of the energy classification and of the scale of energy performance to control the energy index fluctuations. In this paper, the results of a case study located in Italy are shown. In this example, 162 independent technicians in building energy performance evaluation studied the same building. The results reveal which part of the confidence interval depends on software misunderstanding and show that the energy classification ranges are able to tolerate the fluctuation of energy indices. The example was chosen in accordance with the legislation of the Emilia-Romagna Region on Energy Efficiency of Buildings. Following these results, some thermo-economic evaluations related to building and energy labelling are illustrated, since the EPC is an energy policy tool for the real-estate market and building sector to encourage building or retrofitting energy-efficient buildings. - Highlights: ► Evaluation of the accuracy of the energy performance of buildings in relation to the knowledge of independent experts. ► Round robin test based on 162 case studies on the confidence intervals expressed by independent experts. ► Statistical comparison between the confidence intervals expressed by independent experts and energy simulation software. ► Relation between “proper class” in energy classification of buildings and confidence intervals of independent experts.

  10. Confidence intervals for the first crossing point of two hazard functions.

    Science.gov (United States)

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, there has been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. The proposed procedure and the one based on Cox proportional hazard modeling with Box-Cox transformation of the time to event are both evaluated by Monte-Carlo simulations and applied to two clinical trial datasets.

  11. Incidence of interval cancers in faecal immunochemical test colorectal screening programmes in Italy.

    Science.gov (United States)

    Giorgi Rossi, Paolo; Carretta, Elisa; Mangone, Lucia; Baracco, Susanna; Serraino, Diego; Zorzi, Manuel

    2018-03-01

    Objective In Italy, colorectal screening programmes using the faecal immunochemical test from ages 50 to 69 every two years have been in place since 2005. We aimed to measure the incidence of interval cancers in the two years after a negative faecal immunochemical test, and compare this with the pre-screening incidence of colorectal cancer. Methods Using data on colorectal cancers diagnosed in Italy from 2000 to 2008 collected by cancer registries in areas with active screening programmes, we identified cases that occurred within 24 months of negative screening tests. We used the number of tests with a negative result as a denominator, grouped by age and sex. Proportional incidence was calculated for the first and second year after screening. Results Among 579,176 and 226,738 persons with negative test results followed up at 12 and 24 months, respectively, we identified 100 interval cancers in the first year and 70 in the second year. The proportional incidence was 13% (95% confidence interval 10-15) and 23% (95% confidence interval 18-25), respectively. The estimate for the two-year incidence is 18%, which was slightly higher in females (22%; 95% confidence interval 17-26), and for proximal colon (22%; 95% confidence interval 16-28). Conclusion The incidence of interval cancers in the two years after a negative faecal immunochemical test in routine population-based colorectal cancer screening was less than one-fifth of the expected incidence. This is direct evidence that the faecal immunochemical test-based screening programme protocol has high sensitivity for cancers that will become symptomatic.

  12. Comparing confidence intervals for Goodman and Kruskal's gamma coefficient

    NARCIS (Netherlands)

    van der Ark, L.A.; van Aert, R.C.M.

    2015-01-01

    This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the

  13. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables and the problems of approximating the binomial distribution to normality (the limits, advantages, and disadvantages). The classification of the medical key parameters reported in the medical literature, and their expression using contingency table units based on their mathematical expressions, restricts the discussion of the confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different kinds of information from the computed confidence interval for a specified method (confidence interval boundaries, percentages of the experimental errors, the standard deviation of the experimental errors and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. The cases of expressions that contain two binomial variables were treated separately. An original method of computing the confidence interval for the case of a two-variable expression was proposed and implemented. The graphical representation of an expression of two binomial variables, for which the variation domain of one variable depends on the other variable, was a real problem because most software uses interpolation in graphical representation, so the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP in order to represent the triangular surface plots. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.
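One approximation-to-normality problem the record alludes to is easy to demonstrate: the simple Wald interval can extend below 0 (or above 1) for extreme proportions, while the Wilson score interval cannot. A sketch in Python rather than the paper's PHP; the function names are illustrative.

```python
import math
from statistics import NormalDist

def wald_ci(x, n, conf=0.95):
    """Simple normal-approximation (Wald) CI for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    h = z * math.sqrt(p * (1 - p) / n)
    return p - h, p + h

def wilson_ci(x, n, conf=0.95):
    """Wilson score CI; its limits always stay within [0, 1]."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    h = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - h, centre + h
```

With 1 success in 30 trials the Wald lower limit is negative (about -0.03), an impossible value for a proportion, while the Wilson interval is roughly (0.006, 0.167).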

  14. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
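The recovered-variance idea can be sketched in its generic form for a sum of two parameters, often called MOVER: each separate confidence limit implicitly encodes a variance estimate near that limit, and the limits are recombined in closed form. The function name and numbers below are illustrative; the paper applies the same idea to percentiles, limits of agreement, the coefficient of variation and effect sizes.

```python
import math

def mover_sum(est1, l1, u1, est2, l2, u2):
    """Closed-form CI for theta1 + theta2 from point estimates and
    separate (l, u) confidence limits, via variance estimates recovered
    from the distances between each estimate and its own limits."""
    lower = est1 + est2 - math.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
    upper = est1 + est2 + math.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
    return lower, upper
```

For est1 = 1 with CI (0.5, 1.5) and est2 = 2 with CI (1.2, 2.8), the interval for the sum is about (2.057, 3.943): narrower than naively adding the two widths, because independent errors rarely peak together.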

  15. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (> or = 95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive > or = 98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint > or = 95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a > or = 95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  16. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Directory of Open Access Journals (Sweden)

    Tak Fung

Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
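The paper's construction covers the general finite diploid case; for intuition only, a much simpler large-population sketch is the Wilson score interval for a sample allele frequency (the counts below are hypothetical, and this is not the authors' method):

```python
import math

def wilson_interval(successes, n, z=1.959964):
    """Wilson score interval for a binomial proportion.

    Used here as a simple stand-in for an allele-frequency CI;
    the paper's method additionally handles sampling from a finite
    diploid population, which this sketch does not.
    """
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 2n = 60 allele copies from n = 30 diploid individuals, 18 copies of allele A
lo, hi = wilson_interval(18, 60)
```

Even at the n > 30 individuals the abstract recommends, the interval is noticeably wide, which is exactly the sampling uncertainty the authors argue should not be ignored.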

  17. Comparing confidence intervals for Goodman and Kruskal’s gamma coefficient

    NARCIS (Netherlands)

    van der Ark, L.A.; van Aert, R.C.M.

    2015-01-01

This study was motivated by the question of which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman–Kruskal CI, the Cliff-consistent CI, the…

  18. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith…

  19. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  20. On a linear method in bootstrap confidence intervals

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

Full Text Available A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order Op(n^(-3/2)) and Op(n^(-2)), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.

  1. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

Full Text Available Assessments of a controlled clinical trial require interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effects of the treatment are dichotomous variables. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Methods comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
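The asymptotic method that the abstract says dominates the medical literature can be sketched as a Wald interval on the difference of two proportions (the trial counts below are invented; the paper's nine alternative methods are not reproduced here):

```python
import math

def arr_wald_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.959964):
    """Asymptotic (Wald) 95% CI for the absolute risk reduction:
    the standard method the abstract notes can be inadequate for
    small samples or extreme event rates."""
    p_c = events_ctrl / n_ctrl   # control event rate
    p_t = events_trt / n_trt     # experimental event rate
    arr = p_c - p_t
    se = math.sqrt(p_c * (1 - p_c) / n_ctrl + p_t * (1 - p_t) / n_trt)
    return arr, arr - z * se, arr + z * se

# Hypothetical trial: 30/100 events on control, 18/100 on treatment
arr, lo, hi = arr_wald_ci(30, 100, 18, 100)
nnt = 1 / arr  # number needed to treat, as defined in the abstract
```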

  2. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation within a 95% confidence interval. We see that as the number of simulation runs N increases, the simulated error rate becomes closer to the actual one and the confidence interval width shrinks. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
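The effect of the number of runs N on the interval width can be illustrated with a much simplified stand-in for the paper's simulation (a Bernoulli error process with a known rate, rather than BPSK over Nakagami-m fading):

```python
import math
import random

def simulate_error_rate(p_err, n_bits, z=1.959964, seed=1):
    """Monte Carlo error-rate estimate with a normal-approximation
    95% confidence half-width. A toy stand-in for the paper's
    fading-channel simulation: errors are drawn i.i.d. with a
    known probability p_err."""
    rng = random.Random(seed)
    errors = sum(rng.random() < p_err for _ in range(n_bits))
    p_hat = errors / n_bits
    half = z * math.sqrt(p_hat * (1 - p_hat) / n_bits)
    return p_hat, half

# Half-width shrinks roughly as 1/sqrt(N) as the abstract describes
p1, h1 = simulate_error_rate(0.01, 10_000)
p2, h2 = simulate_error_rate(0.01, 1_000_000)
```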

  3. Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Roberto S. Flowers-Cano

    2018-02-01

Full Text Available Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP), bias-corrected bootstrap (BC), accelerated bias-corrected bootstrap (BCA), and a modified version of the standard bootstrap (MSB). Different simulation scenarios were analyzed. In some cases, the mother distribution function was fit to the random samples that were generated. In other cases, a distribution function different to the mother distribution was fit to the samples. When the fitted distribution had three parameters, and was the same as the mother distribution, the intervals constructed with the four techniques had acceptable coverage. However, the bootstrap techniques failed in several of the cases in which the fitted distribution had two parameters.
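The simplest of the four techniques compared, the percentile bootstrap (BP), can be sketched as follows (the sample values are invented; the BC, BCA, and MSB variants add corrections not shown here):

```python
import random
import statistics

def percentile_bootstrap_ci(data, stat=statistics.mean, n_boot=2000,
                            alpha=0.05, seed=7):
    """Percentile bootstrap (the 'BP' technique of the abstract):
    resample with replacement, recompute the statistic, and take
    empirical quantiles of the bootstrap distribution."""
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical annual-maximum-like sample
sample = [12.1, 9.8, 14.3, 10.5, 11.9, 13.2, 9.4, 12.7, 10.9, 11.3]
lo, hi = percentile_bootstrap_ci(sample)
```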

  4. Methods for confidence interval estimation of a ratio parameter with application to location quotients

    Directory of Open Access Journals (Sweden)

    Beyene Joseph

    2005-10-01

Full Text Available Abstract Background The location quotient (LQ) ratio, a measure designed to quantify and benchmark the degree of relative concentration of an activity in the analysis of area localization, has received considerable attention in the geographic and economics literature. This index can also naturally be applied in the context of population health to quantify and compare health outcomes across spatial domains. However, one commonly observed limitation of LQ is its widespread use as only a point estimate without an accompanying confidence interval. Methods In this paper we present statistical methods that can be used to construct confidence intervals for location quotients. The delta and Fieller's methods are generic approaches for a ratio parameter and the generalized linear modelling framework is a useful re-parameterization particularly helpful for generating profile-likelihood based confidence intervals for the location quotient. A simulation experiment is carried out to assess the performance of each of the analytic approaches and a health utilization data set is used for illustration. Results Both the simulation results as well as the findings from the empirical data show that the different analytical methods produce very similar confidence limits for location quotients. When incidence of outcome is not rare and sample sizes are large, the confidence limits are almost indistinguishable. The confidence limits from the generalized linear model approach might be preferable in small sample situations. Conclusion LQ is a useful measure which allows quantification and comparison of health and other outcomes across defined geographical regions. It is a very simple index to compute and has a straightforward interpretation. Reporting this estimate with appropriate confidence limits using methods presented in this paper will make the measure particularly attractive for policy and decision makers.
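One of the generic ratio approaches mentioned, the delta method, can be sketched on the log scale under the simplifying assumptions that the counts are independent Poisson variables (Fieller's method and the profile-likelihood approach in the paper would differ; all counts below are hypothetical):

```python
import math

def lq_delta_ci(local_cases, local_pop, ref_cases, ref_pop, z=1.959964):
    """Delta-method 95% CI for a location quotient, computed on the
    log scale. Assumes independent Poisson case counts, so
    var(log LQ) ~ 1/local_cases + 1/ref_cases; this ignores any
    overlap between the local and reference populations."""
    lq = (local_cases / local_pop) / (ref_cases / ref_pop)
    se_log = math.sqrt(1 / local_cases + 1 / ref_cases)
    return lq, lq * math.exp(-z * se_log), lq * math.exp(z * se_log)

# 40 cases per 10,000 locally vs 300 per 120,000 in the reference area
lq, lo, hi = lq_delta_ci(40, 10_000, 300, 120_000)
```

An interval excluding 1 would indicate a genuinely elevated (or depressed) local concentration rather than a point-estimate artifact, which is the paper's motivation for reporting LQ with confidence limits.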

  5. Confidence interval of intrinsic optimum temperature estimated using thermodynamic SSI model

    Institute of Scientific and Technical Information of China (English)

    Takaya Ikemoto; Issei Kurahashi; Pei-Jian Shi

    2013-01-01

The intrinsic optimum temperature for the development of ectotherms is one of the most important factors not only for their physiological processes but also for ecological and evolutionary processes. The Sharpe-Schoolfield-Ikemoto (SSI) model succeeded in defining the temperature that can thermodynamically meet the condition that at a particular temperature the probability of an active enzyme reaching its maximum activity is realized. Previously, an algorithm was developed by Ikemoto (Tropical malaria does not mean hot environments. Journal of Medical Entomology, 45, 963-969) to estimate model parameters, but that program was computationally very time consuming. Now, investigators can use the SSI model more easily because a fully automatic computer program was designed by Shi et al. (A modified program for estimating the parameters of the SSI model. Environmental Entomology, 40, 462-469). However, the statistical significance of the point estimate of the intrinsic optimum temperature for each ectotherm has not yet been determined. Here, we provide a new method for calculating the confidence interval of the estimated intrinsic optimum temperature by modifying the approximate bootstrap confidence intervals method. For this purpose, it was necessary to develop a new program for a faster estimation of the parameters in the SSI model, which we have also done.

  6. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    Science.gov (United States)

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
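The Excel recipe the paper walks through translates directly into a few lines of any language; a minimal Python equivalent, with the corresponding Excel formulas noted in comments (the data values are invented):

```python
import math
import statistics

data = [23.1, 21.8, 24.6, 22.4, 23.9, 22.7, 24.1, 23.3]
n = len(data)
mean = statistics.mean(data)                 # Excel: AVERAGE(range)
se = statistics.stdev(data) / math.sqrt(n)   # Excel: STDEV.S(range)/SQRT(COUNT(range))
t_crit = 2.364624                            # t(0.975, df=7); Excel: T.INV.2T(0.05, 7)
ci = (mean - t_crit * se, mean + t_crit * se)
```

Note the t critical value, not 1.96, is the appropriate multiplier at this sample size; the hard-coded 2.364624 is just the df = 7 value and must be changed if n changes.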

  7. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

    1999-01-01

Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of 99mTc-DTPA organ imaging agent based on uptake data from 19 subjects is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)

  8. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
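The SD-versus-SE distinction at the heart of this abstract is easy to demonstrate numerically (the data below are an invented stand-in for the leaf-calcium example, not the authors' measurements):

```python
import math
import random
import statistics

rng = random.Random(42)
# Simulated individual leaf concentrations around a true mean of 8.0
obs = [rng.gauss(8.0, 1.2) for _ in range(30)]

sd = statistics.stdev(obs)            # spread among individuals -> prediction interval
se = sd / math.sqrt(len(obs))         # uncertainty in the mean  -> confidence interval
```

The SD stays near 1.2 no matter how many leaves are measured, while the SE shrinks as 1/sqrt(n); this is why, as the abstract argues, plot- or country-level totals over many trees can safely ignore individual variation but never the uncertainty in the mean.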

  9. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

    Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
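The core idea, forming a normal-style interval after removing a small, predetermined number of zeros, can be sketched as follows. This is an illustrative reading of the abstract only; the paper's actual multiplicative adjustments may differ, and the count data are invented:

```python
import math
import statistics

def growth_estimator_ci(data, zeros_removed=1, z=1.959964):
    """Normal-style CI after dropping a predetermined number of
    zeros from a zero-heavy count sample, in the spirit of the
    'direct removal' growth estimator described in the abstract.
    Assumes the data contain at least `zeros_removed` zeros."""
    trimmed = sorted(data)[zeros_removed:]   # zeros sort first for counts
    m = statistics.mean(trimmed)
    se = statistics.stdev(trimmed) / math.sqrt(len(trimmed))
    return m - z * se, m + z * se

# Highly dispersed counts with overrepresented zeros
counts = [0, 0, 0, 5, 0, 2, 0, 0, 9, 0, 1, 0, 3, 0, 0]
lo, hi = growth_estimator_ci(counts, zeros_removed=1)
```

Removing a zero nudges the centre of the interval upward, counteracting the downward bias of the heavy zero mass; no dispersion parameter is estimated, matching the abstract's claim.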

  10. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the individual estimator with the smallest variance. The importance of MCNP's batch statistics is demonstrated by an investigation of the effects of individual estimator variance bias on the combination of estimators, both heuristically with the analytical study and empirically with MCNP.
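The Gauss-Markov idea behind the three-combined estimator is a minimum-variance weighted average of correlated estimators. A sketch with made-up collision/absorption/track-length numbers (this is the general formula, not MCNP's internal implementation):

```python
def combine_estimators(estimates, cov):
    """Minimum-variance unbiased combination of correlated estimators:
    weights w = C^-1 1 / (1^T C^-1 1), combined variance
    1 / (1^T C^-1 1). Solves C x = 1 by Gaussian elimination."""
    n = len(estimates)
    a = [row[:] + [1.0] for row in cov]          # augmented matrix [C | 1]
    for i in range(n):                           # forward elimination
        for j in range(i + 1, n):
            f = a[j][i] / a[i][i]
            for k in range(i, n + 1):
                a[j][k] -= f * a[i][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):               # back substitution
        s = a[i][n] - sum(a[i][k] * x[k] for k in range(i + 1, n))
        x[i] = s / a[i][i]
    total = sum(x)
    combined = sum((xi / total) * e for xi, e in zip(x, estimates))
    return combined, 1.0 / total

# Hypothetical collision, absorption, track-length estimates and covariance
est = [0.9987, 1.0012, 0.9995]
cov = [[4e-6, 2e-6, 1e-6],
       [2e-6, 5e-6, 2e-6],
       [1e-6, 2e-6, 3e-6]]
k_comb, var_comb = combine_estimators(est, cov)
```

Because each individual estimator corresponds to a feasible weight vector, the combined variance can never exceed the smallest individual variance, which is the abstract's "superior to the individual estimator with the smallest variance" claim.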

  11. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

The Monte Carlo code MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the estimator with the smallest variance. Empirically, MCNP examples for several physical systems demonstrate the three-combined estimator's superiority over each of the three individual estimators and its correct coverage rates. Additionally, the importance of MCNP's statistical checks is demonstrated.

  12. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    Science.gov (United States)

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Confidence intervals for modeling anthocyanin retention in grape pomace during nonisothermal heating.

    Science.gov (United States)

    Mishra, D K; Dolan, K D; Yang, L

    2008-01-01

    Degradation of nutraceuticals in low- and intermediate-moisture foods heated at high temperature (>100 degrees C) is difficult to model because of the nonisothermal condition. Isothermal experiments above 100 degrees C are difficult to design because they require high pressure and small sample size in sealed containers. Therefore, a nonisothermal method was developed to estimate the thermal degradation kinetic parameter of nutraceuticals and determine the confidence intervals for the parameters and the predicted Y (concentration). Grape pomace at 42% moisture content (wb) was heated in sealed 202 x 214 steel cans in a steam retort at 126.7 degrees C for > 30 min. Can center temperature was measured by thermocouple and predicted using Comsol software. Thermal conductivity (k) and specific heat (C(p)) were estimated as quadratic functions of temperature using Comsol and nonlinear regression. The k and C(p) functions were then used to predict temperature inside the grape pomace during retorting. Similar heating experiments were run at different time-temperature treatments from 8 to 25 min for kinetic parameter estimation. Anthocyanin concentration in the grape pomace was measured using HPLC. Degradation rate constant (k(110 degrees C)) and activation energy (E(a)) were estimated using nonlinear regression. The thermophysical properties estimates at 100 degrees C were k = 0.501 W/m degrees C, Cp= 3600 J/kg and the kinetic parameters were k(110 degrees C)= 0.0607/min and E(a)= 65.32 kJ/mol. The 95% confidence intervals for the parameters and the confidence bands and prediction bands for anthocyanin retention were plotted. These methods are useful for thermal processing design for nutraceutical products.

  14. Statistical variability and confidence intervals for planar dose QA pass rates

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States) and Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Department of Biostatistics, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Molecular and Cellular Biophysics and Biochemistry, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States) and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

    2011-11-15

Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm{sup 2} uniform grid, a 2 detector/cm{sup 2} uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization…

  15. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
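The Fisher r → Z transform mentioned in the abstract gives a standard confidence interval for a correlation coefficient; a minimal sketch (the r and n values are hypothetical, and the article's maximum-theoretical-r simulations are not reproduced):

```python
import math

def fisher_ci(r, n, z=1.959964):
    """95% CI for a Pearson correlation via the Fisher r -> Z
    transform: Z = atanh(r) is approximately normal with
    standard error 1/sqrt(n - 3); back-transform with tanh."""
    zr = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

lo, hi = fisher_ci(0.85, 40)
```

The asymmetry of the resulting interval (wider toward 0 than toward 1) reflects the bounded range of r, one reason a plain normal interval on r itself would be misleading.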

  16. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    Science.gov (United States)

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
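The Monte-Carlo idea of the paper, refitting many "virtual" data sets built from the fitted values plus resampled residuals, can be sketched without Excel. For brevity the sketch fits a straight line rather than the paper's nonlinear growth models, and all data are invented:

```python
import random

def fit_line(xs, ys):
    """Closed-form least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def monte_carlo_slope_ci(xs, ys, n_sim=1000, seed=3):
    """Monte Carlo parameter CI in the spirit of the abstract's
    SOLVER procedure: fit once, then refit virtual data sets formed
    from fitted values plus residuals resampled with replacement."""
    rng = random.Random(seed)
    b, a = fit_line(xs, ys)
    fitted = [a + b * x for x in xs]
    resid = [y - f for y, f in zip(ys, fitted)]
    slopes = sorted(fit_line(xs, [f + rng.choice(resid) for f in fitted])[0]
                    for _ in range(n_sim))
    return b, (slopes[25], slopes[974])   # 2.5% and 97.5% quantiles

xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9, 12.2, 13.8]
slope, (lo, hi) = monte_carlo_slope_ci(xs, ys)
```

The spread of the refitted parameters directly yields the confidence interval, with no analytic standard-error formula required, which is what makes the approach workable inside a spreadsheet.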

  17. Planning an Availability Demonstration Test with Consideration of Confidence Level

    Directory of Open Access Journals (Sweden)

    Frank Müller

    2017-08-01

Full Text Available The full service life of a technical product or system is usually not completed after an initial failure. With appropriate measures, the system can be returned to a functional state. Availability is an important parameter for evaluating such repairable systems: Failure and repair behaviors are required to determine this availability. These data are usually given as mean value distributions with a certain confidence level. Consequently, the availability value also needs to be expressed with a confidence level. This paper first highlights the bootstrap Monte Carlo simulation (BMCS) for availability demonstration and inference with confidence intervals based on limited failure and repair data. The BMCS enables point-, steady-state and average availability to be determined with a confidence level based on the pure samples or mean value distributions in combination with the corresponding sample size of failure and repair behavior. Furthermore, the method enables individual sample sizes to be used. A sample calculation of a system with Weibull-distributed failure behavior and a sample of repair times is presented. Based on the BMCS, an extended, new procedure is introduced: the “inverse bootstrap Monte Carlo simulation” (IBMCS) to be used for availability demonstration tests with consideration of confidence levels. The IBMCS provides a test plan comprising the required number of failures and repair actions that must be observed to demonstrate a certain availability value. The concept can be applied to each type of availability and can also be applied to the pure samples or distribution functions of failure and repair behavior. It does not require special types of distribution. In other words, for example, a Weibull, a lognormal or an exponential distribution can all be considered as distribution functions of failure and repair behavior. After presenting the IBMCS, a sample calculation will be carried out and the potential of the BMCS and the IBMCS…

  18. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
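A minimal sketch of why the negative binomial matters here: adding an assumed extra squared coefficient of variation from counter-to-counter variability (the `cv2_extra` value below is illustrative, not taken from the paper) widens the count quantiles relative to a pure Poisson model.

```python
from scipy.stats import nbinom, poisson

mean = 20.0      # expected fiber count (hypothetical)
cv2_extra = 0.1  # assumed extra squared CV from human counting variation

# Negative binomial with mean m and variance m + cv2_extra * m**2,
# in scipy's (n, p) parameterization: mean = n*(1-p)/p, var = n*(1-p)/p**2.
var = mean + cv2_extra * mean ** 2
p = mean / var
n = mean * p / (1 - p)

nb_lo, nb_hi = nbinom.ppf(0.025, n, p), nbinom.ppf(0.975, n, p)
po_lo, po_hi = poisson.ppf(0.025, mean), poisson.ppf(0.975, mean)
```

As `cv2_extra` shrinks to zero the negative binomial quantiles collapse onto the Poisson ones, matching the limiting case the paper uses for comparison.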

  19. Secure and Usable Bio-Passwords based on Confidence Interval

    Directory of Open Access Journals (Sweden)

    Aeyoung Kim

    2017-02-01

    Full Text Available The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor (something you know) together with a second factor (something you have). However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and log in. In this paper a bio-password-based scheme is proposed as a unique authentication method, which uses biometrics and confidence interval sets to enhance the security of the log-in process and to make it easier as well. The method offers a user-friendly solution for creating and registering strong passwords without the user having to memorize them. We also present the results of our experiments, which demonstrate the efficiency of this method and how it can be used to protect against a variety of malicious attacks.

  20. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We extend this work to two-stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  1. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    Science.gov (United States)

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution-of-the-product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended; in contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
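The percentile-bootstrap CI for an indirect effect can be sketched in a few lines. Everything here is a simplification for illustration: the data are simulated, the slopes are single-predictor OLS, and the M -> Y regression omits X (valid in this generative model because Y depends only on M; real mediation analysis regresses Y on both M and X).

```python
import random

random.seed(42)

# Simulated mediation data: X -> M -> Y with true a = 0.5, b = 0.6.
n = 200
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]
Y = [0.6 * m + random.gauss(0, 1) for m in M]

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def indirect(x, m, y):
    # a*b; Y depends only on M in this simulation, so the simple
    # M -> Y slope stands in for the usual regression of Y on M and X.
    return slope(x, m) * slope(m, y)

n_boot = 2000
boot = []
for _ in range(n_boot):
    s = [random.randrange(n) for _ in range(n)]
    boot.append(indirect([X[i] for i in s], [M[i] for i in s], [Y[i] for i in s]))
boot.sort()
ci = (boot[int(0.025 * n_boot)], boot[int(0.975 * n_boot) - 1])
```

An interval excluding zero is read as evidence of an indirect effect, which is the logic the simulation study evaluates for coverage and power.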

  2. Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    4/2010, č. 3 (2010), s. 236-250 ISSN 1802-4696 R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965 Grant - others:GA UK(CZ) 118310 Institutional research plan: CEZ:AV0Z10750506 Keywords : rescaled range analysis * detrended fluctuation analysis * Hurst exponent * long-range dependence Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf

  3. nigerian students' self-confidence in responding to statements

    African Journals Online (AJOL)

    Temechegn

    Altogether the test is made up of 40 items covering students' ability to recall definition ... confidence interval within which student have confidence in their choice of the .... is mentioned these equilibrium systems come to memory of the learner.

  4. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

    Full Text Available Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop diagnostic logic for discriminating between normal and abnormal symptoms using Fisher's LDF and a quadratic discriminant function (QDF). Our four years of research produced results inferior to the decision-tree logic developed by the medical doctor. After this experience, we discriminated many datasets and identified four problems with discriminant analysis. A revised optimal LDF by integer programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) for the error rate and the discriminant coefficients. We propose a k-fold cross-validation method, which offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.
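The repeated cross-validation idea can be sketched as follows. None of this is the authors' Revised IP-OLDF: the data are simulated, and a midpoint-threshold classifier serves as a one-dimensional stand-in for a linear discriminant. The error rate from each fold of each replication feeds a percentile interval.

```python
import random

random.seed(0)

# Hypothetical 1D two-class data: class 0 ~ N(0,1), class 1 ~ N(2,1).
data = [(random.gauss(0, 1), 0) for _ in range(100)] + \
       [(random.gauss(2, 1), 1) for _ in range(100)]

def error_rate(train, test):
    # Midpoint-threshold classifier: predict class 1 when x exceeds the
    # midpoint of the two training-class means.
    m0 = [x for x, y in train if y == 0]
    m1 = [x for x, y in train if y == 1]
    thr = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
    errs = sum((x > thr) != bool(y) for x, y in test)
    return errs / len(test)

def repeated_cv(data, k=10, reps=100):
    rates = []
    for _ in range(reps):
        d = data[:]
        random.shuffle(d)
        folds = [d[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [p for j, f in enumerate(folds) if j != i for p in f]
            rates.append(error_rate(train, test))
    return rates

rates = sorted(repeated_cv(data))
ci = (rates[int(0.025 * len(rates))], rates[int(0.975 * len(rates)) - 1])
```

The spread of `rates` is what turns a single error-rate estimate into an interval, which is the gap the paper identifies in standard discriminant analysis.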

  5. Coverage probability of bootstrap confidence intervals in heavy-tailed frequency models, with application to precipitation data

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan

    2010-01-01

    Roč. 101, 3-4 (2010), s. 345-361 ISSN 0177-798X R&D Projects: GA AV ČR KJB300420801 Institutional research plan: CEZ:AV0Z30420517 Keywords : bootstrap * extreme value analysis * confidence intervals * heavy-tailed distributions * precipitation amounts Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.684, year: 2010

  6. A note on Nonparametric Confidence Interval for a Shift Parameter ...

    African Journals Online (AJOL)

    The method is illustrated using the Cauchy distribution as a location model. The kernel-based method is found to have a shorter interval for the shift parameter between two Cauchy distributions than the one based on the Mann-Whitney test statistic. Keywords: Best Asymptotic Normal; Cauchy distribution; Kernel estimates; ...

  7. A NEW METHOD FOR CONSTRUCTING CONFIDENCE INTERVAL FOR CPM BASED ON FUZZY DATA

    Directory of Open Access Journals (Sweden)

    Bahram Sadeghpour Gildeh

    2011-06-01

    Full Text Available A measurement control system ensures that measuring equipment and measurement processes are fit for their intended use, which is important in achieving product quality objectives. In most real-life applications, the observations are fuzzy. In some cases the specification limits (SLs) are not precise numbers and are expressed in fuzzy terms, so that the classical capability indices cannot be applied. In this paper we obtain a 100(1 − α)% fuzzy confidence interval for the Cpm fuzzy process capability index, where instead of precise quality we have two membership functions for the specification limits.

  8. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    Science.gov (United States)

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  9. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    Science.gov (United States)

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. An SPSS Macro to Compute Confidence Intervals for Pearson’s Correlation

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-04-01

    Full Text Available In many disciplines, including psychology, medical research, epidemiology and public health, authors are required, or at least encouraged, to report confidence intervals (CIs) along with effect size estimates. Many students and researchers in these areas use IBM-SPSS for statistical analysis. Unfortunately, the CORRELATIONS procedure in SPSS does not provide CIs in the output. Various work-around solutions have been suggested for obtaining CIs for rho with SPSS, but most of them have been sub-optimal. Since release 18, it has been possible to compute bootstrap CIs, but only if users have the optional bootstrap module. The !rhoCI macro described in this article is accessible to all SPSS users with release 14 or later. It directs output from the CORRELATIONS procedure to another dataset, restructures that dataset to have one row per correlation, computes a CI for each correlation, and displays the results in a single table. Because the macro uses the CORRELATIONS procedure, it allows users to specify a list of two or more variables to include in the correlation matrix, to choose a confidence level, and to select either listwise or pairwise deletion. Thus, it offers substantial improvements over previous solutions to the problem of how to compute CIs for rho with SPSS.
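For readers outside SPSS, the classical CI that such macros typically compute is easy to reproduce. The abstract does not name the formula, so the use of Fisher's z transform here is an assumption (it is the standard approach for a CI on a correlation); the data are made up.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def r_confidence_interval(r, n, level=0.95):
    """CI for rho via Fisher's z transform: z = atanh(r), SE = 1/sqrt(n-3)."""
    z = 0.5 * math.log((1 + r) / (1 - r))
    # Normal quantiles hard-coded to stay stdlib-only.
    zcrit = {0.95: 1.959964, 0.99: 2.575829}[level]
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8]
r = pearson_r(x, y)
lo, hi = r_confidence_interval(r, len(x))
```

Back-transforming through `tanh` keeps the interval inside (−1, 1) and makes it asymmetric around r, as it should be near the boundaries.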

  11. The best confidence interval of the failure rate and unavailability per demand when few experimental data are available

    International Nuclear Information System (INIS)

    Goodman, J.

    1985-01-01

    Using the few available data, likelihood functions for the failure rate and unavailability per demand are constructed. These likelihood functions are used to obtain likelihood density functions for the failure rate and unavailability per demand, and the best (or shortest) confidence intervals for these quantities are provided. The failure rate and unavailability per demand are important characteristics needed for reliability and availability analysis. The methods for estimating these characteristics when plenty of observed data are available are well known. However, on many occasions, when we deal with rare failure modes or with new equipment or components for which sufficient experience has not accumulated, we have scarce data in which few or zero failures have occurred. In these cases, a technique that reflects exactly our state of knowledge is required. This technique is based on the likelihood density function or on Bayesian methods, depending on the available prior distribution. To extract the maximum amount of information from the data, the best confidence interval is determined.
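As a baseline for comparison with the shortest intervals derived in the paper, the standard equal-tailed chi-square interval for a constant failure rate (k failures observed in total exposure time T, Poisson model) can be sketched; the counts and exposure below are hypothetical.

```python
from scipy.stats import chi2

def failure_rate_ci(k, T, level=0.95):
    """Equal-tailed two-sided CI for a Poisson failure rate, k failures in
    time T. Note: this is the classical chi-square interval, not the
    shortest-interval construction derived in the paper."""
    alpha = 1 - level
    lo = chi2.ppf(alpha / 2, 2 * k) / (2 * T) if k > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / (2 * T)
    return lo, hi

# Two failures in 10,000 component-hours (hypothetical scarce data).
lo, hi = failure_rate_ci(2, 10_000)
```

With so few failures the interval spans more than an order of magnitude, which is exactly the scarce-data regime the paper targets.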

  12. Confidence Testing for Knowledge-Based Global Communities

    Science.gov (United States)

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Shymansky, James A.

    2009-01-01

    This proposal advocates the position that the use of confidence wagering (CW) during testing can predict the accuracy of a student's test answer selection during between-subject assessments. Data revealed female students were more favorable to taking risks when making CW and less inclined toward risk aversion than their male counterparts. Student…

  13. 46 CFR 57.06-2 - Production test plate interval of testing.

    Science.gov (United States)

    2010-10-01

    ... WELDING AND BRAZING Production Tests § 57.06-2 Production test plate interval of testing. (a) At least one... 46 Shipping 2 2010-10-01 2010-10-01 false Production test plate interval of testing. 57.06-2... follows: (1) When the extent of welding on a single vessel exceeds 50 lineal feet of either or both...

  14. Assessing a disaggregated energy input: using confidence intervals around translog elasticity estimates

    International Nuclear Information System (INIS)

    Hisnanick, J.J.; Kyer, B.L.

    1995-01-01

    The role of energy in the production of manufacturing output has been debated extensively in the literature, particularly its relationship with capital and labor. In an attempt to provide some clarification in this debate, a two-step methodology was used. First, under the assumption of a five-factor production function specification, we distinguished between electric and non-electric energy and assessed each component's relationship with capital and labor. Second, we calculated both the Allen and price elasticities and constructed 95% confidence intervals around these values. Our approach led to the following conclusions: that the disaggregation of the energy input into electric and non-electric energy is justified; that capital and electric energy and capital and non-electric energy are substitutes, while labor and electric energy and labor and non-electric energy are complements in production; and that capital and energy are substitutes, while labor and energy are complements. (author)

  15. Existence test for asynchronous interval iterations

    DEFF Research Database (Denmark)

    Madsen, Kaj; Caprani, O.; Stauning, Ole

    1997-01-01

    In the search for regions that contain fixed points of a real function of several variables, tests based on interval calculations can be used to establish existence or non-existence of fixed points in regions that are examined in the course of the search. The search can, e.g., be performed as a synchronous (sequential) interval iteration: in each iteration step all components of the iterate are calculated based on the previous iterate. In this case it is straightforward to base simple interval existence and non-existence tests on the calculations done in each step of the iteration. The search can also be performed as an asynchronous iteration, with tests based on the componentwise calculations done in the course of the iteration. These componentwise tests are useful for parallel implementation of the search, since the tests can then be performed locally on each processor, and only when a test is successful does a processor communicate this result to other processors.

  16. Development and Evaluation of a Confidence-Weighting Computerized Adaptive Testing

    Science.gov (United States)

    Yen, Yung-Chin; Ho, Rong-Guey; Chen, Li-Ju; Chou, Kun-Yi; Chen, Yan-Lin

    2010-01-01

    The purpose of this study was to examine whether the efficiency, precision, and validity of computerized adaptive testing (CAT) could be improved by assessing confidence differences in knowledge that examinees possessed. We proposed a novel polytomous CAT model called the confidence-weighting computerized adaptive testing (CWCAT), which combined a…

  17. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
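The Mann-Whitney connection can be made concrete: the AUC equals the normalized Mann-Whitney U statistic, and the Hanley-McNeil variance approximation yields a simple Wald-type CI. This is shown only as a baseline; it is not necessarily the Mann-Whitney-based method the paper found superior, and the marker values are hypothetical.

```python
import math

def auc_mann_whitney(neg, pos):
    """AUC as the normalized Mann-Whitney U statistic (ties count 0.5)."""
    wins = sum((x < y) + 0.5 * (x == y) for x in neg for y in pos)
    return wins / (len(neg) * len(pos))

def auc_ci_hanley_mcneil(neg, pos):
    """95% Wald CI using the Hanley-McNeil (1982) variance approximation."""
    a = auc_mann_whitney(neg, pos)
    m, n = len(neg), len(pos)          # m negatives, n positives
    q1 = a / (2 - a)
    q2 = 2 * a * a / (1 + a)
    var = (a * (1 - a) + (n - 1) * (q1 - a * a) + (m - 1) * (q2 - a * a)) / (m * n)
    half = 1.959964 * math.sqrt(var)
    return max(0.0, a - half), min(1.0, a + half)

neg = [0.2, 0.5, 0.4, 0.7, 0.3, 0.6]   # marker values, non-diseased (hypothetical)
pos = [0.6, 0.9, 0.8, 0.7, 1.0, 0.5]   # marker values, diseased (hypothetical)
a = auc_mann_whitney(neg, pos)
lo, hi = auc_ci_hanley_mcneil(neg, pos)
```

With only six observations per group the interval is very wide and gets clipped at 1, illustrating why the small-sample comparison in the paper is needed.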

  18. Interpretando correctamente en salud pública estimaciones puntuales, intervalos de confianza y contrastes de hipótesis Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

    Directory of Open Access Journals (Sweden)

    Manuel G Scotto

    2003-12-01

    Full Text Available This essay seeks to clarify some concepts habitually used in public health research that are frequently interpreted incorrectly, among them point estimation, confidence intervals, and hypothesis tests. By establishing a parallel between these three concepts, we can observe their most important differences in interpretation, from both the classical and the Bayesian perspectives.

  19. Learning about confidence intervals with software R

    Directory of Open Access Journals (Sweden)

    Gariela Gonçalves

    2013-08-01

    Full Text Available This work studied the feasibility of implementing a software-supported teaching method in a Computational Mathematics course, involving students and teachers through the use of the statistical software R in practical work, as a reinforcement of traditional teaching. Statistical inference, namely the determination of confidence intervals, was the content selected for this experience. The aim was to show, first of all, that the proposed methodology can promote the acquisition of basic skills in statistical inference and foster positive relationships between teachers and students. The paper also presents a comparative study, on several indicators, of the methodologies used and their quantitative and qualitative results over two consecutive school years. The data used in the study were obtained from students' answers to exam questions in 2010/2011 and 2011/2012, from the work of a student group in 2011/2012, and from responses to an optional, anonymous questionnaire also applied in 2011/2012. In terms of results, we emphasize the better performance of students on the exam questions in 2011/2012, the year in which the students used the software R, and a very favorable student perspective.

  20. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Science.gov (United States)

    Steve P. Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations. To develop these confidence bounds and test, we first establish that estimators based on Newton steps from n-...
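The paper derives parametric bounds via Newton steps; a generic, assumption-light alternative for a single sample is a percentile-bootstrap CI for the coefficient of variation. The sketch below uses simulated data and is not the authors' method.

```python
import math
import random

random.seed(7)

def cv(sample):
    """Sample coefficient of variation s / xbar."""
    n = len(sample)
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    return s / m

# Hypothetical positive-mean normal data, e.g. material strength measurements.
data = [random.gauss(100, 15) for _ in range(40)]

n_boot = 4000
boot = sorted(cv([random.choice(data) for _ in data]) for _ in range(n_boot))
ci = (boot[int(0.025 * n_boot)], boot[int(0.975 * n_boot) - 1])
point = cv(data)
```

For normal populations the parametric bounds of the paper should be tighter; the bootstrap version is mainly useful as a sanity check or when normality is doubtful.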

  1. User guide to the UNC1NLI1 package and three utility programs for computation of nonlinear confidence and prediction intervals using MODFLOW-2000

    DEFF Research Database (Denmark)

    Christensen, Steen; Cooley, R.L.

    a model (for example when using the Parameter-Estimation Process of MODFLOW-2000) it is advantageous to also use regression-based methods to quantify uncertainty. For this reason the UNC Process computes (1) confidence intervals for parameters of the Parameter-Estimation Process and (2) confidence...

  2. AlphaCI: un programa de cálculo de intervalos de confianza para el coeficiente alfa de Cronbach AlphaCI: a computer program for computing confidence intervals around Cronbach's alfa coefficient

    Directory of Open Access Journals (Sweden)

    Rubén Ledesma

    2004-06-01

    Full Text Available Cronbach's alpha coefficient is the most popular way of estimating the reliability of measurement scales based on Classical Test Theory. When estimating it, researchers usually omit confidence intervals for the coefficient, even though reporting them is not only recommended by experts but also explicitly required by the editorial guidelines of some specialized journals. This situation may be attributed to the fact that methods for estimating such confidence intervals are not well known and are not available to users of the most popular statistical packages. This paper therefore describes a computer program, integrated into the ViSta statistical system, that computes confidence intervals using both the classical approach and the bootstrap technique. It is hoped that this work will promote the inclusion of confidence intervals for reliability measures by facilitating access to the necessary computational tools. The program is free and can be obtained by sending a request email to the author.

  3. Bootstrap confidence intervals for three-way methods

    NARCIS (Netherlands)

    Kiers, Henk A.L.

    Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special

  4. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

    Science.gov (United States)

    Brown, Angus M

    2010-04-01

    The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
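The spreadsheet's workflow (omnibus ANOVA first, then simultaneous pairwise CIs) can be mirrored in Python. The sketch below uses Bonferroni-adjusted t intervals as a simpler stand-in for the Tukey-style comparisons such templates implement, with made-up data.

```python
from itertools import combinations
from scipy import stats

# Hypothetical measurements for three treatment groups.
groups = {
    "A": [23.1, 24.5, 22.8, 25.0, 23.9],
    "B": [26.2, 27.1, 25.8, 26.9, 27.5],
    "C": [23.5, 24.0, 22.9, 24.8, 23.2],
}

# Step 1: omnibus one-way ANOVA.
f_stat, p_value = stats.f_oneway(*groups.values())

# Step 2: pooled within-group variance (the ANOVA mean squared error).
k = len(groups)
n_total = sum(len(g) for g in groups.values())
mse = sum(
    sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups.values()
) / (n_total - k)

# Step 3: Bonferroni-adjusted 95% CIs for all pairwise mean differences.
n_pairs = k * (k - 1) // 2
tcrit = stats.t.ppf(1 - 0.05 / (2 * n_pairs), n_total - k)
cis = {}
for (name_a, ga), (name_b, gb) in combinations(groups.items(), 2):
    diff = sum(ga) / len(ga) - sum(gb) / len(gb)
    half = tcrit * (mse * (1 / len(ga) + 1 / len(gb))) ** 0.5
    cis[(name_a, name_b)] = (diff - half, diff + half)
```

An interval excluding zero flags that pair of means as different; here group B separates from A and C, while A and C do not separate from each other.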

  5. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

    International Nuclear Information System (INIS)

    Veglia, A.

    1981-08-01

    In cases where sets of data are obviously not normally distributed, the application of a nonparametric method for the estimation of a confidence interval for the mean seems more suitable than other methods, because such a method requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n ≥ 10.
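The binomial-based step can be sketched directly: order statistics give a distribution-free interval for the median, which coincides with the mean for symmetric populations (the implicit assumption when this interval is read as a CI for the mean). The measurement values below are hypothetical.

```python
from math import comb

def sign_test_ci(sample, level=0.95):
    """Distribution-free CI for the median from order statistics.
    The interval [x_(j), x_(n+1-j)] (1-indexed) has exact coverage
    1 - 2*P(Bin(n, 1/2) <= j-1), since the number of observations below
    the median is Binomial(n, 1/2)."""
    xs = sorted(sample)
    n = len(xs)
    alpha = 1 - level
    # c = largest count with P(Bin(n, 1/2) <= c) <= alpha/2.
    cum, c = 0.0, -1
    for i in range(n + 1):
        cum += comb(n, i) / 2 ** n
        if cum > alpha / 2:
            break
        c = i
    if c < 0:
        raise ValueError("sample too small for the requested confidence level")
    coverage = 1 - 2 * sum(comb(n, i) for i in range(c + 1)) / 2 ** n
    return xs[c], xs[n - 1 - c], coverage

data = [4.1, 3.9, 4.4, 4.0, 5.2, 3.8, 4.3, 4.6, 4.2, 4.0, 4.5, 3.7]
lo, hi, coverage = sign_test_ci(data)
```

Because coverage is built from discrete binomial tails, the achieved level (here about 96.1%) is at least, but not exactly, the nominal 95%; this is also why the method needs a minimum sample size.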

  6. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    Science.gov (United States)

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving precision of effect size estimation.

  7. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  8. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

    Science.gov (United States)

    Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

    2015-09-01

    This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Weighted profile likelihood-based confidence interval for the difference between two proportions with paired binomial data.

    Science.gov (United States)

    Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C

    2014-07-30

    Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth abbreviated as CIs) for the difference between two proportions in the paired binomial setting using the method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use a weighted likelihood that essentially makes adjustments to the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performances of the proposed CIs with those of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and the Jeffreys interval (cf. Tang et al.) are superior in terms of achieving the nominal level, and in terms of expected lengths, they are competitive. Finally, we illustrate the use of the proposed CIs with real-life examples. Copyright © 2014 John Wiley & Sons, Ltd.
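    As a rough illustration of the cell-adjustment idea attributed to Agresti and Min above, the following sketch computes an adjusted Wald interval for the paired difference. The 0.5-per-cell adjustment and the Wald form are assumptions of this sketch, not a reproduction of the authors' weighted profile likelihood method:

```python
import math
from statistics import NormalDist

def paired_diff_ci(n11, n10, n01, n00, conf=0.95, adj=0.5):
    """Adjusted Wald CI for p1 - p2 with paired binomial data.
    Cell counts: n11 = success on both occasions, n10 = success on the
    first only, n01 = success on the second only, n00 = failure on both.
    adj is added to every cell (the Agresti-Min style adjustment)."""
    a, b, c, d = (x + adj for x in (n11, n10, n01, n00))
    n = a + b + c + d
    diff = (b - c) / n
    se = math.sqrt(b + c - (b - c) ** 2 / n) / n
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return diff - z * se, diff + z * se

# e.g. 10 paired subjects responded on both occasions, 5 on each alone
print(paired_diff_ci(10, 5, 5, 10))
```

    With b = c the interval is symmetric about zero, as expected for equal discordant counts.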

  10. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to ''characterize'' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composite samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes, the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
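    The relative half-width behaviour described in the conclusions can be sketched as follows (the mean and standard deviation are hypothetical, and a normal quantile is used in place of the report's exact procedure):

```python
import math
from statistics import NormalDist

def relative_half_width(mean, sd, n, conf=0.95):
    """RHW of a CI on the mean: (z * sd / sqrt(n)) / mean."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return z * sd / (math.sqrt(n) * mean)

# hypothetical analyte with mean 100 and standard deviation 60
for n in (2, 3, 10, 140):
    print(n, round(relative_half_width(100, 60, n), 2))
```

    Two or three samples give an RHW of roughly 70 to 80%, while reaching 10% takes on the order of 140 samples, mirroring the conclusions above.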

  11. Computing confidence and prediction intervals of industrial equipment degradation by bootstrapped support vector regression

    International Nuclear Information System (INIS)

    Lins, Isis Didier; Droguett, Enrique López; Moura, Márcio das Chagas; Zio, Enrico; Jacinto, Carlos Magno

    2015-01-01

    Data-driven learning methods for predicting the evolution of the degradation processes affecting equipment are becoming increasingly attractive in reliability and prognostics applications. Among these, we consider here Support Vector Regression (SVR), which has provided promising results in various applications. Nevertheless, the predictions provided by SVR are point estimates, whereas in order to take better informed decisions, an uncertainty assessment should also be carried out. For this, we apply the bootstrap to SVR so as to obtain confidence and prediction intervals, without having to make any assumption about probability distributions and with good performance even when only a small data set is available. The bootstrapped SVR is first verified on Monte Carlo experiments and then is applied to a real case study concerning the prediction of degradation of a component from the offshore oil industry. The results obtained indicate that the bootstrapped SVR is a promising tool for providing reliable point and interval estimates, which can inform maintenance-related decisions on degrading components. - Highlights: • Bootstrap (pairs/residuals) and SVR are used as an uncertainty analysis framework. • Numerical experiments are performed to assess accuracy and coverage properties. • More bootstrap replications do not significantly improve performance. • Degradation of equipment of offshore oil wells is estimated by bootstrapped SVR. • Estimates about the scale growth rate can support maintenance-related decisions
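    A minimal sketch of the pairs-bootstrap idea follows. To stay dependency-free, a least-squares line stands in for the SVR regressor, and the degradation history is synthetic; the percentile interval obtained is a confidence interval on the model prediction, not the paper's full prediction interval:

```python
import random
import statistics as st

def fit_line(xs, ys):
    """Least-squares line; stands in for the SVR regressor of the paper."""
    mx, my = st.fmean(xs), st.fmean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

def bootstrap_interval(xs, ys, x_new, n_boot=500, conf=0.95, seed=1):
    """Pairs-bootstrap percentile interval for the prediction at x_new."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample (x, y) pairs
        model = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(model(x_new))
    preds.sort()
    return (preds[int((1 - conf) / 2 * n_boot)],
            preds[int((1 + conf) / 2 * n_boot) - 1])

# synthetic degradation history: roughly linear growth plus noise
noise = random.Random(0)
t = list(range(20))
y = [0.5 * ti + noise.gauss(0, 1) for ti in t]
print(bootstrap_interval(t, y, x_new=25))
```

    Swapping `fit_line` for any other regressor (e.g. an SVR implementation) leaves the bootstrap machinery unchanged, which is the point of the pairs variant.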

  12. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    Science.gov (United States)

    Harari, Gil

    2014-01-01

    Statistical significance, also known as the p-value, and the CI (confidence interval) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article is intended to describe the methodologies, compare the methods, assess their suitability for the different needs of study results analysis and explain situations in which each method should be used.

  13. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations.
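    The paper derives normal-theory bounds; as an illustrative stand-in, a nonparametric bootstrap percentile interval for the ratio of two coefficients of variation can be sketched as follows (all data below are synthetic):

```python
import random
import statistics as st

def cv(xs):
    """Sample coefficient of variation, s / x-bar."""
    return st.stdev(xs) / st.fmean(xs)

def cv_ratio_ci(xs, ys, n_boot=2000, conf=0.95, seed=7):
    """Bootstrap percentile CI for cv(xs) / cv(ys)."""
    rng = random.Random(seed)
    ratios = sorted(
        cv([rng.choice(xs) for _ in xs]) / cv([rng.choice(ys) for _ in ys])
        for _ in range(n_boot)
    )
    return (ratios[int((1 - conf) / 2 * n_boot)],
            ratios[int((1 + conf) / 2 * n_boot) - 1])

# synthetic samples: same spread, different means, so the true CV ratio is 1.6
data = random.Random(0)
xs = [data.gauss(50, 10) for _ in range(30)]
ys = [data.gauss(80, 10) for _ in range(30)]
lo, hi = cv_ratio_ci(xs, ys)
print(round(lo, 2), round(hi, 2))
```

    An interval excluding 1 would suggest unequal coefficients of variation, the hypothesis the paper's test addresses with exact normal theory.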

  14. Evaluation of test intervals strategies with a risk monitor

    International Nuclear Information System (INIS)

    Soerman, J.

    2005-01-01

    The Swedish nuclear power utility Oskarshamn Power Group (OKG) is investigating how the use of a risk monitor can facilitate and improve risk-informed decision-making at their nuclear power plants. The intent is to evaluate if risk-informed decision-making can be accepted. A pilot project was initiated and carried out in 2004. The project included investigating if a risk monitor can be used for optimising test intervals for diesel- and gas turbine generators with regard to risk level. The Oskarshamn 2 (O2) PSA Level 1 model was converted into a risk monitor using RiskSpectrum RiskWatcher (RSRW) software. The converted PSA model included the complete PSA model for the power operation mode. RSRW then performs a complete requantification for every analysis. Time-dependent reliability data are taken into account, i.e., a shorter test interval increases a component's availability (e.g., the possibility to start on demand). The converted O2 model was then used to investigate whether it would be possible to balance longer test intervals for diesel generators, gas turbine generators and the high-pressure injection system with shorter test intervals for the low-pressure injection system, while maintaining a low risk level at the plant. The results show that a new mixture of test intervals can be implemented with only marginal changes in the risk calculated with the risk monitor model. The results indicate that the total number of test activities for the systems included in the pilot study could be reduced by 20% with a maintained level of risk. A risk monitor taking into account the impact from test intervals in availability calculations for components is well suited for evaluation of test interval strategies. It also enables the analyst to evaluate the risk level over a period of time including the impact the actual status of the plant may have on the risk level. (author)

  15. Statistical intervals a guide for practitioners

    CERN Document Server

    Hahn, Gerald J

    2011-01-01

    Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction, with numerous examples and simple math, on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on the probability of meeting a specified threshold value, and prediction intervals to include an observation in a future sample. Also has an appendix containing computer subroutines for nonparametric statistical intervals.

  16. Confidence intervals for effect sizes: compliance and clinical significance in the Journal of Consulting and Clinical Psychology.

    Science.gov (United States)

    Odgaard, Eric C; Fowler, Robert L

    2010-06-01

    In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R², and multivariate modeling). By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests ranging from η² = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Keselman, 2003). Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.

  17. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest-ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement.

  18. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

    Directory of Open Access Journals (Sweden)

    Eleanor S Devenish Nelson

    BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
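    The core idea, resampling a vital rate from its inferred sampling distribution and propagating it to the growth rate, can be sketched for a two-age-class Leslie matrix (the vital rates and sample sizes below are hypothetical, not the red fox data):

```python
import math
import random

def growth_rate(f1, f2, s):
    """Dominant eigenvalue (lambda) of the Leslie matrix [[f1, f2], [s, 0]]."""
    return (f1 + math.sqrt(f1 * f1 + 4.0 * f2 * s)) / 2.0

def lambda_ci(f1, f2, s, n_s, n_boot=2000, conf=0.95, seed=3):
    """Percentile CI for lambda when juvenile survival s was estimated
    from n_s individuals: resample s from its binomial sampling model."""
    rng = random.Random(seed)
    lams = []
    for _ in range(n_boot):
        s_star = sum(rng.random() < s for _ in range(n_s)) / n_s
        lams.append(growth_rate(f1, f2, s_star))
    lams.sort()
    return (lams[int((1 - conf) / 2 * n_boot)],
            lams[int((1 + conf) / 2 * n_boot) - 1])

# halving the CI width requires roughly quadrupling the sampling effort
lo25, hi25 = lambda_ci(0.5, 1.5, 0.6, n_s=25)
lo100, hi100 = lambda_ci(0.5, 1.5, 0.6, n_s=100)
print(round(hi25 - lo25, 3), round(hi100 - lo100, 3))
```

    The width with n_s = 100 comes out close to half the width with n_s = 25, mirroring the quadrupling result in the abstract.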

  19. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

    Science.gov (United States)

    Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

    2014-03-01

    Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Including test errors in evaluating surveillance test intervals

    International Nuclear Information System (INIS)

    Kim, I.S.; Samanta, P.K.; Martorell, S.; Vesely, W.E.

    1991-01-01

    Technical Specifications require surveillance testing to assure that the standby systems important to safety will start and perform their intended functions in the event of plant abnormality. However, as evidenced by operating experience, surveillance tests may adversely impact safety because of their undesirable side effects, such as the initiation of plant transients during testing or the wearing-out of safety systems due to testing. This paper first defines the concerns, i.e., the potential adverse effects of surveillance testing, from a risk perspective. Then, we present a methodology to evaluate the risk impact of those adverse effects, focusing on two important kinds of adverse impacts of surveillance testing: (1) risk impact of test-caused trips and (2) risk impact of test-caused equipment wear. The quantitative risk methodology is demonstrated with several surveillance tests conducted at boiling water reactors, such as the tests of the main steam isolation valves, the turbine overspeed protection system, and the emergency diesel generators. We present the results of the risk-effectiveness evaluation of surveillance test intervals, which compares the adverse risk impact with the beneficial risk impact of testing from potential failure detection, along with insights from sensitivity studies

  1. Bayesian-statistical decision threshold, detection limit, and confidence interval in nuclear radiation measurement

    International Nuclear Information System (INIS)

    Weise, K.

    1998-01-01

    When a contribution of a particular nuclear radiation is to be detected, for instance a spectral line of interest for some purpose of radiation protection, and quantities and their uncertainties that cannot be determined by repeated measurements or by counting nuclear radiation events, such as influence quantities, must be taken into account, then conventional statistics of event frequencies is not sufficient for defining the decision threshold, the detection limit, and the limits of a confidence interval. These characteristic limits are therefore redefined on the basis of Bayesian statistics for a wider applicability and in such a way that the usual practice remains as far as possible unaffected. The principle of maximum entropy is applied to establish probability distributions from available information. Quantiles of these distributions are used for defining the characteristic limits. But such a distribution must not be interpreted as a distribution of event frequencies such as the Poisson distribution. It rather expresses the actual state of incomplete knowledge of a physical quantity. The different definitions and interpretations and their quantitative consequences are presented and discussed with two examples. The new approach provides a theoretical basis for the DIN 25482-10 standard presently in preparation for general applications of the characteristic limits.

  2. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.

    Science.gov (United States)

    Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E

    2015-10-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.

  3. Optimal test intervals for shutdown systems for the Cernavoda nuclear power station

    International Nuclear Information System (INIS)

    Negut, Gh.; Laslau, F.

    1993-01-01

    The Cernavoda nuclear power station required a complete PSA study. As part of this study, an important goal for enhancing the effectiveness of plant operation is to establish optimal test intervals for the important engineered safety systems. The paper briefly presents the current methods for optimizing test intervals. Vesely's method was used to establish optimal test intervals, and the FRANTIC code was used to survey the influence of the test intervals on system availability. The applications were done on Shutdown System No. 1, a shutdown system provided with solid rods, and on Shutdown System No. 2, provided with poison injection. The shutdown systems receive nine totally independent scram signals that dictate the test interval. Fault trees for both safety systems were developed. For the fault tree solutions, an original code developed in our Institute was used. The results, intended to be implemented in the technical specifications for testing and operation of the Cernavoda NPS, are presented

  4. Surveillance test interval optimization

    International Nuclear Information System (INIS)

    Cepin, M.; Mavko, B.

    1995-01-01

    Technical specifications have been developed on the bases of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. This approach consists of three main levels. The first level is the component level, which serves as a rough estimation of the optimal STI and can be calculated analytically by differentiating the equation for mean unavailability. The second and third levels give more representative results. They take into account the results of probabilistic risk assessment (PRA) calculated by a personal computer (PC) based code and are based on system unavailability at the system level and on core damage frequency at the plant level
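    The component-level analytic estimate mentioned above can be sketched with a standard first-order unavailability model (the model form and the numbers are assumptions for illustration, not the authors' plant data):

```python
import math

def mean_unavailability(T, lam, tau, rho=0.0):
    """First-order mean unavailability of a periodically tested standby
    component: per-demand term rho, standby failures lam*T/2, and test
    downtime tau/T."""
    return rho + lam * T / 2 + tau / T

def optimal_sti(lam, tau):
    """Setting dU/dT = lam/2 - tau/T**2 = 0 gives T* = sqrt(2*tau/lam)."""
    return math.sqrt(2 * tau / lam)

# e.g. standby failure rate 1e-5 per hour and 2 hours of test downtime
T_star = optimal_sti(1e-5, 2)
print(round(T_star))  # -> 632 (hours), i.e. roughly a monthly test
```

    Testing more often than T* costs more downtime than it recovers in detected failures; testing less often lets standby failures dominate, which is why the unavailability curve has an interior minimum.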

  5. Optimal Testing Intervals in the Squatting Test to Determine Baroreflex Sensitivity

    OpenAIRE

    Ishitsuka, S.; Kusuyama, N.; Tanaka, M.

    2014-01-01

    The recently introduced “squatting test” (ST) utilizes a simple postural change to perturb the blood pressure and to assess baroreflex sensitivity (BRS). In our study, we estimated the reproducibility of and the optimal testing interval between the STs in healthy volunteers. Thirty-four subjects free of cardiovascular disorders and taking no medication were instructed to perform the repeated ST at 30-sec, 1-min, and 3-min intervals in duplicate in a random sequence, while the systolic blood p...

  6. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS......, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine...... the MCS of the best in terms of in-sample likelihood criteria....

  7. Limited test data: The choice between confidence limits and inverse probability

    International Nuclear Information System (INIS)

    Nichols, P.

    1975-01-01

    For a unit which has been successfully designed to a high standard of reliability, any test programme of reasonable size will result in only a small number of failures. In these circumstances the failure rate estimated from the tests will depend on the statistical treatment applied. When a large number of units is to be manufactured, an unexpectedly high failure rate will certainly result in a large number of failures, so it is necessary to guard against optimistic, unrepresentative test results by using a confidence limit approach. If only a small number of production units is involved, failures may not occur even with a higher-than-expected failure rate, and so one may be able to accept a method which allows for the possibility of either optimistic or pessimistic test results, and in this case an inverse probability approach, based on Bayes' theorem, might be used. The paper first draws attention to an apparently significant difference in the numerical results from the two methods, particularly for the overall probability of several units arranged in redundant logic. It then discusses a possible objection to the inverse method, followed by a demonstration that, for a large population and a very reasonable choice of prior probability, the inverse probability and confidence limit methods give the same numerical result. Finally, it is argued that a confidence limit approach is overpessimistic when a small number of production units is involved, and that both methods give the same answer for a large population. (author)
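    The closing argument, that the two approaches converge for a large test programme, can be illustrated with the zero-failure case, where both bounds have closed forms (the uniform prior is an assumption of this sketch; other priors change the Bayesian bound):

```python
def upper_conf_limit(n, conf=0.95):
    """Classical upper confidence bound on the failure probability after
    n failure-free tests: solve (1 - p)**n = 1 - conf for p."""
    return 1 - (1 - conf) ** (1 / n)

def bayes_upper(n, conf=0.95):
    """Bayesian upper credible bound under a uniform prior: after n
    successes the posterior is Beta(1, n + 1), with a closed-form quantile."""
    return 1 - (1 - conf) ** (1 / (n + 1))

for n in (10, 100, 1000):
    print(n, round(upper_conf_limit(n), 4), round(bayes_upper(n), 4))
```

    With 10 failure-free tests the bounds differ noticeably (≈0.259 vs ≈0.238); by 1,000 tests they agree to five decimal places, echoing the paper's large-population result.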

  8. Perpetrator admissions and earwitness renditions: the effects of retention interval and rehearsal on accuracy of and confidence in memory for criminal accounts

    OpenAIRE

    Boydell, Carroll

    2008-01-01

    While much research has explored how well earwitnesses can identify the voice of a perpetrator, little research has examined how well they can recall details from a perpetrator’s confession. This study examines the accuracy-confidence correlation for memory for details from a perpetrator’s verbal account of a crime, as well as the effects of two variables commonly encountered in a criminal investigation (rehearsal and length of retention interval) on that correlation. Results suggest that con...

  9. Effects of Forgetting Phenomenon on Surveillance Test Interval

    International Nuclear Information System (INIS)

    Lee, Ho-Joong; Jang, Seung-Cheol

    2007-01-01

    Technical Specifications (TS) requirements for nuclear power plants (NPPs) define Surveillance Requirements (SRs) to assure safety during operation. SRs include surveillance test intervals (STIs) and the optimization of the STIs is one of the main issues in risk-informed applications. Surveillance tests are required in NPPs to detect failures in standby equipment to assure their availability in an accident. However, operating experience of the plants suggests that, in addition to the beneficial effects of detecting latent faults, the tests also may have adverse effects on plant operation or equipment; e.g., plant transient caused by the test and wear-out of safety system equipment due to repeated testing. Recent studies have quantitatively evaluated both the beneficial and adverse effects of testing to decide on an acceptable test interval. The purpose of this research is to investigate the effects of forgetting phenomenon on STI. It is a fundamental human characteristic that a person engaged in a repetitive task will improve his performance over time. The learning phenomenon is observed by the decrease in operation time per unit as operators gain experience by performing additional tasks. However, once there is a break of sufficient length, forgetting starts to take place. In surveillance tests, the most common factor to determine the amount of forgetting is the length of STI, where the longer the STI, the greater the amount of forgetting

  10. Identifying the bad guy in a lineup using confidence judgments under deadline pressure.

    Science.gov (United States)

    Brewer, Neil; Weber, Nathan; Wootton, David; Lindsay, D Stephen

    2012-10-01

    Eyewitness-identification tests often culminate in witnesses not picking the culprit or identifying innocent suspects. We tested a radical alternative to the traditional lineup procedure used in such tests. Rather than making a positive identification, witnesses made confidence judgments under a short deadline about whether each lineup member was the culprit. We compared this deadline procedure with the traditional sequential-lineup procedure in three experiments with retention intervals ranging from 5 min to 1 week. A classification algorithm that identified confidence criteria that optimally discriminated accurate from inaccurate decisions revealed that decision accuracy was 24% to 66% higher under the deadline procedure than under the traditional procedure. Confidence profiles across lineup stimuli were more informative than were identification decisions about the likelihood that an individual witness recognized the culprit or correctly recognized that the culprit was not present. Large differences between the maximum and the next-highest confidence value signaled very high accuracy. Future support for this procedure across varied conditions would highlight a viable alternative to the problematic lineup procedures that have traditionally been used by law enforcement.

  11. The patients' perspective of international normalized ratio self-testing, remote communication of test results and confidence to move to self-management.

    Science.gov (United States)

    Grogan, Anne; Coughlan, Michael; Prizeman, Geraldine; O'Connell, Niamh; O'Mahony, Nora; Quinn, Katherine; McKee, Gabrielle

    2017-12-01

    To elicit the perceptions of patients, who self-tested their international normalized ratio and communicated their results via a text or phone messaging system, to determine their satisfaction with the education and support that they received and to establish their confidence to move to self-management. Self-testing of international normalized ratio has been shown to be reliable and is fast becoming common practice. As innovations are introduced to point of care testing, more research is needed to elicit patients' perceptions of the self-testing process. This three site study used a cross-sectional prospective descriptive survey. Three hundred and thirty patients who were prescribed warfarin and using international normalized ratio self-testing were invited to take part in the study. The anonymous survey examined patient profile, patients' usage, issues, perceptions, confidence and satisfaction with using the self-testing system and their preparedness for self-management of warfarin dosage. The response rate was 57% (n = 178). Patients' confidence in self-testing was high (90%). Patients expressed a high level of satisfaction with the support received, but expressed the need for more information on support groups, side effects of warfarin, dietary information and how to dispose of needles. When asked if they felt confident to adjust their own warfarin levels, 73% agreed. Chi-squared tests for independence revealed that none of the patient profile factors examined influenced this confidence. Patients cited reduced burden, more autonomy, convenience and ease of use as the greatest advantages of the service. The main disadvantages cited were cost and communication issues. Patients were satisfied with self-testing. The majority felt they were ready to move to self-management. The introduction of innovations to remote point of care testing, such as warfarin self-testing, needs to have support at least equal to that provided in a hospital setting. © 2017 John

  12. Effects of human errors on the determination of surveillance test interval

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Koo, Bon Hyun

    1990-01-01

    This paper incorporates the effects of human error relevant to periodic testing into the unavailability of the safety system as well as the component unavailability. Two types of possible human error during testing are considered. One is the possibility that a good safety system is inadvertently left in a bad state after the test (Type A human error); the other is the possibility that a bad safety system remains undetected by the test (Type B human error). An event tree model is developed for the steady-state unavailability of the safety system to determine the effects of human errors on the component unavailability and the test interval. We perform a reliability analysis of the safety injection system (SIS) by applying the aforementioned two types of human error to the safety injection pumps. Results of various sensitivity analyses show that: 1) the appropriate test interval decreases and steady-state unavailability increases as the probabilities of both types of human error increase, and both are far more sensitive to Type A human error than to Type B; and 2) the SIS unavailability increases slightly as the probability of Type B human error increases, and significantly as the probability of Type A human error increases. Therefore, to avoid underestimation, the effects of human error should be incorporated in system reliability analyses that aim at relaxation of surveillance test intervals, and Type A human error has the more important effect on the unavailability and the surveillance test interval.

  13. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    Science.gov (United States)

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
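    The percentile (PC) bootstrap that performed well in this study can be sketched for a mediation model with observed variables (a simplification: the study used latent variables; the simulated data, seed, and sample size below are illustrative, with true indirect effect 0.4 × 0.4 = 0.16):

```python
import numpy as np

# Percentile-bootstrap CI for an indirect (mediation) effect a*b.
# Observed-variable simplification; data and seed are illustrative.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)      # mediator:  a = 0.4
y = 0.4 * m + rng.normal(size=n)      # outcome:   b = 0.4

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                      # slope of m ~ x
    X = np.column_stack([m, x, np.ones_like(x)])    # y ~ m + x
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                     # resample cases
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])           # 95% PC interval
```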

  14. Correct Bayesian and frequentist intervals are similar

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1986-01-01

    This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions, when dealing with vague data or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: The construction of a frequentist confidence interval may require new theoretical development. Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)
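    The claim that frequentist intervals for discrete data run somewhat longer can be illustrated for a binomial proportion by comparing the exact (Clopper-Pearson) confidence interval with a Jeffreys-prior Bayesian interval (a standard textbook comparison, not taken from the paper; requires SciPy):

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (conservative) frequentist CI for a binomial proportion."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
    return lo, hi

def jeffreys(k, n, conf=0.95):
    """Bayesian credible interval under the Jeffreys Beta(1/2, 1/2) prior."""
    a = (1 - conf) / 2
    return (beta.ppf(a, k + 0.5, n - k + 0.5),
            beta.ppf(1 - a, k + 0.5, n - k + 0.5))

cp = clopper_pearson(3, 20)   # 3 successes in 20 trials (illustrative)
jf = jeffreys(3, 20)
```

    For these data the two intervals are numerically similar, but the exact frequentist interval is the wider of the two.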

  15. Predicting fecal coliform using the interval-to-interval approach and SWAT in the Miyun watershed, China.

    Science.gov (United States)

    Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu; Qiu, Jiali; Li, Yangyang

    2017-06-01

    Pathogens in manure can cause waterborne-disease outbreaks, serious illness, and even death in humans. Therefore, information about the transformation and transport of bacteria is crucial for determining their source. In this study, the Soil and Water Assessment Tool (SWAT) was applied to simulate fecal coliform bacteria load in the Miyun Reservoir watershed, China. The data for the fecal coliform were obtained at three sampling sites, Chenying (CY), Gubeikou (GBK), and Xiahui (XH). The calibration processes of the fecal coliform were conducted using the CY and GBK sites, and validation was conducted at the XH site. An interval-to-interval approach was designed and incorporated into the processes of fecal coliform calibration and validation. The 95% confidence interval of the predicted values and the 95% confidence interval of measured values were considered during calibration and validation in the interval-to-interval approach. Compared with the traditional point-to-point comparison, this method can improve simulation accuracy. The results indicated that the simulation of fecal coliform using the interval-to-interval approach was reasonable for the watershed. This method could provide a new research direction for future model calibration and validation studies.
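    The interval-to-interval idea, judging a prediction by whether its 95% confidence interval overlaps the 95% confidence interval of the measurement rather than comparing point values, can be sketched as follows (the interval values are hypothetical, not the Miyun data):

```python
def intervals_overlap(a, b):
    """True if two (lo, hi) 95% confidence intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical paired intervals (predicted vs measured load); NOT Miyun data.
predicted = [(120, 180), (60, 90), (200, 260)]
measured = [(150, 210), (95, 130), (190, 240)]

hits = sum(intervals_overlap(p, m) for p, m in zip(predicted, measured))
agreement = hits / len(predicted)   # interval-to-interval agreement rate
```

    A point-to-point comparison would score each pair by a single residual; the interval version credits any prediction whose uncertainty band is consistent with the measurement's.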

  16. A Methodology for Evaluation of Inservice Test Intervals for Pumps and Motor Operated Valves

    International Nuclear Information System (INIS)

    McElhaney, K.L.

    1999-01-01

    The nuclear industry has begun efforts to reevaluate inservice tests (ISTs) for key components such as pumps and valves. At issue are two important questions--What kinds of tests provide the most meaningful information about component health, and what periodic test intervals are appropriate? In the past, requirements for component testing were prescribed by the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code. The tests and test intervals specified in the Code were generic in nature and test intervals were relatively short. Operating experience has shown, however, that performance and safety improvements and cost savings could be realized by tailoring IST programs to similar components with comparable safety importance and service conditions. In many cases, test intervals may be lengthened, resulting in cost savings for utilities and their customers

  17. The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.

    Science.gov (United States)

    Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J

    1998-09-01

    To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.

  18. Confidence Testing of Shell 405 and S-405 Catalysts in a Monopropellant Hydrazine Thruster

    Science.gov (United States)

    McRight, Patrick; Popp, Chris; Pierce, Charles; Turpin, Alicia; Urbanchock, Walter; Wilson, Mike

    2005-01-01

    As part of the transfer of catalyst manufacturing technology from Shell Chemical Company (Shell 405 catalyst manufactured in Houston, Texas) to Aerojet (S-405 manufactured in Redmond, Washington), Aerojet demonstrated the equivalence of S-405 and Shell 405 at beginning of life. Some US aerospace users expressed a desire to conduct a preliminary confidence test to assess end-of-life characteristics for S-405. NASA Marshall Space Flight Center (MSFC) and Aerojet entered a contractual agreement in 2004 to conduct a confidence test using a pair of 0.2-lbf MR-103G monopropellant hydrazine thrusters, comparing S-405 and Shell 405 side by side. This paper summarizes the formulation of this test program, explains the test matrix, describes the progress of the test, and analyzes the test results. This paper also includes a discussion of the limitations of this test and the ramifications of the test results for assessing the need for future qualification testing in particular hydrazine thruster applications.

  19. Optimal test intervals of standby components based on actual plant-specific data

    International Nuclear Information System (INIS)

    Jones, R.B.; Bickel, J.H.

    1987-01-01

    Standard reliability analysis techniques show that both under-testing and over-testing affect the availability of standby components. If tests are performed too often, unavailability is increased because the equipment is being used excessively. Conversely, if testing is performed too infrequently, the likelihood of component unavailability is also increased due to the formation of rust, heat or radiation damage, dirt infiltration, etc. Thus, from a physical perspective, an optimal test interval should exist which minimizes unavailability. This paper illustrates the application of an unavailability model that calculates optimal testing intervals for components using a failure database. (orig./HSCH)
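    The trade-off described above is often written as U(T) ≈ λT/2 + τ/T, whose minimum lies at T* = sqrt(2τ/λ). A hedged sketch with assumed parameters (λ and τ are illustrative, not values from a plant failure database):

```python
import math

# Illustrative parameters (not plant data):
LAM = 1e-5   # standby failure rate, per hour
TAU = 2.0    # effective downtime/wear penalty per test, hours

def unavailability(T):
    """U(T): random failures accrued between tests + per-test penalty."""
    return LAM * T / 2 + TAU / T

# Setting dU/dT = 0 for this convex trade-off gives T* = sqrt(2*TAU/LAM).
T_opt = math.sqrt(2 * TAU / LAM)
```

    Testing more often than T* is dominated by the per-test penalty; testing less often is dominated by accumulated random failures.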

  20. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    Science.gov (United States)

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
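    One way such an adjusted Wald interval can be computed for a paired difference of proportions is to add a small constant to each cell before applying the Wald formula. The sketch below uses an Agresti-Min-style +0.5 adjustment and is not necessarily the exact Bonett-Price adjustment; the counts are illustrative:

```python
import math

def adjusted_wald_paired(b, c, n, add=0.5):
    """95% adjusted Wald CI for p1 - p2 from paired binary data.

    b, c : discordant pair counts (yes/no and no/yes); n : number of pairs.
    Adds `add` to each cell first (an Agresti-Min-style adjustment; hedged,
    not necessarily the exact Bonett-Price (2012) adjustment).
    """
    b, c, n = b + add, c + add, n + 4 * add
    d = (b - c) / n
    se = math.sqrt((b + c - (b - c) ** 2 / n) / n ** 2)
    z = 1.959964  # 97.5th percentile of the standard normal
    return d - z * se, d + z * se

lo, hi = adjusted_wald_paired(10, 4, 50)   # illustrative discordant counts
```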

  1. Microvascular anastomosis simulation using a chicken thigh model: Interval versus massed training.

    Science.gov (United States)

    Schoeff, Stephen; Hernandez, Brian; Robinson, Derek J; Jameson, Mark J; Shonka, David C

    2017-11-01

    To compare the effectiveness of massed versus interval training when teaching otolaryngology residents microvascular suturing on a validated microsurgical model. Otolaryngology residents were placed into interval (n = 7) or massed (n = 7) training groups. The interval group performed three separate 30-minute practice sessions separated by at least 1 week, and the massed group performed a single 90-minute practice session. Both groups viewed a video demonstration and recorded a pretest prior to the first training session. A post-test was administered following the last practice session. At an academic medical center, 14 otolaryngology residents were assigned using stratified randomization to interval or massed training. Blinded evaluators graded performance using a validated microvascular Objective Structured Assessment of Technical Skill tool. The tool is comprised of two major components: task-specific score (TSS) and global rating scale (GRS). Participants also received pre- and poststudy surveys to compare subjective confidence in multiple aspects of microvascular skill acquisition. Overall, all residents showed increased TSS and GRS on post- versus pretest. After completion of training, the interval group had a statistically significant increase in both TSS and GRS, whereas the massed group's increase was not significant. Residents in both groups reported significantly increased levels of confidence after completion of the study. Self-directed learning using a chicken thigh artery model may benefit microsurgical skills, competence, and confidence for resident surgeons. Interval training results in significant improvement in early development of microvascular anastomosis skills, whereas massed training does not. NA. Laryngoscope, 127:2490-2494, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  2. Confidence limits for parameters of Poisson and binomial distributions

    International Nuclear Information System (INIS)

    Arnett, L.M.

    1976-04-01

    The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for the situations in which the observed values of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a,b] is calculated so that Pr [a less than or equal to lambda less than or equal to b c,μ], where c is the observed value of the parameter, and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian type analysis is used. The intervals calculated are narrower and appreciably different from results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results
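    The Bayesian side of such a calculation has a closed form for the Poisson case: a conjugate Gamma prior yields a Gamma posterior, and the interval comes from its quantiles. A hedged sketch (Jeffreys prior by default; requires SciPy; the count and exposure are illustrative, and this does not reproduce the report's tables):

```python
from scipy.stats import gamma

def poisson_rate_interval(count, exposure, a0=0.5, b0=0.0, conf=0.95):
    """Bayesian credible interval for a Poisson rate.

    Conjugate Gamma(a0, b0) prior; the default a0=0.5, b0=0 is the
    Jeffreys prior. Hedged sketch, not the intervals tabulated in the paper.
    """
    a = a0 + count          # posterior shape
    b = b0 + exposure       # posterior rate
    alpha = (1 - conf) / 2
    return (gamma.ppf(alpha, a, scale=1 / b),
            gamma.ppf(1 - alpha, a, scale=1 / b))

lo, hi = poisson_rate_interval(5, 100.0)   # 5 events in 100 hours (illustrative)
```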

  3. Does interaction matter? Testing whether a confidence heuristic can replace interaction in collective decision-making.

    Science.gov (United States)

    Bang, Dan; Fusaroli, Riccardo; Tylén, Kristian; Olsen, Karsten; Latham, Peter E; Lau, Jennifer Y F; Roepstorff, Andreas; Rees, Geraint; Frith, Chris D; Bahrami, Bahador

    2014-05-01

    In a range of contexts, individuals arrive at collective decisions by sharing confidence in their judgements. This tendency to evaluate the reliability of information by the confidence with which it is expressed has been termed the 'confidence heuristic'. We tested two ways of implementing the confidence heuristic in the context of a collective perceptual decision-making task: either directly, by opting for the judgement made with higher confidence, or indirectly, by opting for the faster judgement, exploiting an inverse correlation between confidence and reaction time. We found that the success of these heuristics depends on how similar individuals are in terms of the reliability of their judgements and, more importantly, that for dissimilar individuals such heuristics are dramatically inferior to interaction. Interaction allows individuals to alleviate, but not fully resolve, differences in the reliability of their judgements. We discuss the implications of these findings for models of confidence and collective decision-making. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  4. The impact of communication barriers on diagnostic confidence and ancillary testing in the emergency department.

    Science.gov (United States)

    Garra, Gregory; Albino, Hiram; Chapman, Heather; Singer, Adam J; Thode, Henry C

    2010-06-01

    Communication barriers (CBs) compromise the diagnostic power of the medical interview and may result in increased reliance on diagnostic tests or incorrect test ordering. The prevalence and degree to which these barriers affect diagnosis, testing, and treatment are unknown. To quantify and characterize CBs encountered in the Emergency Department (ED), and assess the effect of CBs on initial diagnosis and perceived reliance on ancillary testing. This was a prospective survey completed by emergency physicians after initial adult patient encounters. CB severity, diagnostic confidence, and reliance on ancillary testing were quantified on a 100-mm Visual Analog Scale (VAS) from least (0) to most (100). Data were collected on 417 ED patient encounters. CBs were reported in 46%, with a mean severity of 50 mm on a 100-mm VAS with endpoints of "perfect communication" and "no communication." Language was the most commonly reported form of CB (28%). More than one CB was identified in 6%. The 100-mm VAS rating of diagnostic confidence was lower in patients with perceived CBs (64 mm) vs. those without CBs (80 mm), p Communication barriers in our ED setting were common, and resulted in lower diagnostic confidence and increased perception that ancillary tests are needed to narrow the diagnosis. Copyright 2010 Elsevier Inc. All rights reserved.

  5. Analysis of unavailability related to demand failures as a function of the testing interval

    International Nuclear Information System (INIS)

    Carretero, J.A.; Pereira, M.B.; Perez Lobo, E.M.

    1998-01-01

    The unavailability related to the demand failure of a component is the sum of the contributions of failures in demand and in waiting. An important point in PSAs is the calculation of the unavailabilities of the basic events of demand failure. Several criteria are used for this, with the objective of simplifying the quantification. The information available from two nuclear power plants was analysed to determine the tendency of the in-demand and in-waiting models as a function of the test interval, and the following conclusions were obtained: - There is a clear tendency for the probability of failure in demand to increase as the interval between tests increases - The test intervals considered in PSAs are not always coherent with the estimates of real demand; this implies a penalty when using the in-waiting model, due to the underlying conservatism. Therefore, increasing the intervals between tests over time (a tendency studied in nuclear power plants) could cause demand due to tests to be significantly less than that due to real actuations. This implies a need to apply test intervals based on historic demands and not on those due to historic tests, in order to avoid conservatism. (Author)

  6. R package to estimate intracluster correlation coefficient with confidence interval for binary data.

    Science.gov (United States)

    Chakraborty, Hrishikesh; Hossain, Akhtar

    2018-03-01

    The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. There are several types of ICC estimators and confidence intervals (CI) suggested in the literature for binary data. Studies have compared relative weaknesses and advantages of ICC estimators as well as its CI for binary data and suggested situations where one is advantageous in practical research. The commonly used statistical computing systems currently facilitate estimation of only a very few variants of ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of ICC in 16 different ways including analysis of variance methods, moments based estimation, direct probabilistic methods, correlation based estimation, and resampling method. CI of ICC is estimated using 5 different methods. It also generates cluster binary data using an exchangeable correlation structure. The ICCbin package provides two functions for users. The function rcbin() generates cluster binary data and the function iccbin() estimates ICC and its CI. Users can choose the appropriate ICC and its CI estimate from the wide selection of estimates in the outputs. The R package ICCbin presents very flexible and easy to use ways to generate cluster binary data and to estimate ICC and its CI for binary responses using different methods. The package ICCbin is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers to design cluster randomized trials with binary outcome. Copyright © 2017 Elsevier B.V. All rights reserved.
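    One of the estimator families ICCbin implements, the one-way ANOVA estimator, can be sketched outside R as follows (a hedged Python translation of the standard ANOVA formula, not the package's code; requires NumPy):

```python
import numpy as np

def icc_anova(clusters):
    """One-way ANOVA estimator of the ICC for clustered binary responses.

    `clusters` is a sequence of 0/1 arrays, one per cluster. A hedged
    translation of the standard ANOVA formula, not the ICCbin source code.
    """
    clusters = [np.asarray(c, dtype=float) for c in clusters]
    k = len(clusters)
    n_i = np.array([c.size for c in clusters])
    N = n_i.sum()
    grand = sum(c.sum() for c in clusters) / N
    means = np.array([c.mean() for c in clusters])
    msb = np.sum(n_i * (means - grand) ** 2) / (k - 1)      # between clusters
    msw = sum(((c - m) ** 2).sum()
              for c, m in zip(clusters, means)) / (N - k)   # within clusters
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)               # adjusted cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)

perfect = icc_anova([np.ones(5), np.zeros(5)])   # fully clustered responses
```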

  7. Easy and Informative: Using Confidence-Weighted True-False Items for Knowledge Tests in Psychology Courses

    Science.gov (United States)

    Dutke, Stephan; Barenberg, Jonathan

    2015-01-01

    We introduce a specific type of item for knowledge tests, confidence-weighted true-false (CTF) items, and review experiences of its application in psychology courses. A CTF item is a statement about the learning content to which students respond whether the statement is true or false, and they rate their confidence level. Previous studies using…

  8. Long-term maintenance of immediate or delayed extinction is determined by the extinction-test interval

    OpenAIRE

    Johnson, Justin S.; Escobar, Martha; Kimble, Whitney L.

    2010-01-01

    Short acquisition-extinction intervals (immediate extinction) can lead to either more or less spontaneous recovery than long acquisition-extinction intervals (delayed extinction). Using rat subjects, we observed less spontaneous recovery following immediate than delayed extinction (Experiment 1). However, this was the case only if a relatively long extinction-test interval was used; a relatively short extinction-test interval yielded the opposite result (Experiment 2). Previous data appear co...

  9. Confidence bands for inverse regression models

    International Nuclear Information System (INIS)

    Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

    2010-01-01

    We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract

  10. A study on the optimization of test interval for check valves of Ulchin Unit 3 using the risk-informed in-service testing approach

    International Nuclear Information System (INIS)

    Kang, D. I.; Kim, K. Y.; Yang, Z. A.; Ha, J. J.

    2002-01-01

    We optimized the test intervals for check valves of Ulchin Unit 3 using the risk-informed in-service testing (IST) approach. First, we categorized the IST check valves of Ulchin Unit 3 according to their contributions to plant safety. Next, we performed a risk analysis on the relaxation of the test interval for check valves identified as of relatively low importance to safety, to identify the maximum allowable increase in their test interval. Finally, we estimated the number of tests of IST check valves to be performed under the changed test intervals. The results are as follows. Categorization of IST check valve importance: 24 HSSCs (11.48%), 40 ISSCs (19.14%), and 462 LSSCs (69.38%). Maximum increasable test interval: 6 times the current test interval for ISSCs and 40 times for LSSCs. The number of tests of IST check valves to be performed over 6 refueling cycles can be reduced from 7692 to 1333, a reduction of 82.7%.

  11. Factorial-based response-surface modeling with confidence intervals for optimizing thermal-optical transmission analysis of atmospheric black carbon

    International Nuclear Information System (INIS)

    Conny, J.M.; Norris, G.A.; Gould, T.R.

    2009-01-01

    Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert Law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert Law. We determined the apparent specific absorption cross sections for pyrolyzed OC (σ_Char) and BC (σ_BC), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for σ_BC, which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the σ_BC and σ_Char surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830 °C and 850 °C as most suitable for the helium high-temperature step lasting 150 s. However, a temperature as low as 750 °C could not be rejected statistically.

  12. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.
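    The duality such articles explain, that a 95% confidence interval excludes zero exactly when the two-sided p-value falls below 0.05, can be checked numerically (illustrative numbers, normal approximation):

```python
import math

# Illustrative numbers, normal approximation: a mean difference and its SE.
diff, se = 2.0, 0.8
z = diff / se                                                # test statistic
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))    # two-sided p
lo, hi = diff - 1.96 * se, diff + 1.96 * se                  # 95% CI

# The two summaries agree: the CI excludes 0 iff p < 0.05.
agree = (p < 0.05) == (lo > 0 or hi < 0)
```

    The confidence interval carries strictly more information than the p-value alone: it shows the range of effect sizes compatible with the data, not just whether zero is among them.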

  13. Estimating confidence intervals in predicted responses for oscillatory biological models.

    Science.gov (United States)

    St John, Peter C; Doyle, Francis J

    2013-07-29

    The dynamics of gene regulation play a crucial role in cellular control: allowing the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics - such as oscillations - must be fit with computationally expensive global optimization routines, and cannot take advantage of existing measures of identifiability. Despite their difficulty to model mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently

  14. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
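    The record above contrasts the exact (noncentral-t) approach with approximate methods. As a hedged sketch of one generic approximate alternative - a percentile bootstrap, given purely for illustration and not the paper's web-based programs - in Python:

```python
import random
import statistics

def cv(xs):
    """Sample coefficient of variation: stdev / mean."""
    return statistics.stdev(xs) / statistics.mean(xs)

def bootstrap_cv_ci(xs, level=0.95, n_boot=2000, seed=42):
    """Percentile-bootstrap confidence interval for the CV."""
    rng = random.Random(seed)
    reps = sorted(cv(rng.choices(xs, k=len(xs))) for _ in range(n_boot))
    lo = reps[int((1 - level) / 2 * n_boot)]
    hi = reps[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

# Illustrative positive-valued sample (not data from the paper)
data = [9.1, 10.4, 8.7, 11.2, 10.0, 9.6, 10.8, 9.3]
low, high = bootstrap_cv_ci(data)
```

    Consistent with the paper's finding, such approximations should be treated with caution for large coefficients of variation and small sample sizes.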

  15. Self-confidence and metacognitive processes

    Directory of Open Access Journals (Sweden)

    Kleitman Sabina

    2005-01-01

    Full Text Available This paper examines the status of the Self-confidence trait. Two studies strongly suggest that Self-confidence is a component of metacognition. In the first study, participants (N=132) were administered measures of Self-concept, a newly devised Memory and Reasoning Competence Inventory (MARCI), and a Verbal Reasoning Test (VRT). The results indicate a significant relationship between confidence ratings on the VRT and the Reasoning component of MARCI. The second study (N=296) employed an extensive battery of cognitive tests and several metacognitive measures. Results indicate the presence of robust Self-confidence and Metacognitive Awareness factors, and a significant correlation between them. Self-confidence taps not only processes linked to performance on items that have correct answers, but also beliefs about events that may never occur.

  16. A computer program (COSTUM) to calculate confidence intervals for in situ stress measurements. V. 1

    International Nuclear Information System (INIS)

    Dzik, E.J.; Walker, J.R.; Martin, C.D.

    1989-03-01

    The state of in situ stress is one of the parameters required both for the design and analysis of underground excavations and for the evaluation of numerical models used to simulate underground conditions. To account for the variability and uncertainty of in situ stress measurements, it is desirable to apply confidence limits to measured stresses. Several measurements of the state of stress along a borehole are often made to estimate the average state of stress at a point. Since stress is a tensor, calculating the mean stress and confidence limits using scalar techniques is both inappropriate and incorrect. A computer program has been written to calculate and present the mean principal stresses and the confidence limits for the magnitudes and directions of the mean principal stresses. This report describes the computer program, COSTUM.

  17. Reactor safety impact of functional test intervals: an application of Bayesian decision theory

    International Nuclear Information System (INIS)

    Buoni, F.B.

    1978-01-01

    Functional test intervals for important nuclear reactor systems can be obtained by viewing safety assessment as a decision process and functional testing as a Bayesian learning or information process. A preposterior analysis is used as the analytical model to find the preposterior expected reliability of a system as a function of test intervals. Persistent and transitory failure models are shown to yield different results. Functional tests of systems subject to persistent failure are effective in maintaining system reliability goals. Functional testing is not effective for systems subject to transitory failure; preventive maintenance must be used. A Bayesian posterior analysis of testing data can discriminate between persistent and transitory failure. The role of functional testing is seen to be an aid in assessing the future performance of reactor systems

  18. Confidence bounds for nonlinear dose-response relationships

    DEFF Research Database (Denmark)

    Baayen, C; Hougaard, P

    2015-01-01

    An important aim of drug trials is to characterize the dose-response relationship of a new compound. Such a relationship can often be described by a parametric (nonlinear) function that is monotone in dose. If such a model is fitted, it is useful to know the uncertainty of the fitted curve...... intervals for the dose-response curve. These confidence bounds have better coverage than Wald intervals and are more precise and generally faster than bootstrap methods. Moreover, if monotonicity is assumed, the profile likelihood approach takes this automatically into account. The approach is illustrated...

  19. Browns Ferry Nuclear Plant: variation in test intervals for high-pressure coolant injection (HPCI) system

    International Nuclear Information System (INIS)

    Christie, R.F.; Stetkar, J.W.

    1985-01-01

    The change in availability of the high-pressure coolant injection system (HPCIS) due to a change in pump and valve test interval from monthly to quarterly was analyzed. This analysis started by using the HPCIS base line evaluation produced as part of the Browns Ferry Nuclear Plant (BFN) Probabilistic Risk Assessment (PRA). The base line evaluation showed that the dominant contributors to the unavailability of the HPCI system are hardware failures and the resultant downtime for unscheduled maintenance. The effect of changing the pump and valve test interval from monthly to quarterly was analyzed by considering the system unavailability due to hardware failures, the unavailability due to testing, and the unavailability due to human errors that potentially could occur during testing. The magnitude of the changes in unavailability affected by the change in test interval are discussed. The analysis showed a small increase in the availability of the HPCIS to respond to loss of coolant accidents (LOCAs) and a small decrease in the availability of the HPCIS to respond to transients which require HPCIS actuation. In summary, the increase in test interval from monthly to quarterly does not significantly impact the overall HPCIS availability

  20. The design and analysis of salmonid tagging studies in the Columbia River. Volume 7: Monte-Carlo comparison of confidence interval procedures for estimating survival in a release-recapture study, with applications to Snake River salmonids

    International Nuclear Information System (INIS)

    Lowther, A.B.; Skalski, J.

    1996-06-01

    Confidence intervals for survival probabilities between hydroelectric facilities of migrating juvenile salmonids can be computed from the output of the SURPH software developed at the Center for Quantitative Science at the University of Washington. These intervals have been constructed using the estimate of the survival probability, its associated standard error, and assuming the estimate is normally distributed. In order to test the validity and performance of this procedure, two additional confidence interval procedures for estimating survival probabilities were tested and compared using simulated mark-recapture data. Intervals were constructed using normal probability theory, using a percentile-based empirical bootstrap algorithm, and using the profile likelihood concept. Performance of each method was assessed for a variety of initial conditions (release sizes, survival probabilities, detection probabilities). These initial conditions were chosen to encompass the range of parameter values seen in the 1993 and 1994 Snake River juvenile salmonid survival studies. The comparisons among the three estimation methods included average interval width, interval symmetry, and interval coverage
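    Two of the three compared procedures can be sketched under a simplified binomial survival model (an assumption for illustration; the study uses the full SURPH release-recapture likelihood, and the profile-likelihood interval is omitted here):

```python
import math
import random

def normal_ci(successes, n, z=1.959964):
    """Wald (normal-theory) 95% CI for a survival probability."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

def bootstrap_ci(successes, n, n_boot=2000, seed=1):
    """Percentile-bootstrap 95% CI from resampled detection outcomes."""
    rng = random.Random(seed)
    outcomes = [1] * successes + [0] * (n - successes)
    props = sorted(sum(rng.choices(outcomes, k=n)) / n for _ in range(n_boot))
    return props[int(0.025 * n_boot)], props[int(0.975 * n_boot) - 1]

# 180 of 200 tagged juveniles surviving (illustrative numbers only)
wald = normal_ci(180, 200)
boot = bootstrap_ci(180, 200)
```

    Comparing the widths and symmetry of `wald` and `boot` across many simulated datasets is, in miniature, the kind of comparison the report performs.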

  1. Targeting Low Career Confidence Using the Career Planning Confidence Scale

    Science.gov (United States)

    McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven

    2006-01-01

    The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…

  2. Hypothesis Testing of Inclusion of the Tolerance Interval for the Assessment of Food Safety.

    Directory of Open Access Journals (Sweden)

    Hungyen Chen

    Full Text Available In the testing of food quality and safety, we contrast the contents of the newly proposed food (genetically modified food) against those of conventional foods. Because the contents vary widely between crop varieties and production environments, we propose a two-sample test of substantial equivalence that examines the inclusion of the tolerance intervals of the two populations: the population of the contents of the proposed food, which we call the target population, and the population of the contents of the conventional food, which we call the reference population. Rejection of the test hypothesis guarantees that the contents of the proposed foods essentially do not include outliers in the population of the contents of the conventional food. The existing tolerance interval (TI0) is constructed to have at least a pre-specified level of the coverage probability. Here, we newly introduce the complementary tolerance interval (TI1), which is guaranteed to have at most a pre-specified level of the coverage probability. By applying TI0 and TI1 to the samples from the target population and the reference population respectively, we construct a test statistic for testing inclusion of the two tolerance intervals. To examine the performance of the testing procedure, we conducted a simulation that reflects the effects of gene, environment, and residual variation from a crop experiment. As a case study, we applied the hypothesis testing to test whether the distribution of the protein content of rice in the Kyushu area is included in the distribution of the protein content in the other areas of Japan.
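    For orientation, a standard two-sided normal tolerance interval of the TI0 kind (at-least coverage) can be sketched with Howe's k-factor; the chi-square quantile uses the Wilson-Hilferty approximation so the sketch stays dependency-free. The TI1 complement and the paper's inclusion test statistic are not reproduced here.

```python
import math
import statistics

def tolerance_interval(xs, z_content=1.959964, z_conf=1.644854):
    """Two-sided normal tolerance interval covering ~95% of the population
    with ~95% confidence (Howe's k-factor; Wilson-Hilferty chi-square
    approximation for the lower 5% quantile)."""
    n = len(xs)
    nu = n - 1
    # lower 5% chi-square quantile via Wilson-Hilferty
    chi2_lo = nu * (1 - 2 / (9 * nu) - z_conf * math.sqrt(2 / (9 * nu))) ** 3
    k = z_content * math.sqrt(nu * (1 + 1 / n) / chi2_lo)
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return m - k * s, m + k * s

# Illustrative protein contents (%), not the paper's rice data
data = [6.8, 7.1, 7.4, 6.9, 7.2, 7.0, 7.3, 6.7, 7.1, 7.0]
lo, hi = tolerance_interval(data)
```

    The inclusion test in the paper then asks, roughly, whether an interval of this kind for the target population lies inside the reference population's interval.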

  3. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    Science.gov (United States)

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
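    The classical statistic being modified can be sketched from raw concordant/discordant counts; the paper's score-based version replaces these observed counts with their expected values under interval censoring (not implemented here):

```python
def kendall_tau(xs, ys):
    """Classical Kendall's tau from concordant and discordant pair counts
    (uncensored case; the interval-censored modification uses expected
    counts instead of observed ones)."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

    For example, `kendall_tau([1, 2, 3, 4, 5], [1, 2, 3, 5, 4])` has nine concordant pairs and one discordant pair out of ten, giving tau = 0.8.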

  4. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  5. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    Science.gov (United States)

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to that of the LT interval, and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
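    The generic Wald interval for a proportion, with optional continuity correction, looks as follows; the paper's modification plugs the estimated AUC into this form with a particular effective sample size, which is not reproduced here (treat `n` below as an assumption for illustration):

```python
import math

def wald_ci(p_hat, n, z=1.959964, continuity=False):
    """Wald confidence interval for a proportion, optionally with a
    continuity correction of 0.5/n, clipped to [0, 1]."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    cc = 0.5 / n if continuity else 0.0
    lo = max(0.0, p_hat - z * se - cc)
    hi = min(1.0, p_hat + z * se + cc)
    return lo, hi

# Estimated AUC of 0.95 with an assumed effective sample size of 50
auc_lo, auc_hi = wald_ci(0.95, 50)
```

    The clipping at 1.0 in the high-AUC example is exactly the regime where the paper notes many competing intervals behave poorly.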

  6. Downside Business Confidence Spillovers in Europe: Evidence from Causality-in-Risk Tests

    OpenAIRE

    Atukeren, Erdal; Cevik, Emrah Ismail; Korkmaz, Turhan

    2015-01-01

    This paper employs Hong et al.’s (2009) extreme risk spillovers test to investigate the bilateral business confidence spillovers between Greece, Italy, Spain, Portugal, France, and Germany. After controlling for domestic economic developments in each country and common international factors, downside risk spillovers are detected as a causal feedback between Spain and Portugal and unilaterally from Spain to Italy. Extremely low business sentiments in France, Germany, and Greece are mostly due ...

  7. Testing and qualification of confidence in statistical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Hammouda, N. [Canadian Nuclear Safety Commission (Canada); O'Hagan, A. [Sheffield Univ. (United Kingdom)

    2014-07-01

    ... tests, but targeted to the context of the particular application and aimed at identifying the domain of validity of the proposed tolerance limit method and algorithm, might provide the necessary confidence in the proposed statistical procedure. The Ontario Power Generation, Bruce Power and AMEC-NSS have supported this work and contributed to the development and execution of the test cases. Their statistical method and results are not, however, discussed in this paper. (author)

  8. Dynamic visual noise reduces confidence in short-term memory for visual information.

    Science.gov (United States)

    Kemps, Eva; Andrade, Jackie

    2012-05-01

    Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.

  9. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions

    Directory of Open Access Journals (Sweden)

    Rik Crutzen

    2017-07-01

    Full Text Available When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.
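    A minimal sketch of the two interval types underlying such a visualization, assuming normal approximations and a Fisher z-transformation for correlations (the authors' own tool is separate and freely available):

```python
import math
import statistics

def mean_ci(xs, z=1.959964):
    """Normal-approximation 95% CI for a determinant's mean score."""
    m = statistics.mean(xs)
    half = z * statistics.stdev(xs) / math.sqrt(len(xs))
    return m - half, m + half

def correlation_ci(r, n, z=1.959964):
    """95% CI for a correlation coefficient via Fisher's z-transformation."""
    fz = math.atanh(r)
    half = z / math.sqrt(n - 3)
    return math.tanh(fz - half), math.tanh(fz + half)

# Illustrative determinant scores and a determinant-behavior correlation
m_lo, m_hi = mean_ci([3.2, 3.8, 4.1, 3.5, 3.9, 4.0, 3.6, 3.7])
r_lo, r_hi = correlation_ci(0.45, 100)
```

    Plotting such interval pairs for all determinants side by side is what makes the relevance comparison in the paper possible.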

  10. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions.

    Science.gov (United States)

    Crutzen, Rik; Peters, Gjalt-Jorn Ygram; Noijen, Judith

    2017-01-01

    When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

  11. Graphical interpretation of confidence curves in rankit plots

    DEFF Research Database (Denmark)

    Hyltoft Petersen, Per; Blaabjerg, Ole; Andersen, Marianne

    2004-01-01

    A well-known transformation from the bell-shaped Gaussian (normal) curve to a straight line in the rankit plot is investigated, and a tool for evaluation of the distribution of reference groups is presented. It is based on the confidence intervals for percentiles of the calculated Gaussian distribution...

  12. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0,0.073), StatXact (0,0.064), SAS Score method (0,0.057), R PropCI (0,0.062), and bootstrap (0,0.048). Similar trends were observed for other sample sizes. Conclusions When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
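    A simplified sketch of the bootstrapping idea (an illustration, not the published bootLR code): the population sensitivity at which an all-positive sample has probability 0.5 is 0.5^(1/n), which for n = 100 gives 99.31%, matching the value quoted in the abstract.

```python
import random

def neg_lr_upper_bound(n_disease, n_healthy, spec_hat,
                       n_boot=2000, seed=7):
    """Sketch of a bootstrap upper bound for the negative likelihood ratio
    (1 - sensitivity) / specificity when observed sensitivity is 100%.
    The generating sensitivity is the value at which the median sample
    sensitivity is 100% (0.5 ** (1/n), per the binomial argument)."""
    rng = random.Random(seed)
    sens0 = 0.5 ** (1 / n_disease)  # 0.9931 for n_disease = 100
    reps = []
    for _ in range(n_boot):
        tp = sum(rng.random() < sens0 for _ in range(n_disease))
        tn = sum(rng.random() < spec_hat for _ in range(n_healthy))
        spec = max(tn / n_healthy, 1e-9)  # guard against zero specificity
        reps.append((1 - tp / n_disease) / spec)
    reps.sort()
    return reps[int(0.975 * n_boot) - 1]  # one-sided 97.5% upper bound

# 100 diseased and 100 non-diseased patients, specificity 60%
bound = neg_lr_upper_bound(100, 100, 0.60)
```

    Under these inputs the upper bound should land in the vicinity of the bootstrap value (0, 0.048) reported in the abstract, though this sketch omits refinements of the published method.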

  13. Ventral striatal activity correlates with memory confidence for old- and new-responses in a difficult recognition test.

    Directory of Open Access Journals (Sweden)

    Ulrike Schwarze

    Full Text Available Activity in the ventral striatum has frequently been associated with retrieval success, i.e., it is higher for hits than correct rejections. Based on the prominent role of the ventral striatum in the reward circuit, its activity has been interpreted to reflect the higher subjective value of hits compared to correct rejections in standard recognition tests. This hypothesis was supported by a recent study showing that ventral striatal activity is higher for correct rejections than hits when the value of rejections is increased by external incentives. These findings imply that the striatal response during recognition is context-sensitive and modulated by the adaptive significance of "oldness" or "newness" to the current goals. The present study is based on the idea that not only external incentives, but also other deviations from standard recognition tests which affect the subjective value of specific response types should modulate striatal activity. Therefore, we explored ventral striatal activity in an unusually difficult recognition test that was characterized by low levels of confidence and accuracy. Based on the human uncertainty aversion, in such a recognition context, the subjective value of all high confident decisions is expected to be higher than usual, i.e., also rejecting items with high certainty is deemed rewarding. In an accompanying behavioural experiment, participants rated the pleasantness of each recognition response. As hypothesized, ventral striatal activity correlated in the current unusually difficult recognition test not only with retrieval success, but also with confidence. Moreover, participants indicated that they were more satisfied by higher confidence in addition to perceived oldness of an item. Taken together, the results are in line with the hypothesis that ventral striatal activity during recognition codes the subjective value of different response types that is modulated by the context of the recognition test.

  14. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of its precision intervals.

  15. Parents' obesity-related behavior and confidence to support behavioral change in their obese child: data from the STAR study.

    Science.gov (United States)

    Arsenault, Lisa N; Xu, Kathleen; Taveras, Elsie M; Hacker, Karen A

    2014-01-01

    Successful childhood obesity interventions frequently focus on behavioral modification and involve parents or family members. Parental confidence in supporting behavior change may be an element of successful family-based prevention efforts. We aimed to determine whether parents' own obesity-related behaviors were related to their confidence in supporting their child's achievement of obesity-related behavioral goals. Cross-sectional analyses of data collected at baseline of a randomized controlled trial testing a treatment intervention for obese children (n = 787) in primary care settings (n = 14). Five obesity-related behaviors (physical activity, screen time, sugar-sweetened beverage, sleep duration, fast food) were self-reported by parents for themselves and their child. Behaviors were dichotomized on the basis of achievement of behavioral goals. Five confidence questions asked how confident the parent was in helping their child achieve each goal. Logistic regression modeling of high confidence was conducted with goal achievement and demographics as independent variables. Parents achieving physical activity or sleep duration goals were significantly more likely to be highly confident in supporting their child's achievement of those goals (physical activity, odds ratio 1.76; 95% confidence interval 1.19-2.60; sleep, odds ratio 1.74; 95% confidence interval 1.09-2.79) independent of sociodemographic variables and child's current behavior. Parental achievements of TV watching and fast food goals were also associated with confidence, but significance was attenuated after child's behavior was included in models. Parents' own obesity-related behaviors are factors that may affect their confidence to support their child's behavior change. Providers seeking to prevent childhood obesity should address parent/family behaviors as part of their obesity prevention strategies. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  16. A Methodology for Evaluation of Inservice Test Intervals for Pumps and Motor-Operated Valves

    International Nuclear Information System (INIS)

    Cox, D.F.; Haynes, H.D.; McElhaney, K.L.; Otaduy, P.J.; Staunton, R.H.; Vesely, W.E.

    1999-01-01

    Recent nuclear industry reevaluation of component inservice testing (IST) requirements is resulting in requests for IST interval extensions and changes to traditional IST programs. To evaluate these requests, long-term component performance and the methods for mitigating degradation need to be understood. Determining the appropriate IST intervals, along with component testing, monitoring, trending, and maintenance effects, has become necessary. This study provides guidelines to support the evaluation of IST intervals for pumps and motor-operated valves (MOVs). It presents specific engineering information pertinent to the performance and monitoring/testing of pumps and MOVs, provides an analytical methodology for assessing the bounding effects of aging on component margin behavior, and identifies basic elements of an overall program to help ensure component operability. Guidance for assessing probabilistic methods and the risk importance and safety consequences of the performance of pumps and MOVs has not been specifically included within the scope of this report, but these elements may be included in licensee change requests

  17. We will be champions: Leaders' confidence in 'us' inspires team members' team confidence and performance.

    Science.gov (United States)

    Fransen, K; Steffens, N K; Haslam, S A; Vanbeselaere, N; Vande Broek, G; Boen, F

    2016-12-01

    The present research examines the impact of leaders' confidence in their team on the team confidence and performance of their teammates. In an experiment involving newly assembled soccer teams, we manipulated the team confidence expressed by the team leader (high vs neutral vs low) and assessed team members' responses and performance as they unfolded during a competition (i.e., in a first baseline session and a second test session). Our findings pointed to team confidence contagion such that when the leader had expressed high (rather than neutral or low) team confidence, team members perceived their team to be more efficacious and were more confident in the team's ability to win. Moreover, leaders' team confidence affected individual and team performance such that teams led by a highly confident leader performed better than those led by a less confident leader. Finally, the results supported a hypothesized mediational model in showing that the effect of leaders' confidence on team members' team confidence and performance was mediated by the leader's perceived identity leadership and members' team identification. In conclusion, the findings of this experiment suggest that leaders' team confidence can enhance members' team confidence and performance by fostering members' identification with the team. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. The idiosyncratic nature of confidence.

    Science.gov (United States)

    Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador

    2017-11-01

    Confidence is the 'feeling of knowing' that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence.

  19. Risk based test interval and maintenance optimisation - Application and uses

    International Nuclear Information System (INIS)

    Sparre, E.

    1999-10-01

    The project is part of an IAEA co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk-based optimisation of technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, those optimisations did not include any in-depth check of the sensitivity of the results with regard to methods, model completeness, etc. Four different test intervals were investigated in this study. Aside from an original, nominal optimisation, a set of sensitivity analyses was performed and their results were compared to the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any firm conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration. Deterministic uncertainties also appear to affect the optimisation result considerably. The sensitivity to failure data uncertainties is important to investigate in detail, since the methodology is based on the assumption that the unavailability of a component depends on the length of the test interval.

  20. MK-801 and memantine act differently on short-term memory tested with different time-intervals in the Morris water maze test.

    Science.gov (United States)

    Duda, Weronika; Wesierska, Malgorzata; Ostaszewski, Pawel; Vales, Karel; Nekovarova, Tereza; Stuchlik, Ales

    2016-09-15

    N-methyl-d-aspartate receptors (NMDARs) play a crucial role in spatial memory formation. In neuropharmacological studies, their functioning strongly depends on testing conditions and the dosage of NMDAR antagonists. The aim of this study was to assess the immediate effects of NMDAR blockade by (+)MK-801 or memantine on short-term allothetic memory. Memory was tested in a working memory version of the Morris water maze test. In our version of the test, rats underwent one day of training with 8 trials, and then three experimental days on which they were injected intraperitoneally with low-dose (5 mg/kg, MeL) or high-dose (20 mg/kg, MeH) memantine, 0.1 mg/kg MK-801, or 1 ml/kg saline (SAL) 30 min before testing, for three consecutive days. On each experimental day there was just one acquisition and one test trial, with an inter-trial interval of 5 or 15 min. During training the hidden platform was relocated after each trial, and during the experiment after each day. The follow-up effect was assessed on day 9. Intact rats improved their spatial memory across the one training day. With a 5-min interval, MeH rats had longer latency than all other rats during retrieval. With a 15-min interval, MeH rats showed worse working memory (measured as the retrieval minus acquisition trial) for path than SAL and MeL rats, and for latency than MeL rats. MK-801 rats had longer latency than SAL rats during retrieval. Thus, the high dose of memantine, in contrast to the low dose of MK-801, disrupts short-term memory independently of the time interval between acquisition and retrieval. This shows that short-term memory tested in a working memory version of the water maze is sensitive to several parameters: the type of NMDA receptor antagonist, its dosage, and the time interval between learning and testing. Copyright © 2016. Published by Elsevier B.V.

  1. Determination of confidence limits for experiments with low numbers of counts

    International Nuclear Information System (INIS)

    Kraft, R.P.; Burrows, D.N.; Nousek, J.A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference. 12 refs
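
    The Bayesian approach the record describes can be illustrated with a short sketch: with a flat prior on the source rate s >= 0 and a known mean background b, the posterior is proportional to exp(-(s+b))(s+b)^N, and a credible interval can be read off a gridded posterior. This is an illustrative reconstruction, not the authors' tabulated procedure; the function name and grid scheme are ours.

    ```python
    import numpy as np

    def bayes_poisson_interval(n_obs, b, cl=0.90, s_max=None, npts=20000):
        """Equal-tail Bayesian credible interval for a Poisson source rate s,
        given n_obs total counts and a known mean background b (flat prior on
        s >= 0; assumes b > 0).
        Unnormalised posterior: p(s | n_obs) ~ exp(-(s + b)) * (s + b)**n_obs."""
        if s_max is None:
            s_max = n_obs + 10 * np.sqrt(n_obs + 1) + 10  # cover the posterior mass
        s = np.linspace(0.0, s_max, npts)
        log_post = -(s + b) + n_obs * np.log(s + b)       # log of unnormalised posterior
        post = np.exp(log_post - log_post.max())          # rescale to avoid underflow
        cdf = np.cumsum(post)
        cdf /= cdf[-1]
        lo = s[np.searchsorted(cdf, (1 - cl) / 2)]
        hi = s[np.searchsorted(cdf, 1 - (1 - cl) / 2)]
        return lo, hi
    ```

    For example, `bayes_poisson_interval(5, 1.0)` brackets the posterior mode n_obs - b = 4; with few counts the lower limit stays non-negative, which is one of the reasons the authors prefer the Bayesian construction.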

  2. The Metamemory Approach to Confidence: A Test Using Semantic Memory

    Science.gov (United States)

    Brewer, William F.; Sampaio, Cristina

    2012-01-01

    The metamemory approach to memory confidence was extended and elaborated to deal with semantic memory tasks. The metamemory approach assumes that memory confidence is based on the products and processes of a completed memory task, as well as metamemory beliefs that individuals have about how their memory products and processes relate to memory…

  3. What if there were no significance tests?

    CERN Document Server

    Harlow, Lisa L; Steiger, James H

    2013-01-01

    This book is the result of a spirited debate stimulated by a recent meeting of the Society of Multivariate Experimental Psychology. Although the viewpoints span a range of perspectives, the overriding theme that emerges is that significance testing may still be useful if supplemented with some or all of the following -- Bayesian logic, caution, confidence intervals, effect sizes and power, other goodness-of-approximation measures, replication and meta-analysis, sound reasoning, and theory appraisal and corroboration. The book is organized into five general areas. The first presents an overview of significance testing issues that synthesizes the highlights of the remainder of the book. The next discusses the debate over whether significance testing should be rejected or retained. The third outlines various methods that may supplement current significance testing procedures. The fourth discusses Bayesian approaches and methods and the use of confidence intervals versus significance tests. The last presents the p...

  4. QTc interval prolongation in children with Turner syndrome: the results of exercise testing and 24-h ECG.

    Science.gov (United States)

    Dalla Pozza, Robert; Bechtold, Susanne; Urschel, Simon; Netz, Heinrich; Schwarz, Hans-Peter

    2009-01-01

    Turner syndrome (TS) is the most common sex chromosome abnormality in females. Recently, a prolongation of the rate-corrected QT (QTc) interval in the electrocardiogram (ECG) of TS patients has been reported. A prolonged QTc interval has been correlated with an increased risk of sudden cardiac death, and medical treatment is warranted in patients with congenital long QT syndrome (LQTS). Additionally, several drugs in common use are contraindicated in LQTS because of their effects on myocardial repolarization. The importance of QTc prolongation in TS patients is not known at present. Eighteen TS patients with a prolonged QTc interval (group 1) and 11 TS patients with a normal QTc interval (group 2) (mean age 12.6+/-3.1 vs. 11.8+/-2.1 years, respectively) were tested. The QTc interval was calculated during exercise testing and during 24-h ECG recordings. None of the patients experienced adverse cardiac events during the tests. The mean QTc interval decreased from 0.467 to 0.432 s in group 1 and from 0.432 to 0.412 s in group 2. During the 24-h ECG, the maximum QTc interval was significantly prolonged in group 1 (0.51 vs. 0.465 s, p < ...) ... information about the cardiac risk in the single TS patient with a prolonged QTc interval. This helps in counseling these girls, as clear therapeutic guidelines are currently lacking.

  5. Exact nonparametric confidence bands for the survivor function.

    Science.gov (United States)

    Matthews, David

    2013-10-12

    A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.

  6. Calculation of solar irradiation prediction intervals combining volatility and kernel density estimates

    International Nuclear Information System (INIS)

    Trapero, Juan R.

    2016-01-01

    In order to integrate solar energy into the grid, it is important to predict solar radiation accurately, since forecast errors can lead to significant costs. The growing number of statistical approaches to this problem has recently yielded a prolific literature. In general terms, the main research discussion centres on selecting the “best” forecasting technique in terms of accuracy. However, users of such forecasts require, apart from point forecasts, information about the variability of the forecast in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models, and combinations of the two in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate the methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
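
    As a rough illustration of combining a volatility model with a density estimate of the forecast errors, the sketch below tracks error variance with an EWMA recursion (a simple stand-in for the GARCH models compared in the paper) and takes interval quantiles from a Gaussian-kernel smoothing of the standardized errors. The function name and all parameter choices (lam=0.94, Silverman bandwidth) are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def prediction_interval(point_forecast, past_errors, cl=0.90, lam=0.94):
        """Prediction interval for a point forecast, built from past forecast errors:
        an EWMA volatility estimate scales quantiles drawn from a kernel density
        estimate of the standardized errors."""
        e = np.asarray(past_errors, float)
        # EWMA variance of the errors (RiskMetrics-style recursion)
        var = e[0] ** 2
        for x in e[1:]:
            var = lam * var + (1 - lam) * x ** 2
        sigma = np.sqrt(var)
        z = e / np.std(e)                       # standardized errors
        h = 1.06 * np.std(z) * len(z) ** -0.2   # Silverman's rule-of-thumb bandwidth
        # Sampling from a Gaussian KDE = resampling the data plus Gaussian noise
        rng = np.random.default_rng(0)
        draws = rng.choice(z, size=20000) + rng.normal(0.0, h, size=20000)
        q_lo, q_hi = np.quantile(draws, [(1 - cl) / 2, 1 - (1 - cl) / 2])
        return point_forecast + sigma * q_lo, point_forecast + sigma * q_hi
    ```

    The kernel step lets the interval follow a skewed or heavy-tailed error distribution, while the EWMA step widens or narrows it as recent volatility changes, which is the kind of combination the paper evaluates.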

  7. Determination and Interpretation of Characteristic Limits for Radioactivity Measurements: Decision Threshold, Detection Limit and Limits of the Confidence Interval

    International Nuclear Information System (INIS)

    2017-01-01

    Since 2004, the environment programme of the IAEA has included activities aimed at developing a set of procedures for analytical measurements of radionuclides in food and the environment. Reliable, comparable and fit for purpose results are essential for any analytical measurement. Guidelines and national and international standards for laboratory practices to fulfil quality assurance requirements are extremely important when performing such measurements. The guidelines and standards should be comprehensive, clearly formulated and readily available to both the analyst and the customer. ISO 11929:2010 is the international standard on the determination of the characteristic limits (decision threshold, detection limit and limits of the confidence interval) for measuring ionizing radiation. For nuclear analytical laboratories involved in the measurement of radioactivity in food and the environment, robust determination of the characteristic limits of radioanalytical techniques is essential with regard to national and international regulations on permitted levels of radioactivity. However, characteristic limits defined in ISO 11929:2010 are complex, and the correct application of the standard in laboratories requires a full understanding of various concepts. This publication provides additional information to Member States in the understanding of the terminology, definitions and concepts in ISO 11929:2010, thus facilitating its implementation in Member State laboratories.

  8. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    Science.gov (United States)

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for constructing CIs for the CCC, but a comprehensive comparison of them has not been attempted. The methods comprise the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (a t-distribution with 5 degrees of freedom), to compare these state-of-the-art methods for assigning a CI to the CCC. When the data were normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and its closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided coverage probabilities closest to nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based on the delta method and the Bayesian method with a conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrate the utility of CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
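
    The JZ method favoured in this comparison can be sketched as follows: compute Lin's CCC, jackknife it on Fisher's Z scale, and back-transform a normal-approximation interval. This is a minimal illustration (two raters, fixed 95% level with z = 1.96 instead of a t quantile); the paper's exact implementation may differ.

    ```python
    import numpy as np

    def ccc(x, y):
        """Lin's concordance correlation coefficient for two raters."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    def ccc_jackknife_z_ci(x, y):
        """Approximate 95% CI for the CCC: jackknife pseudo-values on Fisher's
        Z scale (the 'JZ' approach), then back-transform with tanh."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x)
        z_full = np.arctanh(ccc(x, y))
        idx = np.arange(n)
        z_loo = np.array([np.arctanh(ccc(x[idx != i], y[idx != i]))
                          for i in range(n)])          # leave-one-out estimates
        pseudo = n * z_full - (n - 1) * z_loo          # jackknife pseudo-values
        z_bar = pseudo.mean()
        se = pseudo.std(ddof=1) / np.sqrt(n)
        return np.tanh(z_bar - 1.96 * se), np.tanh(z_bar + 1.96 * se)
    ```

    The Z-transformation keeps the interval inside [-1, 1] and stabilises the variance near high agreement, which is where the plain jackknife tends to misbehave.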

  9. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  10. The integrated model of sport confidence: a canonical correlation and mediational analysis.

    Science.gov (United States)

    Koehn, Stefan; Pearce, Alan J; Morris, Tony

    2013-12-01

    The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.

  11. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  12. Optimization of Allowed Outage Time and Surveillance Test Intervals

    Energy Technology Data Exchange (ETDEWEB)

    Al-Dheeb, Mujahed; Kang, Sunkoo; Kim, Jonghyun [KEPCO international nuclear graduate school, Ulsan (Korea, Republic of)

    2015-10-15

    The primary purpose of surveillance testing is to assure that the components of standby safety systems will be operable when they are needed in an accident. By testing these components, failures can be detected that may have occurred since the last test or the time when the equipment was last known to be operational. The probability a system or system component performs a specified function or mission under given conditions at a prescribed time is called availability (A). Unavailability (U) as a risk measure is just the complementary probability to A(t). The increase of U means the risk is increased as well. D and T have an important impact on components, or systems, unavailability. The extension of D impacts the maintenance duration distributions for at-power operations, making them longer. This, in turn, increases the unavailability due to maintenance in the systems analysis. As for T, overly-frequent surveillances can result in high system unavailability. This is because the system may be taken out of service often due to the surveillance itself and due to the repair of test-caused failures of the component. The test-caused failures include those incurred by wear and tear of the component due to the surveillances. On the other hand, as the surveillance interval increases, the component's unavailability will grow because of increased occurrences of time-dependent random failures. In that situation, the component cannot be relied upon, and accordingly the system unavailability will increase. Thus, there should be an optimal component surveillance interval in terms of the corresponding system availability. This paper aims at finding the optimal T and D which result in minimum unavailability which in turn reduces the risk. Applying the methodology in section 2 to find the values of optimal T and D for two components, i.e., safety injection pump (SIP) and turbine driven aux feedwater pump (TDAFP). Section 4 is addressing interaction between D and T. In general

  13. Optimization of Allowed Outage Time and Surveillance Test Intervals

    International Nuclear Information System (INIS)

    Al-Dheeb, Mujahed; Kang, Sunkoo; Kim, Jonghyun

    2015-01-01

    The primary purpose of surveillance testing is to assure that the components of standby safety systems will be operable when they are needed in an accident. By testing these components, failures can be detected that may have occurred since the last test or since the time when the equipment was last known to be operational. The probability that a system or component performs a specified function or mission under given conditions at a prescribed time is called availability (A). Unavailability (U), as a risk measure, is simply the complementary probability to A(t); an increase in U means an increase in risk. The allowed outage time (D) and the surveillance test interval (T) have an important impact on component and system unavailability. Extending D lengthens the maintenance duration distributions for at-power operations, which in turn increases the unavailability due to maintenance in the systems analysis. As for T, overly frequent surveillances can result in high system unavailability, because the system may often be taken out of service for the surveillance itself and for the repair of test-caused failures of the component, including failures incurred by wear and tear due to the surveillances. On the other hand, as the surveillance interval increases, the component's unavailability grows because of increased occurrences of time-dependent random failures; the component can then not be relied upon, and system unavailability increases accordingly. Thus, there should be an optimal surveillance interval in terms of the corresponding system availability. This paper aims at finding the optimal T and D that result in minimum unavailability, which in turn reduces the risk. The methodology of Section 2 is applied to find the optimal values of T and D for two components, the safety injection pump (SIP) and the turbine-driven auxiliary feedwater pump (TDAFP). Section 4 addresses the interaction between D and T. In general
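
    The trade-off described above, undetected random failures growing with T versus test-caused downtime shrinking with T, is often captured by a textbook single-component model u(T) = λT/2 + τ/T, minimised at T* = sqrt(2τ/λ). This is a deliberately simplified sketch of the idea, not the paper's full optimisation over both D and T:

    ```python
    import numpy as np

    def mean_unavailability(T, lam, tau):
        """Time-averaged unavailability over a test interval T (hours):
        lam*T/2 from undetected random failures (failure rate lam per hour)
        plus tau/T from downtime tau hours per test."""
        return lam * T / 2.0 + tau / T

    def optimal_interval(lam, tau):
        """Analytic minimiser of the model above: T* = sqrt(2*tau/lam)."""
        return np.sqrt(2.0 * tau / lam)
    ```

    With, say, λ = 1e-5/h and τ = 4 h of test downtime, T* is about 894 hours; shorter intervals waste availability on testing, longer ones on undetected failures. The paper's optimisation layers the allowed outage time D and plant-level PSA effects on top of this basic shape.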

  14. Exploration of analysis methods for diagnostic imaging tests: problems with ROC AUC and confidence scores in CT colonography.

    Science.gov (United States)

    Mallett, Susan; Halligan, Steve; Collins, Gary S; Altman, Doug G

    2014-01-01

    Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity versus area under the Receiver Operating Characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer-assisted detection. In a multireader, multicase study of 10 readers and 107 cases, we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to the statistical methods. Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were several problems with using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal, non-normal distribution of scores; in fitting ROC curves, due to extrapolation beyond the study data; and in the undue influence of a few false-positive results. Variation due to the use of different ROC methods exceeded the differences between test results for ROC AUC. The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering those methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity to be a more reliable and clinically appropriate way to compare the diagnostic tests.
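
    For concreteness, the two metrics contrasted in this study can be computed from the same confidence scores: sensitivity/specificity by dichotomising at a reporting threshold, and ROC AUC via its Mann-Whitney interpretation (the probability that a random diseased case outscores a random non-diseased one). A hypothetical sketch, with function names of our choosing:

    ```python
    import numpy as np

    def sens_spec(scores, labels, threshold):
        """Sensitivity and specificity from confidence scores dichotomised
        at a threshold (labels: 1 = polyp present, 0 = absent)."""
        s, y = np.asarray(scores, float), np.asarray(labels, bool)
        pred = s >= threshold
        sens = np.mean(pred[y])      # fraction of positives called positive
        spec = np.mean(~pred[~y])    # fraction of negatives called negative
        return sens, spec

    def roc_auc(scores, labels):
        """ROC AUC via the Mann-Whitney U statistic (ties count 1/2)."""
        s, y = np.asarray(scores, float), np.asarray(labels, bool)
        pos, neg = s[y], s[~y]
        greater = (pos[:, None] > neg[None, :]).mean()
        ties = (pos[:, None] == neg[None, :]).mean()
        return greater + 0.5 * ties
    ```

    The contrast is visible here: the threshold version uses only the reported decision, while AUC depends on the full score distribution, so piled-up zero scores and a few extreme false positives can move AUC without changing sensitivity or specificity at the clinical threshold.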

  15. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: they do if and only if all the points fall into the corresponding intervals. The powers of several normal-probability-plot-based (graphical) tests and of the most popular nongraphical tests, Anderson-Darling and Shapiro-Wilk, are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in which circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
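
    The idea of augmenting the plot with intervals can be approximated by Monte Carlo: simulate many standard-normal samples of size n and take per-position quantiles of the sorted values. Note this sketch gives pointwise intervals for each order statistic; the paper's bands hold simultaneously, which requires a wider calibration (e.g. over the maximum deviation across positions):

    ```python
    import numpy as np

    def qq_bands(n, alpha=0.05, nsim=5000, seed=0):
        """Monte Carlo pointwise bands for a standard-normal QQ plot of a
        sample of size n: quantiles of each order statistic under normality."""
        rng = np.random.default_rng(seed)
        sims = np.sort(rng.standard_normal((nsim, n)), axis=1)  # sorted null samples
        lower = np.quantile(sims, alpha / 2, axis=0)
        upper = np.quantile(sims, 1 - alpha / 2, axis=0)
        return lower, upper
    ```

    To use the bands, standardize and sort the observed sample and check whether every point lies inside its interval; with the pointwise version, some excursions are expected by chance even under normality, which is exactly why the paper calibrates the bands simultaneously.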

  16. A Comparative Risk Assessment of Extended Integrated Leak Rate Testing Intervals

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Ji Yong; Hwang, Seok Won; Lee, Byung Sik [Korea Hydro and Nuclear Power Co., Daejeon (Korea, Republic of)

    2009-10-15

    This paper presents the risk impacts of extending the Integrated Leak Rate Testing (ILRT) interval (from five years to ten years) of Yonggwang (YGN) Units 1 and 2. These risk impacts depend on the annual variation of meteorological data and the resident population. The main comparisons were performed between the initial risk assessment (2005), carried out for the purpose of extending the ILRT interval, and a risk reassessment (2009) in which changed plant internal configurations (core inventory and radioisotope release fractions) and external alterations (wind directions, rainfall and population distributions) were monitored. The reassessment showed an imperceptible risk increase from extending the ILRT interval compared to the initial risk assessment. In addition, the increased value of the Large Early Release Frequency (LERF) also satisfied the acceptance guideline proposed in Reg. Guide 1.174. The MACCS II code was used for the offsite consequence analysis. The Probabilistic Population Dose (PPD), considering early effects within 80 km, was used as the primary risk index. The Probabilistic Safety Assessment (PSA) of YGN 1 and 2 was applied to evaluate the accident frequency of each source term category; the PSA scope was limited to internal events.

  17. Five-Year Risk of Interval-Invasive Second Breast Cancer

    Science.gov (United States)

    Buist, Diana S. M.; Houssami, Nehmat; Dowling, Emily C.; Halpern, Elkan F.; Gazelle, G. Scott; Lehman, Constance D.; Henderson, Louise M.; Hubbard, Rebecca A.

    2015-01-01

    Background: Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. Methods: We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. Results: The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. Conclusions: PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. PMID:25904721

  18. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    Science.gov (United States)

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment, and their effects on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence in cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  19. Sex differences in confidence influence patterns of conformity.

    Science.gov (United States)

    Cross, Catharine P; Brown, Gillian R; Morgan, Thomas J H; Laland, Kevin N

    2017-11-01

    Lack of confidence in one's own ability can increase the likelihood of relying on social information. Sex differences in confidence have been extensively investigated in cognitive tasks, but the implications for conformity have not been directly tested. Here, we tested the hypothesis that, in a task that shows sex differences in confidence, an indirect effect of sex on social information use will also be evident. Participants (N = 168) were administered a mental rotation (MR) task or a letter transformation (LT) task. After providing an answer, participants reported their confidence before seeing the responses of demonstrators and being allowed to change their initial answer. In the MR, but not the LT, task, women showed lower levels of confidence than men, and confidence mediated an indirect effect of sex on the likelihood of switching answers. These results provide novel experimental evidence that confidence is a general explanatory mechanism underpinning susceptibility to social influences. Our results have implications for the interpretation of the wider literature on sex differences in conformity. © 2016 The British Psychological Society.

  20. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    Science.gov (United States)

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
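The bootstrap SNR-CI procedure described above can be sketched in a few lines. The SNR definition (post-stimulus RMS over baseline RMS), the window boundaries, and the exclusion criterion of 1.0 are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one subject's data: 60 trials x 500 samples,
# an evoked "bump" plus trial noise (not real ERP data).
n_trials, n_samples = 60, 500
t = np.arange(n_samples)
evoked = 2.0 * np.exp(-0.5 * ((t - 250) / 30) ** 2)
trials = evoked + rng.normal(0, 5, (n_trials, n_samples))

def snr(subset):
    """SNR of the averaged waveform: RMS in an assumed post-stimulus
    window over RMS in an assumed pre-stimulus baseline."""
    erp = subset.mean(axis=0)
    sig = np.sqrt(np.mean(erp[200:300] ** 2))
    noise = np.sqrt(np.mean(erp[:100] ** 2))
    return sig / noise

# Bootstrap: resample trials with replacement and recompute the SNR.
boot = np.array([snr(trials[rng.integers(0, n_trials, n_trials)])
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
snr_lb = lo  # SNRLB: the lower bound used as the quality statistic

# Exclusion rule sketch: drop the subject if SNRLB misses the criterion.
keep_subject = snr_lb >= 1.0
print(f"SNR 95% CI [{lo:.2f}, {hi:.2f}], SNRLB = {snr_lb:.2f}")
```

Using the lower bound rather than the point estimate is what makes the criterion conservative: a subject is retained only when the waveform statistically exceeds the chosen signal-to-noise level.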

  1. Prolonged corrected QT interval is predictive of future stroke events even in subjects without ECG-diagnosed left ventricular hypertrophy.

    Science.gov (United States)

    Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi

    2015-03-01

    We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population. © 2014 American Heart Association, Inc.

  2. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing the equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the two other commonly used test procedures in contingency table analysis. We also demonstrate that both interval estimators based on the WLS method and interval estimators based on the Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
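The abstract does not reproduce the WLS or Mantel-Haenszel estimators, but the general shape of an interval estimate for a ratio of Poisson mean frequencies can be illustrated with the standard large-sample interval on the log scale (a textbook method, not the authors' procedure; the counts below are hypothetical):

```python
import math
from statistics import NormalDist

def rate_ratio_ci(x1, t1, x2, t2, conf=0.95):
    """Large-sample CI for the ratio of two Poisson mean frequencies,
    built on the log scale: log(r) +/- z * sqrt(1/x1 + 1/x2).
    x1, x2 are event counts; t1, t2 are exposures (e.g. patient-periods)."""
    if min(x1, x2) == 0:
        raise ValueError("log-scale interval needs nonzero counts")
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    ratio = (x1 / t1) / (x2 / t2)
    se = math.sqrt(1 / x1 + 1 / x2)
    return ratio, ratio * math.exp(-z * se), ratio * math.exp(z * se)

# Hypothetical counts: 30 exacerbations over 40 patient-periods on
# treatment versus 45 over 40 on placebo.
r, lo_r, hi_r = rate_ratio_ci(30, 40, 45, 40)
print(f"rate ratio {r:.3f}, 95% CI ({lo_r:.3f}, {hi_r:.3f})")
```

Because the interval here contains 1, these hypothetical counts would not show a significant difference in mean frequency between the two arms.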

  3. CIMP status of interval colon cancers: another piece to the puzzle.

    Science.gov (United States)

    Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma

    2010-05-01

    Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002), and to have MSI (29% vs. 11%, P=0.004). In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1

  4. The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis.

    Science.gov (United States)

    Wixted, John T; Wells, Gary L

    2017-05-01

    The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).

  5. Predictor sort sampling and one-sided confidence bounds on quantiles

    Science.gov (United States)

    Steve Verrill; Victoria L. Herian; David W. Green

    2002-01-01

    Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random...

  6. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

    Science.gov (United States)

    Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

    2011-10-01

    Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. One possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is passive microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized, rectangular waveguide antenna operating within the 1-4 GHz range and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration for five microwave receivers, (2) the measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, defined as the difference between the temperature estimated by this system and the temperature measured by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not satisfactory for clinical application, which requires both precision and accuracy better than 1°C at a depth of 5 cm. Since a couple of possible causes of this inaccuracy have been identified, we believe that the system can take a step closer to the clinical application of MWR for hypothermic rescue treatment.

  7. Ambulatory Function and Perception of Confidence in Persons with Stroke with a Custom-Made Hinged versus a Standard Ankle Foot Orthosis

    Directory of Open Access Journals (Sweden)

    Angélique Slijper

    2012-01-01

    Objective. The aim was to compare walking with an individually designed dynamic hinged ankle foot orthosis (DAFO) and a standard carbon composite ankle foot orthosis (C-AFO). Methods. Twelve participants, mean age 56 years (range 26–72), with hemiparesis due to stroke were included in the study. During the six-minute walk test (6MW), walking velocity, the Physiological Cost Index (PCI), and the degree of experienced exertion were measured with a DAFO and a C-AFO, respectively; this was followed by a Stairs Test in which velocity and perceived confidence were rated. Results. The mean differences in favor of the DAFO were 24.3 m in the 6MW (95% confidence interval [CI] 4.90, 43.76), −0.09 beats/m in PCI (95% CI −0.27, 0.95), 0.04 m/s in velocity (95% CI −0.01, 0.097), and −11.8 s in the Stairs Test (95% CI −19.05, −4.48). All participants except one rated their experienced exertion lower and felt more confident when walking with the DAFO. Conclusions. Wearing a DAFO resulted in longer walking distance and faster stair climbing compared to walking with a C-AFO. Eleven of twelve participants felt more confident with the DAFO, which may be more important than speed and distance and the most important reason for prescribing an AFO.

  8. Five-year risk of interval-invasive second breast cancer.

    Science.gov (United States)

    Lee, Janie M; Buist, Diana S M; Houssami, Nehmat; Dowling, Emily C; Halpern, Elkan F; Gazelle, G Scott; Lehman, Constance D; Henderson, Louise M; Hubbard, Rebecca A

    2015-07-01

    Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. © The Author 2015. Published by Oxford University Press. All rights reserved.

  9. Low utilization of HIV testing during pregnancy: What are the barriers to HIV testing for women in rural India?

    Science.gov (United States)

    Sinha, Gita; Dyalchand, Ashok; Khale, Manisha; Kulkarni, Gopal; Vasudevan, Shubha; Bollinger, Robert C

    2008-02-01

    Sixty percent of India's HIV cases occur in rural residents. Despite government policy to expand antenatal HIV screening and prevention of maternal-to-child transmission (PMTCT), little is known about HIV testing among rural women during pregnancy. Between January and March 2006, a cross-sectional sample of 400 recently pregnant women from rural Maharashtra was administered a questionnaire regarding HIV awareness, risk, and history of antenatal HIV testing. Thirteen women (3.3%) reported receiving antenatal HIV testing. Neither antenatal care utilization nor history of sexually transmitted infection (STI) symptoms influenced the odds of receiving HIV testing. Women who did not receive HIV testing, compared with women who did, were 95% less likely to have received antenatal HIV counseling (odds ratio = 0.05, 95% confidence interval: 0.02 to 0.17) and 80% less likely to be aware of an existing HIV testing facility (odds ratio = 0.19, 95% confidence interval: 0.04 to 0.75). Despite measurable HIV prevalence, high antenatal care utilization, and STI symptom history, recently pregnant rural Indian women report low HIV testing. Barriers to HIV testing during pregnancy include lack of discussion by antenatal care providers and lack of awareness of existing testing services. Provider-initiated HIV counseling and testing during pregnancy would optimize HIV prevention for women throughout rural India.

  10. Evaluation of the Trail Making Test and interval timing as measures of cognition in healthy adults: comparisons by age, education, and gender.

    Science.gov (United States)

    Płotek, Włodzimierz; Łyskawa, Wojciech; Kluzik, Anna; Grześkowiak, Małgorzata; Podlewski, Roland; Żaba, Zbigniew; Drobnik, Leon

    2014-02-03

    Human cognitive functioning can be assessed using different methods of testing. Age, level of education, and gender may influence the results of cognitive tests. The well-known Trail Making Test (TMT), which is often used to measure frontal lobe function, was compared with an experimental test of Interval Timing (IT). The methods used in IT included reproduction of auditory and visual stimuli, with the subsequent production of time intervals of 1-, 2-, 5-, and 7-second durations with no pattern. Subjects were 64 healthy adult volunteers aged 18-63 (33 women, 31 men). Comparisons were made based on age, education, and gender. The TMT was performed quickly and was influenced by age, education, and gender. All reproduced visual and produced intervals were shortened, and the reproduction of auditory stimuli was more complex. Age, education, and gender had a more pronounced impact on the cognitive test than on the interval timing test. The reproduction of short auditory stimuli was more accurate in comparison to the other modalities used in the IT test. Interval timing, when compared to the TMT, offers an interesting possibility for testing. Further studies are necessary to confirm these initial observations.

  11. The Effect of Retention Interval Task Difficulty on Young Children's Prospective Memory: Testing the Intention Monitoring Hypothesis

    Science.gov (United States)

    Mahy, Caitlin E. V.; Moses, Louis J.

    2015-01-01

    The current study examined the impact of retention interval task difficulty on 4- and 5-year-olds' prospective memory (PM) to test the hypothesis that children periodically monitor their intentions during the retention interval and that disrupting this monitoring may result in poorer PM performance. In addition, relations among PM, working memory,…

  12. Predictors of Willingness to Read in English: Testing a Model Based on Possible Selves and Self-Confidence

    Science.gov (United States)

    Khajavy, Gholam Hassan; Ghonsooly, Behzad

    2017-01-01

    The aim of the present study is twofold. First, it tests a model of willingness to read (WTR) based on L2 motivation and communication confidence (communication anxiety and perceived communicative competence). Second, it applies the recent theory of L2 motivation proposed by Dörnyei [2005. "The Psychology of Language Learner: Individual…

  13. Confidence-Based Learning in Investment Analysis

    Science.gov (United States)

    Serradell-Lopez, Enric; Lara-Navarra, Pablo; Castillo-Merino, David; González-González, Inés

    The aim of this study is to determine the effectiveness of using multiple-choice tests in subjects related to administration and business management. To this end we used a multiple-choice test with specific questions to verify the extent of knowledge gained and the confidence and trust in the answers. The tests were administered to a group of 200 students in the bachelor's degree programme in Business Administration and Management. The analysis was carried out in one subject within the scope of investment analysis and measured the level of knowledge gained and the degree of trust and security in the responses at two different times during the course. The measurements took into account the different levels of difficulty of the questions asked and the time students spent completing the test. The results confirm that students are generally able to gain knowledge along the way and show increases in the degree of trust and confidence in their answers. They also confirm that the difficulty level of the questions, set a priori by those responsible for the subjects, is related to the levels of security and confidence in the answers. The improvement in the skills learned is viewed favourably by businesses and is especially important for the job placement of students.

  14. Reliability of a Computerized Neurocognitive Test in Baseline Concussion Testing of High School Athletes.

    Science.gov (United States)

    MacDonald, James; Duerson, Drew

    2015-07-01

    Baseline assessments using computerized neurocognitive tests are frequently used in the management of sport-related concussions. Such testing is often done on an annual basis in a community setting. Reliability is a fundamental test characteristic that should be established for such tests. Our study examined the test-retest reliability of a computerized neurocognitive test in high school athletes over 1 year. Repeated measures design. Two American high schools. High school athletes (N = 117) participating in American football or soccer during the 2011-2012 and 2012-2013 academic years. All study participants completed 2 baseline computerized neurocognitive tests taken 1 year apart at their respective schools. The test measures performance on 4 cognitive tasks: identification speed (Attention), detection speed (Processing Speed), one card learning accuracy (Learning), and one back speed (Working Memory). Reliability was assessed by measuring the intraclass correlation coefficient (ICC) between the repeated measures of the 4 cognitive tasks. Pearson and Spearman correlation coefficients were calculated as a secondary outcome measure. The measure for identification speed performed best (ICC = 0.672; 95% confidence interval, 0.559-0.760) and the measure for one card learning accuracy performed worst (ICC = 0.401; 95% confidence interval, 0.237-0.542). All tests had marginal or low reliability. In a population of high school athletes, computerized neurocognitive testing performed in a community setting demonstrated low to marginal test-retest reliability on baseline assessments 1 year apart. Further investigation should focus on (1) improving the reliability of individual tasks tested, (2) controlling for external factors that might affect test performance, and (3) identifying the ideal time interval to repeat baseline testing in high school athletes. Computerized neurocognitive tests are used frequently in high school athletes, often within a model of baseline testing
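The ICC values reported here can be computed directly from a subjects-by-sessions score matrix. The sketch below implements the two-way random-effects, absolute-agreement, single-measure ICC(2,1) of Shrout and Fleiss; the abstract does not state which ICC form the authors used, so that choice, like the simulated data, is an assumption:

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1) of Shrout & Fleiss: two-way random effects, absolute
    agreement, single measure. scores: (n_subjects, k_sessions) array."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-session means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Demo: simulated test-retest scores for 50 athletes over two sessions,
# with a stable subject effect plus session noise.
rng = np.random.default_rng(1)
subject_effect = rng.normal(0, 1, 50)
scores = subject_effect[:, None] + rng.normal(0, 0.3, (50, 2))
icc = icc_2_1(scores)
print(f"ICC(2,1) = {icc:.3f}")
```

With a large stable subject effect relative to the session noise, the estimated ICC approaches 1; shrinking the subject effect or inflating the noise pushes it toward the "low to marginal" range the study reports.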

  15. Unexplained Graft Dysfunction after Heart Transplantation—Role of Novel Molecular Expression Test Score and QTc-Interval: A Case Report

    Directory of Open Access Journals (Sweden)

    Khurram Shahzad

    2010-01-01

    In the current era of immunosuppressive medications, there is an increased observed incidence of graft dysfunction in the absence of known histological criteria of rejection after heart transplantation. A noninvasive molecular expression diagnostic test was developed and validated to rule out histological acute cellular rejection. In this paper we present, for the first time, the longitudinal pattern of changes in this novel diagnostic test score, along with the QTc-interval, in a patient who was admitted with unexplained graft dysfunction. The patient presented with graft failure and negative findings on all known criteria of rejection, including acute cellular rejection, antibody-mediated rejection and cardiac allograft vasculopathy. The molecular expression test score showed a gradual increase, and the QTc-interval showed gradual prolongation, with the gradual decline in graft function. This paper exemplifies that, in patients presenting with unexplained graft dysfunction, the GEP test score and QTc-interval correlate with changes in graft function.

  16. Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.

    Science.gov (United States)

    Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W

    2016-02-26

    Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs, and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
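The non-parametric approach with bootstrapped 90% confidence intervals on the reference limits, as used in this study, can be sketched as follows; the sodium values are simulated stand-ins for the 66 animals, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated stand-in for 66 serum sodium values (mmol/L); not study data.
sodium = rng.normal(147, 9, 66)

# Non-parametric reference interval: central 95% of observed values.
ri_low, ri_high = np.percentile(sodium, [2.5, 97.5])

# Non-parametric bootstrap: 90% CI for each reference limit, obtained by
# resampling the 66 values with replacement and recomputing the limits.
boots = np.array([np.percentile(sodium[rng.integers(0, 66, 66)], [2.5, 97.5])
                  for _ in range(5000)])
ci_low = np.percentile(boots[:, 0], [5, 95])    # 90% CI of the lower limit
ci_high = np.percentile(boots[:, 1], [5, 95])   # 90% CI of the upper limit
print(f"RI: {ri_low:.1f} - {ri_high:.1f} mmol/L "
      f"(lower-limit 90% CI {ci_low[0]:.1f}-{ci_low[1]:.1f}, "
      f"upper-limit 90% CI {ci_high[0]:.1f}-{ci_high[1]:.1f})")
```

With only 66 reference individuals, the CIs on the limits are typically wide, which is why guidelines recommend reporting them alongside the interval itself.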

  17. Test interval optimization of safety systems of nuclear power plant using fuzzy-genetic approach

    International Nuclear Information System (INIS)

    Durga Rao, K.; Gopika, V.; Kushwaha, H.S.; Verma, A.K.; Srividya, A.

    2007-01-01

    Probabilistic safety assessment (PSA) is the most effective and efficient tool for safety and risk management in nuclear power plants (NPPs). PSA studies not only evaluate the risk/safety of systems but also produce results that are very useful in the safe, economical and effective design and operation of NPPs. The latter application is popularly known as 'risk-informed decision making'. Evaluation of technical specifications is one such important application of risk-informed decision making. Deciding the test interval (TI), one of the important technical specifications, within the given resources and risk effectiveness is an optimization problem. Uncertainty is inherently present in availability parameters such as failure rate and repair time due to the limitations in assessing these parameters precisely. This paper presents a solution to the test interval optimization problem with uncertain parameters in the model using a fuzzy-genetic approach, along with a case of application from a safety system of an Indian pressurized heavy water reactor (PHWR)

  18. Is consumer confidence an indicator of JSE performance?

    OpenAIRE

    Kamini Solanki; Yudhvir Seetharam

    2014-01-01

    While most studies examine the impact of business confidence on market performance, we instead focus on the consumer because consumer spending habits are a natural extension of trading activity on the equity market. This particular study examines investor sentiment as measured by the Consumer Confidence Index in South Africa and its effect on the Johannesburg Stock Exchange (JSE). We employ Granger causality tests to investigate the relationship across time between the Consumer Confidence Ind...

  19. Confidence Estimation of Reliability Indices of the System with Elements Duplication and Recovery

    Directory of Open Access Journals (Sweden)

    I. V. Pavlov

    2017-01-01

    The article considers the problem of estimating confidence intervals for the main reliability indices, such as availability rate, mean time between failures, and operative availability (in the stationary state), for the model of a system with duplication and independent recovery of elements. It presents a solution for a situation that often arises in practice, when the exact values of the reliability parameters of the elements are unknown and only reliability test data for the system or its individual parts (elements, subsystems) are available. It should be noted that confidence estimation of the reliability indices of complex systems based on the test results of their individual elements is a fairly common task in engineering practice when designing and running various engineering systems. The available papers consider this problem mainly for non-recoverable systems. The article describes a solution for the important particular case when the system elements are duplicated by reserve elements, and elements that fail in the course of system operation are recovered (regardless of the state of other elements). An approximate solution is obtained for the case of high reliability or "fast recovery" of elements, on the assumption that the average recovery time of an element is small compared to the average time between failures.

  20. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines.

    Directory of Open Access Journals (Sweden)

    Melissa B Gilkey

    To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children's vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents' mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58, 95% confidence interval [CI], 0.54-0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81, 95% CI, 0.76-0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53, 95% CI, 1.40-1.68), varicella (OR = 1.54, 95% CI, 1.42-1.66), and flu vaccines (OR = 1.32, 95% CI, 1.23-1.42). Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children.

  1. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

    Science.gov (United States)

    Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-01-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

  2. 40 CFR 1054.310 - How must I select engines for production-line testing?

    Science.gov (United States)

    2010-07-01

    ...% confidence intervals for a one-tail distribution. σ = Test sample standard deviation (see paragraph (c)(2) of this section). x = Mean of emission test results of the sample. STD = Emission standard (or family...)). (e) After each new test, recalculate the required sample size using the updated mean values, standard...

  3. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks

    Science.gov (United States)

    Hedne, Mikael R.; Norman, Elisabeth; Metcalfe, Janet

    2016-01-01

    The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the “warmth ratings” previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related. PMID:27630598

  4. Prognostic value of QTc interval dispersion changes during exercise testing in hypertensive men

    Directory of Open Access Journals (Sweden)

    Đorđević Dragan

    2008-01-01

    Full Text Available INTRODUCTION The prognostic significance of QTc dispersion changes during exercise testing (ET) in patients with left ventricular hypertrophy is not clear. OBJECTIVE The aim was to study the dynamics of QTc interval dispersion (QTcd) in patients with left ventricular hypertrophy (LVH) during exercise testing and its prognostic significance. METHOD The study included 55 men (aged 53 years) with hypertensive left ventricular hypertrophy and a negative ET (LVH group), 20 men (aged 58 years) with a positive ET (ILVH group) and 20 healthy men (aged 55 years). There was no statistically significant difference in the left ventricular mass index (LVMI) between the LVH group and the ILVH group (160.9±14.9 g/m2 vs. 152.8±22.7 g/m2). The first ECG was recorded before the ET and the second during the first minute of recovery, with calculation of QTc dispersion. The patients were followed for five years for new cardiovascular events. RESULTS During the ET, QTcd increased significantly in the LVH group (from 56.8±18.0 to 76.7±22.6 ms; p<0.001). A statistically significant correlation was found between the amount of ST segment depression at the end of the ET and QTc dispersion at the beginning and at the end of the ET (r=0.673 and r=0.698; p<0.01). QTc dispersion increased in 35 (63.6%) patients and decreased in 20 (36.4%) patients during the ET. Three patients (5.4%) in the first group had adverse cardiovascular events during the five-year follow-up. A multiple stepwise regression model was formed by including age, LVMI, QTc interval, QTc dispersion and change of QTc dispersion during the ET. QTc interval and QTc dispersion had no prognostic significance for adverse cardiovascular events during the five-year follow-up, but prognostic value was found for LVMI (coefficient β=0.480; p<0.001). CONCLUSION The increase of QTc interval dispersion is common in men with positive ET for myocardial ischemia and there is a correlation between QTc dispersion and

  5. Consumer’s and merchant’s confidence in internet payments

    Directory of Open Access Journals (Sweden)

    Franc Bračun

    2003-01-01

    Full Text Available Performing payment transactions over the Internet is becoming increasingly important. Whenever one interacts with others, one faces the problem of uncertainty, because interacting with others makes one vulnerable: one can be betrayed. Thus, perceived risk and confidence are of fundamental importance in electronic payment transactions. A higher risk leads to greater hesitance about entering into a business relationship with a high degree of uncertainty, and therefore to an increased need for confidence. This paper has two objectives. First, it aims to introduce and test a theoretical model that predicts consumer and merchant acceptance of an Internet payment solution by explaining the complex set of relationships among the key factors influencing confidence in electronic payment transactions. Second, the paper attempts to shed light on the complex interrelationship among confidence, control and perceived risk. An empirical study was conducted to test the proposed model using data from consumers and merchants in Slovenia. The results show how perceived risk dimensions and post-transaction control influence consumers' and merchants' confidence in electronic payment transactions, and the impact of confidence on the adoption of mass-market on-line payment solutions.

  6. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Ying; Zhou, Zhi; Botterud, Audun; Zhang, Kaifeng

    2018-01-01

    It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed-integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method not only helps to balance the economics and reliability of power system scheduling, but also helps to stabilize energy prices in electricity market operation.

  7. High-intensity cycle interval training improves cycling and running performance in triathletes.

    Science.gov (United States)

    Etxebarria, Naroa; Anson, Judith M; Pyne, David B; Ferguson, Richard A

    2014-01-01

    Effective cycle training for triathlon is a challenge for coaches. We compared the effects of two variants of cycle high-intensity interval training (HIT) on triathlon-specific cycling and running. Fourteen moderately-trained male triathletes (VO2peak 58.7 ± 8.1 mL kg(-1) min(-1); mean ± SD) completed on separate occasions a maximal incremental test (VO2peak and maximal aerobic power), 16 × 20 s cycle sprints and a 1-h triathlon-specific cycle followed immediately by a 5 km run time trial. Participants were then pair-matched and assigned randomly to either a long (LONG; 6-8 × 5 min efforts) or short (SHORT; 9-11 × 10, 20 and 40 s efforts) HIT cycle training intervention. Six training sessions were completed over 3 weeks before participants repeated the baseline testing. Both groups had an ∼7% increase in VO2peak (SHORT 7.3%, ±4.6%; mean, ±90% confidence limits; LONG 7.5%, ±1.7%). There was a moderate improvement in mean power for both the SHORT (10.3%, ±4.4%) and LONG (10.7%, ±6.8%) groups during the last eight 20-s sprints. There was a small to moderate decrease in heart rate, blood lactate and perceived exertion in both groups during the 1-h triathlon-specific cycling, but only the LONG group had a substantial decrease in the subsequent 5-km run time (64, ±59 s). Moderately-trained triathletes should use both short and long high-intensity intervals to improve cycling physiology and performance. Longer 5-min intervals on the bike are more likely to benefit 5 km running performance.

  8. Non-invasive prenatal cell-free fetal DNA testing for down syndrome and other chromosomal abnormalities

    Directory of Open Access Journals (Sweden)

    Darija Strah

    2015-12-01

    Full Text Available Background: Chorionic villus sampling and amniocentesis as definitive diagnostic procedures represent the gold standard for prenatal diagnosis of chromosomal abnormalities. The methods are invasive and lead to miscarriage and fetal loss in approximately 0.5–1 % of cases. Non-invasive prenatal DNA testing (NIPT) is based on the analysis of cell-free fetal DNA from maternal blood. It represents a highly accurate screening test for detecting the most common fetal chromosomal abnormalities. In our study we present the results of NIPT testing in the Diagnostic Center Strah, Slovenia, over the last 3 years. Methods: 123 pregnant women from the 11th to 18th week of pregnancy were included in the study. All of them had a first-trimester assessment of risk for trisomy 21 done before NIPT testing. Results: 5 of the 6 high-risk NIPT cases (including 3 cases of Down syndrome and 2 cases of Klinefelter's syndrome) were confirmed by fetal karyotyping. One case (Edwards syndrome) was a false positive. Patau syndrome, triple X syndrome and Turner syndrome were not observed in any of the cases. Furthermore, no false negative cases were reported. Overall, NIPT testing had 100 % sensitivity (95 % confidence interval: 46.29 %–100.00 %) and 98.95 % specificity (95 % confidence interval: 93.44 %–99.95 %). In determining Down syndrome alone, both specificity (95 % confidence interval: 95.25 %–100.00 %) and sensitivity (95 % confidence interval: 31.00 %–100.00 %) turned out to be 100 %. In 2015, the average turnaround time for analysis was 8.3 days from the day the sample was taken. Repeated blood sampling was required in 2 cases (redraw rate = 1.6 %). Conclusions: Our results confirm that NIPT represents a fast, safe and highly accurate advanced screening test for the most common chromosomal abnormalities. In current clinical practice, NIPT would significantly decrease the number of unnecessary invasive procedures and the rate of fetal
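Sensitivity and specificity with 95 % confidence limits, as reported above, are binomial proportions computed from the 2×2 table of test results against karyotype. A sketch using the Wilson score interval (the abstract does not state which CI method the authors used, and the counts below are illustrative):

```python
import math

def wilson_ci(x: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion x/n."""
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Illustrative confusion-matrix counts: tp, fp, tn, fn.
tp, fp, tn, fn = 5, 1, 94, 0
sensitivity = tp / (tp + fn)   # 1.0 when there are no false negatives
specificity = tn / (tn + fp)
print(sensitivity, wilson_ci(tp, tp + fn))
print(specificity, wilson_ci(tn, tn + fp))
```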

  9. Modeling Confidence and Response Time in Recognition Memory

    Science.gov (United States)

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

  10. 40 CFR 1045.310 - How must I select engines for production-line testing?

    Science.gov (United States)

    2010-07-01

    ... select and test one more engine. Then, calculate the required sample size for the model year as described.... It defines 95% confidence intervals for a one-tail distribution. σ = Test sample standard deviation (see paragraph (c)(2) of this section). x = Mean of emission test results of the sample. STD = Emission...

  11. Experimental congruence of interval scale production from paired comparisons and ranking for image evaluation

    Science.gov (United States)

    Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.

    2003-12-01

    Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment, in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and an Apple Cinema Display were used for stimulus presentation. Observers performed rank-order and paired-comparison tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging from 19 to 51 years of age, participated in this experiment. Using a paired-comparison model and a ranking model, scales were estimated for each display and image combination, yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired-comparison data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding the ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack of fit.
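The Bradley-Terry model used for the paired-comparison data can be fit with a simple maximum-likelihood (minorization-maximization) iteration; a minimal sketch with made-up win counts (not the study's data or its Mallows extension):

```python
def bradley_terry(wins, iters: int = 500):
    """Fit Bradley-Terry strengths by the standard MM algorithm.
    wins[i][j] = number of times item i was preferred over item j."""
    n = len(wins)
    p = [1.0 / n] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom)
        total = sum(new_p)
        p = [v / total for v in new_p]  # normalize so strengths sum to 1
    return p

# Three images compared pairwise by 10 observers each (made-up counts):
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
print(bradley_terry(wins))  # strengths ordered: item 0 > item 1 > item 2
```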

  12. How do regulators measure public confidence?

    International Nuclear Information System (INIS)

    Schmitt, A.; Besenyei, E.

    2006-01-01

    The conclusions and recommendations of this session can be summarized as follows. - There are some important elements of confidence: visibility, satisfaction, credibility and reputation. The latter can consist of trust, positive image and knowledge of the role the organisation plays. A good reputation is hard to achieve but easy to lose. - There is a need to define what public confidence is and what to measure. The difficulty is that confidence is a matter of perception of the public, so what we try to measure is that perception. - How to take the results of confidence measurement into account is controversial because of the influence of context. It is not an exact science; results should be examined cautiously and surveys should be conducted frequently, at least every two years. - Different experiences were described: - Quantitative surveys - among the general public or more specific groups like the media; - Qualitative research - with test groups and small panels; - Semi-quantitative studies - among stakeholders who have regular contacts with the regulatory body. It is not clear whether the results should be shared with the public or just with other authorities and governmental organisations. - Efforts are needed to increase visibility, which is a prerequisite for confidence. - A practical example was given of organizing an emergency exercise and an information campaign without taking into account the real concerns of the people, to show how public confidence can be decreased. - We learned about a new method - the so-called socio-drama - which addresses another issue also connected to confidence: the notion of understanding between stakeholders around a nuclear site. It is another way of looking at confidence in a more restricted group. (authors)

  13. Short-Term Wind Power Interval Forecasting Based on an EEMD-RT-RVM Model

    Directory of Open Access Journals (Sweden)

    Haixiang Zang

    2016-01-01

    Full Text Available Accurate short-term wind power forecasting is important for improving the security and economic operation of power grids. Existing wind power forecasting methods are mostly types of deterministic point forecasting. Deterministic point forecasting is vulnerable to forecasting errors and cannot effectively deal with the random nature of wind power. In order to solve these problems, we propose a short-term wind power interval forecasting model based on ensemble empirical mode decomposition (EEMD), runs test (RT), and relevance vector machine (RVM). First, in order to reduce the complexity of the data, the original wind power sequence is decomposed into a number of intrinsic mode function (IMF) components and a residual (RES) component using EEMD. Next, we use the RT method to reconstruct the components and obtain three new components characterized by the fine-to-coarse order. Finally, we obtain the overall forecasting results (with pre-established confidence levels) by superimposing the forecasting results of each new component. Our results show that, compared with existing methods, the proposed short-term interval forecasting method has smaller forecasting errors, narrower interval widths, and larger interval coverage percentages, making it more suitable than other forecasting methods for engineering applications in new energy.
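The evaluation criteria mentioned, interval coverage percentage and interval width, can be computed directly from a set of forecast intervals. A sketch of these generic metrics with toy data (not the EEMD-RT-RVM pipeline itself):

```python
def interval_metrics(actuals, lowers, uppers):
    """Prediction-interval coverage percentage (fraction of actual values
    falling inside their forecast interval) and mean interval width."""
    n = len(actuals)
    covered = sum(lo <= y <= up for y, lo, up in zip(actuals, lowers, uppers))
    picp = covered / n
    mean_width = sum(up - lo for lo, up in zip(lowers, uppers)) / n
    return picp, mean_width

# Toy wind-power series with nominally 90% forecast intervals:
actuals = [3.2, 4.1, 5.0, 4.4, 3.9]
lowers  = [2.8, 3.5, 4.0, 4.5, 3.0]
uppers  = [3.9, 4.6, 5.5, 5.2, 4.5]
print(interval_metrics(actuals, lowers, uppers))  # 4 of 5 points covered
```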

  14. Reference intervals for selected serum biochemistry analytes in cheetahs (Acinonyx jubatus)

    Directory of Open Access Journals (Sweden)

    Gavin C. Hudson-Lamb

    2016-02-01

    Full Text Available Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs, and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L – 166 mmol/L), potassium (3.9 mmol/L – 5.2 mmol/L), magnesium (0.8 mmol/L – 1.2 mmol/L), chloride (97 mmol/L – 130 mmol/L), urea (8.2 mmol/L – 25.1 mmol/L) and creatinine (88 µmol/L – 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
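The procedure described, non-parametric reference limits at the 2.5th and 97.5th percentiles with bootstrap confidence intervals around each limit, can be sketched as follows (synthetic data, not the cheetah measurements):

```python
import random

def percentile(data, q):
    """Percentile with linear interpolation (q in [0, 100])."""
    s = sorted(data)
    k = (len(s) - 1) * q / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

def reference_interval(data, n_boot=1000, seed=1):
    """Non-parametric 95% reference interval (2.5th-97.5th percentiles) with
    90% bootstrap confidence intervals around each reference limit."""
    rng = random.Random(seed)
    lo, hi = percentile(data, 2.5), percentile(data, 97.5)
    boot_lo, boot_hi = [], []
    for _ in range(n_boot):
        resample = rng.choices(data, k=len(data))  # sample with replacement
        boot_lo.append(percentile(resample, 2.5))
        boot_hi.append(percentile(resample, 97.5))
    lo_ci = (percentile(boot_lo, 5), percentile(boot_lo, 95))
    hi_ci = (percentile(boot_hi, 5), percentile(boot_hi, 95))
    return (lo, hi), lo_ci, hi_ci

# Synthetic sodium-like values for 66 animals (illustrative, not the study data):
rng = random.Random(0)
values = [rng.gauss(147, 9) for _ in range(66)]
limits, lo_ci, hi_ci = reference_interval(values)
print(limits, lo_ci, hi_ci)
```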

  15. Animal Spirits and Extreme Confidence: No Guts, No Glory?

    NARCIS (Netherlands)

    M.G. Douwens-Zonneveld (Mariska)

    2012-01-01

    textabstractThis study investigates to what extent extreme confidence of either management or security analysts may impact financial or operating performance. We construct a multidimensional degree of company confidence measure from a wide range of corporate decisions. We empirically test this

  16. Application of Interval Arithmetic in the Evaluation of Transfer Capabilities by Considering the Sources of Uncertainty

    Directory of Open Access Journals (Sweden)

    Prabha Umapathy

    2009-01-01

    Full Text Available Total transfer capability (TTC) is an important index in a power system with a large volume of inter-area power exchanges. This paper proposes a novel technique to determine the TTC and its confidence intervals in the system by considering the uncertainties in the load and line parameters. The optimal power flow (OPF) method is used to obtain the TTC. Variations in the load and line parameters are incorporated using the interval arithmetic (IA) method. The IEEE 30-bus test system is used to illustrate the proposed methodology. Various uncertainties in the line, the load, and both line and load are incorporated in the evaluation of total transfer capability. From the results, it is observed that the solutions obtained through the proposed method provide much wider information, in closed-interval form, which is useful in ensuring secure operation of the interconnected system in the presence of uncertainties in load and line parameters.
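Interval arithmetic propagates parameter uncertainty by operating on [lower, upper] bounds instead of point values, so the result of a computation is itself a closed interval. A minimal sketch of the idea (not the paper's OPF formulation):

```python
class Interval:
    """Closed interval [lo, hi] with basic interval arithmetic."""
    def __init__(self, lo: float, hi: float):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product's bounds are the min/max over all endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A load known to ±5% and a line parameter known to ±2%:
load = Interval(95.0, 105.0)
factor = Interval(0.98, 1.02)
print(load * factor)  # guaranteed bounds on the product
```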

  17. Interpregnancy interval and risk of autistic disorder.

    Science.gov (United States)

    Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

    2013-11-01

    A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with short interpregnancy intervals, the proportion of second-born children with autistic disorder was higher than the 0.13% observed in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.

  18. Oscillatory dynamics of an intravenous glucose tolerance test model with delay interval

    Science.gov (United States)

    Shi, Xiangyun; Kuang, Yang; Makroglou, Athena; Mokshagundam, Sriprakash; Li, Jiaxu

    2017-11-01

    Type 2 diabetes mellitus (T2DM) has become a prevalent pandemic disease in view of the modern life style. Both the diabetic population and health expenses are growing rapidly, according to the American Diabetes Association. Detecting the potential onset of T2DM is an essential focal point in the research of diabetes mellitus. The intravenous glucose tolerance test (IVGTT) is an effective protocol to determine insulin sensitivity, glucose effectiveness, and pancreatic β-cell functionality through the analysis and parameter estimation of a proper differential equation model. Delay differential equations have been used to study complex physiological phenomena, including glucose and insulin regulation. In this paper, we propose a novel approach to modeling the time delay in IVGTT models. This approach uses two parameters to simulate not only discrete time delay and time delay distributed over the whole past interval, but also time delay distributed over a past sub-interval. Normally, a larger time delay, either discrete or distributed, will destabilize the system. However, we find that time delay over a sub-interval might not. We present analytically some basic model properties, which are desirable biologically and mathematically. We show that this relatively simple model provides good fits to fluctuating patient data sets and reveals some intriguing dynamics. Moreover, our numerical simulation results indicate that our model may remove a defect in the well-known Minimal Model, which often overestimates the glucose effectiveness index.

  19. Comparison of Statistical Methods for Detector Testing Programs

    Energy Technology Data Exchange (ETDEWEB)

    Rennie, John Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Abhold, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-14

    A typical goal for any detector testing program is to ascertain not only the performance of the detector systems under test, but also the confidence that systems accepted using that testing program's acceptance criteria will exceed a minimum acceptable performance (usually expressed as the minimum acceptable success probability, p). A similar problem often arises in statistics, where we would like to ascertain the fraction, p, of a population of items that possess a property that may take one of two possible values. Typically, the problem is approached by drawing a fixed sample of size n, with the number of items out of n that possess the desired property, x, being termed successes. The sample proportion gives an estimate of the population proportion, p ≈ x/n, although usually it is desirable to accompany such an estimate with a statement concerning the range within which p may fall and the confidence associated with that range. Procedures for establishing such ranges and confidence limits are described in detail by Clopper, Brown, and Agresti for two-sided symmetric confidence intervals.
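The exact two-sided interval attributed to Clopper above can be computed from the binomial CDF by bisection; a minimal from-scratch sketch (an illustration of the method, not the testing program's actual tooling):

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def _bisect(f, target, increasing, iters=60):
    """Solve f(p) = target for monotone f on [0, 1] by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(x: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided (1 - alpha) CI for x successes in n trials."""
    lower = 0.0 if x == 0 else _bisect(
        lambda p: 1 - binom_cdf(x - 1, n, p), alpha / 2, increasing=True)
    upper = 1.0 if x == n else _bisect(
        lambda p: binom_cdf(x, n, p), alpha / 2, increasing=False)
    return lower, upper

# Illustrative acceptance-test result: 49 of 50 detectors pass.
print(clopper_pearson(49, 50))
```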

  20. One-sided t Test by SPSS

    Institute of Scientific and Technical Information of China (English)

    韩曦英

    2014-01-01

    Using the relationship between hypothesis testing and confidence intervals, this paper presents a method for drawing a one-sided t test conclusion from the confidence-interval output of SPSS. In addition, based on the symmetry of the t distribution, a method is given for drawing a one-sided t test conclusion from the p value of the two-tailed t test reported by SPSS. Both methods are illustrated with examples.
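The second method described, recovering a one-sided p value from the two-tailed output via the symmetry of the t distribution, amounts to halving the reported two-sided p when the t statistic lies in the hypothesized direction. A sketch with illustrative numbers (not from the article):

```python
def one_sided_p(t_stat: float, p_two_tailed: float, alternative: str = "greater") -> float:
    """Convert a two-tailed t test p value (as reported by SPSS) into a
    one-sided p value, using the symmetry of the t distribution."""
    in_direction = (t_stat > 0) if alternative == "greater" else (t_stat < 0)
    return p_two_tailed / 2 if in_direction else 1 - p_two_tailed / 2

# SPSS reports t = 2.31 with two-tailed p = 0.028 (illustrative values):
print(one_sided_p(2.31, 0.028, "greater"))  # half the two-tailed p
print(one_sided_p(2.31, 0.028, "less"))     # complement, close to 1
```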

  1. Study on the thyroid function of thoroughbred horses by means of 'in vitro' 125I-T3 modified and 125I-T4 tests

    International Nuclear Information System (INIS)

    Martin, B.W. de

    1975-01-01

    Sera of 71 animals, divided into groups of males and females, at rest and after activity, were studied. The method used to establish the percentage of 125 I-liothyronine retention in resin (the 125 I-T 3 or T 3 test) was modified by applying 0.2 ml of serum to the resin column after addition of the labelled hormone. This modification served to show that thoroughbred horses exhibit fourfold reduced serum binding of 125 I-liothyronine, indicating that these animals have four times more triiodothyronine binding sites in the serum, when compared with results obtained from human beings. The analysis of variance applied to the T 3 test showed no significant effect of activity at the 95% level. For the 71 animals, the author found an average 125 I-liothyronine resin retention of 50.30%, with a confidence interval for this group of 48.75% to 51.85% at a 95% confidence coefficient. Evaluating the results of the T 4 test by analysis of variance, the male and female groups at rest differed statistically from the groups after activity at a 95% confidence coefficient. The author grouped the T 4 test results of 32 horses at rest (18 males and 14 females), obtaining an average of 0.61 mcg T 4 /100 ml, with a confidence interval of 0.51 mcg to 0.71 mcg T 4 /100 ml at a 95% confidence coefficient. The 39 T 4 test results after activity (23 males and 16 females) gave an average of 2.01 mcg of thyroxine per 100 ml of serum, with a confidence interval of 1.72 mcg to 2.30 mcg T 4 /100 ml at a 95% confidence coefficient.

  2. Effects of an intensive clinical skills course on senior nursing students' self-confidence and clinical competence: A quasi-experimental post-test study.

    Science.gov (United States)

    Park, Soohyun

    2018-02-01

    To foster nursing professionals, nursing education requires the integration of knowledge and practice. Nursing students in their senior year experience considerable stress in performing the core nursing skills because, typically, they have limited opportunities to practice these skills in their clinical practicum. Therefore, nurse educators should revise the nursing curricula to focus on core nursing skills. To identify the effect of an intensive clinical skills course for senior nursing students on their self-confidence and clinical competence. A quasi-experimental post-test study. A university in South Korea during the 2015-2016 academic year. A convenience sample of 162 senior nursing students. The experimental group (n=79) underwent the intensive clinical skills course, whereas the control group (n=83) did not. During the course, students repeatedly practiced the 20 items that make up the core basic nursing skills using clinical scenarios. Participants' self-confidence in the core clinical nursing skills was measured using a 10-point scale, while their clinical competence with these skills was measured using the core clinical nursing skills checklist. Independent t-tests and chi-square tests were used to analyze the data. The mean scores in self-confidence and clinical competence were higher in the experimental group than in the control group. This intensive clinical skills course had a positive effect on senior nursing students' self-confidence and clinical competence for the core clinical nursing skills. This study emphasizes the importance of reeducation using a clinical skills course during the transition from student to nursing professional. Copyright © 2017. Published by Elsevier Ltd.

  3. High confidence in falsely recognizing prototypical faces.

    Science.gov (United States)

    Sampaio, Cristina; Reinke, Victoria; Mathews, Jeffrey; Swart, Alexandra; Wallinger, Stephen

    2018-06-01

    We applied a metacognitive approach to investigate confidence in recognition of prototypical faces. Participants were presented with sets of faces constructed digitally as deviations from prototype/base faces. Participants were then tested with a simple recognition task (Experiment 1) or a multiple-choice task (Experiment 2) for old and new items plus new prototypes, and they showed a high rate of confident false alarms to the prototypes. The confidence-accuracy relationship in this face recognition paradigm was found to be positive for standard items but negative for the prototypes; thus, it was contingent on the nature of the items used. The data have implications for lineups that employ match-to-suspect strategies.

  4. Memory performance on the story recall test and prediction of cognitive dysfunction progression in mild cognitive impairment and Alzheimer's dementia.

    Science.gov (United States)

    Park, Jong-Hwan; Park, Hyuntae; Sohn, Sang Wuk; Kim, Sungjae; Park, Kyung Won

    2017-10-01

    To determine the factors that influence diagnosis and differentiation of patients with mild cognitive impairment (MCI) and Alzheimer's dementia (AD) by comparing memory test results at baseline with those at 1-2-year follow-up. We consecutively recruited 23 healthy participants, 44 MCI patients and 27 patients with very mild AD according to the National Institute of Neurological and Communicative Diseases and Stroke/Alzheimer's Disease and Related Disorder Association criteria for probable Alzheimer's disease and Petersen's clinical diagnostic criteria. We carried out detailed neuropsychological tests, including the Story Recall Test (SRT) and the Seoul Verbal Learning Test, for all participants. We defined study participants as the "progression group" as follows: (i) participants who showed conversion to dementia from the MCI state; and (ii) those with dementia who showed more than a three-point decrement in their Mini-Mental State Examination scores with accompanying functional decline from baseline status, as ascertained by the physician's clinical judgment. The SRT delayed recall scores were significantly lower in the patients with mild AD than in those with MCI and after progression. Lower (relative risk 1.1, 95% confidence interval 0.1-1.6) and higher SRT delayed recall scores (relative risk 2.1, confidence interval 1.0-2.8), and two-test combined immediate and delayed recall scores (relative risk 2.0, confidence interval 0.9-2.3; and relative risk 2.8, confidence interval 1.1-4.2, respectively) were independent predictors of progression in a stepwise multiple adjusted Cox proportional hazards model, with age, sex, depression and educational level forced into the model. The present study suggests that the SRT delayed recall score independently predicts progression to dementia in patients with MCI. Geriatr Gerontol Int 2017; 17: 1603-1609. © 2016 Japan Geriatrics Society.

  5. Evaluating Measures of Optimism and Sport Confidence

    Science.gov (United States)

    Fogarty, Gerard J.; Perera, Harsha N.; Furst, Andrea J.; Thomas, Patrick R.

    2016-01-01

    The psychometric properties of the Life Orientation Test-Revised (LOT-R), the Sport Confidence Inventory (SCI), and the Carolina SCI (CSCI) were examined in a study involving 260 athletes. The study aimed to test the dimensional structure, convergent and divergent validity, and invariance over competition level of scores generated by these…

  6. Cytotoxicity testing of aqueous extract of bitter leaf ( Vernonia ...

    African Journals Online (AJOL)

    Cytotoxicity testing of aqueous extract of bitter leaf (Vernonia amygdalina Del.) and sniper 1000EC (2,3 dichlorovinyl dimethyl phosphate) using the Allium cepa ... 96 hours and EC50 values at 95% confidence interval were determined from a plot of root length against sample concentrations using Microsoft Excel software.

  7. Testing Significance Testing

    Directory of Open Access Journals (Sweden)

    Joachim I. Krueger

    2018-04-01

    Full Text Available The practice of Significance Testing (ST) remains widespread in psychological science despite continual criticism of its flaws and abuses. Using simulation experiments, we address four concerns about ST and for two of these we compare ST’s performance with prominent alternatives. We find the following: First, the 'p' values delivered by ST predict the posterior probability of the tested hypothesis well under many research conditions. Second, low 'p' values support inductive inferences because they are most likely to occur when the tested hypothesis is false. Third, 'p' values track likelihood ratios without raising the uncertainties of relative inference. Fourth, 'p' values predict the replicability of research findings better than confidence intervals do. Given these results, we conclude that 'p' values may be used judiciously as a heuristic tool for inductive inference. Yet, 'p' values cannot bear the full burden of inference. We encourage researchers to be flexible in their selection and use of statistical methods.
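
    The abstract's second finding (low 'p' values are most likely to occur when the tested hypothesis is false) can be illustrated with a small simulation. This is a hedged sketch, not the authors' code: the function names, sample sizes, and effect size below are illustrative assumptions.

```python
import random
import statistics
from math import erf, sqrt

def two_sample_p(xs, ys):
    """Two-sided p-value for a difference in means, using a normal
    (Welch-style z) approximation -- adequate for this illustration."""
    se = sqrt(statistics.variance(xs) / len(xs) + statistics.variance(ys) / len(ys))
    z = (statistics.mean(xs) - statistics.mean(ys)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def rejection_rate(effect, runs=2000, n=30, alpha=0.05, seed=1):
    """Fraction of simulated two-group experiments with p < alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        xs = [rng.gauss(effect, 1) for _ in range(n)]
        ys = [rng.gauss(0, 1) for _ in range(n)]
        if two_sample_p(xs, ys) < alpha:
            hits += 1
    return hits / runs

null_rate = rejection_rate(0.0)  # true null: p < .05 in roughly alpha of runs
alt_rate = rejection_rate(0.5)   # false null: p < .05 far more often
```

    Under the true null, small p-values appear at roughly the nominal alpha rate; under a real effect they appear much more often, which is the sense in which a low p-value carries inductive support.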

  8. Confidence and self-attribution bias in an artificial stock market

    Science.gov (United States)

    Bertella, Mario A.; Pires, Felipe R.; Rego, Henio H. A.; Vodenska, Irena; Stanley, H. Eugene

    2017-01-01

    Using an agent-based model we examine the dynamics of stock price fluctuations and their rates of return in an artificial financial market composed of fundamentalist and chartist agents with and without confidence. We find that chartist agents who are confident generate higher price and rate of return volatilities than those who are not. We also find that kurtosis and skewness are lower in our simulation study of agents who are not confident. We show that the stock price and confidence index—both generated by our model—are cointegrated and that stock price affects confidence index but confidence index does not affect stock price. We next compare the results of our model with the S&P 500 index and its respective stock market confidence index using cointegration and Granger tests. As in our model, we find that stock prices drive their respective confidence indices, but that the opposite relationship, i.e., the assumption that confidence indices drive stock prices, is not significant. PMID:28231255

  9. Learned helplessness: effects of response requirement and interval between treatment and testing.

    Science.gov (United States)

    Hunziker, M H L; Dos Santos, C V

    2007-11-01

    Three experiments investigated learned helplessness in rats manipulating response requirements, shock duration, and intervals between treatment and testing. In Experiment 1, rats previously exposed to uncontrollable or no shocks were tested under one of four different contingencies of negative reinforcement: FR 1 or FR 2 escape contingency for running, and FR1 escape contingency for jumping (differing for the maximum shock duration of 10s or 30s). The results showed that the uncontrollable shocks produced a clear operant learning deficit (learned helplessness effect) only when the animals were tested under the jumping FR 1 escape contingency with 10-s max shock duration. Experiment 2 isolated of the effects of uncontrollability from shock exposure per se and showed that the escape deficit observed using the FR 1 escape jumping response (10-s shock duration) was produced by the uncontrollability of shock. Experiment 3 showed that using the FR 1 jumping escape contingency in the test, the learned helplessness effect was observed one, 14 or 28 days after treatment. These results suggest that running may not be an appropriate test for learned helplessness, and that many diverging results found in the literature might be accounted for by the confounding effects of respondent and operant contingencies present when running is required of rats.

  10. A risk score for predicting coronary artery disease in women with angina pectoris and abnormal stress test finding.

    Science.gov (United States)

    Lo, Monica Y; Bonthala, Nirupama; Holper, Elizabeth M; Banks, Kamakki; Murphy, Sabina A; McGuire, Darren K; de Lemos, James A; Khera, Amit

    2013-03-15

    Women with angina pectoris and abnormal stress test findings commonly have no epicardial coronary artery disease (CAD) at catheterization. The aim of the present study was to develop a risk score to predict obstructive CAD in such patients. Data were analyzed from 337 consecutive women with angina pectoris and abnormal stress test findings who underwent cardiac catheterization at our center from 2003 to 2007. Forward selection multivariate logistic regression analysis was used to identify the independent predictors of CAD, defined by ≥50% diameter stenosis in ≥1 epicardial coronary artery. The independent predictors included age ≥55 years (odds ratio 2.3, 95% confidence interval 1.3 to 4.0), body mass index stress imaging (odds ratio 2.8, 95% confidence interval 1.5 to 5.5), and exercise capacity statistic of 0.745 (95% confidence interval 0.70 to 0.79), and an optimized cutpoint of a score of ≤2 included 62% of the subjects and had a negative predictive value of 80%. In conclusion, a simple clinical risk score of 7 characteristics can help differentiate those more or less likely to have CAD among women with angina pectoris and abnormal stress test findings. This tool, if validated, could help to guide testing strategies in women with angina pectoris. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. QT interval in healthy dogs: which method of correcting the QT interval in dogs is appropriate for use in small animal clinics?

    Directory of Open Access Journals (Sweden)

    Maira S. Oliveira

    2014-05-01

    Full Text Available The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of the ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs regarding different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination and QT intervals were determined in triplicate from the bipolar limb II lead and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the diverse formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1-RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, the QTcV was considered the most appropriate to be used for the correction of QT interval in dogs.
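
    The linear correction reported as most consistent, QTcV = QT + 0.087(1-RR), is simple enough to sketch in code. The function name, the assumption that QT and RR are expressed in seconds, and the RR = 60/HR conversion are illustrative assumptions, not details taken from the paper.

```python
def qtcv(qt_s: float, rr_s: float) -> float:
    """Linear HR correction from the abstract: QTcV = QT + 0.087*(1 - RR).
    Assumes QT and RR are both in seconds (an assumption, not stated here)."""
    return qt_s + 0.087 * (1.0 - rr_s)

# Example: QT = 0.20 s measured at HR = 120 bpm, so RR = 60/120 = 0.5 s
rr = 60.0 / 120.0
qtc = qtcv(0.20, rr)  # 0.20 + 0.087*0.5 = 0.2435 s
```

    At RR = 1 s (HR = 60 bpm) the correction vanishes and QTcV equals the measured QT, which is the usual behavior of linear QT corrections.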

  12. Trimester specific reference intervals for thyroid function tests in normal Indian pregnant women.

    Science.gov (United States)

    Sekhri, Tarun; Juhi, Juhi Agarwal; Wilfred, Reena; Kanwar, Ratnesh S; Sethi, Jyoti; Bhadra, Kuntal; Nair, Sirimavo; Singh, Satveer

    2016-01-01

    Accurate assessment of thyroid function during pregnancy is critical, for initiation of thyroid hormone therapy, as well as for adjustment of thyroid hormone dose in hypothyroid cases. We evaluated pregnant women who had no past history of thyroid disorders and studied their thyroid function in each trimester. 86 normal pregnant women in the first trimester of pregnancy were selected for setting reference intervals. All were healthy, euthyroid and negative for thyroid peroxidase antibody (TPOAb). These women were serially followed throughout pregnancy. 124 normal nonpregnant subjects were selected for comparison. Thyrotropin (TSH), free thyroxine (FT4), free triiodothyronine (FT3) and anti-TPO were measured using Roche Elecsys 1010 analyzer. Urinary iodine content was determined by simple microplate method. The 2.5th and 97.5th percentiles were calculated as the reference intervals for thyroid hormone levels during each trimester. SPSS (version 14.0, SPSS Inc., Chicago, IL, USA) was used for data processing and analysis. The reference intervals for the first, second and third trimesters were as follows: TSH 0.09-6.65, 0.51-6.66, 0.91-4.86 µIU/mL, FT4 9.81-18.53, 8.52-19.43, 7.39-18.28 pM/L and FT3 3.1-6.35, 2.39-5.12, 2.57-5.68 pM/L, respectively. Thyroid hormone concentrations significantly differed during pregnancy at different stages of gestation. The pregnant women in the study had median urinary iodine concentration of 150-200 µg/l during each trimester. The trimester-specific reference intervals for thyroid tests during pregnancy have been established for pregnant Indian women serially followed during pregnancy using 2.5th and 97.5th percentiles.
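
    The nonparametric 2.5th/97.5th-percentile approach used above can be sketched as follows. The linear-interpolation percentile rule and the simulated TSH values are illustrative assumptions, not the study's data or its exact computation.

```python
import random

def reference_interval(values, lo=2.5, hi=97.5):
    """Nonparametric reference interval from the lo-th and hi-th percentiles,
    interpolating linearly between adjacent order statistics."""
    xs = sorted(values)
    def pct(p):
        k = (len(xs) - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, len(xs) - 1)
        return xs[f] + (k - f) * (xs[c] - xs[f])
    return pct(lo), pct(hi)

# Illustrative only (NOT the study's data): 86 simulated first-trimester
# TSH-like values, truncated at a small positive floor.
random.seed(0)
tsh = [max(0.05, random.gauss(2.5, 1.5)) for _ in range(86)]
low, high = reference_interval(tsh)
```

    By construction roughly 95% of the sample falls between the two bounds, which is what a 2.5th-97.5th percentile reference interval expresses.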

  13. Exploring Self - Confidence Level of High School Students Doing Sport

    Directory of Open Access Journals (Sweden)

    Nurullah Emir Ekinci

    2014-10-01

    Full Text Available The aim of this study was to investigate the self-confidence levels of high school students who do sport, with respect to gender, sport branch (individual/team sports) and aim for participating in sport (professional/amateur). 185 active high school students from Kutahya voluntarily participated in the study. A self-confidence scale was used as the data-gathering tool, and the nonparametric Mann-Whitney U test was used as the hypothesis test. As a result, the self-confidence levels of participants showed significant differences according to gender and sport branch, but there was no significant difference according to the aim for participating in sport.

  14. Confidence mediates the sex difference in mental rotation performance.

    Science.gov (United States)

    Estes, Zachary; Felker, Sydney

    2012-06-01

    On tasks that require the mental rotation of 3-dimensional figures, males typically exhibit higher accuracy than females. Using the most common measure of mental rotation (i.e., the Mental Rotations Test), we investigated whether individual variability in confidence mediates this sex difference in mental rotation performance. In each of four experiments, the sex difference was reliably elicited and eliminated by controlling or manipulating participants' confidence. Specifically, confidence predicted performance within and between sexes (Experiment 1), rendering confidence irrelevant to the task reliably eliminated the sex difference in performance (Experiments 2 and 3), and manipulating confidence significantly affected performance (Experiment 4). Thus, confidence mediates the sex difference in mental rotation performance and hence the sex difference appears to be a difference of performance rather than ability. Results are discussed in relation to other potential mediators and mechanisms, such as gender roles, sex stereotypes, spatial experience, rotation strategies, working memory, and spatial attention.

  15. Development and interval testing of a naturalistic driving methodology to evaluate driving behavior in clinical research.

    Science.gov (United States)

    Babulal, Ganesh M; Addison, Aaron; Ghoshal, Nupur; Stout, Sarah H; Vernon, Elizabeth K; Sellan, Mark; Roe, Catherine M

    2016-01-01

    Background: The number of older adults in the United States will double by 2056. Additionally, the number of licensed drivers will increase along with extended driving-life expectancy. Motor vehicle crashes are a leading cause of injury and death in older adults. Alzheimer's disease (AD) also negatively impacts driving ability and increases crash risk. Conventional methods to evaluate driving ability are limited in predicting decline among older adults. Innovations in GPS hardware and software can monitor driving behavior in the actual environments people drive in. Commercial off-the-shelf (COTS) devices are affordable, easy to install and capture large volumes of data in real-time. However, adapting these methodologies for research can be challenging. This study sought to adapt a COTS device and determine an interval that produced accurate data on the actual route driven for use in future studies involving older adults with and without AD. Methods: Three subjects drove a single course in different vehicles at different intervals (30, 60 and 120 seconds), at different times of day, morning (9:00-11:59AM), afternoon (2:00-5:00PM) and night (7:00-10:00PM). The nine datasets were examined to determine the optimal collection interval. Results: Compared to the 120-second and 60-second intervals, the 30-second interval was optimal in capturing the actual route driven along with the lowest number of incorrect paths and affordability weighing considerations for data storage and curation. Discussion: Use of COTS devices offers minimal installation efforts, unobtrusive monitoring and discreet data extraction. However, these devices require strict protocols and controlled testing for adoption into research paradigms. After reliability and validity testing, these devices may provide valuable insight into daily driving behaviors and intraindividual change over time for populations of older adults with and without AD. Data can be aggregated over time to look at changes

  16. Confidant Relations in Italy

    Directory of Open Access Journals (Sweden)

    Jenny Isaacs

    2015-02-01

    Full Text Available Confidants are often described as the individuals with whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture.

  17. Self-care confidence may be more important than cognition to influence self-care behaviors in adults with heart failure: Testing a mediation model.

    Science.gov (United States)

    Vellone, Ercole; Pancani, Luca; Greco, Andrea; Steca, Patrizia; Riegel, Barbara

    2016-08-01

    Cognitive impairment can reduce the self-care abilities of heart failure patients. Theory and preliminary evidence suggest that self-care confidence may mediate the relationship between cognition and self-care, but further study is needed to validate this finding. The aim of this study was to test the mediating role of self-care confidence between specific cognitive domains and heart failure self-care. Secondary analysis of data from a descriptive study. Three out-patient sites in Pennsylvania and Delaware, USA. A sample of 280 adults with chronic heart failure, 62 years old on average and mostly male (64.3%). Data on heart failure self-care and self-care confidence were collected with the Self-Care of Heart Failure Index 6.2. Data on cognition were collected by trained research assistants using a neuropsychological test battery measuring simple and complex attention, processing speed, working memory, and short-term memory. Sociodemographic data were collected by self-report. Clinical information was abstracted from the medical record. Mediation analysis was performed with structural equation modeling and indirect effects were evaluated with bootstrapping. Most participants had at least 1 impaired cognitive domain. In mediation models, self-care confidence consistently influenced self-care and totally mediated the relationship between simple attention and self-care and between working memory and self-care (comparative fit index range: .929-.968; root mean squared error of approximation range: .032-.052). Except for short-term memory, which had a direct effect on self-care maintenance, the other cognitive domains were unrelated to self-care. Self-care confidence appears to be an important factor influencing heart failure self-care even in patients with impaired cognition. As few studies have successfully improved cognition, interventions addressing confidence should be considered as a way to improve self-care in this population. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Assessing Confidence in Pliocene Sea Surface Temperatures to Evaluate Predictive Models

    Science.gov (United States)

    Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling. M.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.; hide

    2012-01-01

    In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3-3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first century climate.

  19. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

    CSIR Research Space (South Africa)

    Kirton, A

    2010-08-01

    Full Text Available This note describes how the variance and prediction intervals (confidence intervals for predicted values) for biomass estimates obtained from allometric equations can be calculated, using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form...
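
    For the power-function form mentioned above, the standard route is to fit ordinary least squares on the log-log scale and form the prediction interval there before back-transforming. The sketch below assumes that textbook OLS prediction-interval formula; the helper name, the fixed t-quantile of 2.0 (standing in for the Student-t value), and the synthetic data are illustrative assumptions, not the report's worked example.

```python
import math

def fit_loglog_with_pi(d, m, d_new, t_crit=2.0):
    """Fit ln(m) = b0 + b1*ln(d) (the power form m = a*d^b) by OLS and
    return (prediction, lower, upper) for a new diameter d_new, with an
    approximate prediction interval back-transformed to the original scale."""
    x = [math.log(v) for v in d]
    y = [math.log(v) for v in m]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)  # residual variance (log scale)
    x0 = math.log(d_new)
    se_pred = math.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
    yhat = b0 + b1 * x0
    return (math.exp(yhat),
            math.exp(yhat - t_crit * se_pred),
            math.exp(yhat + t_crit * se_pred))

# Synthetic example: biomass roughly 0.1 * diameter^2.4 with mild noise
d = [5, 8, 10, 12, 15, 20, 25, 30]
noise = [1.05, 0.97, 1.02, 0.99, 1.03, 0.96, 1.01, 0.98]
m = [0.1 * di ** 2.4 * f for di, f in zip(d, noise)]
pred, lo_pi, hi_pi = fit_loglog_with_pi(d, m, 18)
```

    Note the interval is multiplicative (asymmetric) after back-transformation, which matches how scatter typically grows with size in allometric data.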

  20. Haematological and biochemical reference intervals for free-ranging brown bears (Ursus arctos) in Sweden

    DEFF Research Database (Denmark)

    Græsli, Anne Randi; Fahlman, Åsa; Evans, Alina L.

    2014-01-01

    Background: Establishment of haematological and biochemical reference intervals is important to assess the health of animals at the individual and population level. Reference intervals for 13 haematological and 34 biochemical variables were established based on 88 apparently healthy free-ranging brown bears...... and marking for ecological studies. For each of the variables, the reference interval was described based on the 95% confidence interval, and differences due to host characteristics sex and age were included if detected. To our knowledge, this is the first report of reference intervals for free-ranging brown...... and the differences due to host factors age and gender can be useful for evaluation of health status in free-ranging European brown bears....

  1. The effect of a daily quiz (TOPday) on self-confidence, enthusiasm, and test results for biomechanics.

    Science.gov (United States)

    Tanck, Esther; Maessen, Martijn F H; Hannink, Gerjon; van Kuppeveld, Sascha M H F; Bolhuis, Sanneke; Kooloos, Jan G M

    2014-01-01

    Many students in Biomedical Sciences have difficulty understanding biomechanics. In a second-year course, biomechanics is taught in the first week and examined at the end of the fourth week. Knowledge is retained longer if the subject material is repeated. However, how does one encourage students to repeat the subject matter? For this study, we developed 'two opportunities to practice per day (TOPday)', consisting of multiple-choice questions on biomechanics with immediate feedback, which were sent via e-mail. We investigated the effect of TOPday on self-confidence, enthusiasm, and test results for biomechanics. All second-year students (n = 95) received a TOPday of biomechanics on every regular course day with increasing difficulty during the course. At the end of the course, a non-anonymous questionnaire was conducted. The students were asked how many TOPday questions they completed (0-6 questions [group A]; 7-18 questions [group B]; 19-24 questions [group C]). Other questions included the appreciation for TOPday, and increase (no/yes) in self-confidence and enthusiasm for biomechanics. Seventy-eight students participated in the examination and completed the questionnaire. The appreciation for TOPday in group A (n = 14), B (n = 23) and C (n = 41) was 7.0 (95 % CI 6.5-7.5), 7.4 (95 % CI 7.0-7.8), and 7.9 (95 % CI 7.6-8.1), respectively (p biomechanics due to TOPday. In addition, they had a higher test result for biomechanics (p biomechanics on the other.

  2. Peak oxygen uptake in a sprint interval testing protocol vs. maximal oxygen uptake in an incremental testing protocol and their relationship with cross-country mountain biking performance.

    Science.gov (United States)

    Hebisz, Rafał; Hebisz, Paulina; Zatoń, Marek; Michalik, Kamil

    2017-04-01

    In the literature, the exercise capacity of cyclists is typically assessed using incremental and endurance exercise tests. The aim of the present study was to confirm whether peak oxygen uptake (V̇O2peak) attained in a sprint interval testing protocol correlates with cycling performance, and whether it corresponds to maximal oxygen uptake (V̇O2max) determined by an incremental testing protocol. A sample of 28 trained mountain bike cyclists executed 3 performance tests: (i) incremental testing protocol (ITP) in which the participant cycled to volitional exhaustion, (ii) sprint interval testing protocol (SITP) composed of four 30 s maximal intensity cycling bouts interspersed with 90 s recovery periods, (iii) competition in a simulated mountain biking race. Oxygen uptake, pulmonary ventilation, work, and power output were measured during the ITP and SITP with postexercise blood lactate and hydrogen ion concentrations collected. Race times were recorded. No significant inter-individual differences were observed in regards to any of the ITP-associated variables. However, 9 individuals presented significantly increased oxygen uptake, pulmonary ventilation, and work output in the SITP compared with the remaining cyclists. In addition, in this group of 9 cyclists, oxygen uptake in SITP was significantly higher than in ITP. After the simulated race, this group of 9 cyclists achieved significantly better competition times (99.5 ± 5.2 min) than the other cyclists (110.5 ± 6.7 min). We conclude that mountain bike cyclists who demonstrate higher peak oxygen uptake in a sprint interval testing protocol than maximal oxygen uptake attained in an incremental testing protocol demonstrate superior competitive performance.

  3. Decomposing the interaction between retention interval and study/test practice: the role of retrievability.

    Science.gov (United States)

    Jang, Yoonhee; Wixted, John T; Pecher, Diane; Zeelenberg, René; Huber, David E

    2012-01-01

    Even without feedback, test practice enhances delayed performance compared to study practice, but the size of the effect is variable across studies. We investigated the benefit of testing, separating initially retrievable items from initially nonretrievable items. In two experiments, an initial test determined item retrievability. Retrievable or nonretrievable items were subsequently presented for repeated study or test practice. Collapsing across items, in Experiment 1, we obtained the typical cross-over interaction between retention interval and practice type. For retrievable items, however, the cross-over interaction was quantitatively different, with a small study benefit for an immediate test and a larger testing benefit after a delay. For nonretrievable items, there was a large study benefit for an immediate test, but one week later there was no difference between the study and test practice conditions. In Experiment 2, initially nonretrievable items were given additional study followed by either an immediate test or even more additional study, and one week later performance did not differ between the two conditions. These results indicate that the effect size of study/test practice is due to the relative contribution of retrievable and nonretrievable items.

  4. A Reliability Test of a Complex System Based on Empirical Likelihood

    OpenAIRE

    Zhou, Yan; Fu, Liya; Zhang, Jun; Hui, Yongchang

    2016-01-01

    To analyze the reliability of a complex system described by minimal paths, an empirical likelihood method is proposed to solve the reliability test problem when the subsystem distributions are unknown. Furthermore, we provide a reliability test statistic of the complex system and extract the limit distribution of the test statistic. Therefore, we can obtain the confidence interval for reliability and make statistical inferences. The simulation studies also demonstrate the theorem results.

  5. Results from an Interval Management (IM) Flight Test and Its Potential Benefit to Air Traffic Management Operations

    Science.gov (United States)

    Baxley, Brian; Swieringa, Kurt; Berckefeldt, Rick; Boyle, Dan

    2017-01-01

    NASA's first Air Traffic Management Technology Demonstration (ATD-1) subproject successfully completed a 19-day flight test of an Interval Management (IM) avionics prototype. The prototype was built based on IM standards, integrated into two test aircraft, and then flown in real-world conditions to determine if the goals of improving aircraft efficiency and airport throughput during high-density arrival operations could be met. The ATD-1 concept of operation integrates advanced arrival scheduling, controller decision support tools, and the IM avionics to enable multiple time-based arrival streams into a high-density terminal airspace. IM contributes by calculating airspeeds that enable an aircraft to achieve a spacing interval behind the preceding aircraft. The IM avionics uses its data (route of flight, position, etc.) and Automatic Dependent Surveillance-Broadcast (ADS-B) state data from the Target aircraft to calculate this airspeed. The flight test demonstrated that the IM avionics prototype met the spacing accuracy design goal for three of the four IM operation types tested. The primary issue requiring attention for future IM work is the high rate of IM speed commands and speed reversals. In total, during this flight test, the IM avionics prototype showed significant promise in contributing to the goals of improving aircraft efficiency and airport throughput.

  6. "Normality of Residuals Is a Continuous Variable, and Does Seem to Influence the Trustworthiness of Confidence Intervals: A Response to, and Appreciation of, Williams, Grajales, and Kurkiewicz (2013)"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2013-09-01

    Full Text Available Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed response or predictor variables. They go on to discuss estimate bias and provide a helpful summary of the assumptions of multiple regression when using ordinary least squares. While we were not as precise as we could have been when discussing assumptions of normality, the critical issue of the 2002 paper remains: researchers often do not check on or report on the assumptions of their statistical methods. This response expands on the points made by Williams, advocates a thorough examination of data prior to reporting results, and provides an example of how incremental improvements in meeting the assumption of normality of residuals incrementally improves the accuracy of confidence intervals.

  7. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

    Science.gov (United States)

    Traversaro, Francisco; O. Redelico, Francisco

    2018-04-01

    In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure, and there has been little research on the statistical properties of this quantity when it characterizes time series. The literature describes some resampling methods for quantities used in nonlinear dynamics, such as the largest Lyapunov exponent, but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes, the 1/fα noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been extensively used in epilepsy research to detect dynamical changes in electroencephalogram (EEG) signals, with no consideration of the variability of this complexity measure. As an application, the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
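For readers wanting to experiment, the estimator at the heart of this record is easy to reproduce: the normalized Permutation Entropy is the Shannon entropy of the ordinal-pattern distribution, divided by log(m!). The sketch below (plain Python, with white noise standing in for the 1/fα family) shows the point estimator whose sampling distribution the authors bootstrap; it is illustrative, not the authors' code.

```python
import math
import random
from collections import Counter

def permutation_entropy(series, m=3):
    """Normalized permutation entropy: Shannon entropy of the ordinal
    pattern distribution, divided by log(m!) so it lies in [0, 1]."""
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: series[i + k]))
        for i in range(len(series) - m + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m))

random.seed(0)
white = [random.gauss(0, 1) for _ in range(2000)]
print(round(permutation_entropy(white, m=3), 3))  # near 1.0 for white noise
```

A fully predictable series gives entropy 0, while uncorrelated noise approaches 1, which is why the measure is useful for detecting dynamical changes in EEG signals.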

  8. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
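A minimal illustration of the interval-statistics idea described above: when each measurement is an interval rather than a point, the sample mean is itself only known to within an interval, bounded by the means of the endpoint values. The data below are hypothetical, not from the report.

```python
# Hypothetical interval-valued measurements [lo, hi] (arbitrary units).
data = [(2.0, 2.4), (1.8, 2.1), (2.2, 2.9), (1.9, 2.3)]

# The sample mean of interval data is itself an interval: its endpoints
# are the means of the lower and upper endpoints respectively.
mean_lo = sum(lo for lo, _ in data) / len(data)
mean_hi = sum(hi for _, hi in data) / len(data)
print(f"sample mean lies in [{mean_lo:.3f}, {mean_hi:.3f}]")
```

The mean is the easy case; as the report notes, other statistics (variance, correlation) can be much harder to bound, with computability depending on how the intervals overlap.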

  9. Performance of a rapid self-test for detection of Trichomonas vaginalis in South Africa and Brazil

    NARCIS (Netherlands)

    Jones, Heidi E.; Lippman, Sheri A.; Caiaffa-Filho, Helio H.; Young, Taryn; van de Wijgert, Janneke H. H. M.

    2013-01-01

    Women participating in studies in Brazil (n = 695) and South Africa (n = 230) performed rapid point-of-care tests for Trichomonas vaginalis on self-collected vaginal swabs. Using PCR as the gold standard, rapid self-testing achieved high specificity (99.1%; 95% confidence interval [CI], 98.2 to
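Proportions such as the specificity quoted above are conventionally reported with a binomial confidence interval. As a hedged illustration (the Wilson score interval, with made-up counts rather than the study's exact data):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% for z=1.96)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# Hypothetical: 916 concordant negatives out of 924 PCR-negative samples
lo, hi = wilson_ci(916, 924)
print(f"specificity 95% CI: [{lo:.3f}, {hi:.3f}]")
```

The Wilson interval behaves better than the naive Wald interval when the proportion is close to 0 or 1, as it is for a specificity near 99%.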

  10. Nonverbal Communication of Confidence in Soccer Referees: An Experimental Test of Darwin's Leakage Hypothesis.

    Science.gov (United States)

    Furley, Philip; Schweizer, Geoffrey

    2016-12-01

    The goal of the present paper was to investigate whether soccer referees' nonverbal behavior (NVB) differed based on the difficulty of their decisions and whether perceivers could detect these systematic variations. On the one hand, communicating confidence via NVB is emphasized in referee training. On the other hand, it seems feasible from a theoretical point of view that particularly following relatively difficult decisions referees have problems controlling their NVB. We conducted three experiments to investigate this question. Experiment 1 (N = 40) and Experiment 2 (N = 60) provided evidence that perceivers regard referees' NVB as less confident following ambiguous decisions as compared with following unambiguous decisions. Experiment 3 (N = 58) suggested that perceivers were more likely to debate with the referee when referees nonverbally communicated less confidence. We discuss consequences for referee training.

  11. Statistical theory a concise introduction

    CERN Document Server

    Abramovich, Felix

    2013-01-01

    Introduction: Preamble; Likelihood; Sufficiency; Minimal sufficiency; Completeness; Exponential family of distributions. Point Estimation: Introduction; Maximum likelihood estimation; Method of moments; Method of least squares; Goodness-of-estimation; Mean squared error; Unbiased estimation. Confidence Intervals, Bounds, and Regions: Introduction; Quoting the estimation error; Confidence intervals; Confidence bounds; Confidence regions. Hypothesis Testing: Introduction; Simple hypotheses; Composite hypotheses; Hypothesis testing and confidence intervals; Sequential testing. Asymptotic Analysis: Introduction; Convergence and consistency in MSE; Convergence and consistency in probability; Convergence in distribution; The central limit theorem; Asymptotically normal consistency; Asymptotic confidence intervals; Asymptotic normality of the MLE; Multiparameter case; Asymptotic distribution of the GLRT; Wilks' theorem. Bayesian Inference: Introduction; Choice of priors; Point estimation; Interval estimation; Credible sets; Hypothesis testing. Elements of Statisti...

  12. Local confidence limits for IMRT and VMAT techniques: a study based on TG119 test suite

    International Nuclear Information System (INIS)

    Thomas, M.; Chandroth, M.

    2014-01-01

    The aim of this study was to generate a local confidence limit (CL) for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) techniques used at Waikato Regional Cancer Centre. This work was carried out based on the American Association of Physicists in Medicine (AAPM) Task Group (TG) 119 report. The AAPM TG 119 report recommends CLs as a benchmark for IMRT commissioning and delivery based on its multi-institution planning and dosimetry comparisons. In this study, the locally obtained CLs were compared to TG119 benchmarks. Furthermore, the same benchmark was used to test the capabilities and quality of the VMAT technique in our clinic. The TG 119 test suite consists of two primary and four clinical tests for evaluating the accuracy of IMRT planning and dose delivery systems. Predefined structure sets contoured on computed tomography images were downloaded from the AAPM website and were transferred to a locally designed phantom. For each test case, two plans were generated using IMRT and VMAT optimisation. Dose prescriptions and planning objectives recommended by the TG119 report were followed to generate the test plans in the Eclipse Treatment Planning System. For each plan, point dose measurements were done using an ion chamber at high dose and low dose regions. The planar dose distribution was analysed for the percentage of points passing the gamma criteria of 3 %/3 mm, for both the composite plan and individual fields of each plan. The CLs were generated based on the results from the gamma analysis and point dose measurements. For IMRT plans, the CLs obtained were (1) from point dose measurements: 2.49 % at high dose region and 2.95 % for the low dose region (2) from gamma analysis: 2.12 % for individual fields and 5.9 % for the composite plan. 
For VMAT plans, the CLs obtained were (1) from point dose measurements: 2.56 % at high dose region and 2.6 % for the low dose region (2) from gamma analysis: 1.46 % for individual fields and 0
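The confidence limits quoted above combine the mean and spread of the per-plan deviations. A sketch, assuming the commonly quoted TG119 form CL = |mean| + 1.96·SD and purely hypothetical deviation data:

```python
import statistics

def confidence_limit(deviations):
    """Confidence limit in the TG119 style: |mean| + 1.96 * sample SD."""
    return abs(statistics.mean(deviations)) + 1.96 * statistics.stdev(deviations)

# Hypothetical per-plan dose deviations (%), not the study's data
devs = [0.8, -0.5, 1.2, 0.3, -0.9, 0.6]
print(round(confidence_limit(devs), 2))
```

A smaller CL means both less bias and less scatter across the test plans, which is why it serves as a single commissioning benchmark.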

  13. Validity and test-retest reliability of a novel simple back extensor muscle strength test.

    Science.gov (United States)

    Harding, Amy T; Weeks, Benjamin Kurt; Horan, Sean A; Little, Andrew; Watson, Steven L; Beck, Belinda Ruth

    2017-01-01

    To develop and determine convergent validity and reliability of a simple and inexpensive clinical test to quantify back extensor muscle strength. Two testing sessions were conducted, 7 days apart. Each session involved three trials of standing maximal isometric back extensor muscle strength using both the novel test and isokinetic dynamometry. Lumbar spine bone mineral density was examined by dual-energy X-ray absorptiometry. Validation was examined with Pearson correlations (r). Test-retest reliability was examined with intraclass correlation coefficients and limits of agreement. Pearson correlations and intraclass correlation coefficients are presented with corresponding 95% confidence intervals. Linear regression was used to examine the ability of peak back extensor muscle strength to predict indices of lumbar spine bone mineral density and strength. A total of 52 healthy adults (26 men, 26 women) aged 46.4 ± 20.4 years were recruited from the community. A strong positive relationship was observed between peak back extensor strength from hand-held and isokinetic dynamometry (r = 0.824, p  strength test, short- and long-term reliability was excellent (intraclass correlation coefficient = 0.983 (95% confidence interval, 0.971-0.990), p  strength measures with the novel back extensor strength protocol were -6.63 to 7.70 kg, with a mean bias of +0.71 kg. Back extensor strength predicted 11% of variance in lumbar spine bone mineral density ( p  strength ( p  strength is quick, relatively inexpensive, and reliable; demonstrates initial convergent validity in a healthy population; and is associated with bone mass at a clinically important site.

  14. Test-retest reliability and minimal detectable change scores for sit-to-stand-to-sit tests, the six-minute walk test, the one-leg heel-rise test, and handgrip strength in people undergoing hemodialysis.

    Science.gov (United States)

    Segura-Ortí, Eva; Martínez-Olmos, Francisco José

    2011-08-01

    Determining the relative and absolute reliability of outcomes of physical performance tests for people undergoing hemodialysis is necessary to discriminate between the true effects of exercise interventions and the inherent variability of this cohort. The aims of this study were to assess the relative reliability of sit-to-stand-to-sit tests (the STS-10, which measures the time [in seconds] required to complete 10 full stands from a sitting position, and the STS-60, which measures the number of repetitions achieved in 60 seconds), the Six-Minute Walk Test (6MWT), the one-leg heel-rise test, and the handgrip strength test and to calculate minimal detectable change (MDC) scores in people undergoing hemodialysis. This study was a prospective, nonexperimental investigation. Thirty-nine people undergoing hemodialysis at 2 clinics in Spain were contacted. Study participants performed the STS-10 (n=37), the STS-60 (n=37), and the 6MWT (n=36). At one of the settings, the participants also performed the one-leg heel-rise test (n=21) and the handgrip strength test (n=12) on both the right and the left sides. Participants attended 2 testing sessions 1 to 2 weeks apart. High intraclass correlation coefficients (≥.88) were found for all tests, suggesting good relative reliability. The MDC scores at 90% confidence intervals were as follows: 8.4 seconds for the STS-10, 4 repetitions for the STS-60, 66.3 m for the 6MWT, 3.4 kg for handgrip strength (force-generating capacity), 3.7 repetitions for the one-leg heel-rise test with the right leg, and 5.2 repetitions for the one-leg heel-rise test with the left leg. Limitations: a limited sample of patients was used in this study. The STS-10, STS-60, 6MWT, one-leg heel-rise test, and handgrip strength test are reliable outcome measures. The MDC scores at 90% confidence intervals for these tests will help to determine whether a change is due to error or to an intervention.
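The MDC scores above follow directly from the test-retest reliability. A sketch of the usual computation (MDC = z · SEM · √2, with SEM = SD · √(1 − ICC) and z = 1.65 at the 90% level), using hypothetical values rather than the study's data:

```python
import math

def mdc(sd, icc, z=1.65):
    """Minimal detectable change: MDC = z * SEM * sqrt(2),
    with SEM = SD * sqrt(1 - ICC); z = 1.65 gives the 90% level."""
    sem = sd * math.sqrt(1 - icc)
    return z * sem * math.sqrt(2)

# Hypothetical: between-subject SD of 50 m on the 6MWT, ICC = 0.93
print(round(mdc(50.0, 0.93), 1))
```

A change smaller than the MDC is indistinguishable from measurement error, which is exactly how the authors propose the scores be used.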

  15. HIV testing uptake and prevalence among adolescents and adults in a large home-based HIV testing program in Western Kenya.

    Science.gov (United States)

    Wachira, Juddy; Ndege, Samson; Koech, Julius; Vreeman, Rachel C; Ayuo, Paul; Braitstein, Paula

    2014-02-01

    To describe HIV testing uptake and prevalence among adolescents and adults in a home-based HIV counseling and testing program in western Kenya. Since 2007, the Academic Model Providing Access to Healthcare program has implemented home-based HIV counseling and testing on a large scale. All individuals aged ≥13 years were eligible for testing. Data from 5 of 8 catchments were included in this analysis. We used descriptive statistics and multivariate logistic regression to examine testing uptake and HIV prevalence among adolescents (13-18 years), younger adults (19-24 years), and older adults (≥25 years). There were 154,463 individuals eligible for analyses as follows: 22% adolescents, 19% younger adults, and 59% older adults. Overall mean age was 32.8 years and 56% were female. HIV testing was high (96%) across the following 3 groups: 99% in adolescents, 98% in younger adults, and 94% in older adults (P < 0.001). HIV prevalence was higher (11.0%) among older adults compared with younger adults (4.8%) and adolescents (0.8%) (P < 0.001). Those who had ever previously tested for HIV were less likely to accept HIV testing (adjusted odds ratio: 0.06, 95% confidence interval: 0.05 to 0.07) but more likely to newly test HIV positive (adjusted odds ratio: 1.30, 95% confidence interval: 1.21 to 1.40). Age group differences were evident in the sociodemographic and socioeconomic factors associated with testing uptake and HIV prevalence, particularly gender, relationship status, and HIV testing history. Sociodemographic and socioeconomic factors were independently associated with HIV testing and prevalence among the age groups. Community-based treatment and prevention strategies will need to consider these factors.
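The adjusted odds ratios above come from multivariate logistic regression. As a simpler, hedged illustration of how an odds ratio and its Wald 95% confidence interval are formed, here is the unadjusted 2×2 version with made-up counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table (a/b exposed, c/d unexposed) with a
    Wald CI: exp(ln(OR) +/- z * sqrt(1/a + 1/b + 1/c + 1/d))."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: newly positive vs not, by prior-testing history
print(odds_ratio_ci(130, 870, 100, 900))
```

An interval that excludes 1.0, as in the study's 1.21 to 1.40, is what licenses the claim of a statistically reliable association.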

  16. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
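The coverage analysis described in this record can be mimicked in miniature: simulate many samples, build a nominal 95% interval for each, and count how often the true value is captured. A sketch for the well-behaved normal-mean case, where the t-interval should attain its nominal coverage:

```python
import random
import statistics

# Empirical coverage of the nominal 95% t-interval for a normal mean:
# simulate many samples, build the interval, count captures of the truth.
random.seed(1)
N_SIM, n, TRUE_MEAN, T_CRIT = 2000, 20, 0.0, 2.093  # t critical value, df = 19
hits = 0
for _ in range(N_SIM):
    x = [random.gauss(TRUE_MEAN, 1.0) for _ in range(n)]
    m = statistics.mean(x)
    half = T_CRIT * statistics.stdev(x) / n ** 0.5
    hits += (m - half) <= TRUE_MEAN <= (m + half)
print(hits / N_SIM)  # should land near the nominal 0.95
```

Undercoverage of the kind reported for the non-Bayesian methods would show up here as an empirical rate noticeably below 0.95.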

  17. Component unavailability versus inservice test (IST) interval: Evaluations of component aging effects with applications to check valves

    International Nuclear Information System (INIS)

    Vesely, W.E.; Poole, A.B.

    1997-07-01

    Methods are presented for calculating component unavailabilities when inservice test (IST) intervals are changed and when component aging is explicitly included. The methods extend usual approaches for calculating unavailability and risk effects of changing IST intervals which utilize Probabilistic Risk Assessment (PRA) methods that do not explicitly include component aging. Different IST characteristics are handled, including ISTs which are followed by corrective maintenances which completely renew or partially renew the component. ISTs which are not followed by maintenance activities needed to renew the component are also handled. Any downtime associated with IST, including the test downtime and the following maintenance downtime, is included in the unavailability evaluations. A range of component aging behaviors is studied, including both linear and nonlinear aging behaviors. Based upon evaluations completed to date, pooled failure data on check valves show relatively small aging (e.g., less than 7% per year). However, data from some plant systems could be evidence for larger aging rates occurring in time periods less than 5 years. The methods are utilized in this report to carry out a range of sensitivity evaluations to evaluate aging effects for different possible applications. Based on the sensitivity evaluations, summary tables are constructed showing how optimal IST interval ranges for check valves can vary relative to different aging behaviors which might exist. The evaluations are also used to identify IST intervals for check valves which are robust to component aging effects. General insights on aging effects are also extracted. These sensitivity studies and extracted results provide useful information which can be supplemented or updated with plant-specific information. The models and results can also be input to PRAs to determine associated risk implications.
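As context for these evaluations, the standard PRA approximation (which the report extends with aging terms) relates the test interval to time-averaged unavailability. A sketch of that baseline relation, with a constant failure rate, no aging or downtime terms, and illustrative numbers only:

```python
# Baseline PRA relation (no aging term): for a standby component with
# constant failure rate lam (per hour) tested every T hours, the
# time-averaged unavailability is approximately lam * T / 2.
def avg_unavailability(lam_per_hr, interval_hr):
    return lam_per_hr * interval_hr / 2.0

# Illustrative numbers only: 1e-6 / hr failure rate, quarterly (~2190 hr) testing
print(avg_unavailability(1e-6, 2190))
```

Lengthening the IST interval raises this baseline linearly; the report's contribution is to add the aging and test-downtime contributions on top of it.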

  18. 46 CFR 61.20-17 - Examination intervals.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Examination intervals. 61.20-17 Section 61.20-17... INSPECTIONS Periodic Tests of Machinery and Equipment § 61.20-17 Examination intervals. (a) A lubricant that... examination interval. (b) Except as provided in paragraphs (c) through (f) of this section, each tailshaft on...

  19. Food skills confidence and household gatekeepers' dietary practices.

    Science.gov (United States)

    Burton, Melissa; Reid, Mike; Worsley, Anthony; Mavondo, Felix

    2017-01-01

    Household food gatekeepers have the potential to influence the food attitudes and behaviours of family members, as they are mainly responsible for food-related tasks in the home. The aim of this study was to determine the role of gatekeepers' confidence in food-related skills and nutrition knowledge on food practices in the home. An online survey was completed by 1059 Australian dietary gatekeepers selected from the Global Market Insite (GMI) research database. Participants responded to questions about food acquisition and preparation behaviours, the home eating environment, perceptions and attitudes towards food, and demographics. Two-step cluster analysis was used to identify groups based on confidence regarding food skills and nutrition knowledge. Chi-square tests and one-way ANOVAs were used to compare the groups on the dependent variables. Three groups were identified: low confidence, moderate confidence and high confidence. Gatekeepers in the highest confidence group were significantly more likely to report lower body mass index (BMI), and indicate higher importance of fresh food products, vegetable prominence in meals, product information use, meal planning, perceived behavioural control and overall diet satisfaction. Gatekeepers in the lowest confidence group were significantly more likely to indicate more perceived barriers to healthy eating, report more time constraints and more impulse purchasing practices, and higher convenience ingredient use. Other smaller associations were also found. Household food gatekeepers with high food skills confidence were more likely to engage in several healthy food practices, while those with low food skills confidence were more likely to engage in unhealthy food practices. Food education strategies aimed at building food-skills and nutrition knowledge will enable current and future gatekeepers to make healthier food decisions for themselves and for their families. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Interval selection with machine-dependent intervals

    OpenAIRE

    Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

    2013-01-01

    We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...

  1. Distribution of the product confidence limits for the indirect effect: Program PRODCLIN

    Science.gov (United States)

    MacKinnon, David P.; Fritz, Matthew S.; Williams, Jason; Lockwood, Chondra M.

    2010-01-01

    This article describes a program, PRODCLIN (distribution of the PRODuct Confidence Limits for INdirect effects), written for SAS, SPSS, and R, that computes confidence limits for the product of two normal random variables. The program is important because it can be used to obtain more accurate confidence limits for the indirect effect, as demonstrated in several recent articles (MacKinnon, Lockwood, & Williams, 2004; Pituch, Whittaker, & Stapleton, 2005). Tests of the significance of and confidence limits for indirect effects based on the distribution of the product method have more accurate Type I error rates and more power than other, more commonly used tests. Values for the two paths involved in the indirect effect and their standard errors are entered in the PRODCLIN program, and distribution of the product confidence limits are computed. Several examples are used to illustrate the PRODCLIN program. The PRODCLIN programs in rich text format may be downloaded from www.psychonomic.org/archive. PMID:17958149
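PRODCLIN computes the analytic distribution of the product of two normal random variables. A Monte Carlo approximation of the same confidence limits (with hypothetical path estimates and standard errors, not the program's tabulated values) conveys the idea:

```python
import random

# Hypothetical path estimates (a, b) and standard errors; the true
# indirect effect would be a*b = 0.12 here.
a, se_a, b, se_b = 0.4, 0.1, 0.3, 0.1

random.seed(0)
draws = sorted(
    random.gauss(a, se_a) * random.gauss(b, se_b) for _ in range(100_000)
)
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"95% limits for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```

Because the product of two normals is skewed, these limits are asymmetric around a·b, which is why product-distribution methods outperform symmetric normal-theory intervals for indirect effects.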

  2. Cardiopulmonary resuscitation; use, training and self-confidence in skills. A self-report study among hospital personnel

    Directory of Open Access Journals (Sweden)

    Hopstock Laila A

    2008-12-01

    Background: Immediate start of basic cardiopulmonary resuscitation (CPR) and early defibrillation have been highlighted as crucial for survival from cardiac arrest, but despite new knowledge, new technology and massive personnel training, the survival rates from in-hospital cardiac arrest are still low. National guidelines recommend regular intervals of CPR training to make all hospital personnel able to perform basic CPR till advanced care is available. This study investigates CPR training, resuscitation experience and self-confidence in skills among hospital personnel outside critical care areas. Methods: A cross-sectional study was performed at three Norwegian hospitals. Data on CPR training and CPR use were collected by self-reports from 361 hospital personnel. Results: A total of 89% reported training in CPR, but only 11% had updated their skills in accordance with the time interval recommended by national guidelines. Real resuscitation experience was reported by one third of the respondents. Both training intervals and use of skills in resuscitation situations differed among the professions. Self-reported confidence decreased only after more than two years since last CPR training. Conclusion: There is a gap between recommendations and reality in CPR training among hospital personnel working outside critical care areas.

  3. Raising Confident Kids

    Science.gov (United States)


  4. Solar Alpha Rotary Joint (SARJ) Lubrication Interval Test and Evaluation (LITE). Post-Test Grease Analysis

    Science.gov (United States)

    Golden, Johnny L.; Martinez, James E.; Devivar, Rodrigo V.

    2015-01-01

    The Solar Alpha Rotary Joint (SARJ) is a mechanism of the International Space Station (ISS) that orients the solar power generating arrays toward the sun as the ISS orbits our planet. The orientation with the sun must be maintained to fully charge the ISS batteries and maintain all the other ISS electrical systems operating properly. In 2007, just a few months after full deployment, the starboard SARJ developed anomalies that warranted a full investigation including ISS Extravehicular Activity (EVA). The EVA uncovered unexpected debris that was due to degradation of a nitride layer on the SARJ bearing race. ISS personnel identified the failure root-cause and applied an aerospace grease to lubricate the area associated with the anomaly. The corrective action allowed the starboard SARJ to continue operating within the specified engineering parameters. The SARJ LITE (Lubrication Interval Test and Evaluation) program was initiated by NASA, Lockheed Martin, and Boeing to simulate the operation of the ISS SARJ for an extended time. The hardware was designed to test and evaluate the exact material components used aboard the ISS SARJ, but in a controlled area where engineers could continuously monitor the performance. After running the SARJ LITE test for an equivalent of 36+ years of continuous use, the test was opened to evaluate the metallography and lubrication. We have sampled the SARJ LITE rollers and plate to fully assess the grease used for lubrication. Chemical and thermal analysis of these samples has generated information that has allowed us to assess the location, migration, and current condition of the grease. The collective information will be key toward understanding and circumventing any performance deviations involving the ISS SARJ in the years to come.

  5. National Survey of Adult and Pediatric Reference Intervals in Clinical Laboratories across Canada: A Report of the CSCC Working Group on Reference Interval Harmonization.

    Science.gov (United States)

    Adeli, Khosrow; Higgins, Victoria; Seccombe, David; Collier, Christine P; Balion, Cynthia M; Cembrowski, George; Venner, Allison A; Shaw, Julie

    2017-11-01

    Reference intervals are widely used decision-making tools in laboratory medicine, serving as health-associated standards to interpret laboratory test results. Numerous studies have shown wide variation in reference intervals, even between laboratories using assays from the same manufacturer. Lack of consistency in either sample measurement or reference intervals across laboratories challenges the expectation of standardized patient care regardless of testing location. Here, we present data from a national survey conducted by the Canadian Society of Clinical Chemists (CSCC) Reference Interval Harmonization (hRI) Working Group that examines variation in laboratory reference sample measurements, as well as pediatric and adult reference intervals currently used in clinical practice across Canada. Data on reference intervals currently used by 37 laboratories were collected through a national survey to examine the variation in reference intervals for seven common laboratory tests. Additionally, 40 clinical laboratories participated in a baseline assessment by measuring six analytes in a reference sample. Of the seven analytes examined, alanine aminotransferase (ALT), alkaline phosphatase (ALP), and creatinine reference intervals were most variable. As expected, reference interval variation was more substantial in the pediatric population and varied between laboratories using the same instrumentation. Reference sample results differed between laboratories, particularly for ALT and free thyroxine (FT4). Reference interval variation was greater than test result variation for the majority of analytes. It is evident that there is a critical lack of harmonization in laboratory reference intervals, particularly for the pediatric population. Furthermore, the observed variation in reference intervals across instruments cannot be explained by the bias between the results obtained on instruments by different manufacturers. 
Copyright © 2017 The Canadian Society of Clinical Chemists
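The survey above concerns reference intervals derived from healthy populations. As a hedged illustration of the conventional direct approach, the interval is the central 95% of results from a reference sample; the values below are simulated, ALT-like numbers, not survey data:

```python
import random

# Simulated "healthy population" results (ALT-like values, U/L); the
# direct reference interval is the central 95% of observations.
random.seed(2)
values = sorted(random.gauss(25, 8) for _ in range(720))
lo = values[int(0.025 * len(values))]
hi = values[int(0.975 * len(values))]
print(f"reference interval: {lo:.1f} to {hi:.1f} U/L")
```

Because each laboratory derives such limits from its own population and assay, harmonization efforts like the one reported here are needed to reconcile the resulting variation.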

  6. Study on the thyroid function of thoroughbred horses by means of 'in vitro' ¹²⁵I-T₃ modified and ¹²⁵I-T₄ tests

    Energy Technology Data Exchange (ETDEWEB)

    de Martin, B W [Sao Paulo Univ. (Brazil). Faculdade de Medicina Veterinaria e Zootecnia

    1975-01-01

    Sera of 71 animals, divided into groups of males and females, at rest and after activity, were studied. The method to establish the percentage of ¹²⁵I-liothyronine retention in resin (the ¹²⁵I-T₃ or T₃ test) was modified by the use of 0.2 ml of serum on the resin column, after addition of the labelled hormone. This modification served to prove that thoroughbred equines show binding of ¹²⁵I-liothyronine to the serum four times reduced, indicating, therefore, that these animals have four times more ligation sites of triiodothyronine saturation in the serum, when compared with the results obtained from human beings. The variance analysis applied to the T₃ test showed no significant results at the 95% level with regard to activity. For the 71 animals, the author found an average of 50.30% ¹²⁵I-liothyronine retention in resin, with a 95% confidence interval of 48.75% to 51.85% for this group. Evaluating the results of the T₄ test by means of variance analysis, the male and female groups at rest differed statistically from the groups after activity at the 95% confidence level. The author grouped the results of the T₄ test of 32 equines, 18 males and 14 females, at rest, obtaining an average of 0.61 mcg T₄/100 ml, with a 95% confidence interval of 0.51 to 0.71 mcg T₄/100 ml. There were 39 results of the T₄ test after activity, 23 males and 16 females, with an average of 2.01 mcg of thyroxine per 100 ml of serum and a 95% confidence interval of 1.72 to 2.30 mcg T₄/100 ml.

  7. Understanding public confidence in government to prevent terrorist attacks.

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, T. E.; Ramaprasad, A,; Samsa, M. E.; Decision and Information Sciences; Univ. of Illinois at Chicago

    2008-04-02

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode its confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the principal metrics used to assess the consequences of terrorist attacks. Hence, a detailed understanding of the variations in public confidence among individuals, terrorist event types, and as a function of time is critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data was collected from three groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery explosion attack, and cyber intrusions on financial institutions, resulting in identity theft. Our findings are: (a) although the aggregate confidence level is low, there are optimists and pessimists; (b) the subjects are discriminating in interpreting the nature of a terrorist attack, the time horizon, and its impact; (c) confidence recovery after a terrorist event has an incubation period; and (d) the patterns of recovery of confidence of the optimists and the pessimists are different. These findings can affect the strategy and policies to manage public confidence after a terrorist event.

  8. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  9. Gait in children with cerebral palsy : observer reliability of Physician Rating Scale and Edinburgh Visual Gait Analysis Interval Testing scale

    NARCIS (Netherlands)

    Maathuis, KGB; van der Schans, CP; van Iperen, A; Rietman, HS; Geertzen, JHB

    2005-01-01

    The aim of this study was to test the inter- and intra-observer reliability of the Physician Rating Scale (PRS) and the Edinburgh Visual Gait Analysis Interval Testing (GAIT) scale for use in children with cerebral palsy (CP). Both assessment scales are quantitative observational scales, evaluating

  10. Normative values for the unipedal stance test with eyes open and closed.

    Science.gov (United States)

    Springer, Barbara A; Marin, Raul; Cyhan, Tamara; Roberts, Holly; Gill, Norman W

    2007-01-01

    Limited normative data are available for the unipedal stance test (UPST), making it difficult for clinicians to use it confidently to detect subtle balance impairments. The purpose of this study was to generate normative values for repeated trials of the UPST with eyes opened and eyes closed across age groups and gender. This prospective, mixed-model design was set in a tertiary care medical center. Healthy subjects (n = 549), 18 years or older, performed the UPST with eyes open and closed. Mean and best-of-3 UPST times for males and females of 6 age groups (18-39, 40-49, 50-59, 60-69, 70-79, and 80+) were documented, and inter-rater reliability was tested. There was a significant age-dependent decrease in UPST time under both conditions. Inter-rater reliability for the best of 3 trials was determined to be excellent, with an intraclass correlation coefficient of 0.994 (95% confidence interval 0.989-0.996) for eyes open and 0.998 (95% confidence interval 0.996-0.999) for eyes closed. This study adds to the understanding of typical performance on the UPST. Performance is age-specific and not related to gender. Clinicians now have more extensive normative values to which individuals can be compared.

  11. A study on assessment methodology of surveillance test interval and allowed outage time

    International Nuclear Information System (INIS)

    Che, Moo Seong; Cheong, Chang Hyeon; Lee, Byeong Cheol

    1996-07-01

    The objective of this study is to develop a methodology for assessing optimal Surveillance Test Intervals (STIs) and Allowed Outage Times (AOTs) using PSA methods, which can supplement the current deterministic methods and improve the safety of Korean nuclear power plants. In the first year of this study, a survey of the assessment methodologies, models, and results of domestic and international research was performed as a preliminary step before developing the assessment methodology of this study. An assessment methodology that addresses the problems revealed in other studies is presented, and its application to an example system demonstrates the feasibility of the method.

  13. De-labelling self-reported penicillin allergy within the emergency department through the use of skin tests and oral drug provocation testing.

    Science.gov (United States)

    Marwood, Joseph; Aguirrebarrena, Gonzalo; Kerr, Stephen; Welch, Susan A; Rimmer, Janet

    2017-10-01

    Self-reported penicillin allergy is common among patients attending the ED, but is a poor predictor of true immunoglobulin E-mediated hypersensitivity to penicillin. We hypothesise that with a combination of skin testing and drug provocation testing, selected patients can be safely de-labelled of their allergy. This prospective study enrolled a sample of patients presenting to an urban academic ED between 2011 and 2016 with a self-reported allergy to penicillin. Standardised skin prick and intradermal testing with amoxicillin and both major and minor determinants of penicillin was performed in the department. If negative, testing was followed by a graded oral challenge of amoxicillin over 9 days. The primary end point was the allergy status of participants at the end of the study. A total of 100 patients (mean age 42; standard deviation 14 years; 54% women) completed the testing. Of these, 81% (95% confidence interval 71.9-88.2) showed no hypersensitivity to penicillin and were labelled non-allergic. The majority (16/19) of allergies were confirmed by skin testing, with three suspected allergies detected by the oral challenge. Women were more likely than men to have a true penicillin allergy, with odds ratio of 4.0 (95% confidence interval 1.23-13.2). There were no serious adverse events. Selected patients in the ED who self-report an allergy to penicillin can be safely tested there for penicillin allergy, using skin tests and oral drug provocation testing. This testing allows a significant de-labelling of penicillin allergy, with the majority of these patients able to tolerate penicillin without incident. © 2017 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
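    The de-labelling proportion above (81%, 95% CI 71.9-88.2) is consistent with an exact Clopper-Pearson binomial interval for 81 of 100 patients. A sketch of that computation (the function is an illustration, not code from the study):

```python
from scipy.stats import beta

def clopper_pearson(successes, n, confidence=0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    alpha = 1 - confidence
    # Lower bound is 0 when there are no successes; upper is 1 when all succeed.
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(81, 100)  # approximately (0.72, 0.88)
```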

  14. Testing equality and interval estimation in binary responses when high dose cannot be used first under a three-period crossover design.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2015-01-01

    When comparing two doses of a new drug with a placebo, we may consider using a crossover design subject to the condition that the high dose cannot be administered before the low dose. Under a random-effects logistic regression model, we focus our attention on dichotomous responses when the high dose cannot be used first in a three-period crossover trial. We derive asymptotic test procedures for testing equality between treatments. We further derive interval estimators to assess the magnitude of the relative treatment effects. We employ Monte Carlo simulation to evaluate the performance of these test procedures and interval estimators in a variety of situations. We use data from a trial comparing two different doses of an analgesic with a placebo for the relief of primary dysmenorrhea to illustrate the use of the proposed test procedures and estimators.

  15. Clinical tests of ankle plantarflexor strength do not predict ankle power generation during walking.

    Science.gov (United States)

    Kahn, Michelle; Williams, Gavin

    2015-02-01

    The aim of this study was to investigate the relationship between a clinical test of ankle plantarflexor strength and ankle power generation (APG) at push-off during walking. This is a prospective cross-sectional study of 102 patients with traumatic brain injury. Handheld dynamometry was used to measure ankle plantarflexor strength. Three-dimensional gait analysis was performed to quantify ankle power generation at push-off during walking. Ankle plantarflexor strength was only moderately correlated with ankle power generation at push-off (r = 0.43, P < 0.001; 95% confidence interval, 0.26-0.58). There was also a moderate correlation between ankle plantarflexor strength and self-selected walking velocity (r = 0.32, P = 0.002; 95% confidence interval, 0.13-0.48). Handheld dynamometry measures of ankle plantarflexor strength are only moderately correlated with ankle power generation during walking. This clinical test of ankle plantarflexor strength is a poor predictor of calf muscle function during gait in people with traumatic brain injury.
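    The correlation of r = 0.43 with n = 102 and 95% CI 0.26-0.58 matches the usual Fisher z-transform interval. A sketch of that construction (illustrative, not the study's code):

```python
import math
from scipy.stats import norm

def correlation_ci(r, n, confidence=0.95):
    """Confidence interval for Pearson's r via the Fisher z-transform."""
    z = math.atanh(r)                  # map r to an approximately normal scale
    se = 1.0 / math.sqrt(n - 3)        # standard error on the z scale
    z_crit = norm.ppf((1 + confidence) / 2)
    # Back-transform the interval endpoints to the r scale.
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = correlation_ci(0.43, 102)     # ≈ (0.26, 0.58), matching the reported interval
```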

  16. The cognitive approach on self-confidence of the girl students with academic failure

    Directory of Open Access Journals (Sweden)

    Ommolbanin Sheibani

    2011-01-01

    Full Text Available Background: Attributing the consequences of behavior to external factors such as chance, luck or the help of other people affects one's self-confidence and educational achievement. The aim of this study is to assess the effect of teaching “locus of control” on the basis of a cognitive approach on the self-confidence of Yazd high school students. Materials and Method: This descriptive-analytic research was carried out as an experimental project using a pre-test and post-test method on 15 first-grade high school students in Yazd city during the 1387-88 educational year. The participants were chosen by multistage cluster random sampling. Fifteen students were also chosen as the control group. The instruments used in this research were the Eysenck self-confidence test and the “locus of control” teaching program. A t-test was used for the statistical analysis. Results: The statistical analyses showed a significant difference between the experimental and control groups (p=0.01) in the increase of self-confidence. The results of the t-test also revealed no significant difference in the educational achievement of the experimental group before and after teaching “locus of control”. Conclusion: According to this study, teaching “locus of control” on the basis of the cognitive approach has a significant effect on self-confidence but no positive effect on educational achievement.

  17. The application of deep confidence network in the problem of image recognition

    Directory of Open Access Journals (Sweden)

    Chumachenko О.І.

    2016-12-01

    Full Text Available In order to study the concept of deep learning, in particular the substitution of a multilayer perceptron by a corresponding deep confidence network (i.e., a deep belief network), computer simulation of the learning process was carried out on a test sample. The multilayer perceptron was replaced by a deep confidence network consisting of successive restricted Boltzmann machines. After training the deep confidence network with a layer-wise training algorithm, it was found that the use of deep confidence networks greatly improves the accuracy of multilayer perceptron training by the backpropagation of errors.

  18. Diverse interpretations of confidence building

    International Nuclear Information System (INIS)

    Macintosh, J.

    1998-01-01

    This paper explores the variety of operational understandings associated with the term 'confidence building'. Collectively, these understandings constitute what should be thought of as a 'family' of confidence building approaches. This unacknowledged and generally unappreciated proliferation of operational understandings that function under the rubric of confidence building appears to be an impediment to effective policy. The paper's objective is to analyze these different understandings, stressing the important differences in their underlying assumptions. In the process, the paper underlines the need for the international community to clarify its collective thinking about what it means when it speaks of 'confidence building'. Without enhanced clarity, it will be unnecessarily difficult to employ the confidence building approach effectively due to the lack of consistent objectives and common operating assumptions. Although it is not the intention of this paper to promote a particular account of confidence building, dissecting existing operational understandings should help to identify whether there are fundamental elements that define what might be termed 'authentic' confidence building. Implicit here is the view that some operational understandings of confidence building may diverge too far from consensus models to count as meaningful members of the confidence building family. (author)

  19. Chinese Management Research Needs Self-Confidence but not Over-confidence

    DEFF Research Database (Denmark)

    Li, Xin; Ma, Li

    2018-01-01

    Chinese management research aims to contribute to global management knowledge by offering rigorous and innovative theories and practical recommendations both for managing in China and outside. However, two seemingly opposite directions that researchers are taking could prove detrimental to the healthy development of Chinese management research. We argue that the two directions share a common ground that lies in the mindset regarding the confidence in the work on and from China. One direction of simply following the American mainstream on academic rigor demonstrates a lack of self-confidence, limiting theoretical innovation and practical relevance. Yet going in the other direction of overly indigenous research reflects over-confidence, often isolating the Chinese management research from the mainstream academia and at times, even becoming anti-science. A more integrated approach of conducting…

  20. Bootstrap confidence intervals for principal response curves

    NARCIS (Netherlands)

    Timmerman, Marieke E.; Ter Braak, Cajo J. F.

    2008-01-01

    The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

  2. Is it possible to develop a cross-country test of social interaction?

    Science.gov (United States)

    Berg, Brett; Atler, Karen; Fisher, Anne G

    2017-11-01

    The Evaluation of Social Interaction (ESI) is used in Asia, Australia, North America and Europe. What is considered to be appropriate social interaction, however, differs amongst countries. If social interaction varies, the relative difficulty of the ESI items and types of social exchange also could vary, resulting in differential item functioning (DIF) and test bias in the form of differential test functioning (DTF). Yet, because the ESI scoring criteria are designed to account for culture, the ESI should be free of DIF and DTF. The purpose, therefore, was to determine whether the ESI demonstrates DIF or DTF related to country. A retrospective, descriptive, cross-sectional study of 9811 participants 2-102 years, 55% female, from 12 countries was conducted using many-facet Rasch analyses. DIF analyses compared paired item and social exchange type values by country against a critical effect size (±0.55 logit). DTF analyses compared paired ESI measures by country to 95% confidence intervals. All paired social exchange types and 98.3% of paired items differed by less than ±0.55 logit. All persons fell within 95% confidence intervals. Minimal DIF resulted in no test bias, supporting the cross-country validity of the ESI.

  3. Autoimmune antibodies and recurrence-free interval in melanoma patients treated with adjuvant interferon

    DEFF Research Database (Denmark)

    Bouwhuis, Marna G; Suciu, Stefan; Collette, Sandra

    2009-01-01

    relapse-free interval in both trials (EORTC 18952, hazard ratio [HR] = 0.41, 95% confidence interval [CI] = 0.25 to 0.68, P P ... (model 2: EORTC 18952, HR = 0.81, 95% CI = 0.46 to 1.40, P = .44; and Nordic IFN, HR = 0.85, 95% CI = 0.55 to 1.30, P = .45; model 3: EORTC 18952, HR = 1.05, 95% CI = 0.59 to 1.87, P = .88; and Nordic IFN, HR = 0.78, 95% CI = 0.49 to 1.24, P = .30). CONCLUSIONS: In two randomized trials of IFN...

  4. A new perspective in the estimation of postmortem interval (PMI) based on vitreous.

    Science.gov (United States)

    Muñoz, J I; Suárez-Peñaranda, J M; Otero, X L; Rodríguez-Calvo, M S; Costas, E; Miguéns, X; Concheiro, L

    2001-03-01

    The relation between the potassium concentration in the vitreous humor, [K+], and the postmortem interval has been studied by several authors. Many formulae are available and they are based on a correlation test and linear regression using the PMI as the independent variable and [K+] as the dependent variable. The estimation of the confidence interval is based on this formulation. However, in forensic work, it is necessary to use [K+] as the independent variable to estimate the PMI. Although all authors have obtained the PMI by direct use of these formulae, it is, nevertheless, an inexact approach, which leads to false estimations. What is required is to change the variables, obtaining a new equation in which [K+] is considered as the independent variable and the PMI as the dependent. The regression line obtained from our data is [K+] = 5.35 + 0.22 PMI, by changing the variables we get PMI = 2.58[K+] - 9.30. When only nonhospital deaths are considered, the results are considerably improved. In this case, we get [K+] = 5.60 + 0.17 PMI and, consequently, PMI = 3.92[K+] - 19.04.
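    The point about changing variables can be illustrated numerically: regressing PMI on [K+] is not the algebraic inverse of regressing [K+] on PMI, because least squares only minimizes error in the dependent variable. A sketch with synthetic data (the numbers below are simulated, not the authors' case series):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data loosely shaped like the paper's relation [K+] = 5.35 + 0.22*PMI.
pmi = rng.uniform(2, 60, size=200)                  # postmortem interval, hours
k = 5.35 + 0.22 * pmi + rng.normal(0, 1.2, 200)     # vitreous [K+], with scatter

# Forward fit ([K+] on PMI), then algebraic inversion of that line.
b1, b0 = np.polyfit(pmi, k, 1)
inverted = lambda kk: (kk - b0) / b1

# Direct fit (PMI on [K+]), as the paper advocates for estimating PMI.
c1, c0 = np.polyfit(k, pmi, 1)
direct = lambda kk: c0 + c1 * kk

r = np.corrcoef(pmi, k)[0, 1]
pred_gap = direct(7.0) - inverted(7.0)  # the two PMI estimates at [K+] = 7 differ
```

    The direct slope equals r² times the inverted slope, so the two estimators coincide only when the correlation is perfect; this is why refitting with [K+] as the independent variable yields a different, and for prediction more appropriate, equation.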

  5. PedGenie: meta genetic association testing in mixed family and case-control designs

    Directory of Open Access Journals (Sweden)

    Allen-Brady Kristina

    2007-11-01

    Full Text Available Abstract Background- PedGenie software, introduced in 2006, includes genetic association testing of cases and controls that may be independent or related (nuclear families or extended pedigrees, or mixtures thereof) using Monte Carlo significance testing. Our aim is to demonstrate that PedGenie, a unique and flexible analysis tool freely available in Genie 2.4 software, is significantly enhanced by incorporating meta statistics for detecting genetic association with disease using data across multiple study groups. Methods- Meta statistics (chi-squared tests, odds ratios, and confidence intervals) were calculated using formal Cochran-Mantel-Haenszel techniques. Simulated data from unrelated individuals and individuals in families were used to illustrate that meta tests and their empirically derived p-values and confidence intervals are accurate and precise, and that for independent designs they match those provided by standard statistical software. Results- PedGenie yields accurate Monte Carlo p-values for meta analysis of data across multiple studies, based on validation testing using pedigree, nuclear family, and case-control data simulated under both the null and alternative hypotheses of a genotype-phenotype association. Conclusion- PedGenie allows valid combined analysis of data from mixtures of pedigree-based and case-control resources. Added meta capabilities provide new avenues for association analysis, including pedigree resources from large consortia and multi-center studies.
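    The Cochran-Mantel-Haenszel machinery referred to above combines stratified 2×2 tables into a single chi-squared statistic. A generic sketch of that statistic (an illustration, not PedGenie code):

```python
def cmh_statistic(tables):
    """Cochran-Mantel-Haenszel chi-squared statistic (no continuity correction)
    for a list of 2x2 tables [[a, b], [c, d]], one per stratum/study."""
    num, var = 0.0, 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        # Expected count and variance of cell `a` under the null of no association.
        expected = (a + b) * (a + c) / n
        num += a - expected
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    return num ** 2 / var

# Two strata with no association in either: the statistic is 0.
stat = cmh_statistic([[[10, 10], [10, 10]], [[20, 5], [20, 5]]])
```

    The statistic is referred to a chi-squared distribution with 1 degree of freedom; PedGenie instead derives significance empirically by Monte Carlo, which is what makes related (pedigree) data tractable.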

  6. An Interval Estimation Method of Patent Keyword Data for Sustainable Technology Forecasting

    Directory of Open Access Journals (Sweden)

    Daiho Uhm

    2017-11-01

    Full Text Available Technology forecasting (TF is forecasting the future state of a technology. It is exciting to know the future of technologies, because technology changes the way we live and enhances the quality of our lives. In particular, TF is an important area in the management of technology (MOT for R&D strategy and new product development. Consequently, there are many studies on TF. Patent analysis is one method of TF because patents contain substantial information regarding developed technology. The conventional methods of patent analysis are based on quantitative approaches such as statistics and machine learning. The most traditional TF methods based on patent analysis have a common problem. It is the sparsity of patent keyword data structured from collected patent documents. After preprocessing with text mining techniques, most frequencies of technological keywords in patent data have values of zero. This problem creates a disadvantage for the performance of TF, and we have trouble analyzing patent keyword data. To solve this problem, we propose an interval estimation method (IEM. Using an adjusted Wald confidence interval called the Agresti–Coull confidence interval, we construct our IEM for efficient TF. In addition, we apply the proposed method to forecast the technology of an innovative company. To show how our work can be applied in the real domain, we conduct a case study using Apple technology.
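    The Agresti-Coull interval mentioned above adds z²/2 pseudo-successes and z²/2 pseudo-failures to the counts before applying the usual Wald formula, which behaves much better for the sparse, near-zero keyword frequencies described. A sketch (the function name is mine, not from the paper):

```python
import math
from scipy.stats import norm

def agresti_coull(successes, n, confidence=0.95):
    """Agresti-Coull adjusted Wald interval for a binomial proportion."""
    z = norm.ppf((1 + confidence) / 2)
    n_adj = n + z ** 2                          # adjusted trial count
    p_adj = (successes + z ** 2 / 2) / n_adj    # adjusted proportion
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

# e.g. a sparse keyword present in 3 of 20 patents
lo, hi = agresti_coull(3, 20)
```

    Unlike the plain Wald interval, the adjusted interval never collapses to zero width when a keyword count is 0, which is the failure mode the sparsity discussion above is concerned with.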

  7. The construction of categorization judgments: using subjective confidence and response latency to test a distributed model.

    Science.gov (United States)

    Koriat, Asher; Sorka, Hila

    2015-01-01

    The classification of objects to natural categories exhibits cross-person consensus and within-person consistency, but also some degree of between-person variability and within-person instability. What is more, the variability in categorization is also not entirely random but discloses systematic patterns. In this study, we applied the Self-Consistency Model (SCM, Koriat, 2012) to category membership decisions, examining the possibility that confidence judgments and decision latency track the stable and variable components of categorization responses. The model assumes that category membership decisions are constructed on the fly depending on a small set of clues that are sampled from a commonly shared population of pertinent clues. The decision and confidence are based on the balance of evidence in favor of a positive or a negative response. The results confirmed several predictions derived from SCM. For each participant, consensual responses to items were more confident than non-consensual responses, and for each item, participants who made the consensual response tended to be more confident than those who made the nonconsensual response. The difference in confidence between consensual and nonconsensual responses increased with the proportion of participants who made the majority response for the item. A similar pattern was observed for response speed. The pattern of results obtained for cross-person consensus was replicated by the results for response consistency when the responses were classified in terms of within-person agreement across repeated presentations. These results accord with the sampling assumption of SCM, that confidence and response speed should be higher when the decision is consistent with what follows from the entire population of clues than when it deviates from it. Results also suggested that the context for classification can bias the sample of clues underlying the decision, and that confidence judgments mirror the effects of context on

  8. Change in Breast Cancer Screening Intervals Since the 2009 USPSTF Guideline.

    Science.gov (United States)

    Wernli, Karen J; Arao, Robert F; Hubbard, Rebecca A; Sprague, Brian L; Alford-Teaster, Jennifer; Haas, Jennifer S; Henderson, Louise; Hill, Deidre; Lee, Christoph I; Tosteson, Anna N A; Onega, Tracy

    2017-08-01

    In 2009, the U.S. Preventive Services Task Force (USPSTF) recommended biennial mammography for women aged 50-74 years and shared decision-making for women aged 40-49 years for breast cancer screening. We evaluated changes in mammography screening interval after the 2009 recommendations. We conducted a prospective cohort study of women aged 40-74 years who received 821,052 screening mammograms between 2006 and 2012 using data from the Breast Cancer Surveillance Consortium. We compared changes in screening intervals and stratified intervals based on whether the mammogram at the end of the interval occurred before or after the 2009 recommendation. Differences in mean interval length by woman-level characteristics were compared using linear regression. The mean interval (in months) minimally decreased after the 2009 USPSTF recommendations. Among women aged 40-49 years, the mean interval decreased from 17.2 months to 17.1 months (difference -0.16%, 95% confidence interval [CI] -0.30 to -0.01). Similar small reductions were seen for most age groups. The largest change in interval length in the post-USPSTF period was declines among women with a first-degree family history of breast cancer (difference -0.68%, 95% CI -0.82 to -0.54) or a 5-year breast cancer risk ≥2.5% (difference -0.58%, 95% CI -0.73 to -0.44). The 2009 USPSTF recommendation did not lengthen the average mammography interval among women routinely participating in mammography screening. Future studies should evaluate whether breast cancer screening intervals lengthen toward biennial intervals following new national 2016 breast cancer screening recommendations, particularly among women less than 50 years of age.

  9. Confidence in Forced-Choice Recognition: What Underlies the Ratings?

    Science.gov (United States)

    Zawadzka, Katarzyna; Higham, Philip A.; Hanczakowski, Maciej

    2017-01-01

    Two-alternative forced-choice recognition tests are commonly used to assess recognition accuracy that is uncontaminated by changes in bias. In such tests, participants are asked to endorse the studied item out of 2 presented alternatives. Participants may be further asked to provide confidence judgments for their recognition decisions. It is often…

  10. To compare the accuracy of Prayer's sign and Mallampatti test in predicting difficult intubation in Diabetic patients

    International Nuclear Information System (INIS)

    Baig, M. M. A.; Khan, F. H.

    2014-01-01

    Objective: To determine the accuracy of Prayer's sign and the Mallampatti test in predicting difficult endotracheal intubation in diabetic patients. Methods: The cross-sectional study was performed at Aga Khan University Hospital, Karachi, from January 2009 to April 2010, and comprised 357 patients who required endotracheal intubation for elective surgical procedures. Prayer's sign and Mallampatti tests were performed for the assessment of the airway by trained observers. The ease or difficulty of laryngoscopy after the patient was fully anaesthetised with a standard technique was observed, and the laryngoscopic view on the first attempt was rated according to the Cormack-Lehane grade of intubation. SPSS 15 was used for statistical analysis. Results: Of the 357 patients, 125 (35%) were classified as difficult to intubate. Prayer's sign showed significantly lower accuracy and positive and negative predictive values than the Mallampatti test. The sensitivity of Prayer's sign (29.6; 95% confidence interval, 21.9-38.5) was lower than that of the Mallampatti test (79.3; 95% confidence interval, 70.8-85.7), while the specificity of the two tests was not significantly different. Conclusion: Prayer's sign is not acceptable as a single best bedside test for the prediction of difficult intubation. (author)

  11. Sequential Interval Estimation of a Location Parameter with Fixed Width in the Nonregular Case

    OpenAIRE

    Koike, Ken-ichi

    2007-01-01

    For a location-scale parameter family of distributions with a finite support, a sequential confidence interval with a fixed width is obtained for the location parameter, and its asymptotic consistency and efficiency are shown. Some comparisons with the Chow-Robbins procedure are also done.

  12. Thought confidence as a determinant of persuasion: the self-validation hypothesis.

    Science.gov (United States)

    Petty, Richard E; Briñol, Pablo; Tormala, Zakary L

    2002-05-01

    Previous research in the domain of attitude change has described 2 primary dimensions of thinking that impact persuasion processes and outcomes: the extent (amount) of thinking and the direction (valence) of issue-relevant thought. The authors examined the possibility that another, more meta-cognitive aspect of thinking is also important-the degree of confidence people have in their own thoughts. Four studies test the notion that thought confidence affects the extent of persuasion. When positive thoughts dominate in response to a message, increasing confidence in those thoughts increases persuasion, but when negative thoughts dominate, increasing confidence decreases persuasion. In addition, using self-reported and manipulated thought confidence in separate studies, the authors provide evidence that the magnitude of the attitude-thought relationship depends on the confidence people have in their thoughts. Finally, the authors also show that these self-validation effects are most likely in situations that foster high amounts of information processing activity.

  13. Study on risk insight for additional ILRT interval extension

    International Nuclear Information System (INIS)

    Seo, M. R.; Hong, S. Y.; Kim, M. K.; Chung, B. S.; Oh, H. C.

    2005-01-01

    In the U.S., the containment Integrated Leakage Rate Test (ILRT) interval was extended from 3 times per 10 years to once per 10 years in 1995, based on NUREG-1493, 'Performance-Based Containment Leak-Test Program'. In September 2001, the ILRT interval was extended up to once per 15 years based on the Nuclear Energy Institute (NEI) provisional guidance 'Interim Guidance for Performing Risk Impact Assessments In Support of One-Time Extensions for Containment Integrated Leakage Rate Test Surveillance Intervals'. In Korea, the containment ILRT has been performed at a 5-year interval. However, under MOST (Ministry of Science and Technology) Notice 2004-15, 'Standard for the Leak-Rate Test of the Nuclear Reactor Containment', extension of the ILRT interval to once per 10 years can be allowed if certain conditions are met. A safety analysis for extending the Yonggwang Nuclear (YGN) Units 1 and 2 ILRT interval to once per 10 years was therefore completed based on the methodology in NUREG-1493. During the review by the regulatory body, KINS, however, it was required that additional risk insights or indices for the risk analysis be developed, so we studied the NEI interim report for the 15-year ILRT interval extension. As in the previous analysis based on NUREG-1493, the MACCS II (MELCOR Accident Consequence Code System) computer code was used for the risk analysis of the population, and the population dose was selected as a reference index for the risk evaluation.

  14. Confidence in Numerical Simulations

    International Nuclear Information System (INIS)

    Hemez, Francois M.

    2015-01-01

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to 'forecast', that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists 'think'. This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. 'Confidence' derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  15. Detection of lung cancer through low-dose CT screening (NELSON): a prespecified analysis of screening test performance and interval cancers.

    Science.gov (United States)

    Horeweg, Nanda; Scholten, Ernst Th; de Jong, Pim A; van der Aalst, Carlijn M; Weenink, Carla; Lammers, Jan-Willem J; Nackaerts, Kristiaan; Vliegenthart, Rozemarijn; ten Haaf, Kevin; Yousaf-Khan, Uraujh A; Heuvelmans, Marjolein A; Thunnissen, Erik; Oudkerk, Matthijs; Mali, Willem; de Koning, Harry J

    2014-11-01

    Low-dose CT screening is recommended for individuals at high risk of developing lung cancer. However, CT screening does not detect all lung cancers: some might be missed at screening, and others can develop in the interval between screens. The NELSON trial is a randomised trial to assess the effect of screening with increasing screening intervals on lung cancer mortality. In this prespecified analysis, we aimed to assess screening test performance, and the epidemiological, radiological, and clinical characteristics of interval cancers in NELSON trial participants assigned to the screening group. Eligible participants in the NELSON trial were those aged 50-75 years, who had smoked 15 or more cigarettes per day for more than 25 years or ten or more cigarettes for more than 30 years, and were still smoking or had quit less than 10 years ago. We included all participants assigned to the screening group who had attended at least one round of screening. Screening test results were based on volumetry using a two-step approach. Initially, screening test results were classified as negative, indeterminate, or positive based on nodule presence and volume. Subsequently, participants with an initial indeterminate result underwent follow-up screening to classify their final screening test result as negative or positive, based on nodule volume doubling time. We obtained information about all lung cancer diagnoses made during the first three rounds of screening, plus an additional 2 years of follow-up from the national cancer registry. We determined epidemiological, radiological, participant, and tumour characteristics by reassessing medical files, screening CTs, and clinical CTs. The NELSON trial is registered at www.trialregister.nl, number ISRCTN63545820. 15,822 participants were enrolled in the NELSON trial, of whom 7915 were assigned to low-dose CT screening with increasing interval between screens, and 7907 to no screening. We included 7155 participants in our study, with

  16. Effect of Remote Back-Up Protection System Failure on the Optimum Routine Test Time Interval of Power System Protection

    Directory of Open Access Journals (Sweden)

    Y Damchi

    2013-12-01

    Full Text Available Appropriate operation of the protection system is one of the factors essential to desirable reliability in power systems, which vitally depends on routine testing of the protection system. Precise determination of the optimum routine test time interval (ORTTI) plays a vital role in predicting the maintenance costs of the protection system. In most previous studies, ORTTI has been determined while the remote back-up protection system was considered fully reliable. This assumption is not exactly correct, since the remote back-up protection system may operate incorrectly or fail to operate, the same as the primary protection system. Therefore, in order to determine the ORTTI, an extended Markov model is proposed in this paper that considers a failure probability for the remote back-up protection system. In the proposed Markov model of the protection systems, the monitoring facility is taken into account. Moreover, it is assumed that the primary and back-up protection systems are maintained simultaneously. Results show that the effect of remote back-up protection system failures on the reliability indices and optimum routine test intervals of the protection system is considerable.

  17. Measuring the Confidence of 8th Grade Taiwanese Students' Knowledge of Acids and Bases

    Science.gov (United States)

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Tsai, Chun-Yen

    2012-01-01

    The present study investigated whether gender differences were present in the confidence judgments made by 8th grade Taiwanese students on the accuracy of their responses to acid-base test items. A total of 147 (76 male, 71 female) students provided item-specific confidence judgments during a test of their knowledge of acids and bases. Using the…

  18. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-11

    This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.

  19. The Effect of Learning Method and Confidence Level on the Ability of Interpreting Religious Poem

    Directory of Open Access Journals (Sweden)

    Kinayati Djojosuroto

    2017-11-01

    Full Text Available This research aims to determine the effect of the learning method (expository versus authentic) and the level of confidence on the ability to interpret religious poetry among third-semester students majoring in Indonesian Language and Literature Education at Universitas Negeri Manado. The method used is the quasi-experimental method with a 2 × 2 factorial design. The Y variable (ability to interpret religious poetry) was measured with a writing test, and the level of confidence with a questionnaire. The data analysis technique in this study is two-way analysis of variance (ANOVA) followed by the Tukey test to examine group interactions. Before hypothesis testing, the analysis requirements were checked: data normality using the Lilliefors test and homogeneity of variance using the Bartlett test. The results show differences in the ability to interpret religious poetry between students who studied with the expository method and students who studied with the authentic method. That is, overall, the expository method is better than the authentic method for improving the students' ability. To improve the ability to interpret religious poetry in the group with a lower level of confidence, however, it is better to use the authentic method. There is an interaction effect between the learning method (expository and authentic) and the level of confidence on the ability to interpret religious poetry. Based on these results, it can be concluded that: First, lecturers can determine what materials and methods can be used to enhance the ability to interpret religious poetry once the students' level of confidence is known. Second, the expository and authentic teaching methods will give different results for groups of students with different levels of confidence. 
Third, the increase of the ability to interpret

  20. Stability in the metamemory realism of eyewitness confidence judgments.

    Science.gov (United States)

    Buratti, Sandra; Allwood, Carl Martin; Johansson, Marcus

    2014-02-01

    The stability of eyewitness confidence judgments over time, with regard to their reported memory and the accuracy of these judgments, is of interest in forensic contexts because witnesses are often interviewed many times. The present study investigated the stability of the confidence judgments of memory reports of a witnessed event, and of the accuracy of these judgments, over three occasions, each separated by 1 week. Three age groups were studied: younger children (8-9 years), older children (10-11 years), and adults (19-31 years). A total of 93 participants viewed a short film clip and were asked to answer directed two-alternative forced-choice questions about the film clip and to confidence judge each answer. Different questions about details in the film clip were used on each of the three test occasions. Confidence as such did not exhibit stability over time on an individual basis. However, the difference between confidence and proportion correct did exhibit stability across time, in terms of both over/underconfidence and calibration. With respect to age, the adults and older children exhibited more stability than the younger children for calibration. Furthermore, some support for instability was found with respect to the difference between the average confidence level for correct and incorrect answers (slope). Unexpectedly, however, the younger children's slope was found to be more stable than the adults'. Compared with previous research, the present study's use of more advanced statistical methods provides a more nuanced understanding of the stability of confidence judgments in the eyewitness reports of children and adults.
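    The over/underconfidence, calibration, and slope measures discussed in this abstract are standard in the calibration literature. As a rough illustration (a minimal sketch of the textbook formulas, not the study's own analysis code), they can be computed as:

```python
from collections import defaultdict

def over_underconfidence(confidences, correct):
    """Mean confidence minus proportion correct; positive values
    indicate overconfidence, negative values underconfidence."""
    n = len(confidences)
    return sum(confidences) / n - sum(correct) / n

def calibration_index(confidences, correct):
    """Weighted mean squared deviation between each confidence level used
    and the proportion correct at that level; 0 means perfect calibration."""
    groups = defaultdict(list)
    for c, a in zip(confidences, correct):
        groups[c].append(a)
    n = len(confidences)
    return sum(len(a) * (c - sum(a) / len(a)) ** 2
               for c, a in groups.items()) / n

def slope(confidences, correct):
    """Mean confidence for correct answers minus mean confidence for
    incorrect answers (the 'slope' measure referred to above)."""
    hits = [c for c, a in zip(confidences, correct) if a]
    misses = [c for c, a in zip(confidences, correct) if not a]
    return sum(hits) / len(hits) - sum(misses) / len(misses)
```

    For example, confidence judgments of [1.0, 1.0, 0.5, 0.5] paired with answer correctness [1, 0, 1, 0] give an over/underconfidence of 0.25 and a calibration index of 0.125.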

  1. A systematic review of maternal confidence for physiologic birth: characteristics of prenatal care and confidence measurement.

    Science.gov (United States)

    Avery, Melissa D; Saftner, Melissa A; Larson, Bridget; Weinfurter, Elizabeth V

    2014-01-01

    Because a focus on physiologic labor and birth has reemerged in recent years, care providers have the opportunity in the prenatal period to help women increase confidence in their ability to give birth without unnecessary interventions. However, most research has only examined support for women during labor. The purpose of this systematic review was to examine the research literature for information about prenatal care approaches that increase women's confidence for physiologic labor and birth and tools to measure that confidence. Studies were reviewed that explored any element of a pregnant woman's interaction with her prenatal care provider that helped build confidence in her ability to labor and give birth. Timing of interaction with pregnant women included during pregnancy, labor and birth, and the postpartum period. In addition, we looked for studies that developed a measure of women's confidence related to labor and birth. Outcome measures included confidence or similar concepts, descriptions of components of prenatal care contributing to maternal confidence for birth, and reliability and validity of tools measuring confidence. The search of MEDLINE, CINAHL, PsycINFO, and Scopus databases provided a total of 893 citations. After removing duplicates and articles that did not meet inclusion criteria, 6 articles were included in the review. Three relate to women's confidence for labor during the prenatal period, and 3 describe tools to measure women's confidence for birth. Research about enhancing women's confidence for labor and birth was limited to qualitative studies. Results suggest that women desire information during pregnancy and want to use that information to participate in care decisions in a relationship with a trusted provider. Further research is needed to develop interventions to help midwives and physicians enhance women's confidence in their ability to give birth and to develop a tool to measure confidence for use during prenatal care. 

  2. Sources of sport confidence, imagery type and performance among competitive athletes: the mediating role of sports confidence.

    Science.gov (United States)

    Levy, A R; Perry, J; Nicholls, A R; Larkin, D; Davies, J

    2015-01-01

    This study explored the mediating role of sport confidence upon (1) the sources of sport confidence-performance relationship and (2) the imagery-performance relationship. Participants were 157 competitive athletes who completed state measures of confidence level/sources, imagery type and performance within one hour after competition. Among the current sample, confirmatory factor analysis revealed appropriate support for the nine-factor SSCQ and the five-factor SIQ. Mediational analysis revealed that sport confidence had a mediating influence upon the achievement source of confidence-performance relationship. In addition, both cognitive and motivational imagery types were found to be important sources of confidence, as sport confidence mediated the imagery type-performance relationship. Findings indicated that athletes who derive confidence from their own achievements and report multiple images on a more frequent basis are likely to benefit from enhanced levels of state sport confidence and subsequent performance.

  3. Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.

    Science.gov (United States)

    Lo, Y C; Armbruster, David A

    2012-04-01

    Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Intervals cited from the historical literature are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for genders combined; gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if they are validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to adapt the intervals for their facilities using the reference interval transference technique, based on a method comparison study.

  4. Effects of Training and Feedback on Accuracy of Predicting Rectosigmoid Neoplastic Lesions and Selection of Surveillance Intervals by Endoscopists Performing Optical Diagnosis of Diminutive Polyps.

    Science.gov (United States)

    Vleugels, Jasper L A; Dijkgraaf, Marcel G W; Hazewinkel, Yark; Wanders, Linda K; Fockens, Paul; Dekker, Evelien

    2018-05-01

    Real-time differentiation of diminutive polyps (1-5 mm) during endoscopy could replace histopathology analysis. According to guidelines, implementation of optical diagnosis into routine practice would require it to identify rectosigmoid neoplastic lesions with a negative predictive value (NPV) of more than 90%, using histologic findings as a reference, and agreement with histology-based surveillance intervals for more than 90% of cases. We performed a prospective study with 39 endoscopists accredited to perform colonoscopies on participants with positive results from fecal immunochemical tests in the Bowel Cancer Screening Program at 13 centers in the Netherlands. Endoscopists were trained in optical diagnosis using a validated module (Workgroup serrAted polypS and Polyposis). After meeting predefined performance thresholds in the training program, the endoscopists started a 1-year program (continuation phase) in which they performed narrow band imaging analyses during colonoscopies of participants in the screening program and predicted histological findings with confidence levels. The endoscopists were randomly assigned to groups that received feedback or no feedback on the accuracy of their predictions. Primary outcome measures were endoscopists' abilities to identify rectosigmoid neoplastic lesions (using histology as a reference) with NPVs of 90% or more, and selecting surveillance intervals that agreed with those determined by histology for at least 90% of cases. Of 39 endoscopists initially trained, 27 (69%) completed the training program. During the continuation phase, these 27 endoscopists performed 3144 colonoscopies in which 4504 diminutive polyps were removed. The endoscopists identified neoplastic lesions with a pooled NPV of 90.8% (95% confidence interval 88.6-92.6); their proposed surveillance intervals agreed with those determined by histologic analysis for 95.4% of cases (95% confidence interval 94.0-96.6). 
Findings did not differ between the group

  5. Integration testing through reusing representative unit test cases for high-confidence medical software.

    Science.gov (United States)

    Shin, Youngsul; Choi, Yunja; Lee, Woo Jin

    2013-06-01

    As medical software becomes larger, more complex, and more connected with other devices, finding faults in integrated software modules gets more difficult and time consuming. Existing integration testing typically takes a black-box approach, which treats the target software as a black box and selects test cases without considering internal behavior of each software module. Though it could be cost-effective, this black-box approach cannot thoroughly test interaction behavior among integrated modules and might leave critical faults undetected, which should not happen in safety-critical systems such as medical software. This work anticipates that information on internal behavior is necessary even for integration testing to define thorough test cases for critical software and proposes a new integration testing method by reusing test cases used for unit testing. The goal is to provide a cost-effective method to detect subtle interaction faults at the integration testing phase by reusing the knowledge obtained from the unit testing phase. The suggested approach notes that the test cases for unit testing include knowledge on the internal behavior of each unit and extracts test cases for integration testing from the unit test cases for a given test criterion. The extracted representative test cases are connected with functions under test using the state domain, and a single test sequence to cover the test cases is produced. By means of reusing unit test cases, the tester has effective test cases to examine diverse execution paths and find interaction faults without analyzing complex modules. The produced test sequence can have test coverage as high as the unit testing coverage, and its length is close to the length of optimal test sequences. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Decision time and confidence predict choosers' identification performance in photographic showups

    Science.gov (United States)

    Sagana, Anna; Sporer, Siegfried L.; Wixted, John T.

    2018-01-01

    In stark contrast to the multitude of lineup studies that report on the link between decision time, confidence, and identification accuracy, only a few studies have looked at these associations for showups, with results varying widely across studies. We therefore set out to test the individual and combined value of decision time and post-decision confidence for diagnosing the accuracy of positive showup decisions using confidence-accuracy characteristic curves and Bayesian analyses. Three hundred eighty-four participants viewed a stimulus event and were subsequently presented with two showups, which could be target-present or target-absent. As expected, we found a negative decision time-accuracy and a positive post-decision confidence-accuracy correlation for showup selections. Confidence-accuracy characteristic curves demonstrated the expected additive effect of combining both postdictors. Likewise, Bayesian analyses, taking into account all possible target-presence base rate values, showed that fast and confident identification decisions were more diagnostic than slow or less confident decisions, with the combination of both being most diagnostic for postdicting accurate and inaccurate decisions. The postdictive value of decision time and post-decision confidence was higher when the prior probability that the suspect is the perpetrator was high compared to when the prior probability that the suspect is the perpetrator was low. The frequent use of showups in practice emphasizes the importance of these findings for court proceedings. Overall, these findings support the idea that courts should have most trust in showup identifications that were made fast and confidently, and least in showup identifications that were made slowly and with low confidence. PMID:29346394

  7. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    Science.gov (United States)

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
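    As a rough sketch of the nonparametric approach described above (the 95% central range, with confidence intervals on each reference limit), one can compute the 2.5th and 97.5th percentiles and bootstrap the 90% CI of each limit. This is a deliberate simplification for illustration, not the Reference Value Advisor implementation:

```python
import numpy as np

def nonparametric_reference_interval(values, central=0.95, ci=0.90,
                                     n_boot=2000, seed=0):
    """95% central range with bootstrap confidence intervals on each limit.
    A simplified illustration; guidelines generally recommend large
    reference sample groups for the nonparametric method."""
    x = np.sort(np.asarray(values, dtype=float))
    alpha = (1.0 - central) / 2.0
    lower, upper = np.quantile(x, [alpha, 1.0 - alpha])
    # Bootstrap the sampling distribution of each reference limit.
    rng = np.random.default_rng(seed)
    boots = rng.choice(x, size=(n_boot, x.size), replace=True)
    lo_b = np.quantile(boots, alpha, axis=1)
    hi_b = np.quantile(boots, 1.0 - alpha, axis=1)
    tail = (1.0 - ci) / 2.0
    return {"lower": lower,
            "lower_ci": tuple(np.quantile(lo_b, [tail, 1.0 - tail])),
            "upper": upper,
            "upper_ci": tuple(np.quantile(hi_b, [tail, 1.0 - tail]))}
```

    For roughly Gaussian data, the limits land near the familiar mean ± 1.96 SD, but the method itself makes no distributional assumption, which is why it is preferred when the reference sample is large enough.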

  8. High-intensity interval training: Modulating interval duration in overweight/obese men.

    Science.gov (United States)

    Smith-Ryan, Abbie E; Melvin, Malia N; Wingfield, Hailee L

    2015-05-01

    High-intensity interval training (HIIT) is a time-efficient strategy shown to induce various cardiovascular and metabolic adaptations. Little is known about the optimal tolerable combination of intensity and volume necessary for adaptations, especially in clinical populations. In a randomized controlled pilot design, we evaluated the effects of two types of interval training protocols, varying in intensity and interval duration, on clinical outcomes in overweight/obese men. Twenty-five men [body mass index (BMI) > 25 kg·m(-2)] completed baseline body composition measures: fat mass (FM), lean mass (LM) and percent body fat (%BF) and fasting blood glucose, lipids and insulin (IN). A graded exercise cycling test was completed for peak oxygen consumption (VO2peak) and power output (PO). Participants were randomly assigned to a high-intensity short interval group (1MIN-HIIT), a high-intensity interval group (2MIN-HIIT) or a control group (CON). 1MIN-HIIT and 2MIN-HIIT completed 3 weeks of cycling interval training, 3 days/week, consisting of either 10 × 1 min bouts at 90% PO with 1 min rests (1MIN-HIIT) or 5 × 2 min bouts with 1 min rests at undulating intensities (80%-100%) (2MIN-HIIT). There were no significant training effects on FM (Δ1.06 ± 1.25 kg) or %BF (Δ1.13% ± 1.88%), compared to CON. Increases in LM were not significant but increased by 1.7 kg and 2.1 kg for the 1MIN and 2MIN-HIIT groups, respectively. Increases in VO2peak were also not significant for the 1MIN (3.4 ml·kg(-1)·min(-1)) or 2MIN groups (2.7 ml·kg(-1)·min(-1)). IN sensitivity (HOMA-IR) improved for both training groups (Δ-2.78 ± 3.48 units; p < 0.05) compared to CON. HIIT may be an effective short-term strategy to improve cardiorespiratory fitness and IN sensitivity in overweight males.
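    The HOMA-IR index used above as the insulin-sensitivity measure is computed from fasting glucose and insulin by the standard Matthews formula; a one-line illustration (units noted in the comment):

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR = (fasting glucose [mg/dL] * fasting insulin [uU/mL]) / 405.
    With glucose in mmol/L, the conventional denominator is 22.5 instead."""
    return glucose_mg_dl * insulin_uU_ml / 405.0
```

    For example, a fasting glucose of 90 mg/dL with a fasting insulin of 9 uU/mL gives HOMA-IR = 2.0; lower values after training indicate improved insulin sensitivity.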

  9. COVAR: Computer Program for Multifactor Relative Risks and Tests of Hypotheses Using a Variance-Covariance Matrix from Linear and Log-Linear Regression

    Directory of Open Access Journals (Sweden)

    Leif E. Peterson

    1997-11-01

    Full Text Available A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.
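    The core calculation such a program performs can be sketched generically: for a multiplicative (log-linear) model, a joint relative risk is the exponential of the summed coefficients, with Wald confidence limits taken from the variance-covariance matrix. The function below is an illustrative sketch of those standard formulas, not the COVAR code:

```python
import math

def joint_relative_risk(betas, cov, z=1.96):
    """Point estimate and Wald confidence limits for exp(sum of betas).
    betas: regression coefficients (log relative risks) for the factors
    being combined; cov: their variance-covariance (sub-)matrix."""
    k = len(betas)
    s = sum(betas)
    # Var(sum of betas) = sum of all entries of the covariance sub-matrix.
    var = sum(cov[i][j] for i in range(k) for j in range(k))
    se = math.sqrt(var)
    return math.exp(s), (math.exp(s - z * se), math.exp(s + z * se))
```

    For a single coefficient of ln 2 with variance 0.04, this gives a relative risk of 2.0 with 95% limits of roughly 1.35 and 2.96; with multiple factors the off-diagonal covariances widen or narrow the limits accordingly.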

  10. Increasing Product Confidence-Shifting Paradigms.

    Science.gov (United States)

    Phillips, Marla; Kashyap, Vishal; Cheung, Mee-Shew

    2015-01-01

    Leaders in the pharmaceutical, medical device, and food industries expressed a shared concern over product confidence throughout the total product lifecycle, an unsettling fact for these leaders to manage given that their products affect the lives of millions of people each year. Fueled by the heparin incident of intentional adulteration in 2008, initial efforts for increasing product confidence were focused on improving the confidence of incoming materials, with a belief that supplier performance must be the root cause. As in the heparin case, concern over supplier performance extended deep into the supply chain to include the suppliers of the suppliers, which is often a blind spot for pharmaceutical, device, and food manufacturers. Resolved to address the perceived lack of supplier performance, these U.S. Food and Drug Administration (FDA)-regulated industries began to adopt the supplier relationship management strategy, developed by the automotive industry, that emphasizes "management" of suppliers for the betterment of the manufacturers. Current product and supplier management strategies, however, have not led to a significant improvement in product confidence. As a result of the enduring concern by industry leaders over the lack of product confidence, Xavier University launched the Integrity of Supply Initiative in 2012 with a team of industry leaders and FDA officials. Through a methodical research approach, data generated by the pharmaceutical, medical device, and food manufacturers surprisingly pointed to themselves as a source of the lack of product confidence, and revealed that manufacturers either unknowingly increase the potential for error or can control/prevent many aspects of product confidence failure. It is only through this paradigm shift that manufacturers can work collaboratively with their suppliers as equal partners, instead of viewing their suppliers as "lesser" entities needing to be controlled. 
The basis of this shift provides manufacturers

  11. Kangaroo Care Education Effects on Nurses' Knowledge and Skills Confidence.

    Science.gov (United States)

    Almutairi, Wedad Matar; Ludington-Hoe, Susan M

    2016-11-01

    Less than 20% of the 996 NICUs in the United States routinely practice kangaroo care, due in part to the inadequate knowledge and skills confidence of nurses. Continuing education improves knowledge and skills acquisition, but the effects of a kangaroo care certification course on nurses' knowledge and skills confidence are unknown. A pretest-posttest quasi-experiment was conducted. The Kangaroo Care Knowledge and Skills Confidence Tool was administered to 68 RNs at a 2.5-day course about kangaroo care evidence and skills. Measures of central tendency, dispersion, and paired t tests were conducted on 57 questionnaires. The nurses' characteristics were varied. The mean posttest Knowledge score (M = 88.54, SD = 6.13) was significantly higher than the pretest score (M = 78.7, SD = 8.30), t(54) = -9.1, p < .001, as was the posttest Skills Confidence score (pretest M = 32.06, SD = 3.49; posttest M = 26.80, SD = 5.22), t(53) = -8.459, p < .001. The nurses' knowledge and skills confidence of kangaroo care improved following continuing education, suggesting a need for continuing education in this area. J Contin Educ Nurs. 2016;47(11):518-524. Copyright 2016, SLACK Incorporated.

  12. Immunochromatographic Strip Test for Rapid Detection of Diphtheria Toxin: Description and Multicenter Evaluation in Areas of Low and High Prevalence of Diphtheria

    Science.gov (United States)

    Engler, K. H.; Efstratiou, A.; Norn, D.; Kozlov, R. S.; Selga, I.; Glushkevich, T. G.; Tam, M.; Melnikov, V. G.; Mazurova, I. K.; Kim, V. E.; Tseneva, G. Y.; Titov, L. P.; George, R. C.

    2002-01-01

    An immunochromatographic strip (ICS) test was developed for the detection of diphtheria toxin by using an equine polyclonal antibody as the capture antibody and colloidal gold-labeled monoclonal antibodies specific for fragment A of the diphtheria toxin molecule as the detection antibody. The ICS test has been fully optimized for the detection of toxin from bacterial cultures; the limit of detection was approximately 0.5 ng of diphtheria toxin per ml within 10 min. In a comparative study with 915 pure clinical isolates of Corynebacterium spp., the results of the ICS test were in complete agreement with those of the conventional Elek test. The ICS test was also evaluated for its ability to detect toxigenicity from clinical specimens (throat swabs) in two field studies conducted within areas of the former USSR where diphtheria is epidemic. Eight hundred fifty throat swabs were examined by conventional culture and by use of directly inoculated broth cultures for the ICS test. The results showed 99% concordance (848 of 850 specimens), and the sensitivity and specificity of the ICS test were 98% (95% confidence interval, 91 to 99%) and 99% (95% confidence interval, 99 to 100%), respectively. PMID:11773096
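    Sensitivity and specificity with 95% confidence intervals, as reported above, can be reproduced from the 2 × 2 counts. The sketch below uses the Wilson score interval, a common choice for binomial proportions, though not necessarily the method these authors used:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity, each with a Wilson 95% CI,
    from the cells of a 2 x 2 diagnostic contingency table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))
```

    For instance, 90 true positives and 10 false negatives give a sensitivity of 0.90 with a Wilson 95% CI of roughly 0.83 to 0.94, matching the style of the intervals quoted in the abstract.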

  13. The meaning of diagnostic test results: A spreadsheet for swift data analysis

    International Nuclear Information System (INIS)

    MacEneaney, Peter M.; Malone, Dermot E.

    2000-01-01

    AIMS: To design a spreadsheet program to: (a) rapidly analyse diagnostic test result data produced in local research or reported in the literature; (b) correct reported predictive values for disease prevalence in any population; (c) estimate the post-test probability of disease in individual patients. MATERIALS AND METHODS: Microsoft Excel™ was used. Section A: a contingency (2 × 2) table was incorporated into the spreadsheet. Formulae for standard calculations [sample size, disease prevalence, sensitivity and specificity with 95% confidence intervals, predictive values and likelihood ratios (LRs)] were linked to this table. The results change automatically when the data in the true or false negative and positive cells are changed. Section B: this estimates predictive values in any population, compensating for altered disease prevalence. Sections C-F: Bayes' theorem was incorporated to generate individual post-test probabilities. The spreadsheet generates 95% confidence intervals, LRs and a table and graph of conditional probabilities once the sensitivity and specificity of the test are entered. The latter shows the expected post-test probability of disease for any pre-test probability when a test of known sensitivity and specificity is positive or negative. RESULTS: This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/Rad-data99.xls CONCLUSION: A spreadsheet is useful for contingency table data analysis and assessment of the clinical meaning of diagnostic test results. MacEneaney, P.M., Malone, D.E. (2000)
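
    The Bayes step the spreadsheet automates (sections C-F) reduces to an odds calculation: convert the pre-test probability to odds, multiply by the likelihood ratio, and convert back. A sketch with assumed test characteristics:

```python
def post_test_probability(pretest_p, sensitivity, specificity, positive=True):
    """Post-test probability of disease via pre-test odds x likelihood ratio."""
    if positive:
        lr = sensitivity / (1 - specificity)        # LR+
    else:
        lr = (1 - sensitivity) / specificity        # LR-
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# e.g. a hypothetical test with 90% sensitivity and 80% specificity,
# applied to a patient with 30% pre-test probability
p_pos = post_test_probability(0.30, 0.90, 0.80, positive=True)
print(f"post-test probability after a positive result: {p_pos:.1%}")
```

    With these assumed numbers, LR+ = 0.90/0.20 = 4.5, so a positive result raises a 30% pre-test probability to roughly two-thirds.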

  14. Effects of parental divorce on marital commitment and confidence.

    Science.gov (United States)

    Whitton, Sarah W; Rhoades, Galena K; Stanley, Scott M; Markman, Howard J

    2008-10-01

    Research on the intergenerational transmission of divorce has demonstrated that compared with offspring of nondivorced parents, those of divorced parents generally have more negative attitudes toward marriage as an institution and are less optimistic about the feasibility of a long-lasting, healthy marriage. It is also possible that when entering marriage themselves, adults whose parents divorced have less personal relationship commitment to their own marriages and less confidence in their own ability to maintain a happy marriage with their spouse. However, this prediction has not been tested. In the current study, we assessed relationship commitment and relationship confidence, as well as parental divorce and retrospectively reported interparental conflict, in a sample of 265 engaged couples prior to their first marriage. Results demonstrated that women's, but not men's, parental divorce was associated with lower relationship commitment and lower relationship confidence. These effects persisted when controlling for the influence of recalled interparental conflict and premarital relationship adjustment. The current findings suggest that women whose parents divorced are more likely to enter marriage with relatively lower commitment to, and confidence in, the future of those marriages, potentially raising their risk for divorce. Copyright 2008 APA, all rights reserved.

  15. Recurrence interval analysis of trading volumes.

    Science.gov (United States)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
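
    The recurrence-interval construction above can be sketched in a few lines. The data here are simulated (lognormal draws, not real trading volumes), and a plain one-sample KS distance against an exponential fit stands in for the paper's power-law fits with weighted-KS and Cramér-von Mises tests:

```python
import math, random

random.seed(1)
volumes = [random.lognormvariate(0, 1) for _ in range(5000)]

# Threshold q at the 95th percentile; recurrence intervals are the gaps
# (in time steps) between successive exceedances of q.
q = sorted(volumes)[int(0.95 * len(volumes))]
exceed = [i for i, v in enumerate(volumes) if v > q]
intervals = [b - a for a, b in zip(exceed, exceed[1:])]

mean_tau = sum(intervals) / len(intervals)
# KS distance between the empirical CDF and an exponential with the same mean
xs = sorted(intervals)
ks = max(abs((i + 1) / len(xs) - (1 - math.exp(-x / mean_tau)))
         for i, x in enumerate(xs))
print(f"mean interval {mean_tau:.1f}, KS distance {ks:.3f}")
```

    For i.i.d. data the exceedances arrive roughly geometrically, so the exponential is a reasonable null here; clustered (long-memory) data like trading volumes would show a heavier tail.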

  16. High intensity aerobic interval training improves peak oxygen consumption in patients with metabolic syndrome: CAT

    Directory of Open Access Journals (Sweden)

    Alexis Espinoza Salinas

    2014-06-01

    Full Text Available Introduction The metabolic syndrome is characterized by a number of cardiovascular risk factors: insulin resistance (IR), low HDL cholesterol and high triglycerides. These risk factors lead to elevated levels of abdominal adipose tissue, resulting in deficient oxygen consumption. Purpose To verify the validity and applicability of using high intensity interval training (HIIT) in subjects with metabolic syndrome and to answer the following question: Can HIIT improve peak oxygen consumption? Method The systematic review "Effects of aerobic interval training on exercise capacity and metabolic risk factors in individuals with cardiometabolic disorders" was analyzed. Results The data suggest that high intensity aerobic interval training increases peak oxygen consumption by a mean difference of 3.60 mL·kg⁻¹·min⁻¹ (95% confidence interval, 0.28-4.91). Conclusion In spite of the methodological shortcomings of the primary studies included in the systematic review, we reasonably conclude that implementation of high intensity aerobic interval training in subjects with metabolic syndrome leads to increases in peak oxygen consumption.

  17. Haemostatic reference intervals in pregnancy

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

    2010-01-01

    Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period, using only the uncomplicated pregnancies, were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S...

  18. The Cross-Correlation and Reshuffling Tests in Discerning Induced Seismicity

    Science.gov (United States)

    Schultz, Ryan; Telesca, Luciano

    2018-05-01

    In recent years, cases of newly emergent induced clusters have increased seismic hazard and risk in locations with social, environmental, and economic consequence. Thus, the need for a quantitative and robust means to discern induced seismicity has become a critical concern. This paper reviews a Matlab-based algorithm designed to quantify the statistical confidence between two time-series datasets. Similar to prior approaches, our method utilizes the cross-correlation to delineate the strength and lag of correlated signals. In addition, use of surrogate reshuffling tests allows for dynamic testing against statistical confidence intervals of anticipated spurious correlations. We demonstrate the robust nature of our algorithm in a suite of synthetic tests to determine the limits of accurate signal detection in the presence of noise and sub-sampling. Overall, this routine has considerable merit in terms of delineating the strength of correlated signals, one application of which is the discernment of induced seismicity from natural seismicity.

  19. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

    OpenAIRE

    Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure the effects of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movem...

  20. Development, validity and reliability testing of the East Midlands Evaluation Tool (EMET) for measuring impacts on trainees' confidence and competence following end of life care training.

    Science.gov (United States)

    Whittaker, B; Parry, R; Bird, L; Watson, S; Faull, C

    2017-02-02

    To develop, test and validate a versatile questionnaire, the East Midlands Evaluation Tool (EMET), for measuring effects of end of life care training events on trainees' self-reported confidence and competence. A paper-based questionnaire was designed on the basis of the English Department of Health's core competences for end of life care, with sections for completion pretraining, immediately post-training and also for longer term follow-up. Preliminary versions were field tested at 55 training events delivered by 13 organisations to 1793 trainees working in diverse health and social care backgrounds. Iterative rounds of development aimed to maximise relevance to events and trainees. Internal consistency was assessed by calculating interitem correlations on questionnaire responses during field testing. Content validity was assessed via qualitative content analysis of (1) responses to questionnaires completed by field tester trainers and (2) field notes from a workshop with a separate cohort of experienced trainers. Test-retest reliability was assessed via repeat administration to a cohort of student nurses. The EMET comprises 27 items with Likert-scaled responses supplemented with questions seeking free-text responses. It measures changes in self-assessed confidence and competence on 5 subscales: communication skills; assessment and care planning; symptom management; advance care planning; overarching values and knowledge. Test-retest reliability was found to be good, as was internal consistency: the questions successfully assess different aspects of the same underlying concept. The EMET provides a time-efficient, reliable and flexible means of evaluating effects of training on self-reported confidence and competence in the key elements of end of life care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  1. The accuracy of meta-metacognitive judgments: regulating the realism of confidence.

    Science.gov (United States)

    Buratti, Sandra; Allwood, Carl Martin

    2012-08-01

    Can people improve the realism of their confidence judgments about the correctness of their episodic memory reports by deselecting the least realistic judgments? An assumption of Koriat and Goldsmith's (Psychol Rev 103:490-517, 1996) model is that confidence judgments regulate the reporting of memory reports. We tested whether this assumption generalizes to the regulation of the realism (accuracy) of confidence judgments. In two experiments, 270 adults in separate conditions answered 50 recognition and recall questions about the contents of a just-seen video. After each answer, they made confidence judgments about the answer's correctness. In Experiment 1, the participants in the recognition conditions significantly increased their absolute bias when they excluded 15 questions. In Experiment 2, the participants in the recall condition significantly improved their calibration. The results indicate that recall, more than recognition, offers valid cues for participants to increase the realism of their report. However, the effects were small with only weak support for the conclusion that people have some ability to regulate the realism in their confidence judgments.
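
    The calibration and bias measures used in such confidence-realism studies are simple averages; a sketch of the over/underconfidence bias (mean confidence minus proportion correct) on invented data:

```python
# Invented confidence judgments (judged probability of being correct)
# and the corresponding accuracy outcomes (1 = answer was correct).
conf = [0.9, 0.8, 1.0, 0.6, 0.7, 0.9, 0.5, 0.8]
correct = [1, 1, 1, 0, 1, 0, 1, 1]

bias = sum(conf) / len(conf) - sum(correct) / len(correct)
print(f"over/underconfidence bias = {bias:+.3f}")  # positive = overconfident
```

    Calibration proper is usually computed per confidence category (e.g. mean squared difference between confidence level and hit rate within each 10% bin); the single-number bias above is the "absolute bias" direction that Experiment 1 reports increasing.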

  2. Reviewing interval cancers: Time well spent?

    International Nuclear Information System (INIS)

    Gower-Thomas, Kate; Fielder, Hilary M.P.; Branston, Lucy; Greening, Sarah; Beer, Helen; Rogers, Cerilan

    2002-01-01

    OBJECTIVES: To categorize interval cancers, and thus identify false-negatives, following prevalent and incident screens in the Welsh breast screening programme. SETTING: Breast Test Wales (BTW) Llandudno, Cardiff and Swansea breast screening units. METHODS: Five hundred and sixty interval breast cancers identified following negative mammographic screening between 1989 and 1997 were reviewed by eight screening radiologists. The blind review was achieved by mixing the screening films of women who subsequently developed an interval cancer with screen-negative films of women who did not develop cancer, in a ratio of 4:1. Another radiologist used patients' symptomatic films to record a reference against which the reviewers' reports of the screening films were compared. Interval cancers were categorized as 'true', 'occult', 'false-negative' or 'unclassified' interval cancers or interval cancers with minimal signs, based on the National Health Service breast screening programme (NHSBSP) guidelines. RESULTS: Of the classifiable interval films, 32% were false-negatives, 55% were true intervals and 12% occult. The proportion of false-negatives following incident screens was half that following prevalent screens (P = 0.004). Forty percent of the seed films were recalled by the panel. CONCLUSIONS: Low false-negative interval cancer rates following incident screens (18%) versus prevalent screens (36%) suggest that lower cancer detection rates at incident screens may have resulted from fewer cancers than expected being present, rather than from a failure to detect tumours. The panel method for categorizing interval cancers has significant flaws, as the results vary markedly with different protocols and it is no more accurate than other, quicker and more timely methods. Gower-Thomas, K. et al. (2002)

  3. Interval Solution for Nonlinear Programming of Maximizing the Fatigue Life of V-Belt under Polymorphic Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Zhong Wan

    2013-01-01

    Full Text Available In accordance with practical engineering design conditions, a nonlinear programming model is constructed for maximizing the fatigue life of a V-belt drive in which some polymorphic uncertainties are incorporated. For a given satisfaction level and a confidence level, an equivalent formulation of this uncertain optimization model is obtained in which only interval parameters are involved. Based on the concepts of maximal and minimal range inequalities for describing interval inequalities, the interval parameter model is decomposed into two standard nonlinear programming problems, and an algorithm, called the two-step based sampling algorithm, is developed to find an interval optimal solution for the original problem. A case study is employed to demonstrate the validity and practicability of the constructed model and the algorithm.

  4. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

    Science.gov (United States)

    Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

    2014-04-01

    The purpose of this study was to determine the use of contraceptive methods, which was defined by effectiveness, length of coverage, and their association with short interpregnancy intervals, when controlling for provider type and client demographics. We identified a cohort of 117,644 women from the 2008 California Birth Statistical Master file with second or higher order birth and at least 1 Medicaid (Family Planning, Access, Care, and Treatment [Family PACT] program or Medi-Cal) claim within 18 months after index birth. We explored the effect of contraceptive method provision on the odds of having an optimal interpregnancy interval and controlled for covariates. The average length of contraceptive coverage was 3.81 months (SD = 4.84). Most women received user-dependent hormonal contraceptives as their most effective contraceptive method (55%; n = 65,103 women) and one-third (33%; n = 39,090 women) had no contraceptive claim. Women who used long-acting reversible contraceptive methods had 3.89 times the odds and women who used user-dependent hormonal methods had 1.89 times the odds of achieving an optimal birth interval compared with women who used barrier methods only; women with no method had 0.66 times the odds. When user-dependent methods are considered, the odds of having an optimal birth interval increased for each additional month of contraceptive coverage by 8% (odds ratio, 1.08; 95% confidence interval, 1.08-1.09). Women who were seen by Family PACT or by both Family PACT and Medi-Cal providers had significantly higher odds of optimal birth intervals compared with women who were served by Medi-Cal only. To achieve optimal birth spacing and ultimately to improve birth outcomes, attention should be given to contraceptive counseling and access to contraceptive methods in the postpartum period. Copyright © 2014 Mosby, Inc. All rights reserved.
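
    The adjusted odds ratios above come from regression models, but the basic odds-ratio-with-95%-CI computation can be sketched from a 2 x 2 table. The counts below are illustrative, not the study's data:

```python
import math

# Hypothetical counts: optimal vs short interpregnancy interval,
# LARC users vs barrier-method-only users.
a, b = 300, 100   # LARC: optimal / short
c, d = 150, 200   # barrier only: optimal / short

or_ = (a * d) / (b * c)
# Wald CI on the log-odds-ratio scale
se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_) - 1.96 * se_log)
hi = math.exp(math.log(or_) + 1.96 * se_log)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

    The interval is computed on the log scale and exponentiated because the sampling distribution of the log odds ratio is approximately normal; that is why reported CIs such as 1.08-1.09 are not symmetric around the point estimate in general.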

  5. Globalization of consumer confidence

    Directory of Open Access Journals (Sweden)

    Çelik Sadullah

    2017-01-01

    Full Text Available The globalization of world economies and the importance of nowcasting analysis have been at the core of the recent literature. Nevertheless, these two strands of research are hardly coupled. This study aims to fill this gap through examining the globalization of the consumer confidence index (CCI) by applying conventional and unconventional econometric methods. The US CCI is used as the benchmark in tests of comovement among the CCIs of several developing and developed countries, with the data sets divided into three sub-periods: global liquidity abundance, the Great Recession, and postcrisis. The existence and/or degree of globalization of the CCIs varies according to the period, whereas globalization in the form of coherence and similar paths is observed only during the Great Recession and, surprisingly, is stronger in developing/emerging countries.

  6. Special physical examination tests for superior labrum anterior posterior shoulder tears are clinically limited and invalid: a diagnostic systematic review.

    Science.gov (United States)

    Calvert, Eric; Chambers, Gordon Keith; Regan, William; Hawkins, Robert H; Leith, Jordan M

    2009-05-01

    The diagnosis of a superior labrum anterior posterior (SLAP) lesion through physical examination has been widely reported in the literature. Most of these studies report high sensitivities and specificities, and claim to be accurate, valid, and reliable. The purpose of this study was to critically evaluate these studies to determine if there was sufficient evidence to support the use of the SLAP physical examination tests as valid and reliable diagnostic test procedures. Strict epidemiologic methodology was used to obtain and collate all relevant articles. Sackett's guidelines were applied to all articles. Confidence intervals and likelihood ratios were determined. Fifteen of 29 relevant studies met the criteria for inclusion. Only one article met all of Sackett's critical appraisal criteria. Confidence intervals for both the positive and negative likelihood ratios contained the value 1. The current literature being used as a resource for teaching in medical schools and continuing education lacks the validity necessary to be useful. There are no good physical examination tests that exist for effectively diagnosing a SLAP lesion.

  7. Inferring high-confidence human protein-protein interactions

    Directory of Open Access Journals (Sweden)

    Yu Xueping

    2012-05-01

    Full Text Available Abstract Background As numerous experimental factors drive the acquisition, identification, and interpretation of protein-protein interactions (PPIs), aggregated assemblies of human PPI data invariably contain experiment-dependent noise. Ascertaining the reliability of PPIs collected from these diverse studies and scoring them to infer high-confidence networks is a non-trivial task. Moreover, a large number of PPIs share the same number of reported occurrences, making it impossible to distinguish the reliability of these PPIs and rank-order them. For example, for the data analyzed here, we found that the majority (>83%) of currently available human PPIs have been reported only once. Results In this work, we proposed an unsupervised statistical approach to score a set of diverse, experimentally identified PPIs from nine primary databases to create subsets of high-confidence human PPI networks. We evaluated this ranking method by comparing it with other methods and assessing their ability to retrieve protein associations from a number of diverse and independent reference sets. These reference sets contain known biological data that are either directly or indirectly linked to interactions between proteins. We quantified the average effect of using ranked protein interaction data to retrieve this information and showed that, when compared to randomly ranked interaction data sets, the proposed method created a larger enrichment (~134%) than either ranking based on the hypergeometric test (~109%) or occurrence ranking (~46%). Conclusions From our evaluations, it was clear that ranked interactions were always of value because higher-ranked PPIs had a higher likelihood of retrieving high-confidence experimental data. Reducing the noise inherent in aggregated experimental PPIs via our ranking scheme further increased the accuracy and enrichment of PPIs derived from a number of biologically relevant data sets. These results suggest that using our high-confidence...
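
    The hypergeometric ranking mentioned above rests on a tail probability: how likely is it to observe at least k shared associations by chance, given the marginal counts? A sketch with invented numbers (not the paper's data):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) for a hypergeometric variable: draws of size n from a
    population of N containing K 'successes'."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# e.g. 10 shared interaction partners, where protein A has 40 partners
# and protein B has 60, out of N = 2000 proteins
p = hypergeom_sf(10, 2000, 40, 60)
print(f"enrichment p-value = {p:.2e}")
```

    The expected overlap here is only 40 * 60 / 2000 = 1.2 partners, so observing 10 or more is extremely unlikely under independence, which is what makes such a pair score highly.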

  8. Can follow-up controls improve the confidence of MR of the breast? A retrospective analysis of follow-up MR images of the breast

    International Nuclear Information System (INIS)

    Betsch, A.; Arndt, E.; Stern, W.; Claussen, C.D.; Mueller-Schimpfle, M.; Wallwiener, D.

    2001-01-01

    Purpose: To assess the change in diagnostic confidence between first and follow-up dynamic MR examination of the breast (MRM). Methods: The reports of a total of 175 MRM in 77 patients (mean age 50 years; 36-76) with 98 follow-up MRM were analyzed. All examinations were performed as a dynamic study (Gd-DTPA, 0.16 mmol/kg; 6-7 repetitive studies). The change in diagnostic confidence was retrospectively classified as follows: controlled lesion vanished during follow-up (category I); diagnostic confidence increases during follow-up (II), either more likely benign (IIa) or more suspicious (IIb); no difference in diagnostic confidence (III). Long-term follow-up over an average of four years was obtained for 57 patients with category IIa/III findings. Results: In 98 follow-up examinations, only two lesions vanished (2%). In 77/98 cases a category IIa lesion was diagnosed, in 11 cases a category IIb lesion. In 8 cases (8%) there was no change in diagnostic confidence during follow-up. Lesions in category IIb underwent biopsy in 10/11 cases; in one case long-term follow-up proved complete regression of an inflammatory change. In 8/11 suspicious findings (IIb) a malignant tumor was detected. The mean time interval between first and follow-up MRM was 8 months for I-IIb lesions, and 4 months for category III lesions. In the long-term follow-up two patients with a category IIa lesion developed a carcinoma in a different breast area after four and five years. Conclusion: MRM follow-up increases the diagnostic confidence if the time interval is adequate (>4 months). A persistently or increasingly suspicious finding warrants biopsy. (orig.)

  9. Confidence in critical care nursing.

    Science.gov (United States)

    Evans, Jeanne; Bell, Jennifer L; Sweeney, Annemarie E; Morgan, Jennifer I; Kelly, Helen M

    2010-10-01

    The purpose of the study was to gain an understanding of the nursing phenomenon, confidence, from the experience of nurses in the nursing subculture of critical care. Leininger's theory of cultural care diversity and universality guided this qualitative descriptive study. Questions derived from the sunrise model were used to elicit nurses' perspectives about cultural and social structures that exist within the critical care nursing subculture and the influence that these factors have on confidence. Twenty-eight critical care nurses from a large Canadian healthcare organization participated in semistructured interviews about confidence. Five themes arose from the descriptions provided by the participants. The three themes, tenuously navigating initiation rituals, deliberately developing holistic supportive relationships, and assimilating clinical decision-making rules were identified as social and cultural factors related to confidence. The remaining two themes, preserving a sense of security despite barriers and accommodating to diverse challenges, were identified as environmental factors related to confidence. Practice and research implications within the culture of critical care nursing are discussed in relation to each of the themes.

  10. Professional confidence: a concept analysis.

    Science.gov (United States)

    Holland, Kathlyn; Middleton, Lyn; Uys, Leana

    2012-03-01

    Professional confidence is a concept that is frequently used and or implied in occupational therapy literature, but often without specifying its meaning. Rodgers's Model of Concept Analysis was used to analyse the term "professional confidence". Published research obtained from a federated search in four health sciences databases was used to inform the concept analysis. The definitions, attributes, antecedents, and consequences of professional confidence as evidenced in the literature are discussed. Surrogate terms and related concepts are identified, and a model case of the concept provided. Based on the analysis, professional confidence can be described as a dynamic, maturing personal belief held by a professional or student. This includes an understanding of and a belief in the role, scope of practice, and significance of the profession, and is based on their capacity to competently fulfil these expectations, fostered through a process of affirming experiences. Developing and fostering professional confidence should be nurtured and valued to the same extent as professional competence, as the former underpins the latter, and both are linked to professional identity.

  11. CLSI-based transference and verification of CALIPER pediatric reference intervals for 29 Ortho VITROS 5600 chemistry assays.

    Science.gov (United States)

    Higgins, Victoria; Truong, Dorothy; Woroch, Amy; Chan, Man Khun; Tahmasebi, Houman; Adeli, Khosrow

    2018-03-01

    Evidence-based reference intervals (RIs) are essential to accurately interpret pediatric laboratory test results. To fill gaps in pediatric RIs, the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER) project developed an age- and sex-specific pediatric RI database based on healthy pediatric subjects. Originally established for Abbott ARCHITECT assays, CALIPER RIs were transferred to assays on Beckman, Roche, Siemens, and Ortho analytical platforms. This study provides transferred reference intervals for 29 biochemical assays for the Ortho VITROS 5600 Chemistry System (Ortho). Based on Clinical Laboratory Standards Institute (CLSI) guidelines, a method comparison analysis was performed by measuring approximately 200 patient serum samples using Abbott and Ortho assays. The equation of the line of best fit was calculated and the appropriateness of the linear model was assessed. This equation was used to transfer RIs from Abbott to Ortho assays. Transferred RIs were verified using 84 healthy pediatric serum samples from the CALIPER cohort. RIs for most chemistry analytes successfully transferred from Abbott to Ortho assays. Calcium and CO2 did not meet statistical criteria for transference (r²...). Of the transferred reference intervals, 29 successfully verified, with approximately 90% of results from reference samples falling within transferred confidence limits. Transferred RIs for total bilirubin, magnesium, and LDH did not meet verification criteria and are not reported. This study broadens the utility of the CALIPER pediatric RI database to laboratories using Ortho VITROS 5600 biochemical assays. Clinical laboratories should verify CALIPER reference intervals for their specific analytical platform and local population as recommended by CLSI. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
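
    The CLSI-style transference step described above (regress the new method on the old over shared patient samples, then map the reference limits through the fitted line) can be sketched on simulated data; the method bias, noise level, and Abbott reference limits below are all assumed:

```python
import random

random.seed(7)
# Simulated paired measurements: Ortho = 1.05 * Abbott + 2.0 + noise
abbott = [random.uniform(10, 100) for _ in range(200)]
ortho = [1.05 * x + 2.0 + random.gauss(0, 1.5) for x in abbott]

# Ordinary least-squares line of best fit
n = len(abbott)
mx, my = sum(abbott) / n, sum(ortho) / n
sxx = sum((x - mx) ** 2 for x in abbott)
sxy = sum((x - mx) * (y - my) for x, y in zip(abbott, ortho))
slope = sxy / sxx
intercept = my - slope * mx

# Map hypothetical Abbott reference limits through the fitted equation
lower_abbott, upper_abbott = 20.0, 80.0
lower_ortho = slope * lower_abbott + intercept
upper_ortho = slope * upper_abbott + intercept
print(f"transferred RI: {lower_ortho:.1f}-{upper_ortho:.1f}")
```

    The subsequent verification step then checks what fraction of results from healthy reference samples, measured on the new platform, fall inside the transferred limits (CLSI suggests roughly 90% as an acceptance criterion, consistent with the figure quoted in the abstract).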

  12. Slug Test Characterization Results for Multi-Test/Depth Intervals Conducted During the Drilling of CERCLA Operable Unit OU UP-1 Wells 299-W19-48, 699-30-66, and 699-36-70B

    Energy Technology Data Exchange (ETDEWEB)

    Spane, Frank A.; Newcomer, Darrell R.

    2010-06-15

    This report presents test descriptions and analysis results for multiple, stress-level slug tests that were performed at selected test/depth intervals within three Operable Unit (OU) UP-1 wells: 299-W19-48 (C4300/Well K), 699-30-66 (C4298/Well R), and 699-36-70B (C4299/Well P). These wells are located within, adjacent to, and to the southeast of the Hanford Site 200-West Area. The test intervals were characterized as the individual boreholes were advanced to their final drill depths. The primary objective of the hydrologic tests was to provide information pertaining to the areal variability and vertical distribution of hydraulic conductivity with depth at these locations within the OU UP-1 area. This type of characterization information is important for predicting/simulating contaminant migration (i.e., numerical flow/transport modeling) and designing proper monitor well strategies for OU and Waste Management Area locations.

  13. Determining the confidence levels of sensor outputs using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Broten, G S; Wood, H C [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Electrical Engineering

    1996-12-31

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in...

  14. Determining the confidence levels of sensor outputs using neural networks

    International Nuclear Information System (INIS)

    Broten, G.S.; Wood, H.C.

    1995-01-01

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with the output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space, where the confidence level space is defined as all possible deviations from the correct sensor output, and was then tested for its ability to generalize. A learning rate of 0.1 was used, with no momentum terms. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. It has also shown that the network's ability to determine the confidence level is influenced by the complexity of the sensor's response, and that the network can estimate confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in
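The training setup named in the abstract (a three-layer back-propagation network, learning rate 0.1, no momentum terms) can be sketched as follows. The data, network size, and confidence target below are invented for illustration and are not the paper's simulated sensor model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors = 4
X = rng.uniform(0.0, 1.0, size=(200, n_sensors))  # simulated sensor-array outputs
# Synthetic stand-in for the "confidence level space": confidence for sensor 0
# drops as its output deviates from the value implied by the other sensors.
deviation = np.abs(X[:, 0] - X[:, 1:].mean(axis=1))
y = (1.0 - np.clip(deviation, 0.0, 1.0)).reshape(-1, 1)  # 1.0 = 100% confidence

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three layers: input -> hidden -> output
W1 = rng.normal(0.0, 0.5, size=(n_sensors, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, size=(8, 1))
b2 = np.zeros(1)
lr = 0.1  # learning rate from the abstract; plain gradient descent, no momentum

for epoch in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss)
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("train MSE:", float(np.mean((pred - y) ** 2)))
```

Training on only a subset of the confidence-level space and checking generalization, as the paper describes, would correspond to holding out part of `X` for testing.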

  15. Using a fuzzy comprehensive evaluation method to determine product usability: A test case.

    Science.gov (United States)

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitably vague judgments from the multiple stages of the product evaluation process. To illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software package. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and the uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliability of the fuzzy approach with that of two typical conventional methods that combine metrics based on percentages. This case study showed that the fuzzy evaluation technique can be applied successfully to combine summative usability testing data into an overall usability quality for the network software evaluated. The greater confidence interval widths obtained for the conventional methods (equally weighted and weighted percentage averages) verified the strength of the fuzzy method.

  16. Safety of a rapid diagnostic protocol with accelerated stress testing.

    Science.gov (United States)

    Soremekun, Olan A; Hamedani, Azita; Shofer, Frances S; O'Conor, Katie J; Svenson, James; Hollander, Judd E

    2014-02-01

    Most patients at low to intermediate risk for an acute coronary syndrome (ACS) receive a 12- to 24-hour "rule out." Recently, trials have found that a coronary computed tomographic angiography-based strategy is more efficient. If stress testing were performed within the same time frame as coronary computed tomographic angiography, the 2 strategies would be more similar. We tested the hypothesis that stress testing can safely be performed within several hours of presentation. We performed a retrospective cohort study of patients presenting to a university hospital from January 1, 2009, to December 31, 2011, with potential ACS. Patients placed in a clinical pathway that performed stress testing after 2 negative troponin values 2 hours apart were included. We excluded patients with ST-elevation myocardial infarction or with an elevated initial troponin. The main outcome was the safety of immediate stress testing, defined as the absence of death or acute myocardial infarction (defined as an elevated troponin within 24 hours after the test). A total of 856 patients who presented with potential ACS were enrolled in the clinical pathway and included in this study. Patients had a median age of 55.0 (interquartile range, 48-62) years. Chest pain was the chief concern in 86%, and pain was present on arrival in 73% of the patients. There were no complications observed during the stress test. There were 0 deaths (95% confidence interval, 0%-0.46%) and 4 acute myocardial infarctions within 24 hours (0.5%; 95% confidence interval, 0.14%-1.27%). The peak troponins were small (0.06, 0.07, 0.07, and 0.19 ng/mL). Patients who present to the ED with potential ACS can safely undergo a rapid diagnostic protocol with stress testing.
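Confidence intervals for rare-event proportions like those reported above (0/856 deaths, 4/856 infarctions) are commonly computed with an exact binomial (Clopper-Pearson) interval. The sketch below is a generic version of that calculation, not necessarily the exact method the authors used.

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion k/n."""
    alpha = 1.0 - conf
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Event counts from the abstract: 0 deaths and 4 MIs among 856 patients
print(clopper_pearson(0, 856))  # deaths
print(clopper_pearson(4, 856))  # acute myocardial infarctions
```

For 0/856 this gives an upper bound of about 0.43%, in the same range as the abstract's 0.46% (the authors may have used a slightly different method or denominator); for 4/856 it reproduces roughly 0.13%-1.2%.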

  17. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses.

    Science.gov (United States)

    Franklin, Ashley E; Burns, Paulette; Lee, Christopher S

    2014-10-01

    In 2006, the National League for Nursing published three measures related to novice nurses' beliefs about self-confidence, scenario design, and educational practices associated with simulation. Despite the extensive use of these measures, little is known about their reliability and validity. The psychometric properties of the Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire were studied among a sample of 2200 surveys completed by novice nurses from a liberal arts university in the southern United States. Psychometric tests included item analysis, confirmatory and exploratory factor analyses in randomly-split subsamples, concordant and discordant validity, and internal consistency. All three measures have sufficient reliability and validity to be used in education research. There is room for improvement in content validity with the Student Satisfaction and Self-Confidence in Learning Scale and the Simulation Design Scale. This work provides robust evidence to ensure that judgments made about self-confidence after simulation, simulation design, and educational practices are valid and reliable.

  18. Confidence Intervals for Omega Coefficient: Proposal for Calculus.

    Science.gov (United States)

    Ventura-León, José Luis

    2018-01-01

    Reliability is understood as a metric property of the scores of a measurement instrument. The omega coefficient (ω) has recently come into use for estimating reliability. However, measurement is never exact, owing to the influence of random error, so it is necessary to calculate and report the confidence interval (CI), which locates the true value within a range of measurement. In this context, the article proposes a way of estimating the CI by the bootstrap method and, to facilitate the procedure, provides code for R (a freely available software environment) so that the calculations can be carried out in a user-friendly way. It is hoped that the article will be of help to researchers in the health field.
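The bootstrap CI the article proposes (with R code) can be sketched in Python. Because coefficient omega requires fitting a factor model, the stand-in statistic below is Cronbach's alpha; the resampling machinery is the same, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def cronbach_alpha(items):
    """items: (n_persons, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Synthetic 5-item scale: one common factor plus noise
n, k = 300, 5
factor = rng.normal(size=(n, 1))
items = factor + rng.normal(scale=0.8, size=(n, k))

# Percentile bootstrap: resample persons with replacement
boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[b] = cronbach_alpha(items[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile bootstrap CI
print(f"alpha = {cronbach_alpha(items):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Swapping `cronbach_alpha` for an omega estimate from a one-factor model would give the article's procedure.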

  19. Intervals of confidence: Uncertain accounts of global hunger

    NARCIS (Netherlands)

    Yates-Doerr, E.

    2015-01-01

    Global health policy experts tend to organize hunger through scales of ‘the individual’, ‘the community’ and ‘the global’. This organization configures hunger as a discrete, measurable object to be scaled up or down with mathematical certainty. This article offers a counter to this approach, using

  20. A quick method to calculate QTL confidence interval

    Indian Academy of Sciences (India)

    2011-08-19

    Aug 19, 2011 ... experimental design and analysis to reveal the real molecular nature of the ... strap sample form the bootstrap distribution of QTL location. The 2.5 and ..... ative probability to harbour a true QTL, hence x-LOD rule is not stable ... Darvasi A. and Soller M. 1997 A simple method to calculate resolving power ...

  1. An approximate confidence interval for recombination fraction in ...

    African Journals Online (AJOL)

    user

    2011-02-14

    Feb 14, 2011 ... whose parents are not in the pedigree) and θ be the recombination fraction. P(x|g) is the penetrance probability, that is, the probability that an individual with genotype g has phenotype x. Let P(g_k | g_{k_f}, g_{k_m}) be the transmission probability, that is, the probability that an individual having genotype k.

  2. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  3. Confidence limits for regional cerebral blood flow values obtained with circular positron system, using krypton-77

    International Nuclear Information System (INIS)

    Meyer, E.; Yamamoto, Y.L.; Thompson, C.J.

    1978-01-01

    The 90% confidence limits have been determined for regional cerebral blood flow (rCBF) values obtained in each cm² of a cross section of the human head after inhalation of radioactive krypton-77, using the MNI circular positron emission tomography system (Positome). CBF values for small brain tissue elements are calculated by linear regression analysis on the semi-logarithmically transformed clearance curve. A computer program displays CBF values and their estimated error in numeric and gray scale forms. The following typical results have been obtained on a control subject: mean CBF in the entire cross section of the head, 54.6 ± 5 ml/min/100 g tissue; rCBF for a small area of frontal gray matter, 75.8 ± 9 ml/min/100 g tissue. Confidence intervals for individual rCBF values varied between ±13% and ±55%, except for areas pertaining to the ventricular system, where particularly poor statistics have been obtained. Knowledge of confidence limits for rCBF values improves their diagnostic significance, particularly with respect to the assessment of reduced rCBF in stroke patients. A nomogram for convenient determination of 90% confidence limits for slope values obtained in linear regression analysis has been designed, with the number of fitted points (n) and the correlation coefficient (r) as parameters. (author)
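The nomogram mentioned at the end of the abstract exploits the fact that the relative width of a regression-slope confidence interval depends only on the number of fitted points n and the correlation coefficient r. A minimal sketch of that calculation follows; the numeric slope, n, and r are invented, not the paper's data.

```python
from scipy.stats import t

def slope_ci(b, n, r, conf=0.90):
    """CI for a fitted slope b given only n points and correlation r,
    using SE(b) = |b| * sqrt((1 - r**2) / (n - 2)) / |r|."""
    se = abs(b) * ((1 - r**2) / (n - 2)) ** 0.5 / abs(r)
    half = t.ppf(0.5 + conf / 2, n - 2) * se
    return b - half, b + half

# Illustrative numbers only: a clearance-curve slope fitted to 12 points, r = -0.98
lo, hi = slope_ci(-0.045, 12, -0.98)
print(lo, hi)
```

Since `half / |b|` depends only on n, r, and the confidence level, a two-parameter nomogram in (n, r) suffices, exactly as described.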

  4. Regional Competition for Confidence: Features of Formation

    Directory of Open Access Journals (Sweden)

    Irina Svyatoslavovna Vazhenina

    2016-09-01

    The increase in economic independence of the regions inevitably leads to an increase in the quality requirements of regional economic policy. The key to successful regional policy, both during its development and implementation, is the understanding of the necessity of gaining confidence (at all levels) and the inevitable participation in the competition for confidence. The importance of confidence in the region is determined by its value as a competitive advantage in the struggle for partners, resources and tourists, and in attracting investment. In today's environment the focus of governments, regions and companies on long-term cooperation is clearly expressed, which is impossible without a high level of confidence between partners. Therefore, the most important competitive advantages of territories are intangible assets such as an attractive image and a good reputation, which build up the confidence of the population and partners. The higher the confidence in the region is, the broader is the range of potential partners, the larger is the planning horizon of long-term concerted action, the better are the chances of acquiring investment, and the higher is the level of competitive immunity of the territory. The article defines competition for confidence as purposeful behavior of a market participant in an economic environment, aimed at acquiring a specific intangible competitive advantage: the confidence of the largest possible number of other market actors. The article also highlights the specifics of confidence as a competitive goal, presents factors contributing to the destruction of confidence, proposes a strategy to fight for confidence as a program of four steps, considers the factors which integrate regional confidence, and offers several recommendations for the establishment of effective regional competition for confidence.

  5. Interleukin-1β gene variants are associated with QTc interval prolongation following cardiac surgery: a prospective observational study.

    Science.gov (United States)

    Kertai, Miklos D; Ji, Yunqi; Li, Yi-Ju; Mathew, Joseph P; Daubert, James P; Podgoreanu, Mihai V

    2016-04-01

    We characterized cardiac surgery-induced dynamic changes of the corrected QT (QTc) interval and tested the hypothesis that genetic factors are associated with perioperative QTc prolongation independent of clinical and procedural factors. All study subjects were ascertained from a prospective study of patients who underwent elective cardiac surgery during August 1999 to April 2002. We defined a prolonged QTc interval as > 440 msec, measured from 24-hr pre- and postoperative 12-lead electrocardiograms. The association of 37 single nucleotide polymorphisms (SNPs) in 21 candidate genes involved in modulating arrhythmia susceptibility pathways with postoperative QTc changes was investigated in a two-stage design with a stage I cohort (n = 497) nested within a stage II cohort (n = 957). Empirical P values (Pemp) were obtained by permutation tests with 10,000 repeats. After adjusting for clinical and procedural risk factors, we selected four SNPs (P value range, 0.03-0.1) in stage I, which we then tested in the stage II cohort. Two functional SNPs in the pro-inflammatory cytokine interleukin-1β gene (IL1B), rs1143633 (odds ratio [OR], 0.71; 95% confidence interval [CI], 0.53 to 0.95; Pemp = 0.02) and rs16944 (OR, 1.31; 95% CI, 1.01 to 1.70; Pemp = 0.04), remained independent predictors of postoperative QTc prolongation. The ability of a clinico-genetic model incorporating the two IL1B polymorphisms to classify patients at risk for developing prolonged postoperative QTc was superior to that of a clinical model alone, with a net reclassification improvement of 0.308 (P = 0.0003) and an integrated discrimination improvement of 0.02 (P = 0.000024). The results suggest a contribution of IL1B in modulating susceptibility to postoperative QTc prolongation after cardiac surgery.

  6. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    Science.gov (United States)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
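The forecasting step described above (integrating the ROCOF over an annual usage interval) can be sketched under the usual power-law NHPP (Crow-AMSAA) assumptions, with intensity u(t) = λβt^(β-1). The failure times, truncation time, and interval length below are invented for illustration.

```python
import math

# Invented failure history for a repairable system, time-truncated at T hours
failure_times = [120.0, 410.0, 780.0, 990.0, 1405.0, 1820.0, 2100.0]
T = 2400.0

# Maximum likelihood estimators for the power-law NHPP (time-truncated case)
n = len(failure_times)
beta_hat = n / sum(math.log(T / ti) for ti in failure_times)  # shape (beta > 1: deterioration)
lam_hat = n / T**beta_hat                                     # scale

def expected_failures(t1, t2):
    """Integral of the estimated ROCOF over [t1, t2]: expected failure count."""
    return lam_hat * (t2**beta_hat - t1**beta_hat)

# Expected failures over the next annual usage interval of 800 hours
print(beta_hat, expected_failures(T, T + 800.0))
```

Multiplying the expected count by a cost per failure gives the expected maintenance cost input used in the paper's budget analysis; confidence intervals on the MLEs would then feed the Monte Carlo step.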

  7. Clinimetric properties of the Tinetti Mobility Test, Four Square Step Test, Activities-specific Balance Confidence Scale, and spatiotemporal gait measures in individuals with Huntington's disease.

    Science.gov (United States)

    Kloos, Anne D; Fritz, Nora E; Kostyk, Sandra K; Young, Gregory S; Kegelmeyer, Deb A

    2014-09-01

    Individuals with Huntington's disease (HD) experience balance and gait problems that lead to falls. Clinicians currently have very little information about the reliability and validity of outcome measures to determine the efficacy of interventions that aim to reduce balance and gait impairments in HD. This study examined the reliability and concurrent validity of spatiotemporal gait measures, the Tinetti Mobility Test (TMT), Four Square Step Test (FSST), and Activities-specific Balance Confidence (ABC) Scale in individuals with HD. Participants with HD [n = 20; mean age ± SD = 50.9 ± 13.7; 7 male] were tested on spatiotemporal gait measures and the TMT, FSST, and ABC Scale before and after a six-week period to determine test-retest reliability and minimal detectable change (MDC) values. Linear relationships between gait and clinical measures were estimated using Pearson's correlation coefficients. Spatiotemporal gait measures, the TMT total, and the FSST showed good to excellent test-retest reliability (ICC > 0.75). MDC values were 0.30 m/s and 0.17 m/s for velocity in forward and backward walking, respectively; four points for the TMT; and 3 s for the FSST. The TMT and FSST were highly correlated with most spatiotemporal measures. The ABC Scale demonstrated lower reliability and less concurrent validity than the other measures. The high test-retest reliability over a six-week period and the concurrent validity between the TMT, FSST, and spatiotemporal gait measures suggest that the TMT and FSST may be useful outcome measures for future intervention studies in ambulatory individuals with HD.
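MDC values like those reported above are conventionally derived from test-retest reliability via the standard error of measurement. A small sketch of that convention, with invented numbers rather than the study's data:

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence from test-retest data:
    SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

# Illustrative values only: gait speed with between-subject SD of 0.25 m/s
# and test-retest ICC of 0.85
print(round(mdc95(0.25, 0.85), 3))  # → 0.268 m/s
```

A change smaller than the MDC cannot be distinguished from measurement noise, which is why intervention studies compare observed changes against these thresholds.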

  8. False memories and memory confidence in borderline patients.

    Science.gov (United States)

    Schilling, Lisa; Wingenfeld, Katja; Spitzer, Carsten; Nagel, Matthias; Moritz, Steffen

    2013-12-01

    Mixed results have been obtained regarding memory in patients with borderline personality disorder (BPD). Prior reports and anecdotal evidence suggest that patients with BPD are prone to false memories, but this assumption has not yet been put to a firm empirical test. Memory accuracy and confidence were assessed in 20 BPD patients and 22 healthy controls using a visual variant of the false memory (Deese-Roediger-McDermott) paradigm which involved a negative- and a positive-valenced picture. Groups did not differ regarding veridical item recognition. Importantly, patients did not display more false memories than controls. At trend level, borderline patients rated more items as new with high confidence compared to healthy controls. The results tentatively suggest that borderline patients show uncompromised visual memory functions and display no increased susceptibility to distorted memories.

  9. Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants.

    Science.gov (United States)

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo

    2017-04-01

    Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework

  10. Improving medical student toxicology knowledge and self-confidence using mannequin simulation.

    Science.gov (United States)

    Halm, Brunhild M; Lee, Meta T; Franke, Adrian A

    2010-01-01

    Learning medicine without placing patients at increased risk of complications is of utmost importance in the medical profession. High-fidelity patient simulators can potentially achieve this and are therefore increasingly used in the training of medical students. Preclinical medical students have minimal exposure to clinical rotations and commonly feel anxious and apprehensive when starting their clinical years. The objective of this pilot study was to determine if the toxicology knowledge and confidence of preclinical second-year medical students could be augmented with simulation training. We designed and implemented a simulation exercise for second-year medical students to enhance learning of Basic Life Support, toxidromes, and management of a semiconscious overdose victim. Groups of 5-6 students were tasked to identify abnormal findings, order tests, and initiate treatment on a mannequin. Faculty observers provided video-assisted feedback immediately afterwards. On-line pre- and post-tests were completed in the simulation lab before and after the exercise. This simulation exercise, completed by 52 students, increased test scores on average from 60% on the pre-test to 71% on the post-test. Among the topics tested, students scored worst in identifying normal/abnormal vital signs. Mean confidence increased from 2.0 to 2.6 on a 5-point Likert scale (1 = very low to 5 = very high). This study suggests that simulation exercises for second-year medical students may be a valuable tool to increase knowledge and student self-confidence at a key transition period prior to beginning clerkship experiences. Further research is needed to prove long-term educational benefits of simulation interventions in the preclinical setting.

  11. A probabilistic method for testing and estimating selection differences between populations.

    Science.gov (United States)

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. Using a probabilistic model of genetic drift and selection, we showed that log odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and their variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimates. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences, and it therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
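The core quantity here, a log odds ratio of allele frequencies with a normal-approximation variance, can be illustrated for a single locus. This is a generic Woolf-type confidence interval with invented allele counts, not the authors' genome-wide variance calibration.

```python
import math

def log_or_ci(a1, b1, a2, b2, z=1.96):
    """Log odds ratio of allele frequencies between two populations with a
    Woolf-type normal-approximation CI. a*, b* are counts of the derived
    allele and the other allele in populations 1 and 2."""
    lor = math.log((a1 * b2) / (b1 * a2))
    se = math.sqrt(1 / a1 + 1 / b1 + 1 / a2 + 1 / b2)
    return lor, (lor - z * se, lor + z * se)

# Invented counts: derived allele at 70% in population 1, 40% in population 2
lor, (lo, hi) = log_or_ci(140, 60, 80, 120)
print(lor, lo, hi)
```

An interval excluding zero, as here, would indicate a statistically different selection coefficient between the two populations under the paper's model; the paper itself estimates the variance from genome-wide variants rather than per-locus counts.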

  12. Balance confidence and falls in nondemented essential tremor patients: the role of cognition.

    Science.gov (United States)

    Rao, Ashwini K; Gilman, Arthur; Louis, Elan D

    2014-10-01

    To examine (1) the effect of cognitive ability on balance confidence and falls, (2) the relationship of balance confidence and falls with quantitative measures of gait, and (3) measures that predict falls, in people with essential tremor (ET). Cross-sectional study. General community. People with ET (n=132) and control subjects (n=48). People with ET were divided into 2 groups based on the median score on the Modified Mini-Mental State Examination: those with lower cognitive test scores (ET-LCS) and those with higher cognitive test scores (ET-HCS). Not applicable. Six-item Activities-specific Balance Confidence (ABC-6) Scale and falls in the previous year. Participants with ET-LCS had lower ABC-6 scores and a greater number of falls than those with ET-HCS and than control subjects. Balance confidence and gait speed were associated with falls, and receiver operating characteristic curve analysis indicated that gait speed discriminated between participants with and without falls. We have identified assessments that are easily administered (gait speed, ABC-6 Scale) and are associated with falls in ET.

  13. Slug Test Characterization Results for Multi-Test/Depth Intervals Conducted During the Drilling of CERCLA Operable Unit OU ZP-1 Wells 299-W11-43, 299-W15-50, and 299-W18-16

    Energy Technology Data Exchange (ETDEWEB)

    Spane, Frank A.; Newcomer, Darrell R.

    2010-06-21

    The following report presents test descriptions and analysis results for multiple stress-level slug tests that were performed at selected test/depth intervals within three Operable Unit (OU) ZP-1 wells: 299-W11-43 (C4694/Well H), 299-W15-50 (C4302/Well E), and 299-W18-16 (C4303/Well D). These wells are located within the south-central region of the Hanford Site 200-West Area (Figure 1.1). The test intervals were characterized as the individual boreholes were advanced to their final drill depths. The primary objective of the hydrologic tests was to provide information pertaining to the areal variability and vertical distribution of hydraulic conductivity with depth at these locations within the OU ZP-1 area. This type of characterization information is important for predicting/simulating contaminant migration (i.e., numerical flow/transport modeling) and for designing proper monitor well strategies for OU and Waste Management Area locations.

  14. Self-confidence in financial analysis: a study of younger and older male professional analysts.

    Science.gov (United States)

    Webster, R L; Ellis, T S

    2001-06-01

    Measures of reported self-confidence in performing financial analysis by 59 professional male analysts, 31 born between 1946 and 1964 and 28 born between 1965 and 1976, were investigated and reported. Self-confidence in one's ability is important in the securities industry because it affects recommendations and decisions to buy, sell, and hold securities. The respondents analyzed a set of multiyear corporate financial statements and reported their self-confidence in six separate financial areas. Data from the 59 male financial analysts were tallied and analyzed using both univariate and multivariate statistical tests. Rated self-confidence was not significantly different for the younger and the older men. These results are not consistent with a similar prior study of female analysts in which younger women showed significantly higher self-confidence than older women.

  15. [Sources of leader's confidence in organizations].

    Science.gov (United States)

    Ikeda, Hiroshi; Furukawa, Hisataka

    2006-04-01

    The purpose of this study was to examine the sources of confidence that organization leaders had. As potential sources of the confidence, we focused on fulfillment of expectations made by self and others, reflection on good as well as bad job experiences, and awareness of job experiences in terms of commonality, differentiation, and multiple viewpoints. A questionnaire was administered to 170 managers of Japanese companies. Results were as follows: First, confidence in leaders was more strongly related to fulfillment of expectations made by self and others than reflection on and awareness of job experiences. Second, the confidence was weakly related to internal processing of job experiences, in the form of commonality awareness and reflection on good job experiences. And finally, years of managerial experiences had almost no relation to the confidence. These findings suggested that confidence in leaders was directly acquired from fulfillment of expectations made by self and others, rather than indirectly through internal processing of job experiences. Implications of the findings for leadership training were also discussed.

  16. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2010-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall. PMID:20157642
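
    The resampling approach the abstract favors re-estimates the indirect effect a*b on each bootstrap sample and reads the confidence limits off the resampled distribution. Below is a minimal sketch of the plain percentile variant (simpler than the bias-corrected bootstrap the study ranks best), on simulated single-mediator data; the sample size, effect sizes, and seed are invented for illustration.

```python
import random

# Simulated mediation data X -> M -> Y (illustrative, not from the study).
random.seed(1)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]                          # a-path ~ 0.5
y = [0.4 * mi + 0.2 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]   # b-path ~ 0.4

def indirect_effect(xs, ms, ys):
    """a*b: a from OLS of M on X; b as the partial slope of Y on M given X."""
    k = len(xs)
    mx, mm, my = sum(xs) / k, sum(ms) / k, sum(ys) / k
    sxx = sum((u - mx) ** 2 for u in xs)
    smm = sum((u - mm) ** 2 for u in ms)
    sxm = sum((u - mx) * (v - mm) for u, v in zip(xs, ms))
    smy = sum((u - mm) * (v - my) for u, v in zip(ms, ys))
    sxy = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
    a = sxm / sxx
    b = (smy * sxx - sxy * sxm) / (smm * sxx - sxm ** 2)
    return a * b

# Percentile bootstrap: resample cases, re-estimate a*b, take 2.5/97.5 percentiles.
reps = 2000
boot = []
for _ in range(reps):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(indirect_effect([x[i] for i in idx],
                                [m[i] for i in idx],
                                [y[i] for i in idx]))
boot.sort()
lo, hi = boot[int(0.025 * reps)], boot[int(0.975 * reps) - 1]
est = indirect_effect(x, m, y)
print(f"indirect effect {est:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

    Because a*b is a product of two estimates, its sampling distribution is skewed, which is why these resampled limits are typically asymmetric around the point estimate — the imbalance the abstract describes for normal-theory limits.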

  17. A study on assessment methodology of surveillance test interval and Allowed Outage Time

    Energy Technology Data Exchange (ETDEWEB)

    Che, Moo Seong; Cheong, Chang Hyeon; Ryu, Yeong Woo; Cho, Jae Seon; Heo, Chang Wook; Kim, Do Hyeong; Kim, Joo Yeol; Kim, Yun Ik; Yang, Hei Chang [Seoul National Univ., Seoul (Korea, Republic of)

    1997-07-15

    The objective of this study is to develop a methodology for assessing the optimization of the Surveillance Test Interval (STI) and Allowed Outage Time (AOT) using PSA methods, in order to supplement current deterministic methods and improve the safety of Korean nuclear power plants. In the first year of the study, a survey of the assessment methodologies, models, and results of domestic and international research was performed as a preliminary step toward developing the assessment methodology of this study. An assessment methodology that remedies the problems revealed in other studies is presented, and its application to an example system demonstrates its feasibility. In the second year, sensitivity analyses of component failure factors were performed on the basis of the first-year methodology, the interaction between STI and AOT was modeled and quantified, and the reliability assessment methodology for the diesel generator was reviewed and applied to the PSA code.

  18. A study on assessment methodology of surveillance test interval and Allowed Outage Time

    International Nuclear Information System (INIS)

    Che, Moo Seong; Cheong, Chang Hyeon; Ryu, Yeong Woo; Cho, Jae Seon; Heo, Chang Wook; Kim, Do Hyeong; Kim, Joo Yeol; Kim, Yun Ik; Yang, Hei Chang

    1997-07-01

    The objective of this study is to develop a methodology for assessing the optimization of the Surveillance Test Interval (STI) and Allowed Outage Time (AOT) using PSA methods, in order to supplement current deterministic methods and improve the safety of Korean nuclear power plants. In the first year of the study, a survey of the assessment methodologies, models, and results of domestic and international research was performed as a preliminary step toward developing the assessment methodology of this study. An assessment methodology that remedies the problems revealed in other studies is presented, and its application to an example system demonstrates its feasibility. In the second year, sensitivity analyses of component failure factors were performed on the basis of the first-year methodology, the interaction between STI and AOT was modeled and quantified, and the reliability assessment methodology for the diesel generator was reviewed and applied to the PSA code.

  19. Quality specifications for the extra-analytical phase of laboratory testing: Reference intervals and decision limits.

    Science.gov (United States)

    Ceriotti, Ferruccio

    2017-07-01

    Reference intervals and decision limits are a critical part of the clinical laboratory report, and evaluating their correct use is a tool for verifying post-analytical quality. Four indicators are identified: 1. the use of decision limits for lipids and glycated hemoglobin; 2. the use, whenever possible, of common reference values; 3. the presence of gender-related reference intervals for at least the following common serum measurands (besides, obviously, the fertility-related hormones): alkaline phosphatase (ALP), alanine aminotransferase (ALT), creatine kinase (CK), creatinine, gamma-glutamyl transferase (GGT), IgM, ferritin, iron, transferrin, urate, red blood cells (RBC), hemoglobin (Hb) and hematocrit (Hct); 4. the presence of age-related reference intervals. The problem of specific reference intervals for elderly people is discussed, but their use is not recommended; by contrast, pediatric age-related reference intervals are necessary at least for the following common serum measurands: ALP, amylase, creatinine, inorganic phosphate, lactate dehydrogenase, aspartate aminotransferase, urate, insulin-like growth factor 1, white blood cells, RBC, Hb, Hct, alpha-fetoprotein and the fertility-related hormones. The lack of such reference intervals may imply significant risks for patients. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  20. Risk-based evaluation of allowed outage time and surveillance test interval extensions for nuclear power plants

    International Nuclear Information System (INIS)

    Gibelli, Sonia Maria Orlando

    2008-03-01

    The main goal of this work is to evaluate, through the use of Probabilistic Safety Analysis (PSA), Technical Specification (TS) Allowed Outage Time (AOT) and Surveillance Test Interval (STI) extensions for the Angra 1 nuclear power plant. PSA has been incorporated as an additional tool, required as part of the NPP licensing process. The risk measure used in this work is the Core Damage Frequency (CDF), obtained from the Angra 1 Level 1 PSA. AOT and STI extensions are calculated for the Safety Injection System (SIS), Service Water System (SAS) and Auxiliary Feedwater System (AFS) using the SAPHIRE code. In order to compensate for the risk increase caused by the extensions, compensatory measures such as testing the redundant train prior to entering maintenance and a staggered test strategy are proposed. Results have shown that the proposed AOT extensions are acceptable for the SIS and SAS with the implementation of compensatory measures, but the proposed AOT extension is not acceptable for the AFS. The STI extensions are acceptable for all three systems. (author)

  1. Mother-Son Communication About Sex and Routine Human Immunodeficiency Virus Testing Among Younger Men of Color Who Have Sex With Men.

    Science.gov (United States)

    Bouris, Alida; Hill, Brandon J; Fisher, Kimberly; Erickson, Greg; Schneider, John A

    2015-11-01

    The purposes of this study were to document the HIV testing behaviors and serostatus of younger men of color who have sex with men (YMSM) and to explore sociodemographic, behavioral, and maternal correlates of HIV testing in the past 6 months. A total of 135 YMSM aged 16-19 years completed a close-ended survey on HIV testing and risk behaviors, mother-son communication, and sociodemographic characteristics. Youth were offered point-of-care HIV testing, with results provided at survey end. Multivariate logistic regression was used to analyze the sociodemographic, behavioral, and maternal factors associated with routine HIV testing. A total of 90.3% of YMSM had previously tested for HIV, and 70.9% had tested in the past 6 months. In total, 11.7% of youth reported being HIV positive, and 3.3% reported unknown serostatus. When offered an HIV test, 97.8% accepted. Of these, 14.7% had a positive oral test result, and 31.58% of HIV-positive YMSM (n = 6) were unaware of their seropositive status. Logistic regression results indicated that maternal communication about sex with males was positively associated with routine testing (odds ratio = 2.36; 95% confidence interval = 1.13-4.94). Conversely, communication about puberty and general human sexuality was negatively associated (odds ratio = .45; 95% confidence interval = .24-.86). Condomless anal intercourse and positive sexually transmitted infection history were negatively associated with routine testing; however, frequency of alcohol use was positively associated. Despite high rates of testing, we found high rates of HIV infection, with 31.58% of HIV-positive YMSM unaware of their seropositive status. Mother-son communication about sex needs to address same-sex behavior, as this appears to be more important than other topics. YMSM with known risk factors for HIV are not testing at the recommended intervals. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
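
    An odds ratio with a 95% confidence interval, like those reported above, is conventionally built as a Wald interval on the log-odds scale. A minimal sketch on a hypothetical 2x2 table — the counts are invented and are not the study's data:

```python
import math

# Hypothetical 2x2 table (counts invented for illustration):
# rows = mother-son communication about sex with males (yes / no),
# cols = tested for HIV in past 6 months (yes / no).
a, b, c, d = 45, 15, 40, 35

odds_ratio = (a * d) / (b * c)                       # cross-product ratio
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d) # SE of log(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

    Exponentiating symmetric limits on the log scale is what makes published OR intervals (such as 2.36, 95% CI 1.13-4.94) asymmetric around the point estimate.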

  2. The effect of Think Pair Share (TPS) using scientific approach on students’ self-confidence and mathematical problem-solving

    Science.gov (United States)

    Rifa’i, A.; Lestari, H. P.

    2018-03-01

    This study was designed to determine the effects of Think Pair Share (TPS) using a scientific approach on students' self-confidence and mathematical problem-solving. A quasi-experimental pre-test/post-test non-equivalent group design was used. A self-confidence questionnaire and a problem-solving test were used to measure the two variables. Two first-grade classes in a religious senior high school (MAN) in Indonesia were randomly selected for this study. The control group was taught the teaching sequence and series from the mathematics book in the traditional way, while the experiment group was taught with the TPS-using-scientific-approach learning method. For the analysis of students' problem-solving skill and self-confidence, the one-sample t-test, the independent-sample t-test, and multivariate analysis of variance (MANOVA) were used. The results showed that (1) both TPS using a scientific approach and traditional learning had positive effects, and (2) TPS using a scientific approach had a more significant effect than traditional learning on students' self-confidence and problem-solving skill.

  3. Confidence-building and Canadian leadership

    International Nuclear Information System (INIS)

    Cleminson, F.R.

    1998-01-01

    Confidence-building has come into its own as a 'tool of choice' in facilitating the non-proliferation, arms control and disarmament (NACD) agenda, whether regional or global. From the Middle East Peace Process (MEPP) to the ASEAN Intersessional Group on Confidence-Building (ARF ISG on CBMS), confidence-building has assumed a central profile in regional terms. In the Four Power Talks begun in Geneva on December 9, 1997, the United States identified confidence-building as one of two subject areas for initial discussion as part of a structured peace process between North and South Korea. Thus, with CBMs assuming such a high profile internationally, it seems prudent for Canadians to pause and take stock of the significant role which Canada has already played in the conceptual development of the process over the last two decades. Since the Helsinki accords of 1975, Canada has developed a significant expertise in this area through an unbroken series of original, basic research projects. These have contributed to defining the process internationally from concept to implementation. Today, these studies represent a solid and unique Departmental investment in basic research from which to draw in meeting Canada's current commitments to multilateral initiatives in the area of confidence-building and to provide a 'step up' in terms of future-oriented leadership. (author)

  4. The use of regression analysis in determining reference intervals for low hematocrit and thrombocyte count in multiple electrode aggregometry and platelet function analyzer 100 testing of platelet function.

    Science.gov (United States)

    Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D

    2017-11-01

    Low platelet counts and hematocrit levels hinder whole-blood point-of-care testing of platelet function. Thus far, no reference ranges exist for the MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices in the low range. By diluting volunteer whole blood, platelet function at low platelet counts and hematocrit levels was assessed on MEA for four agonists and on PFA-100 for two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts were positively correlated with MEA results (all agonists showed r² ≥ 0.75) and inversely correlated with PFA-100 results (closure times were prolonged at lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed 95% reference intervals that differ from the originally established intervals for both MEA and PFA-100 under low platelet or hematocrit conditions. Multiple regression analysis of ASPI and of both PFA-100 tests under combined low-platelet and low-hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis; however, the coefficients of determination for PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals for platelet function on PFA-100 under anemia and thrombocytopenia conditions and on MEA under thrombocytopenia conditions.
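
    A regression-based 95% reference interval of the kind described above can be sketched as: fit the device reading against platelet count, then take the fitted value ± 1.96 residual standard deviations at the count of interest. All numbers below are synthetic stand-ins, not the study's dilution data.

```python
import math
import random

# Synthetic stand-in for MEA aggregation (arbitrary units) vs platelet count
# (x10^9/L) in the low range; slope, intercept, and noise are invented.
random.seed(7)
n = 80
platelets = [random.uniform(10, 100) for _ in range(n)]
mea_au = [5 + 0.6 * p + random.gauss(0, 6) for p in platelets]

# Simple OLS fit of reading on platelet count.
mx = sum(platelets) / n
my = sum(mea_au) / n
b1 = (sum((x - mx) * (y - my) for x, y in zip(platelets, mea_au))
      / sum((x - mx) ** 2 for x in platelets))
b0 = my - b1 * mx
resid_sd = math.sqrt(sum((y - (b0 + b1 * x)) ** 2
                         for x, y in zip(platelets, mea_au)) / (n - 2))

def reference_interval(platelet_count):
    """95% reference interval for the reading at a given platelet count."""
    center = b0 + b1 * platelet_count
    return center - 1.96 * resid_sd, center + 1.96 * resid_sd

lo30, hi30 = reference_interval(30)
print(f"at 30 x10^9/L: [{lo30:.1f}, {hi30:.1f}] AU")
```

    Letting the interval's center move with platelet count is exactly what makes such intervals differ from fixed cutoffs established in healthy, non-diluted blood.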

  5. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 2. Assessment of MCNP Statistical Analysis of keff Eigenvalue Convergence with an Analytical Criticality Verification Test Set

    International Nuclear Information System (INIS)

    Sood, Avnet; Forster, R. Arthur; Parsons, D. Kent

    2001-01-01

    Monte Carlo simulations of nuclear criticality eigenvalue problems are often performed by general-purpose radiation transport codes such as MCNP. MCNP performs detailed statistical analysis of the criticality calculation and provides feedback to the user with warning messages, tables, and graphs. The purpose of the analysis is to provide the user with sufficient information to assess spatial convergence of the eigenfunction and thus the validity of the criticality calculation. As a test of this statistical analysis package in MCNP, analytic criticality verification benchmark problems have been used for the first time to assess the performance of the criticality convergence tests in MCNP. The MCNP statistical analysis capability has been recently assessed using the 75-problem multigroup criticality verification analytic test set. MCNP was verified with these problems at the 10^-4 to 10^-5 statistical error level using 40,000 histories per cycle and 2,000 active cycles. In all cases, the final boxed combined k_eff answer was given with the standard deviation and three confidence intervals that contained the analytic k_eff. To test the effectiveness of the statistical analysis checks in identifying poor eigenfunction convergence, ten problems from the test set were deliberately run incorrectly using 1,000 histories per cycle, 200 active cycles, and 10 inactive cycles. Six problems with large dominance ratios were chosen from the test set because they do not achieve the normal spatial mode in the beginning of the calculation. To further stress the convergence tests, these problems were also started with an initial fission source point 1 cm from the boundary, thus increasing the likelihood of a poorly converged initial fission source distribution. The final combined k_eff confidence intervals for these deliberately ill-posed problems did not include the analytic k_eff value. In no case did a bad confidence interval go undetected; warning messages were given signaling that
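
    The confidence-interval construction at the center of these checks can be sketched as follows, with simulated active-cycle k_eff values standing in for real MCNP output. This is a simplified single-estimator mean; MCNP's actual combined estimator and its three reported intervals are more elaborate.

```python
import math
import random
import statistics

# Simulated cycle-wise k_eff estimates (illustrative, not MCNP output).
random.seed(11)
cycles = [random.gauss(0.9980, 0.0025) for _ in range(2000)]  # active cycles

k_mean = sum(cycles) / len(cycles)
k_sd = statistics.stdev(cycles)              # spread of cycle estimates
k_se = k_sd / math.sqrt(len(cycles))         # standard error of the mean

# Report the mean with several confidence intervals (normal z-values).
for z, level in [(1.000, "68%"), (1.960, "95%"), (2.576, "99%")]:
    print(f"{level} CI: [{k_mean - z * k_se:.5f}, {k_mean + z * k_se:.5f}]")
```

    The failure mode the abstract probes is that when cycles are correlated or the source is unconverged, this standard error understates the true uncertainty, so the interval can exclude the true k_eff unless convergence diagnostics flag the run.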

  6. On-line confidence monitoring during decision making.

    Science.gov (United States)

    Dotan, Dror; Meyniel, Florent; Dehaene, Stanislas

    2018-02-01

    Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed on-line, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristics, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism that involves the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Development of free statistical software enabling researchers to calculate confidence levels, clinical significance curves and risk-benefit contours

    International Nuclear Information System (INIS)

    Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.

    2003-01-01

    Confidence levels, clinical significance curves, and risk-benefit contours are tools that improve the analysis of clinical studies and minimize misinterpretation of published results; however, no software has been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has 4 spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table; the 95% confidence intervals, the p-value, and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit; for example, if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed a user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000. The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any
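
    The sheet-2 calculation described above — the confidence that one treatment is better, and better by at least a specified margin — can be sketched with a normal approximation on the difference of two proportions. The trial counts below are invented for illustration.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical 2-by-2 outcome table: Treatment A 60/100 survived,
# Treatment B 48/100 survived (counts invented).
n_a, surv_a = 100, 60
n_b, surv_b = 100, 48
p_a, p_b = surv_a / n_a, surv_b / n_b

diff = p_a - p_b
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)

ci = (diff - 1.96 * se, diff + 1.96 * se)            # 95% CI for the benefit
conf_any_benefit = norm_cdf(diff / se)               # confidence A is better at all
conf_10pct_benefit = norm_cdf((diff - 0.10) / se)    # confidence A is >= 10% better
print(f"benefit {diff:.2f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}], "
      f"P(better) {conf_any_benefit:.3f}, P(>=10% better) {conf_10pct_benefit:.3f}")
```

    Sweeping the 0.10 margin over all possible benefits and plotting the resulting confidence levels yields exactly the clinical significance curve the workbook draws.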

  8. Allergy Testing in Children With Low-Risk Penicillin Allergy Symptoms.

    Science.gov (United States)

    Vyles, David; Adams, Juan; Chiu, Asriani; Simpson, Pippa; Nimmer, Mark; Brousseau, David C

    2017-08-01

    Penicillin allergy is commonly reported in the pediatric emergency department (ED). True penicillin allergy is rare, yet the diagnosis results in the denial of first-line antibiotics. We hypothesized that all children presenting to the pediatric ED with symptoms deemed to be low-risk for immunoglobulin E-mediated hypersensitivity would test negative for true penicillin allergy. Parents of children aged 4 to 18 years presenting to the pediatric ED with a history of parent-reported penicillin allergy completed an allergy questionnaire. A prespecified sample of 100 children categorized as low-risk on the basis of reported symptoms completed penicillin allergy testing using a standard 3-tier testing process. The percent of children with negative allergy testing results was calculated with a 95% confidence interval. Five hundred ninety-seven parents completed the questionnaire describing their child's reported allergy symptoms. Three hundred two children (51%) had low-risk symptoms and were eligible for testing; of those, 100 were tested for penicillin allergy. The median (interquartile range) age at testing was 9 years (5-12). The median (interquartile range) age at allergy diagnosis was 1 year (9 months-3 years). Rash (97 [97%]) and itching (63 [63%]) were the most commonly reported allergy symptoms. Overall, all 100 children (100%; 95% confidence interval 96.4%-100%) tested negative for penicillin allergy and had the penicillin allergy label removed from their medical record. All children categorized as low-risk by our penicillin allergy questionnaire tested negative for true penicillin allergy. The use of this questionnaire in the pediatric ED may facilitate increased use of first-line penicillin antibiotics. Copyright © 2017 by the American Academy of Pediatrics.
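
    The reported interval of 96.4%-100% for 100 negative results out of 100 tested is consistent with the exact (Clopper-Pearson) binomial interval, whose lower bound has a closed form when every trial gives the same result:

```python
# Exact Clopper-Pearson 95% CI when all n trials succeed: the upper bound
# is 100% and the lower bound is (alpha/2)**(1/n).
n, alpha = 100, 0.05
lower = (alpha / 2) ** (1 / n)
print(f"95% CI for 100/100: [{lower:.1%}, 100.0%]")  # lower bound ~ 96.4%
```

    This is why a "perfect" result in a sample of 100 still leaves a few percent of uncertainty: an underlying allergy rate up to about 3.6% is statistically compatible with observing zero positives.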

  9. Recommended Changes to Interval Management to Achieve Operational Implementation

    Science.gov (United States)

    Baxley, Brian; Swieringa, Kurt; Roper, Roy; Hubbs, Clay; Goess, Paul; Shay, Richard

    2017-01-01

    A 19-day flight test of an Interval Management (IM) avionics prototype was conducted in Washington State using three aircraft to precisely achieve and maintain a spacing interval behind the preceding aircraft. NASA contracted with Boeing, Honeywell, and United Airlines to build this prototype, and then worked closely with them, the FAA, and other industry partners to test this prototype in flight. Four different IM operation types were investigated during this test in the en route, arrival, and final approach phases of flight. Many of the IM operations met or exceeded the design goals established prior to the test. However, there were issues discovered throughout the flight test, including the rate and magnitude of IM commanded speed changes and the difference between expected and actual aircraft deceleration rates.

  10. Testing transferability of willingness to pay for forest fire prevention among three states of California, Florida and Montana

    Science.gov (United States)

    John B. Loomis; Hung Trong Le; Armando Gonzalez-Caban

    2005-01-01

    The equivalency of willingness to pay between the states of California, Florida and Montana is tested. Residents of California, Florida and Montana have an average willingness to pay of $417, $305, and $382, respectively, for a prescribed burning program, and $403, $230, and $208 for a mechanical fire fuel reduction program. Due to wide confidence intervals, household WTP...

  11. Serum prolactin revisited: parametric reference intervals and cross platform evaluation of polyethylene glycol precipitation-based methods for discrimination between hyperprolactinemia and macroprolactinemia.

    Science.gov (United States)

    Overgaard, Martin; Pedersen, Susanne Møller

    2017-10-26

    Hyperprolactinemia diagnosis and treatment are often compromised by the presence of biologically inactive and clinically irrelevant higher-molecular-weight complexes of prolactin, i.e. macroprolactin. The objective of this study was to evaluate the performance of two macroprolactin screening regimes across commonly used automated immunoassay platforms. Parametric total and monomeric gender-specific reference intervals were determined for six immunoassay methods using female (n=96) and male (n=127) sera from healthy donors. The reference intervals were validated using 27 hyperprolactinemic and macroprolactinemic sera, in which the monomeric and macroforms of prolactin were determined using gel filtration chromatography (GFC). Normative data for the six prolactin assays included the range of values (2.5th-97.5th percentiles). The validation sera (hyperprolactinemic and macroprolactinemic; n=27) showed more discordant classifications [mean=2.8; 95% confidence interval (CI) 1.2-4.4] for the monomer reference interval method than for the post-polyethylene glycol (PEG) recovery cutoff method (mean=1.8; 95% CI 0.8-2.8). The two monomer/macroprolactin discrimination methods did not differ significantly (p=0.089). Among macroprolactinemic sera evaluated by both discrimination methods, the Cobas and the Architect/Kryptor prolactin assays showed the lowest and the highest number of misclassifications, respectively. Current automated immunoassays for prolactin testing require macroprolactin screening methods based on PEG precipitation in order to discriminate truly from falsely elevated serum prolactin. While the recovery cutoff and monomeric reference interval screening methods demonstrate similar discriminative ability, the latter also provides the clinician with an easily interpretable monomeric prolactin concentration along with a monomeric reference interval.
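
    The two screening approaches compared above can be sketched as follows. The prolactin concentrations are simulated and the 60% recovery cutoff is an illustrative assumption (cutoffs vary by laboratory), not a value from the study.

```python
import math
import random

# (1) Parametric reference interval: under a normality assumption, the
# 2.5th-97.5th percentile range is mean +/- 1.96*SD. Simulated monomeric
# prolactin values (mIU/L) stand in for healthy-donor sera.
random.seed(3)
monomeric = [random.gauss(250, 60) for _ in range(120)]

mean = sum(monomeric) / len(monomeric)
sd = math.sqrt(sum((v - mean) ** 2 for v in monomeric) / (len(monomeric) - 1))
ref_low, ref_high = mean - 1.96 * sd, mean + 1.96 * sd
print(f"parametric reference interval: [{ref_low:.0f}, {ref_high:.0f}] mIU/L")

# (2) Recovery-cutoff method: low prolactin recovery after PEG precipitation
# suggests macroprolactin dominates the total signal. The 0.60 cutoff is a
# hypothetical choice for illustration.
def macroprolactin_suspected(total_prl, post_peg_prl, recovery_cutoff=0.60):
    return post_peg_prl / total_prl < recovery_cutoff
```

    A sample flagged by method (2) would, under method (1), instead be judged by whether its post-PEG (monomeric) concentration falls inside the monomeric reference interval.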

  12. The Great Recession and confidence in homeownership

    OpenAIRE

    Anat Bracha; Julian Jamison

    2013-01-01

    Confidence in homeownership shifts for those who personally experienced real estate loss during the Great Recession. Older Americans are confident in the value of homeownership. Younger Americans are less confident.

  13. Permanent pacemaker implantation in octogenarians with unexplained syncope and positive electrophysiologic testing.

    Science.gov (United States)

    Giannopoulos, Georgios; Kossyvakis, Charalampos; Panagopoulou, Vasiliki; Tsiachris, Dimitrios; Doudoumis, Konstantinos; Mavri, Maria; Vrachatis, Dimitrios; Letsas, Konstantinos; Efremidis, Michael; Katsivas, Apostolos; Lekakis, John; Deftereos, Spyridon

    2017-05-01

    Syncope is a common problem in the elderly, and a permanent pacemaker is a therapeutic option when a bradycardic etiology is revealed. However, the benefit of pacing when no association of symptoms with bradycardia has been shown is not clear, especially in the elderly. The aim of this observational study was to evaluate the effect of pacing on syncope-free survival in patients aged 80 years or older with unexplained syncope and "positive" invasive electrophysiologic testing (EPT). A positive EPT for the purposes of this study was defined by at least 1 of the following: a corrected sinus node recovery time of >525 ms, a basic HV interval of >55 ms, detection of infra-Hisian block, or appearance of second-degree atrioventricular block on atrial decremental pacing at a paced cycle length of >400 ms. Among the 2435 screened patients, 228 eligible patients were identified, 145 of whom were implanted with a pacemaker. Kaplan-Meier analysis determined that time to event (syncope or death) was 50.1 months (95% confidence interval 45.4-54.8 months) with a pacemaker vs 37.8 months (95% confidence interval 31.3-44.4 months) without a pacemaker (log-rank test, P = .001). The 4-year time-dependent estimate of the rate of syncope was 12% vs 44% (P < .001), and pacemaker implantation was independently associated with longer syncope-free survival. Significant differences were also shown in the individual components of the primary outcome measure (syncope and death from any cause). Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
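
    The Kaplan-Meier analysis used above can be sketched with a minimal product-limit estimator; the follow-up times (months) and event flags below are invented for illustration.

```python
# Minimal Kaplan-Meier (product-limit) survival estimator, stdlib only.
def kaplan_meier(times, events):
    """times: follow-up durations; events: 1 = syncope/death observed,
    0 = censored. Returns (time, S(time)) pairs at each event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    for t, observed in data:
        if observed:
            surv *= 1 - 1 / at_risk   # one-at-a-time steps; ties telescope to (n-d)/n
            curve.append((t, surv))
        at_risk -= 1                  # censored subjects leave the risk set too
    return curve

# Invented follow-up data for eight patients.
curve = kaplan_meier([5, 8, 12, 20, 24, 30, 36, 40],
                     [1, 0, 1, 1, 0, 1, 0, 0])
print(curve)
```

    The log-rank P value reported in the abstract then compares two such curves (paced vs. unpaced) by contrasting observed and expected events across the pooled event times.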

  14. Convex Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for

  15. Confidence-building and Canadian leadership

    Energy Technology Data Exchange (ETDEWEB)

    Cleminson, F.R. [Dept. of Foreign Affairs and International Trade, Verification, Non-Proliferation, Arms Control and Disarmament Div (IDA), Ottawa, Ontario (Canada)

    1998-07-01

    Confidence-building has come into its own as a 'tool of choice' in facilitating the non-proliferation, arms control and disarmament (NACD) agenda, whether regional or global. From the Middle East Peace Process (MEPP) to the ASEAN Intersessional Group on Confidence-Building (ARF ISG on CBMS), confidence-building has assumed a central profile in regional terms. In the Four Power Talks begun in Geneva on December 9, 1997, the United States identified confidence-building as one of two subject areas for initial discussion as part of a structured peace process between North and South Korea. Thus, with CBMs assuming such a high profile internationally, it seems prudent for Canadians to pause and take stock of the significant role which Canada has already played in the conceptual development of the process over the last two decades. Since the Helsinki accords of 1975, Canada has developed a significant expertise in this area through an unbroken series of original, basic research projects. These have contributed to defining the process internationally from concept to implementation. Today, these studies represent a solid and unique Departmental investment in basic research from which to draw in meeting Canada's current commitments to multilateral initiatives in the area of confidence-building and to provide a 'step up' in terms of future-oriented leadership. (author)

  16. The relationship between fundamental movement skill proficiency and physical self-confidence among adolescents.

    Science.gov (United States)

    McGrane, Bronagh; Belton, Sarahjane; Powell, Danielle; Issartel, Johann

    2017-09-01

    This study aims to assess fundamental movement skill (FMS) proficiency and physical self-confidence levels, the relationship between these variables, and gender differences among adolescents. Three hundred and ninety-five adolescents aged 13.78 years (SD = ±1.2) from 20 schools were involved in this study. The Test of Gross Motor Development-2nd Edition (TGMD-2) and the Victorian Skills Manual were used to assess 15 FMS. Participants' physical self-confidence was also assessed using a valid skill-specific scale. A significant correlation was observed between FMS proficiency and physical self-confidence for females only (r = 0.305, P < 0.001). Males rated themselves as having significantly higher physical self-confidence than females (P = 0.001), and males scored significantly higher than females in FMS proficiency (P < 0.05). The lowest physical self-confidence group was significantly less proficient at FMS than the medium (P < 0.001) and high (P < 0.05) physical self-confidence groups. This information not only highlights those in need of assistance to develop their FMS but will also facilitate the development of an intervention that aims to improve physical self-confidence and FMS proficiency.
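
    The skill-confidence association reported above is a Pearson correlation; a minimal computation on invented paired scores:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Invented FMS proficiency and self-confidence scores for eight adolescents.
fms = [30, 42, 38, 50, 45, 36, 48, 41]
confidence = [55, 70, 60, 78, 72, 58, 75, 66]
r = pearson_r(fms, confidence)
print(f"r = {r:.3f}")
```

    With real data, the reported P value would come from testing this r against zero (e.g. via a t statistic with n-2 degrees of freedom).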

  17. The effectiveness of collaborative problem based physics learning (CPBPL) model to improve student’s self-confidence on physics learning

    Science.gov (United States)

    Prahani, B. K.; Suprapto, N.; Suliyanah; Lestari, N. A.; Jauhariyah, M. N. R.; Admoko, S.; Wahyuni, S.

    2018-03-01

    In previous research, the Collaborative Problem Based Physics Learning (CPBPL) model was developed to improve students' science process skills, collaborative problem solving, and self-confidence in physics learning. This research aims to analyze the effectiveness of the CPBPL model in improving students' self-confidence in physics learning. The research implemented a quasi-experimental design on 140 senior high school students who were divided into 4 groups. Data collection was conducted through questionnaire, observation, and interview. Self-confidence was measured with the Self-Confidence Evaluation Sheet (SCES). The data were analyzed using the Wilcoxon test, n-gain, and the Kruskal-Wallis test. Results show that: (1) there is a significant improvement in students' self-confidence scores in physics learning (α = 5%), (2) the n-gain of students' self-confidence in physics learning is high, and (3) the average n-gain of students' self-confidence in physics learning was consistent across all groups. It can be concluded that the CPBPL model is effective in improving students' self-confidence in physics learning.
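
The n-gain (normalized gain) statistic mentioned above is conventionally computed as Hake's g = (post - pre) / (max - pre). A minimal sketch with hypothetical pre/post self-confidence scores on a 0-100 scale:

```python
def n_gain(pre, post, max_score=100):
    """Hake's normalized gain: the fraction of the possible improvement
    that was actually realized between pre-test and post-test."""
    return (post - pre) / (max_score - pre)

def gain_category(g):
    """Conventional cutoffs: g >= 0.7 'high', 0.3 <= g < 0.7 'medium', else 'low'."""
    return "high" if g >= 0.7 else "medium" if g >= 0.3 else "low"

# Hypothetical scores: pre-test 40, post-test 85 on a 0-100 scale
g = n_gain(pre=40, post=85)
print(g, gain_category(g))  # 0.75 high
```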

  18. Tumor phenotype and breast density in distinct categories of interval cancer: results of population-based mammography screening in Spain.

    Science.gov (United States)

    Domingo, Laia; Salas, Dolores; Zubizarreta, Raquel; Baré, Marisa; Sarriugarte, Garbiñe; Barata, Teresa; Ibáñez, Josefa; Blanch, Jordi; Puig-Vives, Montserrat; Fernández, Ana; Castells, Xavier; Sala, Maria

    2014-01-10

    Interval cancers are tumors arising after a negative screening episode and before the next screening invitation. They can be classified into true interval cancers, false-negatives, minimal-sign cancers, and occult tumors based on mammographic findings in screening and diagnostic mammograms. This study aimed to describe tumor-related characteristics and the association of breast density and tumor phenotype within four interval cancer categories. We included 2,245 invasive tumors (1,297 screening-detected and 948 interval cancers) diagnosed from 2000 to 2009 among 645,764 women aged 45 to 69 who underwent biennial screening in Spain. Interval cancers were classified by a semi-informed retrospective review into true interval cancers (n = 455), false-negatives (n = 224), minimal-sign (n = 166), and occult tumors (n = 103). Breast density was evaluated using Boyd's scale and was conflated into four categories: <25%, 25-50%, 50-75%, and >75%. Tumor-related information was obtained from cancer registries and clinical records. Tumor phenotype was defined as follows: luminal A: ER+/HER2- or PR+/HER2-; luminal B: ER+/HER2+ or PR+/HER2+; HER2: ER-/PR-/HER2+; triple-negative: ER-/PR-/HER2-. The association of tumor phenotype and breast density was assessed using a multinomial logistic regression model. Adjusted odds ratios (OR) and 95% confidence intervals (95% CI) were calculated. All statistical tests were two-sided. Forty-eight percent of interval cancers were true interval cancers and 23.6% false-negatives. True interval cancers were associated with HER2 and triple-negative phenotypes (OR = 1.91 (95% CI: 1.22-2.96) and OR = 2.07 (95% CI: 1.42-3.01), respectively) and extremely dense breasts (>75%) (OR = 1.67 (95% CI: 1.08-2.56)). However, among true interval cancers a higher proportion of triple-negative tumors was observed in predominantly fatty breasts than in denser breasts (28.7%, 21.4%, 11.3% and 14.3%, respectively, across increasing density categories). Tumor phenotype and breast density differed across the interval cancer categories, extreme breast density being strongly associated with occult tumors (OR

  19. Definition and taxonomy of interval colorectal cancers: a proposal for standardising nomenclature

    NARCIS (Netherlands)

    Sanduleanu, S.; le Clercq, C. M. C.; Dekker, E.; Meijer, G. A.; Rabeneck, L.; Rutter, M. D.; Valori, R.; Young, G. P.; Schoen, R. E.

    2015-01-01

    Interval colorectal cancers (interval CRCs), that is, cancers occurring after a negative screening test or examination, are an important indicator of the quality and effectiveness of CRC screening and surveillance. In order to compare incidence rates of interval CRCs across screening programmes, a

  20. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  1. Restricted Interval Valued Neutrosophic Sets and Restricted Interval Valued Neutrosophic Topological Spaces

    Directory of Open Access Journals (Sweden)

    Anjan Mukherjee

    2016-08-01

    Full Text Available In this paper we introduce the concept of restricted interval valued neutrosophic sets (RIVNS in short). Some basic operations and properties of RIVNS are discussed. The concept of restricted interval valued neutrosophic topology is also introduced, together with restricted interval valued neutrosophic finer and restricted interval valued neutrosophic coarser topology. We also define the restricted interval valued neutrosophic interior and closure of a restricted interval valued neutrosophic set. Some theorems and examples are cited. Restricted interval valued neutrosophic subspace topology is also studied.

  2. The radiographic acromiohumeral interval is affected by arm and radiographic beam position

    Energy Technology Data Exchange (ETDEWEB)

    Fehringer, Edward V.; Rosipal, Charles E.; Rhodes, David A.; Lauder, Anthony J.; Feschuk, Connie A.; Mormino, Matthew A.; Hartigan, David E. [University of Nebraska Medical Center, Department of Orthopaedic Surgery and Rehabilitation, Omaha, NE (United States); Puumala, Susan E. [Nebraska Medical Center, Department of Preventive and Societal Medicine, Omaha, NE (United States)

    2008-06-15

    The objective was to determine whether arm and radiographic beam positional changes affect the acromiohumeral interval (AHI) in radiographs of healthy shoulders. Controlling for participant height and position as well as radiographic beam height and angle, four antero-posterior (AP) radiographic views in defined positions were obtained from each of 30 right shoulders of right-handed males without shoulder problems. Three independent, blinded physicians measured the AHI to the nearest millimeter in 120 randomized radiographs. Mean differences between measurements were calculated, along with a 95% confidence interval. Controlling for observer effect, there was a significant difference between AHI measurements on different views (p < 0.01). All pair-wise differences were statistically significant after adjusting for multiple comparisons (all p values < 0.01). Even in healthy shoulders, small changes in arm position and radiographic beam orientation affect the AHI in radiographs. (orig.)
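
The pair-wise comparisons "adjusted for multiple comparisons" can be illustrated with a Bonferroni correction, the simplest such adjustment (the abstract does not state which method the authors used). A sketch with hypothetical raw p-values for the six pairwise comparisons among four views:

```python
from itertools import combinations

def bonferroni(p_values, alpha=0.05):
    """Bonferroni adjustment: multiply each p-value by the number of
    tests, capping at 1.0; compare the adjusted values to alpha."""
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]

# Four views give 6 pairwise AHI comparisons; these p-values are hypothetical
views = ["A", "B", "C", "D"]
pairs = list(combinations(views, 2))
raw_p = [0.001, 0.0008, 0.0005, 0.0012, 0.0010, 0.0003]
adj_p = bonferroni(raw_p)
for (a, b), p in zip(pairs, adj_p):
    print(f"{a} vs {b}: adjusted p = {p:.4f}")
```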

  3. The effect of terrorism on public confidence : an exploratory study.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, M. S.; Baldwin, T. E.; Samsa, M. E.; Ramaprasad, A.; Decision and Information Sciences

    2008-10-31

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the metrics it uses to assess the consequences of terrorist attacks. Hence, several factors--including a detailed understanding of the variations in public confidence among individuals, by type of terrorist event, and as a function of time--are critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data were collected from the groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery bombing, and cyber intrusions on financial institutions that resulted in identity theft and financial losses. Our findings include the following: (a) the subjects can be classified into at least three distinct groups on the basis of their baseline outlook--optimistic, pessimistic, and unaffected; (b) the subjects make discriminations in their interpretations of an event on the basis of the nature of a terrorist attack, the time horizon, and its impact; (c) the recovery of confidence after a terrorist event has an incubation period and typically does not return to its initial level in the long term; (d) the patterns of recovery of confidence differ between the optimists and the pessimists; and (e) individuals are able to associate a monetary value with a loss or gain in confidence, and the value associated with a loss is greater than the value associated with a gain. These findings illustrate the importance the public places on confidence in government

  4. T(peak)T(end) interval in long QT syndrome

    DEFF Research Database (Denmark)

    Kanters, Jørgen Kim; Haarmark, Christian; Vedel-Larsen, Esben

    2008-01-01

    BACKGROUND: The T(peak)T(end) (T(p)T(e)) interval is believed to reflect the transmural dispersion of repolarization. Accordingly, it should be a risk factor in long QT syndrome (LQTS). The aim of the study was to determine the effect of genotype on T(p)T(e) interval and test whether it was relat...

  5. Practicing the Test Produces Strength Equivalent to Higher Volume Training.

    Science.gov (United States)

    Mattocks, Kevin T; Buckner, Samuel L; Jessee, Matthew B; Dankel, Scott J; Mouser, J Grant; Loenneke, Jeremy P

    2017-09-01

    To determine if muscle growth is important for increasing muscle strength or if changes in strength can be entirely explained from practicing the strength test. Thirty-eight untrained individuals performed knee extension and chest press exercise for 8 wk. Individuals were randomly assigned to either a high-volume training group (HYPER) or a group just performing the one repetition maximum (1RM) strength test (TEST). The HYPER group performed four sets to volitional failure (~8RM-12RM), whereas the TEST group performed up to five attempts to lift as much weight as possible one time each visit. Data are presented as mean (90% confidence interval). The change in muscle size was greater in the HYPER group for both the upper and lower bodies at most but not all sites. The change in 1RM strength for both the upper body (difference of -1.1 [-4.8, 2.4] kg) and lower body (difference of 1.0 [-0.7, 2.8] kg for the dominant leg) was not different between groups (similar for the nondominant leg). Changes in isometric and isokinetic torque were not different between groups. The HYPER group observed a greater change in muscular endurance (difference of 2 [1, 4] repetitions) only in the dominant leg. There were no differences in the change between groups in upper body endurance. There were between-group differences in exercise volume (mean [95% confidence interval]) for the dominant leg (difference of 11,049.3 [9,254.6-12,844.0] kg; similar for the nondominant leg) and the chest press (difference of 13,259.9 [9,632.0-16,887.8] kg), with the HYPER group completing significantly more total volume. These findings suggest that neither exercise volume nor the change in muscle size from training contributed to greater strength gains compared with just practicing the test.
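
Group differences reported as "mean (90% confidence interval)" can be sketched as follows, using a normal-approximation interval (z = 1.645) on hypothetical group summary statistics; the study's actual analysis may have used a t-based interval, which would be slightly wider for small samples:

```python
import math

def diff_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.645):
    """Difference in two group means with a 90% CI (normal approximation,
    z = 1.645; unpooled standard error)."""
    diff = mean1 - mean2
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical 1RM changes (kg) for HYPER vs. TEST, 19 subjects per group
diff, (lo, hi) = diff_ci(10.2, 4.0, 19, 9.2, 4.5, 19)
print(f"difference = {diff:.1f} kg, 90% CI [{lo:.1f}, {hi:.1f}]")
```

A CI that spans zero, as here, corresponds to the paper's "not different between groups" wording.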

  6. Pediatrics Residents' Confidence and Performance Following a Longitudinal Quality Improvement Curriculum.

    Science.gov (United States)

    Courtlandt, Cheryl; Noonan, Laura; Koricke, Maureen Walsh; Zeskind, Philip Sanford; Mabus, Sarah; Feld, Leonard

    2016-02-01

    Quality improvement (QI) training is an integral part of residents' education. Understanding the educational value of a QI curriculum facilitates understanding of its impact. The purpose of this study was to evaluate the effects of a longitudinal QI curriculum on pediatrics residents' confidence and competence in the acquisition and application of QI knowledge and skills. Three successive cohorts of pediatrics residents (N = 36) participated in a longitudinal curriculum designed to increase resident confidence in QI knowledge and skills. Key components were a succession of progressive experiential projects, QI coaching, and resident team membership culminating in leadership of the project. Residents completed precurricular and postcurricular surveys and demonstrated QI competence by performance on the pediatric QI assessment scenario. Residents participating in the Center for Advancing Pediatric Excellence QI curriculum showed significant increases in pre-post measures of confidence in QI knowledge and skills. Coaching and team leadership were ranked by resident participants as having the most educational value among curriculum components. A pediatric QI assessment scenario, which correlated with resident-perceived confidence in acquisition of QI skills but not QI knowledge, is a tool available to test pediatrics residents' QI knowledge. A 3-year longitudinal, multimodal, experiential QI curriculum increased pediatrics residents' confidence in QI knowledge and skills, was feasible with faculty support, and was well-accepted by residents.

  7. Increasing Confidence and Ability in Implementing Kangaroo Mother Care Method Among Young Mothers.

    Science.gov (United States)

    Kenanga Purbasary, Eleni; Rustina, Yeni; Budiarti, Tri

    Mothers giving birth to low birth weight babies (LBWBs) have low confidence in caring for their babies because they are often still young and may lack the knowledge, experience, and ability to care for the baby. This research aims to determine the effect of education about kangaroo mother care (KMC) on the confidence and ability of young mothers to implement KMC. The research used a randomized controlled design with pre- and post-test equivalent groups of 13 mothers and their LBWBs in the intervention group and 13 mothers and their LBWBs in the control group. Data were collected via an instrument measuring young mothers' confidence, the validity and reliability of which have been tested with a resulting r value of .941, and an observation sheet on KMC implementation. After the education, the confidence scores of the young mothers and their ability to perform KMC increased meaningfully. The confidence score of young mothers before education was 37 (p = .1555), and the ability score for KMC implementation before education was 9 (p = .1555). The median confidence score after education was 87 in the intervention group and 50 in the control group (p = .001, 95% CI 60.36-75.56), and the median ability score for KMC implementation after education was 16 in the intervention group and 12 in the control group (p = .001, 95% CI 1.50-1.88). KMC education should be conducted gradually, and it is necessary to involve the family in order for KMC implementation to continue at home. A family visit can be done for LBWBs to evaluate the ability of the young mothers to implement KMC.

  8. Short-interval test-retest interrater reliability of the Dutch version of the structured clinical interview for DSM-IV personality disorders (SCID-II)

    NARCIS (Netherlands)

    Weertman, A; ArntZ, A; Dreessen, L; van Velzen, C; Vertommen, S

    2003-01-01

    This study examined the short-interval test-retest reliability of the Structured Clinical Interview (SCID-II: First, Spitzer, Gibbon, & Williams, 1995) for DSM-IV personality disorders (PDs). The SCID-II was administered to 69 in- and outpatients on two occasions separated by 1 to 6 weeks. The

  9. Nosewitness Identification: Effects of Lineup Size and Retention Interval.

    Science.gov (United States)

    Alho, Laura; Soares, Sandra C; Costa, Liliana P; Pinto, Elisa; Ferreira, Jacqueline H T; Sorjonen, Kimmo; Silva, Carlos F; Olsson, Mats J

    2016-01-01

    Although canine identification of body odor (BO) has been widely used as forensic evidence, the concept of nosewitness identification by human observers was only recently put to the test. The results indicated that BOs associated with male characters in authentic crime videos could later be identified in BO lineup tests well above chance. To further evaluate nosewitness memory, we assessed the effects of lineup size (Experiment 1) and retention interval (Experiment 2), using a forced-choice memory test. The results showed that nosewitness identification works for all lineup sizes (3, 5, and 8 BOs), but that larger lineups compromise identification performance, similar to observations from eye- and earwitness studies. Also in line with previous eye- and earwitness studies, but in disagreement with some studies on odor memory, Experiment 2 showed significant forgetting between shorter retention intervals (15 min) and longer retention intervals (1 week) using lineups of five BOs. Altogether this study shows that identification of BO in a forensic setting is possible and has limits and characteristics in line with witness identification through other sensory modalities.

  10. Confidence Leak in Perceptual Decision Making.

    Science.gov (United States)

    Rahnev, Dobromir; Koizumi, Ai; McCurdy, Li Yan; D'Esposito, Mark; Lau, Hakwan

    2015-11-01

    People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak: confidence in one's response on a given task or trial influences confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. Better ability to modulate confidence leak predicted higher capacity for metacognition as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles from Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex. © The Author(s) 2015.

  11. Short-interval and long-interval intracortical inhibition of TMS-evoked EEG potentials.

    Science.gov (United States)

    Premoli, Isabella; Király, Julia; Müller-Dahlhaus, Florian; Zipser, Carl M; Rossini, Pierre; Zrenner, Christoph; Ziemann, Ulf; Belardinelli, Paolo

    2018-03-15

    Inhibition in the human motor cortex can be probed by means of paired-pulse transcranial magnetic stimulation (ppTMS) at interstimulus intervals of 2-3 ms (short-interval intracortical inhibition, SICI) or ∼100 ms (long-interval intracortical inhibition, LICI). Conventionally, SICI and LICI are recorded as motor evoked potential (MEP) inhibition in the hand muscle. Pharmacological experiments indicate that they are mediated by GABAA and GABAB receptors, respectively. SICI and LICI of TMS-evoked EEG potentials (TEPs) and their pharmacological properties have not been systematically studied. Here, we sought to examine SICI by ppTMS-evoked compared to single-pulse TMS-evoked TEPs, to investigate its pharmacological manipulation and to compare SICI with our previous results on LICI. PpTMS-EEG was applied to the left motor cortex in 16 healthy subjects in a randomized, double-blind placebo-controlled crossover design, testing the effects of a single oral dose 20 mg of diazepam, a positive modulator at the GABAA receptor, vs. 50 mg of the GABAB receptor agonist baclofen on SICI of TEPs. We found significant SICI of the N100 and P180 TEPs prior to drug intake. Diazepam reduced SICI of the N100 TEP, while baclofen enhanced it. Compared to our previous ppTMS-EEG results on LICI, the SICI effects on TEPs, including their drug modulation, were largely analogous. Findings suggest a similar interaction of paired-pulse effects on TEPs irrespective of the interstimulus interval. Therefore, SICI and LICI as measured with TEPs cannot be directly derived from SICI and LICI measured with MEPs, but may offer novel insight into paired-pulse responses recorded directly from the brain rather than muscle. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. EFFECT OF INFORMATION SERVICES USING THE MEDIA FILM TO SELF-CONFIDENCE STUDENT OF CLASS VIII SMP NEGERI 8 METRO

    Directory of Open Access Journals (Sweden)

    MUDAIM MUDAIM

    2015-06-01

    Full Text Available Abstract: Pessimism and an attitude of regarding oneself as weak and lacking ability when facing a problem will impede an individual's developmental tasks. The confidence problems underlying this research are: (a) students are not confident in the abilities they have, (b) students feel pessimistic when faced with an issue, (c) students perceive things subjectively, (d) students still do not do their work independently, and (e) students think negatively about their own situation. The problem of this study is whether there is an influence of information services using the medium of film on the self-confidence of eighth-grade students of SMP Negeri 8 Metro, and its purpose was to determine whether such an influence exists. The participants were the 30 students of class VIII-E. Data were collected by a self-confidence questionnaire and analyzed using the t-test. The results of this study are shown by the difference in confidence scores between pre-test and post-test of 17.1. Hypothesis testing yielded a computed t of 6.036 > table value of 1.699. The conclusion is that information services implemented using the film medium can have a positive influence on self-confidence, especially for students of class VIII. The advice given is that the film medium should be used intensively and more creatively by guidance and counseling (BK) teachers when delivering information services. Keywords: Confidence, Information Services Using Media Film.

  13. Implementation of Cell Samples as Controls in National Proficiency Testing for Clopidogrel Therapy-Related CYP2C19 Genotyping in China: A Novel Approach.

    Directory of Open Access Journals (Sweden)

    Guigao Lin

    Full Text Available Laboratories are increasingly requested to perform CYP2C19 genetic testing when managing clopidogrel therapy, especially in patients with acute coronary syndrome undergoing percutaneous coronary intervention. To ensure high quality molecular testing and ascertain that the referring clinician has the correct information for CYP2C19 genotype-directed antiplatelet therapy, a proficiency testing scheme was set up to evaluate the laboratory performance for the entire testing process. Proficiency panels of 10 cell samples encompassing the common CYP2C19 genetic polymorphisms were distributed to 62 participating laboratories for routine molecular testing and the responses were analyzed for accuracy of genotyping and the reporting of results. Data including the number of samples tested, the accreditation/certification status, and test methodology of each individual laboratory were also reviewed. Fifty-seven of the 62 participants correctly identified the CYP2C19 variants in all samples. There were six genotyping errors, with a corresponding analytical sensitivity of 98.5% (333/338 challenges; 95% confidence interval: 96.5-99.5% and an analytic specificity of 99.6% (281/282; 95% confidence interval: 98.0-99.9%. Reports of the CYP2C19 genotyping results often lacked essential information. In conclusion, clinical laboratories demonstrated good analytical sensitivity and specificity; however, the pharmacogenetic testing community requires additional education regarding the correct reporting of CYP2C19 genetic test results.
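
The reported analytical sensitivity of 98.5% (333/338; 95% CI: 96.5-99.5%) can be approximated with a Wilson score interval for a binomial proportion. The paper may have used an exact (Clopper-Pearson) interval, so the bounds below can differ slightly in the last digit:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion successes/n."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Analytical sensitivity: 333 of 338 genotyping challenges called correctly
lo, hi = wilson_ci(333, 338)
print(f"sensitivity = {333/338:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```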

  14. Long-Term Maintenance of Immediate or Delayed Extinction Is Determined by the Extinction-Test Interval

    Science.gov (United States)

    Johnson, Justin S.; Escobar, Martha; Kimble, Whitney L.

    2010-01-01

    Short acquisition-extinction intervals (immediate extinction) can lead to either more or less spontaneous recovery than long acquisition-extinction intervals (delayed extinction). Using rat subjects, we observed less spontaneous recovery following immediate than delayed extinction (Experiment 1). However, this was the case only if a relatively…

  15. Variation in Cancer Incidence among Patients with ESRD during Kidney Function and Nonfunction Intervals.

    Science.gov (United States)

    Yanik, Elizabeth L; Clarke, Christina A; Snyder, Jon J; Pfeiffer, Ruth M; Engels, Eric A

    2016-05-01

    Among patients with ESRD, cancer risk is affected by kidney dysfunction and by immunosuppression after transplant. Assessing patterns across periods of dialysis and kidney transplantation may inform cancer etiology. We evaluated 202,195 kidney transplant candidates and recipients from a linkage between the Scientific Registry of Transplant Recipients and cancer registries, and compared incidence in kidney function intervals (time with a transplant) with incidence in nonfunction intervals (waitlist or time after transplant failure), adjusting for demographic factors. Incidence of infection-related and immune-related cancer was higher during kidney function intervals than during nonfunction intervals. Incidence was most elevated for Kaposi sarcoma (hazard ratio [HR], 9.1; 95% confidence interval (95% CI), 4.7 to 18), non-Hodgkin's lymphoma (HR, 3.2; 95% CI, 2.8 to 3.7), Hodgkin's lymphoma (HR, 3.0; 95% CI, 1.7 to 5.3), lip cancer (HR, 3.4; 95% CI, 2.0 to 6.0), and nonepithelial skin cancers (HR, 3.8; 95% CI, 2.5 to 5.8). Conversely, ESRD-related cancer incidence was lower during kidney function intervals (kidney cancer: HR, 0.8; 95% CI, 0.7 to 0.8 and thyroid cancer: HR, 0.7; 95% CI, 0.6 to 0.8). With each successive interval, incidence changed in alternating directions for non-Hodgkin's lymphoma, melanoma, and lung, pancreatic, and nonepithelial skin cancers (higher during function intervals), and kidney and thyroid cancers (higher during nonfunction intervals). For many cancers, incidence remained higher than in the general population across all intervals. These data indicate strong short-term effects of kidney dysfunction and immunosuppression on cancer incidence in patients with ESRD, suggesting a need for persistent cancer screening and prevention. Copyright © 2016 by the American Society of Nephrology.
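
Hazard-ratio confidence intervals such as "HR, 3.2; 95% CI, 2.8 to 3.7" are computed on the log scale. As a sketch, the standard error of log(HR) can be backed out from a reported interval and used to reproduce the bounds; small discrepancies reflect rounding in the published figures:

```python
import math

def hr_ci_from_se(hr, se, z=1.96):
    """95% CI for a hazard ratio on the log scale: exp(ln HR +/- z*SE)."""
    return math.exp(math.log(hr) - z * se), math.exp(math.log(hr) + z * se)

def se_from_ci(lo, hi, z=1.96):
    """Back out the SE of log(HR) from a reported 95% CI (lo, hi)."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

# Non-Hodgkin's lymphoma: HR 3.2 (95% CI 2.8 to 3.7) as reported above
se = se_from_ci(2.8, 3.7)
lo, hi = hr_ci_from_se(3.2, se)
print(f"SE(log HR) = {se:.3f}; reconstructed CI [{lo:.2f}, {hi:.2f}]")
```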

  16. Cost-effectiveness of using a molecular diagnostic test to improve preoperative diagnosis of thyroid cancer.

    Science.gov (United States)

    Najafzadeh, Mehdi; Marra, Carlo A; Lynd, Larry D; Wiseman, Sam M

    2012-12-01

    Fine-needle aspiration biopsy (FNAB) is a safe and inexpensive diagnostic procedure for evaluating thyroid nodules. Up to 25% of the results from an FNAB, however, may not be diagnostic or may be indeterminate, leading to a subsequent diagnostic thyroid surgery. A new molecularly based diagnostic test could potentially reduce indeterminate cytological results and, with high accuracy, provide a definitive diagnosis of cancer in thyroid nodules. The aim of the study was to estimate the cost-effectiveness of utilizing a molecular diagnostic (Dx) test as an adjunct to FNAB, compared with NoDx, to improve the preoperative diagnosis of thyroid nodules. We constructed a patient-level simulation model to estimate the clinical and economic outcomes of using a Dx test compared with current practice (NoDx) for the diagnosis of thyroid nodules. Using a cost-effectiveness framework, we measured incremental clinical benefits in terms of quality-adjusted life-years and incremental costs over a 10-year time horizon. Assuming 95% sensitivity and specificity of the Dx test when used as an adjunct to FNAB, the utilization of the Dx test resulted in a gain of 0.046 quality-adjusted life-years (95% confidence interval 0.019-0.078) and a saving of $1087 (95% confidence interval $691-$1533) in direct costs per patient. If the cost of the Dx test is less than $1087 per test, we expect to gain quality-adjusted life-years and reduce costs when it is utilized. Sensitivity of the Dx test, compared with specificity, had a larger influence on the overall outcomes. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  17. Confidence Building Strategies in the Public Schools.

    Science.gov (United States)

    Achilles, C. M.; And Others

    1985-01-01

    Data from the Phi Delta Kappa Commission on Public Confidence in Education indicate that "high-confidence" schools make greater use of marketing and public relations strategies. Teacher attitudes were ranked first and administrator attitudes second by 409 respondents for both gain and loss of confidence in schools. (MLF)

  18. Confidence rating of marine eutrophication assessments

    DEFF Research Database (Denmark)

    Murray, Ciarán; Andersen, Jesper Harbo; Kaartokallio, Hermanni

    2011-01-01

    This report presents the development of a methodology for assessing confidence in eutrophication status classifications. The method can be considered as a secondary assessment, supporting the primary assessment of eutrophication status. The confidence assessment is based on a transparent scoring of the 'value' of the indicators on which the primary assessment is made. Such secondary assessment of confidence represents a first step towards linking status classification with information regarding their accuracy and precision and ultimately a tool for improving or targeting actions to improve the health......

  19. A study of the correlation between self-confidence and professional achievement of designers

    Directory of Open Access Journals (Sweden)

    Chen Rain

    2017-01-01

    Full Text Available This study mainly investigated which mental state of designers, self-confidence or a sense of inferiority, has positive effects on professional design achievement. The study attempted to find whether there is a correlation between designers' self-confidence or sense of inferiority and their professional achievement. To measure the tendency of designers' psychological state, the Rosenberg Self-Esteem Scale was used to assess designers' self-confidence, and statistical computations were made on the gathered data. Correlation analysis was used to determine whether the confidence level of 46 seniors of the Design Department is related to their professional achievement. The results showed that the confidence level of designers has a slight correlation with professional achievement. Factors leading to these findings may be the small sample size analyzed, or the fact that the Rosenberg Self-Esteem Scale only captured the current level of self-confidence and could not properly reflect designers' self-confidence over the entire semester. In the future, the results of this study can be considered a pre-test for a more complete study, and it is expected that they can serve as a reference for design educators.

  20. Poor Positive Predictive Value of Lyme Disease Serologic Testing in an Area of Low Disease Incidence.

    Science.gov (United States)

    Lantos, Paul M; Branda, John A; Boggan, Joel C; Chudgar, Saumil M; Wilson, Elizabeth A; Ruffin, Felicia; Fowler, Vance; Auwaerter, Paul G; Nigrovic, Lise E

    2015-11-01

    Lyme disease is diagnosed by 2-tiered serologic testing in patients with a compatible clinical illness, but the significance of positive test results in low-prevalence regions has not been investigated. We reviewed the medical records of patients who tested positive for Lyme disease with standardized 2-tiered serologic testing between 2005 and 2010 at a single hospital system in a region with little endemic Lyme disease. Based on clinical findings, we calculated the positive predictive value of Lyme disease serology. Next, we reviewed the outcome of serologic testing in patients with select clinical syndromes compatible with disseminated Lyme disease (arthritis, cranial neuropathy, or meningitis). During the 6-year study period 4723 patients were tested for Lyme disease, but only 76 (1.6%) had positive results by established laboratory criteria. Among 70 seropositive patients whose medical records were available for review, 12 (17%; 95% confidence interval, 9%-28%) were found to have Lyme disease (6 with documented travel to endemic regions). During the same time period, 297 patients with a clinical illness compatible with disseminated Lyme disease underwent 2-tiered serologic testing. Six of them (2%; 95% confidence interval, 0.7%-4.3%) were seropositive, 3 with documented travel and 1 who had an alternative diagnosis that explained the clinical findings. In this low-prevalence cohort, fewer than 20% of positive Lyme disease tests are obtained from patients with clinically likely Lyme disease. Positive Lyme disease test results may have little diagnostic value in this setting. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
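
    The positive predictive value reported above is a simple binomial proportion (12 of 70 seropositive patients), so its 95% confidence interval can be recomputed from the counts alone. A minimal sketch in Python using the Wilson score interval; the article does not state which interval method it used, so the bounds here approximate, but do not exactly reproduce, the published 9%-28%:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 12 of 70 seropositive patients had clinically likely Lyme disease
ppv = 12 / 70
low, high = wilson_ci(12, 70)
print(f"PPV = {ppv:.1%}, 95% CI {low:.1%}-{high:.1%}")
```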

  1. Effect of Simulation on the Confidence of University Nursing Students in Applying Cardiopulmonary Assessment Skills: A Randomized Controlled Trial.

    Science.gov (United States)

    Tawalbeh, Loai I

    2017-08-01

    Simulation is an effective teaching strategy. However, no study in Jordan has examined the effect of simulation on the confidence of university nursing students in applying heart and lung physical examination skills. The current study aimed to test the effect of simulation on the confidence of university nursing students in applying heart and lung physical examination skills. A randomized controlled trial design was applied. The researcher introduced the simulation scenario regarding cardiopulmonary examination skills. This scenario included a 1-hour PowerPoint presentation and video for the experimental group (n = 35) and a PowerPoint presentation and a video showing a traditional demonstration in the laboratory for the control group (n = 34). Confidence in applying cardiopulmonary physical examination skills was measured for both groups at baseline and at 1 day and 3 months posttest. A paired t test showed that confidence was significantly higher in the posttest than in the pretest for both groups. An independent t test showed a statistically significant difference between the two groups (t(67) = -42.95, p < 0.001) in confidence in applying cardiopulmonary physical examination skills. Both simulation and traditional training in the laboratory significantly improved the confidence of participants in applying cardiopulmonary assessment skills. However, the simulation training had a greater effect than traditional laboratory training in enhancing the confidence of nursing students in applying physical examination skills.

  2. An appraisal of statistical procedures used in derivation of reference intervals.

    Science.gov (United States)

    Ichihara, Kiyoshi; Boyd, James C

    2010-11-01

    When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and being able to adjust for multiple factors. Box-Cox power transformation often has been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method based on determination of the 2.5 and 97.5 percentiles following sorting of the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically-determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
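
    The non-parametric method recommended above reduces to sorting the reference values and reading off the 2.5th and 97.5th percentiles. A minimal sketch in Python on simulated data; the rank-based interpolation convention shown here is one common choice, and guidelines differ on the exact rule:

```python
import random

def nonparametric_ri(values):
    """Non-parametric reference interval: 2.5th and 97.5th percentiles,
    interpolating between order statistics at rank p*(n+1)."""
    xs = sorted(values)
    n = len(xs)

    def percentile(p):
        pos = p * (n + 1)                      # 1-based target rank
        lo = max(0, min(n - 1, int(pos) - 1))  # lower order statistic (0-based)
        hi = min(n - 1, lo + 1)
        frac = pos - int(pos)
        return xs[lo] + frac * (xs[hi] - xs[lo])

    return percentile(0.025), percentile(0.975)

random.seed(42)
# simulated analyte results for 400 apparently healthy reference individuals
data = [random.gauss(100, 10) for _ in range(400)]
lower, upper = nonparametric_ri(data)
```

With Gaussian data centered at 100 (SD 10), the interval lands near the theoretical 80.4-119.6.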

  3. Training health professionals to recruit into challenging randomized controlled trials improved confidence: the development of the QuinteT randomized controlled trial recruitment training intervention.

    Science.gov (United States)

    Mills, Nicola; Gaunt, Daisy; Blazeby, Jane M; Elliott, Daisy; Husbands, Samantha; Holding, Peter; Rooshenas, Leila; Jepson, Marcus; Young, Bridget; Bower, Peter; Tudur Smith, Catrin; Gamble, Carrol; Donovan, Jenny L

    2018-03-01

    The objective of this study was to describe and evaluate a training intervention for recruiting patients to randomized controlled trials (RCTs), particularly trials anticipated to be difficult to recruit to. One of three training workshops was offered to surgeons and one to research nurses. Self-confidence in recruitment was measured through questionnaires before and up to 3 months after training; perceived impact of training on practice was assessed after. Data were analyzed using two-sample t-tests and supplemented with findings from the content analysis of free-text comments. Sixty-seven surgeons and 32 nurses attended. Self-confidence scores for all 10 questions increased after training [range of mean scores: before, 5.1-6.9; after, 6.9-8.2 (scale 0-10); all 95% confidence intervals above 0 and all P-values < 0.05]. Self-confidence in recruitment following training was high: surgeons' mean score 8.8 [standard deviation (SD), 1.2] and nurses' 8.4 (SD, 1.3) (scale 0-10); 50% (19/38) of surgeons and 40% (10/25) of nurses reported on a 4-point Likert scale that training had made "a lot" of difference to their RCT discussions. Analysis of free text revealed this was mostly in relation to how to convey equipoise, explain randomization, and manage treatment preferences. Surgeons and research nurses reported increased self-confidence in discussing RCTs with patients, a raised awareness of hidden challenges and a positive impact on recruitment practice following QuinteT RCT Recruitment Training. Training will be made more available and evaluated in relation to recruitment rates and informed consent. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
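
    The before/after comparisons above rest on two-sample t-tests of mean self-confidence scores. A minimal sketch of the underlying computation in Python, using Welch's unequal-variance form; the scores below are illustrative, not the study's data:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of freedom."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2**2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# hypothetical self-confidence scores (0-10 scale) before and after training
before = [5.0, 6.0, 5.5, 6.5, 5.8, 6.2, 5.2, 6.8]
after = [7.0, 8.0, 7.5, 8.2, 6.9, 7.8, 7.2, 8.1]
t, df = welch_t(after, before)
```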

  4. Evidence for a confidence-accuracy relationship in memory for same- and cross-race faces.

    Science.gov (United States)

    Nguyen, Thao B; Pezdek, Kathy; Wixted, John T

    2017-12-01

    Discrimination accuracy is usually higher for same- than for cross-race faces, a phenomenon known as the cross-race effect (CRE). According to prior research, the CRE occurs because memories for same- and cross-race faces rely on qualitatively different processes. However, according to a continuous dual-process model of recognition memory, memories that rely on qualitatively different processes do not differ in recognition accuracy when confidence is equated. Thus, although there are differences in overall same- and cross-race discrimination accuracy, confidence-specific accuracy (i.e., recognition accuracy at a particular level of confidence) may not differ. We analysed datasets from four recognition memory studies on same- and cross-race faces to test this hypothesis. Confidence ratings reliably predicted recognition accuracy when performance was above chance levels (Experiments 1, 2, and 3) but not when performance was at chance levels (Experiment 4). Furthermore, at each level of confidence, confidence-specific accuracy for same- and cross-race faces did not significantly differ when overall performance was above chance levels (Experiments 1, 2, and 3) but significantly differed when overall performance was at chance levels (Experiment 4). Thus, under certain conditions, high-confidence same-race and cross-race identifications may be equally reliable.

  5. Interval timing in genetically modified mice: a simple paradigm.

    Science.gov (United States)

    Balci, F; Papachristos, E B; Gallistel, C R; Brunner, D; Gibson, J; Shumyatsky, G P

    2008-04-01

    We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using d-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently.
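
    The paradigm's two summary statistics, the median switch latency (timing accuracy) and the interquartile interval (timing precision), can be read directly off the latency distribution. A minimal sketch in Python with made-up latencies:

```python
import statistics

def timing_summary(latencies):
    """Median (timing accuracy) and interquartile interval (timing precision)
    of a set of switch latencies."""
    q1, q2, q3 = statistics.quantiles(latencies, n=4)
    return q2, q3 - q1

# hypothetical switch latencies (seconds) from one mouse
latencies = [4.1, 4.8, 5.0, 5.2, 5.3, 5.5, 5.6, 5.9, 6.2, 7.0]
median, iqi = timing_summary(latencies)
```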

  6. Technical Report: Benchmarking for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-22

    The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.

  7. The confidence in diabetes self-care scale

    DEFF Research Database (Denmark)

    Van Der Ven, Nicole C W; Weinger, Katie; Yi, Joyce

    2003-01-01

    OBJECTIVE: To examine psychometric properties of the Confidence in Diabetes Self-Care (CIDS) scale, a newly developed instrument assessing diabetes-specific self-efficacy in Dutch and U.S. patients with type 1 diabetes. RESEARCH DESIGN AND METHODS: Reliability and validity of the CIDS scale were evaluated in Dutch (n = 151) and U.S. (n = 190) outpatients with type 1 diabetes. In addition to the CIDS scale, assessment included HbA(1c), emotional distress, fear of hypoglycemia, self-esteem, anxiety, depression, and self-care behavior. The Dutch sample completed additional measures on perceived burden and importance of self-care. Test-retest reliability was established in a second Dutch sample (n = 62). RESULTS: Internal consistency (Cronbach's alpha = 0.86 for Dutch patients and 0.90 for U.S. patients) and test-retest reliability (Spearman's r = 0.85, P < 0.001) were high.

  8. Haematological and biochemical reference intervals for free-ranging brown bears (Ursus arctos) in Sweden

    Science.gov (United States)

    2014-01-01

    Background Establishment of haematological and biochemical reference intervals is important to assess health of animals on individual and population level. Reference intervals for 13 haematological and 34 biochemical variables were established based on 88 apparently healthy free-ranging brown bears (39 males and 49 females) in Sweden. The animals were chemically immobilised by darting from a helicopter with a combination of medetomidine, tiletamine and zolazepam in April and May 2006–2012 in the county of Dalarna, Sweden. Venous blood samples were collected during anaesthesia for radio collaring and marking for ecological studies. For each of the variables, the reference interval was described based on the 95% confidence interval, and differences due to host characteristics sex and age were included if detected. To our knowledge, this is the first report of reference intervals for free-ranging brown bears in Sweden. Results The following variables were not affected by host characteristics: red blood cell, white blood cell, monocyte and platelet count, alanine transaminase, amylase, bilirubin, free fatty acids, glucose, calcium, chloride, potassium, and cortisol. Age differences were seen for the majority of the haematological variables, whereas sex influenced only mean corpuscular haemoglobin concentration, aspartate aminotransferase, lipase, lactate dehydrogenase, β-globulin, bile acids, triglycerides and sodium. Conclusions The biochemical and haematological reference intervals provided and the differences due to host factors age and gender can be useful for evaluation of health status in free-ranging European brown bears. PMID:25139149

  9. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    Science.gov (United States)

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  10. Student Perceptions of and Confidence in Self-Care Course Concepts Using Team-based Learning.

    Science.gov (United States)

    Frame, Tracy R; Gryka, Rebecca; Kiersma, Mary E; Todt, Abby L; Cailor, Stephanie M; Chen, Aleda M H

    2016-04-25

    Objective. To evaluate changes in student perceptions of and confidence in self-care concepts after completing a team-based learning (TBL) self-care course. Methods. Team-based learning was used at two universities in first professional year, semester-long self-care courses. Two instruments were created and administered before and after the semester. The instruments were designed to assess changes in student perceptions of self-care using the theory of planned behavior (TPB) domains and confidence in learning self-care concepts using Bandura's Social Cognitive Theory. Wilcoxon signed rank tests were used to evaluate pre/post changes, and Mann Whitney U tests were used to evaluate university differences. Results. Fifty-three Cedarville University and 58 Manchester University students completed both instruments (100% and 92% response rates, respectively). Student self-care perceptions with TPB decreased significantly on nine of 13 items for Cedarville and decreased for one of 13 items for Manchester. Student confidence in self-care concepts improved significantly on all questions for both universities. Conclusion. Data indicate TBL self-care courses were effective in improving student confidence about self-care concepts. Establishing students' skill sets prior to entering the profession is beneficial because pharmacists will use self-directed learning to expand their knowledge and adapt to problem-solving situations.

  11. [Effects of group psychological counseling on self-confidence and social adaptation of burn patients].

    Science.gov (United States)

    Dang, Rui; Wang, Yishen; Li, Na; He, Ting; Shi, Mengna; Liang, Yanyan; Zhu, Chan; Zhou, Yongbo; Qi, Zongshi; Hu, Dahai

    2014-12-01

    To explore the effects of group psychological counseling on the self-confidence and social adaptation of burn patients during the course of rehabilitation. Sixty-four burn patients conforming to the inclusion criteria and hospitalized from January 2012 to January 2014 in Xijing Hospital were divided into a trial group and a control group according to the method of rehabilitation, with 32 cases in each group. Patients in the two groups were given ordinary rehabilitation training for 8 weeks, and the patients in the trial group were given a course of group psychological counseling in addition. The Rosenberg Self-Esteem Scale was used to evaluate the changes in self-confidence levels, and the numbers of patients with an inferiority complex, normal feeling, self-confidence, and over self-confidence were counted before and after treatment. The Abbreviated Burn-Specific Health Scale was used to evaluate physical function, psychological function, social relationship, health condition, and general condition before and after treatment to evaluate the social adaptation of patients. Data were processed with t test, chi-square test, Mann-Whitney U test, and Wilcoxon test. (1) After treatment, the self-confidence levels of patients in the trial group were significantly higher than those in the control group (Z = -2.573, P < 0.05). (2) After treatment, the scores of psychological function, social relationship, health condition, and general condition were (87 ± 3), (47.8 ± 3.6), (49 ± 3), and (239 ± 10) points in the trial group, which were significantly higher than those in the control group [(79 ± 4), (38.3 ± 5.6), (46 ± 4), and (231 ± 9) points, with t values respectively -8.635, -8.125, -3.352, -3.609, P values below 0.01]. After treatment, the scores of physical function, psychological function, social relationship, health condition, and general condition in the trial group were significantly higher than those before treatment (with t values from -33.282 to -19.515, P values below 0.05). The scores…

  12. Electron density diagnostics in the 10-100 A interval for a solar flare

    Science.gov (United States)

    Brown, W. A.; Bruner, M. E.; Acton, L. W.; Mason, H. E.

    1986-01-01

    Electron density measurements from spectral-line diagnostics are reported for a solar flare on July 13, 1982, 1627 UT. The spectrogram, covering the 10-95 A interval, contained usable lines of helium-like ions C V, N VI, O VII, and Ne IX which are formed over the temperature interval 0.7-3.5 × 10^6 K. In addition, spectral-line ratios of Si IX, Fe XIV, and Ca XV were compared with new theoretical estimates of their electron density sensitivity to obtain additional electron density diagnostics. An electron density of 3 × 10^10 cm^-3 was obtained. The comparison of these results from helium-like and other ions gives confidence in the utility of these tools for solar coronal analysis and will lead to a fuller understanding of the phenomena observed in this flare.

  13. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.

    Science.gov (United States)

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-03-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. Need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established by use of the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas others can be used as assay system-specific reference intervals in China.
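
    The parametric derivation described above transforms the data toward a Gaussian shape, computes mean ± 1.96 SD in the transformed space, and back-transforms the limits. A minimal sketch in Python on simulated right-skewed data, with λ chosen by a crude profile log-likelihood grid search and the transformation origin fixed at zero for brevity; the study's actual fitting, including the latent abnormal values exclusion, is more elaborate:

```python
import math
import random
import statistics

def boxcox(x, lam, c=0.0):
    """Two-parameter Box-Cox: shift the origin by c, then power-transform."""
    z = x - c
    return math.log(z) if lam == 0 else (z**lam - 1) / lam

def inv_boxcox(y, lam, c=0.0):
    return (math.exp(y) if lam == 0 else (lam * y + 1) ** (1 / lam)) + c

def loglik(data, lam, c=0.0):
    """Profile log-likelihood of the Box-Cox model (up to a constant)."""
    y = [boxcox(x, lam, c) for x in data]
    var = statistics.pvariance(y)
    return -0.5 * len(data) * math.log(var) + (lam - 1) * sum(
        math.log(x - c) for x in data
    )

random.seed(1)
# simulated right-skewed analyte (log-normal), as is common for e.g. triglycerides
data = [math.exp(random.gauss(0.0, 0.4)) * 100 for _ in range(400)]

# crude grid search for lambda over [-2, 2] in steps of 0.1
lam = max((l / 10 for l in range(-20, 21)), key=lambda l: loglik(data, l))
y = [boxcox(x, lam) for x in data]
m, s = statistics.mean(y), statistics.stdev(y)
lower, upper = inv_boxcox(m - 1.96 * s, lam), inv_boxcox(m + 1.96 * s, lam)
```

For log-normal data the fitted λ sits near 0, and the back-transformed limits approximate the true 2.5th/97.5th percentiles.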

  14. Evaluation of Healing Intervals of Incisional Skin Wounds of Goats ...

    African Journals Online (AJOL)

    The aim of this study was to compare the healing intervals among simple interrupted (SI), ford interlocking (FI) and subcuticular (SC) suture patterns in goats. We hypothesized that these common suture patterns used for closure of incisional skin wounds may have effect on the healing interval. To test this hypothesis, two ...

  15. Are You Sure? Confidence about the Satiating Capacity of a Food Affects Subsequent Food Intake.

    Science.gov (United States)

    Schiöth, Helgi B; Ferriday, Danielle; Davies, Sarah R; Benedict, Christian; Elmståhl, Helena; Brunstrom, Jeffrey M; Hogenkamp, Pleunie S

    2015-06-24

    Expectations about a food's satiating capacity predict self-selected portion size, food intake and food choice. However, two individuals might have a similar expectation, but one might be extremely confident while the other might be guessing. It is unclear whether confidence about an expectation affects adjustments in energy intake at a subsequent meal. In a randomized cross-over design, 24 subjects participated in three separate breakfast sessions, and were served a low-energy-dense preload (53 kcal/100 g), a high-energy-dense preload (94 kcal/100 g), or no preload. Subjects received ambiguous information about the preload's satiating capacity and rated how confident they were about their expected satiation before consuming the preload in its entirety. They were served an ad libitum test meal 30 min later. Confidence ratings were negatively associated with energy compensation after consuming the high-energy-dense preload (r = -0.61; p = 0.001). The same relationship was evident after consuming the low-energy-dense preload, but only after controlling for dietary restraint, hunger prior to, and liking of the test meal (p = 0.03). Our results suggest that confidence modifies short-term controls of food intake by affecting energy compensation. These results merit consideration because imprecise caloric compensation has been identified as a potential risk factor for a positive energy balance and weight gain.
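
    The reported association is a Pearson correlation between confidence ratings and energy compensation (r = -0.61). A minimal sketch of that computation in Python with hypothetical paired values, not the study's data:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical data: higher confidence ratings, lower energy compensation (%)
confidence = [2, 3, 4, 5, 6, 7, 8, 9]
compensation = [95, 90, 70, 80, 55, 60, 40, 35]
r = pearson_r(confidence, compensation)
```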

  16. Are You Sure? Confidence about the Satiating Capacity of a Food Affects Subsequent Food Intake

    Directory of Open Access Journals (Sweden)

    Helgi B. Schiöth

    2015-06-01

    Full Text Available Expectations about a food’s satiating capacity predict self-selected portion size, food intake and food choice. However, two individuals might have a similar expectation, but one might be extremely confident while the other might be guessing. It is unclear whether confidence about an expectation affects adjustments in energy intake at a subsequent meal. In a randomized cross-over design, 24 subjects participated in three separate breakfast sessions, and were served a low-energy-dense preload (53 kcal/100 g), a high-energy-dense preload (94 kcal/100 g), or no preload. Subjects received ambiguous information about the preload’s satiating capacity and rated how confident they were about their expected satiation before consuming the preload in its entirety. They were served an ad libitum test meal 30 min later. Confidence ratings were negatively associated with energy compensation after consuming the high-energy-dense preload (r = −0.61; p = 0.001). The same relationship was evident after consuming the low-energy-dense preload, but only after controlling for dietary restraint, hunger prior to, and liking of the test meal (p = 0.03). Our results suggest that confidence modifies short-term controls of food intake by affecting energy compensation. These results merit consideration because imprecise caloric compensation has been identified as a potential risk factor for a positive energy balance and weight gain.

  17. Nuclear power: restoring public confidence

    International Nuclear Information System (INIS)

    Arnold, L.

    1986-01-01

    The paper concerns a one-day conference on nuclear power organised by the Centre for Science Studies and Science Policy, Lancaster, April 1986. Following the Chernobyl reactor accident, the conference concentrated on public confidence in nuclear power. Causes of the lack of public confidence, public perceptions of risk, and the effect of Chernobyl in the United Kingdom were all discussed. A Select Committee on the Environment examined the problems of radioactive waste disposal. (U.K.)

  18. Probabilistic confidence for decisions based on uncertain reliability estimates

    Science.gov (United States)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  19. The impact of athlete leaders on team members’ team outcome confidence: A test of mediation by team identification and collective efficacy

    OpenAIRE

    Fransen, Katrien; Coffee, Pete; Vanbeselaere, Norbert; Slater, Matthew; De Cuyper, Bert; Boen, Filip

    2014-01-01

    Research on the effect of athlete leadership on precursors of team performance such as team confidence is sparse. To explore the underlying mechanisms of how athlete leaders impact their team’s confidence, an online survey was completed by 2,867 players and coaches from nine different team sports in Flanders (Belgium). We distinguished between two types of team confidence: collective efficacy, assessed by the CEQS subscales of Effort, Persistence, Preparation, and Unity; and team outcome confidence…

  20. Programming with Intervals

    Science.gov (United States)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  1. Psychometric properties of the communication Confidence Rating Scale for Aphasia (CCRSA): phase 1.

    Science.gov (United States)

    Cherney, Leora R; Babbitt, Edna M; Semik, Patrick; Heinemann, Allen W

    2011-01-01

    Confidence is a construct that has not been explored previously in aphasia research. We developed the Communication Confidence Rating Scale for Aphasia (CCRSA) to assess confidence in communicating in a variety of activities and evaluated its psychometric properties using rating scale (Rasch) analysis. The CCRSA was administered to 21 individuals with aphasia before and after participation in a computer-based language therapy study. Person reliability of the 8-item CCRSA was .77. The 5-category rating scale demonstrated monotonic increases in average measures from low to high ratings. However, one item ("I follow news, sports, stories on TV/movies") misfit the construct defined by the other items (mean square infit = 1.69, item-measure correlation = .41). Deleting this item improved reliability to .79; the 7 remaining items demonstrated excellent fit to the underlying construct, although there was a modest ceiling effect in this sample. Pre- to posttreatment changes on the 7-item CCRSA measure were statistically significant using a paired samples t test. Findings support the reliability and sensitivity of the CCRSA in assessing participants' self-report of communication confidence. Further evaluation of communication confidence is required with larger and more diverse samples.

  2. Tumor phenotype and breast density in distinct categories of interval cancer: results of population-based mammography screening in Spain

    Science.gov (United States)

    2014-01-01

    Introduction Interval cancers are tumors arising after a negative screening episode and before the next screening invitation. They can be classified into true interval cancers, false-negatives, minimal-sign cancers, and occult tumors based on mammographic findings in screening and diagnostic mammograms. This study aimed to describe tumor-related characteristics and the association of breast density and tumor phenotype within four interval cancer categories. Methods We included 2,245 invasive tumors (1,297 screening-detected and 948 interval cancers) diagnosed from 2000 to 2009 among 645,764 women aged 45 to 69 who underwent biennial screening in Spain. Interval cancers were classified by a semi-informed retrospective review into true interval cancers (n = 455), false-negatives (n = 224), minimal-sign (n = 166), and occult tumors (n = 103). Breast density was evaluated using Boyd’s scale and was conflated into: 75%. Tumor-related information was obtained from cancer registries and clinical records. Tumor phenotype was defined as follows: luminal A: ER+/HER2- or PR+/HER2-; luminal B: ER+/HER2+ or PR+/HER2+; HER2: ER-/PR-/HER2+; triple-negative: ER-/PR-/HER2-. The association of tumor phenotype and breast density was assessed using a multinomial logistic regression model. Adjusted odds ratios (OR) and 95% confidence intervals (95% CI) were calculated. All statistical tests were two-sided. Results Forty-eight percent of interval cancers were true interval cancers and 23.6% false-negatives. True interval cancers were associated with HER2 and triple-negative phenotypes (OR = 1.91 (95% CI:1.22-2.96), OR = 2.07 (95% CI:1.42-3.01), respectively) and extremely dense breasts (>75%) (OR = 1.67 (95% CI:1.08-2.56)). However, among true interval cancers a higher proportion of triple-negative tumors was observed in predominantly fatty breasts (breasts (28.7%, 21.4%, 11.3% and 14.3%, respectively; screening-detected cancers, extreme breast density
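
    The adjusted odds ratios above come from a multinomial logistic regression, but the crude form of such an estimate, an odds ratio with a 95% confidence interval from a 2×2 table, is straightforward to compute. A minimal sketch in Python with hypothetical counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    return (or_,
            math.exp(math.log(or_) - z * se),
            math.exp(math.log(or_) + z * se))

# hypothetical counts: triple-negative phenotype among true interval
# cancers vs screening-detected cancers
or_, low, high = odds_ratio_ci(60, 395, 90, 1207)
```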

  3. Metacognition and Confidence: Comparing Math to Other Academic Subjects

    Directory of Open Access Journals (Sweden)

    Shanna Erickson

    2015-06-01

    Full Text Available Two studies addressed student metacognition in math, measuring confidence accuracy about math performance. Underconfidence would be expected in light of pervasive math anxiety. However, one might alternatively expect overconfidence based on previous results showing overconfidence in other subject domains. Metacognitive judgments and performance were assessed for biology, literature, and mathematics tests. In Study 1, high school students took three different tests and provided estimates of their performance both before and after taking each test. In Study 2, undergraduates similarly took three shortened SAT II Subject Tests. Students were overconfident in predicting math performance, indeed showing greater overconfidence compared to other academic subjects. It appears that both overconfidence and anxiety can adversely affect metacognitive ability and can lead to math avoidance. The results have implications for educational practice and other environments that require extensive use of math.

  4. Test of a mosquito eggshell isolation method and subsampling procedure.

    Science.gov (United States)

    Turner, P A; Streever, W J

    1997-03-01

    Production of Aedes vigilax, the common salt-marsh mosquito, can be assessed by determining eggshell densities found in soil. In this study, 14 field-collected eggshell samples were used to test a subsampling technique and to compare eggshell counts obtained with a flotation method to those obtained by direct examination of sediment (DES). Relative precision of the subsampling technique was assessed by determining the minimum number of subsamples required to estimate the true mean and confidence interval of a sample at a predetermined confidence level. A regression line fitted to cube-root transformed eggshell counts from the flotation method and from DES was statistically significant, indicating that flotation recovers a predictable proportion of the eggshells present. Eggshell counts obtained with the flotation method can be used to predict DES counts using the following equation: DES count = [1.386 x (flotation count)^0.33 - 0.01]^3.
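The reported conversion equation can be applied directly; a minimal sketch (the function name and the clamping of negative values at zero are illustrative choices, not from the paper):

```python
def des_from_flotation(flotation_count: float) -> float:
    """Estimate the direct-examination (DES) eggshell count from a flotation
    count, using the regression reported in the abstract:
        DES = [1.386 * (flotation count)^0.33 - 0.01]^3
    Clamping at zero is an illustrative choice: counts cannot be negative."""
    des = (1.386 * flotation_count ** 0.33 - 0.01) ** 3
    return max(des, 0.0)

# A flotation count of zero maps to (essentially) zero eggshells.
print(des_from_flotation(0))            # 0.0
print(round(des_from_flotation(50), 1))
```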

  5. Identification of atrial fibrillation using electrocardiographic RR-interval difference

    Science.gov (United States)

    Eliana, M.; Nuryani, N.

    2017-11-01

    Automated detection of atrial fibrillation (AF) is an interesting topic. AF is very dangerous: not only is it a trigger of embolic stroke, it is also related to other chronic diseases. In this study, we detect the presence of AF by quantifying the irregularity of RR-intervals. The series of RR-intervals is divided into segments of 10 intervals, and within each segment every interval is compared with every other interval. A threshold separates low from high differences (δ), and a segment is labeled AF or normal sinus rhythm according to whether the number of high differences exceeds a tolerance (β). Testing this method on records of 23 patients from the MIT-BIH database, we obtain accuracy, sensitivity, and specificity of 84.98%, 91.99%, and 77.85%, respectively.
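The segment-wise interval comparison described above can be sketched as follows; the threshold δ and tolerance β values here are illustrative placeholders, not the ones tuned in the study:

```python
from itertools import combinations

def classify_segment(rr_intervals, delta=0.1, beta=0.5):
    """Label a segment of RR intervals (in seconds) as 'AF' or 'NSR'.

    Every pair of intervals in the segment is compared; a pair whose absolute
    difference exceeds `delta` counts as a high difference.  If the fraction
    of high-difference pairs exceeds the tolerance `beta`, the segment is
    called AF.  delta and beta are illustrative, not the study's values."""
    pairs = list(combinations(rr_intervals, 2))
    high = sum(1 for a, b in pairs if abs(a - b) > delta)
    return "AF" if high / len(pairs) > beta else "NSR"

regular = [0.80, 0.82, 0.81, 0.79, 0.80, 0.81, 0.80, 0.82, 0.79, 0.81]
irregular = [0.60, 0.95, 0.72, 1.10, 0.55, 0.88, 0.64, 1.02, 0.70, 0.93]
print(classify_segment(regular))    # NSR
print(classify_segment(irregular))  # AF
```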

  6. Practice effects and test-re-test reliability of the Five Digit Test in patients with stroke over four serial assessments.

    Science.gov (United States)

    Chiu, En-Chi; Koh, Chia-Lin; Tsai, Chia-Yin; Lu, Wen-Shian; Sheu, Ching-Fan; Hsueh, I-Ping; Hsieh, Ching-Lin

    2014-01-01

    To investigate the practice effects and test-re-test reliability of the Five Digit Test (FDT) over four serial assessments in patients with stroke. Single-group repeated measures design. Twenty-five patients with stroke were administered the FDT in four consecutive assessments, 2 weeks apart. The FDT contains four parts with five indices: 'basic measures of attention and processing speed', 'selective attention', 'alternating attention', 'ability of inhibition' and 'ability of switching'. The five indices of the FDT showed trivial-to-small practice effects (Cohen's d = 0.03-0.47) and moderate-to-excellent test-re-test reliability (intra-class correlation coefficient = 0.59-0.97). Practice effects of the five indices all appeared cumulative, but one index, 'basic measures of attention and processing speed', reached a plateau after the second assessment. The minimum and maximum values of the 90% confidence interval (CI) of the reliable change index modified for practice (RCIp) for this index were [-17.6, 11.2]. Because one of the five indices of the FDT reached a plateau, the minimum and maximum values of its 90% CI RCIp are useful for determining whether the change in an individual's score is real. However, clinicians and researchers should be cautious when interpreting results of the other four indices over repeated assessments.
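A reliable change index modified for practice of the kind reported above is commonly computed from the standard error of measurement and the mean practice effect (a standard Chelune-style formulation; the FDT-specific values are not reproduced here, so all numbers below are illustrative):

```python
import math

def rci_practice(score1, score2, sem, practice_effect):
    """Reliable change index modified for practice: the observed change,
    corrected for the mean practice effect, divided by the standard error
    of the difference score.  |RCI| above ~1.645 falls outside a 90%
    confidence band, suggesting real change beyond practice."""
    se_diff = math.sqrt(2) * sem  # SE of the difference between two scores
    return (score2 - score1 - practice_effect) / se_diff

# Illustrative numbers only (not FDT values).
print(round(rci_practice(100, 92, sem=4.0, practice_effect=-2.0), 2))  # -1.06
```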

  7. Organic labelling systems and consumer confidence

    OpenAIRE

    Sønderskov, Kim Mannemar; Daugbjerg, Carsten

    2009-01-01

    A research analysis suggests that a state certification and labelling system creates confidence in organic labelling systems and, consequently, promotes green consumerism. Danish consumers have higher levels of confidence in the labelling system than consumers in countries where the state plays a minor role in labelling and certification.

  8. Effects of circular gait training on balance, balance confidence in patients with stroke: a pilot study.

    Science.gov (United States)

    Park, Shin-Kyu; Kim, Sung-Jin; Yoon, Tak Yong; Lee, Suk-Min

    2018-05-01

    [Purpose] This study aimed to investigate the effects of circular gait training on balance and balance confidence in patients with stroke. [Subjects and Methods] Fifteen patients with stroke were randomly divided into either the circular gait training (CGT) group (n=8) or the straight gait training (SGT) group (n=7). Both groups had conventional therapy that adhered to the neurodevelopmental treatment (NDT) approach, for 30 min. In addition, the CGT group performed circular gait training, and the SGT group practiced straight gait training for 30 min. Each intervention was applied for 1 h, 5 days a week, for 2 weeks. Berg Balance Scale (BBS), Timed Up and Go (TUG) test, and Activities-specific Balance Confidence (ABC) scale were used to test balance and balance confidence. [Results] After the intervention, both groups showed significant increases in balance and balance confidence. Significant improvements in the balance of the CGT group compared with the SGT group were observed at post-assessment. [Conclusion] This study showed that circular gait training significantly improves balance in patients with stroke.

  9. Prognostic durability of liver fibrosis tests and improvement in predictive performance for mortality by combining tests.

    Science.gov (United States)

    Bertrais, Sandrine; Boursier, Jérôme; Ducancelle, Alexandra; Oberti, Frédéric; Fouchard-Hubert, Isabelle; Moal, Valérie; Calès, Paul

    2017-06-01

    There is currently no recommended time interval between noninvasive fibrosis measurements for monitoring chronic liver diseases. We determined how long a single liver fibrosis evaluation may accurately predict mortality, and assessed whether combining tests improves prognostic performance. We included 1559 patients with chronic liver disease and available baseline liver stiffness measurement (LSM) by Fibroscan, aspartate aminotransferase to platelet ratio index (APRI), FIB-4, Hepascore, and FibroMeter V2G. Median follow-up was 2.8 years, during which 262 (16.8%) patients died, 115 of liver-related causes. All fibrosis tests were able to predict mortality, although APRI (and FIB-4 for liver-related mortality) showed lower overall discriminative ability (Harrell's C-index) than the other tests. The prognostic durability of a baseline measurement was shorter at higher fibrosis levels (1 year in patients with significant fibrosis). Patients were divided into training and testing sets. In the training set, blood tests and LSM were independent predictors of all-cause mortality. The best-fit multivariate model included age, sex, LSM, and FibroMeter V2G, with C-index = 0.834 (95% confidence interval, 0.803-0.862). The prognostic model for liver-related mortality included the same covariates with C-index = 0.868 (0.831-0.902). In the testing set, the multivariate models had higher prognostic accuracy than FibroMeter V2G or LSM alone for all-cause mortality, and than FibroMeter V2G alone for liver-related mortality. The prognostic durability of a single baseline fibrosis evaluation depends on the liver fibrosis level. Combining LSM with a blood fibrosis test improves mortality risk assessment. © 2016 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

  10. Modified Dempster-Shafer approach using an expected utility interval decision rule

    Science.gov (United States)

    Cheaito, Ali; Lecours, Michael; Bosse, Eloi

    1999-03-01

    The combination operation of the conventional Dempster-Shafer algorithm has a tendency to increase the number of propositions involved in bodies of evidence exponentially by creating new ones. The aim of this paper is to explore a 'modified Dempster-Shafer' approach to fusing identity declarations emanating from different sources, which include a number of radars, IFF and ESM systems, in order to limit the explosion of the number of propositions. We use a non-ad hoc decision rule based on the expected utility interval to select the most probable object in a comprehensive Platform Data Base containing all the possible identity values that a potential target may take. We study the effect of redistributing the confidence levels of the eliminated propositions, which would otherwise overload the real-time data fusion system; these eliminated confidence levels can in particular be assigned to ignorance, or uniformly added to the remaining propositions and to ignorance. A scenario has been selected to demonstrate the performance of our modified Dempster-Shafer method of evidential reasoning.

  11. Constrained optimization of test intervals using a steady-state genetic algorithm

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Sanchez, A.; Serradell, V.

    2000-01-01

    There is a growing interest from both the regulatory authorities and the nuclear industry in stimulating the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource versus risk-control effectiveness principles formally enter into such optimization problems. This paper presents an approach for using PRA models to conduct the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or burden is minimized while the risk or performance is constrained to a given level, or vice versa. The paper begins with the problem formulation, where the objective function and constraints that apply in the constrained optimization of TIs, based on risk and cost models at the system level, are derived. Next, the foundation of the optimizer is given, derived by customizing an SSGA to allow optimizing TIs under constraints. A case study is also performed using this approach, which shows the benefits of adopting both PRA models and genetic algorithms, in particular for the constrained optimization of TIs; a great benefit is also expected from using this approach to solve other engineering optimization problems. However, as concluded in this paper, care must be taken when using genetic algorithms in constrained optimization problems.
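A steady-state GA for this kind of constrained test-interval optimization can be sketched as follows; the unavailability model, cost figures, and GA settings are illustrative assumptions, not the paper's:

```python
import random

random.seed(1)

# Illustrative standby-system model (not the paper's parameters).
LAM, RHO = 1e-4, 1e-3   # standby failure rate (/h), per-demand unavailability
U_LIMIT = 5e-3          # risk constraint on mean unavailability
YEARLY = 8760.0         # hours per year

def unavailability(ti):  # mean unavailability for test interval ti (hours)
    return RHO + 0.5 * LAM * ti

def fitness(ti):  # yearly test cost, heavily penalized if constraint broken
    cost = 10.0 * YEARLY / ti
    return cost + 1e6 * max(0.0, unavailability(ti) - U_LIMIT)

# Steady-state GA: each step breeds one child that replaces the worst member.
pop = [random.uniform(24.0, YEARLY) for _ in range(30)]
for _ in range(3000):
    p1, p2 = (min(random.sample(pop, 3), key=fitness) for _ in range(2))
    child = 0.5 * (p1 + p2) + random.gauss(0.0, 50.0)  # blend + mutation
    child = min(max(child, 24.0), YEARLY)
    worst = max(range(len(pop)), key=lambda i: fitness(pop[i]))
    if fitness(child) < fitness(pop[worst]):
        pop[worst] = child

best = min(pop, key=fitness)
print(round(best, 1))  # for these parameters the analytic optimum is 80 h
```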

  12. Consumer confidence or the business cycle

    DEFF Research Database (Denmark)

    Møller, Stig Vinther; Nørholm, Henrik; Rangvid, Jesper

    2014-01-01

    Answer: The business cycle. We show that consumer confidence and the output gap both predict excess returns on stocks in many European countries: When the output gap is positive (the economy is doing well), expected returns are low, and when consumer confidence is high, expected returns are also low...

  13. Secure and Usable Bio-Passwords based on Confidence Interval

    OpenAIRE

    Aeyoung Kim; Geunshik Han; Seung-Hyun Seo

    2017-01-01

    The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor, something you know, with the second factor being something you have. However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and log in. In this paper a bio-password-based scheme is proposed as a unique authentication…

  14. Communication confidence in persons with aphasia.

    Science.gov (United States)

    Babbitt, Edna M; Cherney, Leora R

    2010-01-01

    Communication confidence is a construct that has not been explored in the aphasia literature. Recently, national and international organizations have endorsed broader assessment methods that address quality of life and include participation, activity, and impairment domains as well as psychosocial areas. Individuals with aphasia encounter difficulties in all these areas on a daily basis in living with a communication disorder. Improvements are often reflected in narratives that are not typically included in standard assessments. This article illustrates how a new instrument measuring communication confidence might fit into a broad assessment framework and discusses the interaction of communication confidence, autonomy, and self-determination for individuals living with aphasia.

  15. Diagnostic interval and mortality in colorectal cancer

    DEFF Research Database (Denmark)

    Tørring, Marie Louise; Frydenberg, Morten; Hamilton, William

    2012-01-01

    Objective To test the theory of a U-shaped association between time from the first presentation of symptoms in primary care to the diagnosis (the diagnostic interval) and mortality after diagnosis of colorectal cancer (CRC). Study Design and Setting Three population-based studies in Denmark...

  16. The time interval distribution of sand–dust storms in theory: testing with observational data for Yanchi, China

    International Nuclear Information System (INIS)

    Liu, Guoliang; Zhang, Feng; Hao, Lizhen

    2012-01-01

    We previously introduced a time record model for use in studying the duration of sand–dust storms. In the model, X is the normalized wind speed and Xr is the normalized wind speed threshold for the sand–dust storm. X is represented by a random signal with a normal Gaussian distribution. The storms occur when X ≥ Xr. From this model, the time interval distribution of N = Aexp(−bt) can be deduced, wherein N is the number of time intervals with length greater than t, A and b are constants, and b is related to Xr. In this study, sand–dust storm data recorded in spring at the Yanchi meteorological station in China were analysed to verify whether the time interval distribution of the sand–dust storms agrees with the above time interval distribution. We found that the distribution of the time interval between successive sand–dust storms in April agrees well with the above exponential equation. However, the interval distribution for the sand–dust storm data for the entire spring period displayed a better fit to the Weibull equation and depended on the variation of the sand–dust storm threshold wind speed. (paper)
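Fitting the N = A·exp(−bt) interval distribution reduces to linear least squares on ln(N); a sketch on synthetic counts (the data below are generated for illustration, not Yanchi observations):

```python
import math

# Illustrative data only: number N of intervals longer than t (days),
# shaped like the model N = A * exp(-b * t) with A = 60, b = 0.25.
t = [1, 2, 4, 6, 8, 10, 15, 20]
N = [60 * math.exp(-0.25 * x) for x in t]

# Linear least squares on ln(N) = ln(A) - b*t recovers the parameters.
n = len(t)
sx, sy = sum(t), sum(math.log(v) for v in N)
sxx = sum(x * x for x in t)
sxy = sum(x * math.log(v) for x, v in zip(t, N))
b = -(n * sxy - sx * sy) / (n * sxx - sx * sx)
A = math.exp((sy + b * sx) / n)
print(round(A, 2), round(b, 3))  # recovers A = 60.0, b = 0.25
```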

  17. TEST-RETEST RELIABILITY OF THE CLOSED KINETIC CHAIN UPPER EXTREMITY STABILITY TEST (CKCUEST) IN ADOLESCENTS: RELIABILITY OF CKCUEST IN ADOLESCENTS.

    Science.gov (United States)

    de Oliveira, Valéria M A; Pitangui, Ana C R; Nascimento, Vinícius Y S; da Silva, Hítalo A; Dos Passos, Muana H P; de Araújo, Rodrigo C

    2017-02-01

    The Closed Kinetic Chain Upper Extremity Stability Test (CKCUEST) has been proposed as an option to assess upper limb function and stability; however, there are few studies that support the use of this test in adolescents. The purpose of the present study was to investigate the intersession reliability and agreement of three CKCUEST scores in adolescents and to establish clinimetric values for this test. Test-retest reliability. Twenty-five healthy adolescents of both sexes were evaluated. The subjects performed two CKCUESTs with an interval of one week between the tests. An intraclass correlation coefficient (ICC(3,3)) two-way mixed model with a 95% confidence interval was utilized to determine intersession reliability. A Bland-Altman graph was plotted to analyze the agreement between assessments. The presence of systematic error was evaluated by a one-sample t test. The difference between the evaluation and reevaluation was observed using a paired-sample t test. The level of significance was set at 0.05. Standard errors of measurement and minimum detectable changes were calculated. The intersession reliability of the average touches score, normalized score, and power score was 0.68, 0.68 and 0.87, the standard errors of measurement were 2.17, 1.35 and 6.49, and the minimal detectable changes were 6.01, 3.74 and 17.98, respectively. The CKCUEST proved to be a test with moderate to excellent reliability when used with adolescents. Level of evidence: 2b.
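The reported minimal detectable changes follow from the standard errors of measurement via the usual formula MDC95 = 1.96·√2·SEM, which can be checked directly:

```python
import math

def mdc95(sem):
    """Minimal detectable change at 95% confidence from the standard
    error of measurement: MDC95 = 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2) * sem

# SEMs reported for the three CKCUEST scores in the abstract; the results
# match the reported MDCs of 6.01, 3.74 and (to rounding) 17.98.
for sem in (2.17, 1.35, 6.49):
    print(round(mdc95(sem), 2))
```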

  18. A Sensitivity Study of Human Errors in Optimizing Surveillance Test Interval (STI) and Allowed Outage Time (AOT) of Standby Safety System

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Shin, Won Ky; You, Young Woo; Yang, Hui Chang

    1998-01-01

    In most cases, the surveillance test intervals (STIs), allowed outage times (AOTs) and testing strategies of safety components in a nuclear power plant are prescribed in the plant technical specifications. In general, it is required that a standby safety system be redundant (i.e., composed of multiple components), and these components are tested using either a staggered or a sequential test strategy. In this study, a linear model is presented to incorporate the effects of test-associated human errors into the evaluation of unavailability. The average unavailabilities of 1/4 and 2/4 redundant systems are computed considering human error and testing strategy. The adverse effects of testing on system unavailability, such as component wear and test-induced transients, have been modeled. The final outcome of this study is the optimized human error domain obtained from a 3-D human error sensitivity analysis by selecting finely classified segments. The results of the sensitivity analysis show that the STI and AOT can be optimized provided the human error probability is maintained within an allowable range. (authors)

  19. Some aspects of the medical-demographic situation in the regions adjacent to the former Semipalatinsk test site

    International Nuclear Information System (INIS)

    Slazhneva, T.I.; Korchevskij, A.A.; Tret'yakova, S.N.; Pozdnyakova, A.P.

    1993-01-01

    Data on mortality and average future life span (AFLS) were analysed. The data were divided into sex and age groups for Pavlodar region (Kazakhstan) for 1970, 1979 and 1989, and compared with Semipalatinsk region (Kazakhstan) and the former Soviet Union. Peculiarities of demographic index dynamics over the last decades were discovered: a downfall of the average life span of the population from 1970 to 1979 with a subsequent increase by 1989. In Semipalatinsk region the AFLS of men decreased by 2.19 years and of women by 1.24 years; in Pavlodar region the AFLS of men decreased by 3.87 years and of women by 4.3 years. A relative compensation of this effect was observed by 1989: from 1979 to 1989 the AFLS of Pavlodar region men increased by 2.93 years and of women by 1.83 years. Similar oscillations were observed for all age groups. Special attention is drawn to infant mortality dynamics in the regions adjacent to the Semipalatinsk test site. A radical rise in infant mortality in the 1970-1983 period led to exceptionally unfavourable indexes (71.9 per 1,000 born in 1975). The analysis confirmed the value of demographic indexes as integral characteristics of population health levels and of the rate of ecological equilibrium in the regions

  20. Indirect methods for reference interval determination - review and recommendations.

    Science.gov (United States)

    Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim

    2018-04-19

    Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.

  1. Comprehensive Plan for Public Confidence in Nuclear Regulator

    International Nuclear Information System (INIS)

    Choi, Kwang Sik; Choi, Young Sung; Kim, Ho ki

    2008-01-01

    Public confidence in nuclear regulators has been discussed internationally. Public trust or confidence is needed to achieve the regulatory goal of assuring nuclear safety to a level acceptable to the public, or of providing public ease about nuclear safety. In Korea, public ease or public confidence has been suggested as a major policy goal in the annually announced 'Nuclear regulatory policy direction'. This paper reviews the theory of trust and its definitions, and defines the elements of public trust or confidence in nuclear safety regulation developed in the studies conducted so far. The public ease model developed and 10 measures for ensuring public confidence are also presented, and future study directions are suggested

  2. Simultaneous confidence bands for the integrated hazard function

    OpenAIRE

    Dudek, Anna; Gocwin, Maciej; Leskow, Jacek

    2006-01-01

    The construction of simultaneous confidence bands for the integrated hazard function is considered. The Nelson--Aalen estimator is used. Simultaneous confidence bands based on bootstrap methods are presented, and two methods for constructing such confidence bands are proposed. The weird bootstrap method is used for resampling. Simulations are made to compare the actual coverage probability of the bootstrap and the asymptotic simultaneous confidence bands. It is shown that the equal--tailed…

  3. Increasing the reliability of the fluid/crystallized difference score from the Kaufman Adolescent and Adult Intelligence Test with reliable component analysis.

    Science.gov (United States)

    Caruso, J C

    2001-06-01

    The unreliability of difference scores is a well documented phenomenon in the social sciences and has led researchers and practitioners to interpret differences cautiously, if at all. In the case of the Kaufman Adolescent and Adult Intelligence Test (KAIT), the unreliability of the difference between the Fluid IQ and the Crystallized IQ is due to the high correlation between the two scales. The consequences of the lack of precision with which differences are identified are wide confidence intervals and low-powered significance tests (i.e., large differences are required to be declared statistically significant). Reliable component analysis (RCA) was performed on the subtests of the KAIT in order to address these problems. RCA is a new data reduction technique that results in uncorrelated component scores with maximum proportions of reliable variance. Results indicate that the scores defined by RCA have discriminant and convergent validity (with respect to the equally weighted scores) and that differences between the scores, derived from a single testing session, were more reliable than differences derived from equal weighting for each age group (11-14 years, 15-34 years, 35-85+ years). This reliability advantage results in narrower confidence intervals around difference scores and smaller differences required for statistical significance.
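The unreliability of a difference score between two highly correlated scales follows from the classical formula; a sketch with hypothetical reliabilities (the KAIT's actual coefficients are not reproduced here):

```python
def difference_reliability(r11, r22, r12):
    """Classical reliability of a difference score between two scales with
    reliabilities r11, r22 and intercorrelation r12.  Illustrates the KAIT
    problem: the higher r12 is, the less reliable the Fluid - Crystallized
    difference becomes, even when both scales are very reliable."""
    return ((r11 + r22) / 2 - r12) / (1 - r12)

# Hypothetical values: two highly reliable but highly correlated scales.
print(round(difference_reliability(0.95, 0.95, 0.80), 2))  # 0.75
print(round(difference_reliability(0.95, 0.95, 0.90), 2))  # 0.5
```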

  4. CONSEL: for assessing the confidence of phylogenetic tree selection.

    Science.gov (United States)

    Shimodaira, H; Hasegawa, M

    2001-12-01

    CONSEL is a program to assess the confidence of tree selection by giving the p-values for the trees. The main thrust of the program is to calculate the p-value of the Approximately Unbiased (AU) test using the multi-scale bootstrap technique. This p-value is less biased than other conventional p-values such as the Bootstrap Probability (BP), the Kishino-Hasegawa (KH) test, the Shimodaira-Hasegawa (SH) test, and the Weighted Shimodaira-Hasegawa (WSH) test. CONSEL calculates all these p-values from the output of phylogeny program packages such as Molphy, PAML, and PAUP*. Furthermore, CONSEL is applicable to a wide class of problems where the BPs are available. The programs are written in the C language. The source code for Unix and the executable binary for DOS can be found at http://www.ism.ac.jp/~shimo/. Contact: shimo@ism.ac.jp

  5. Learned Interval Time Facilitates Associate Memory Retrieval

    Science.gov (United States)

    van de Ven, Vincent; Kochs, Sarah; Smulders, Fren; De Weerd, Peter

    2017-01-01

    The extent to which time is represented in memory remains underinvestigated. We designed a time paired associate task (TPAT) in which participants implicitly learned cue-time-target associations between cue-target pairs and specific cue-target intervals. During subsequent memory testing, participants showed increased accuracy of identifying…

  6. Extended score interval in the assessment of basic surgical skills.

    Science.gov (United States)

    Acosta, Stefan; Sevonius, Dan; Beckman, Anders

    2015-01-01

    The Basic Surgical Skills course uses an assessment score interval of 0-3. An extended score interval, 1-6, was proposed by the Swedish steering committee of the course. The aim of this study was to analyze trainee scores in the current 0-3 scored version compared to the proposed 1-6 scored version. Sixteen participants, seven females and nine males, were evaluated with the current and proposed assessment forms by instructors, observers, and the learners themselves during the first and second day. In each assessment form, 17 tasks were assessed. The inter-rater reliability between the current and proposed score sheets was evaluated with intraclass correlation (ICC) with 95% confidence intervals (CI). The distribution of scores for 'knot tying' at the last time point and 'bowel anastomosis side to side' given by the instructors in the current assessment form showed that the highest score was given in 31 and 62% of cases, respectively. No ceiling effects were found in the proposed assessment form. The overall ICC between the current and proposed score sheets after assessment by the instructors increased from 0.38 (95% CI 0.77-0.78) on Day 1 to 0.83 (95% CI 0.51-0.94) on Day 2. A clear ceiling effect of scores was demonstrated in the current assessment form, questioning its validity. The proposed score sheet provides more accurate scores and seems to be a better feedback instrument for learning technical surgical skills in the Basic Surgical Skills course.

  7. Dead certain: confidence and conservatism predict aggression in simulated international crisis decision-making.

    Science.gov (United States)

    Johnson, Dominic D P; McDermott, Rose; Cowden, Jon; Tingley, Dustin

    2012-03-01

    Evolutionary psychologists have suggested that confidence and conservatism promoted aggression in our ancestral past, and that this may have been an adaptive strategy given the prevailing costs and benefits of conflict. However, in modern environments, where the costs and benefits of conflict can be very different owing to the involvement of mass armies, sophisticated technology, and remote leadership, evolved tendencies toward high levels of confidence and conservatism may continue to be a contributory cause of aggression despite leading to greater costs and fewer benefits. The purpose of this paper is to test whether confidence and conservatism are indeed associated with greater levels of aggression-in an explicitly political domain. We present the results of an experiment examining people's levels of aggression in response to hypothetical international crises (a hostage crisis, a counter-insurgency campaign, and a coup). Levels of aggression (which range from concession to negotiation to military attack) were significantly predicted by subjects' (1) confidence that their chosen policy would succeed, (2) score on a liberal-conservative scale, (3) political party affiliation, and (4) preference for the use of military force in real-world U.S. policy toward Iraq and Iran. We discuss the possible adaptive and maladaptive implications of confidence and conservatism for the prospects of war and peace in the modern world.

  8. Maternal Confidence for Physiologic Childbirth: A Concept Analysis.

    Science.gov (United States)

    Neerland, Carrie E

    2018-06-06

    Confidence is a term often used in research literature and consumer media in relation to birth, but maternal confidence has not been clearly defined, especially as it relates to physiologic labor and birth. The aim of this concept analysis was to define maternal confidence in the context of physiologic labor and childbirth. Rodgers' evolutionary method was used to identify attributes, antecedents, and consequences of maternal confidence for physiologic birth. Databases searched included Ovid MEDLINE, CINAHL, PsycINFO, and Sociological Abstracts from the years 1995 to 2015. A total of 505 articles were retrieved, using the search terms pregnancy, obstetric care, prenatal care, and self-efficacy and the keyword confidence. Articles were identified for in-depth review and inclusion based on whether the term confidence was used or assessed in relationship to labor and/or birth. In addition, a hand search of the reference lists of the selected articles was performed. Twenty-four articles were reviewed in this concept analysis. We define maternal confidence for physiologic birth as a woman's belief that physiologic birth can be achieved, based on her view of birth as a normal process and her belief in her body's innate ability to birth, which is supported by social support, knowledge, and information founded on a trusted relationship with a maternity care provider in an environment where the woman feels safe. This concept analysis advances the concept of maternal confidence for physiologic birth and provides new insight into how women's confidence for physiologic birth might be enhanced during the prenatal period. Further investigation of confidence for physiologic birth across different cultures is needed to identify cultural differences in constructions of the concept. © 2018 by the American College of Nurse-Midwives.

  9. Reducing public communication apprehension by boosting self confidence on communication competence

    Directory of Open Access Journals (Sweden)

    Eva Rachmi

    2012-07-01

    A medical doctor should be competent in communicating with others. Some students at the medical faculty Universitas Mulawarman tend to be silent during public communication training, and this is thought to be influenced by communication anxiety. This study aimed to analyze whether self-confidence in communication competence and communication skills are risk factors for communication apprehension. Methods: This study was conducted on 55 students at the medical faculty Universitas Mulawarman. Public communication apprehension was measured using the Personal Report of Communication Apprehension (PRCA-24). Confidence in communication competence was determined by the Self Perceived Communication Competence scale (SPCC). Communication skills were based on the instructor's score during the communication training program. Data were analyzed by linear regression to identify dominant factors using STATA 9.0. Results: The study showed a negative association between public communication apprehension and students' self-confidence in communication competence [regression coefficient (CR)=-0.13; p=0.000; 95% confidence interval (CI)=-0.20; -0.52]. However, it was not related to communication skills (p=0.936). Among twelve traits of self-confidence in communication competence, students who had the confidence to talk to a group of strangers had lower public communication apprehension (adjusted CR=-0.13; CI=-0.21; 0.05; p=0.002). Conclusions: Increased confidence in their communication competence will reduce the degree of public communication apprehension among students. Therefore, the faculty should provide more opportunities for students to practice public communication, in particular talking to a group of strangers more frequently. (Health Science Indones 2010; 1: 37-42)

  10. Self-Confidence in the Hospitality Industry

    Directory of Open Access Journals (Sweden)

    Michael Oshins

    2014-02-01

    Full Text Available Few industries rely on self-confidence to the extent that the hospitality industry does, because guests must feel welcome and in capable hands. This article examines the results of hundreds of student interviews with industry professionals at all levels to determine where people across the hospitality industry get their self-confidence.

  11. The Institution of Advertising: Predictors of Cross-National Differences in Consumer Confidence.

    Science.gov (United States)

    Zinkhan, George M.; Balazs, Anne L.

    1998-01-01

    Contributes to scholarship on advertising and cross-cultural studies by exploring cultural factors affecting customer confidence in advertising. Uses a sample of 16 European nations to test G. Hofstede's theory of cross-national values. Finds that Hofstede's dimensions of uncertainty avoidance, masculinity, and individualism are important…

  12. ADAM SMITH: THE INVISIBLE HAND OR CONFIDENCE

    Directory of Open Access Journals (Sweden)

    Fernando Luis, Gache

    2010-01-01

    Full Text Available In 1776 Adam Smith proposed that an invisible hand moved markets toward efficiency. In this paper we raise the hypothesis that this invisible hand is, in fact, the confidence each person feels when doing business. This confidence is unique to each individual, differing from the confidence of others, and it is a nonlinear variable tied essentially to each person's history. Taking as our basis the paper by Leopoldo Abadía (2009) on the financial crisis of 2007-2008, we examine how confidence operates. The contribution we hope to make with this paper is to emphasize that the level of confidence of the different actors is what really moves the markets (and therefore the economy), and that the subprime mortgage crisis is a confidence crisis at the worldwide level.

  13. Word Memory Test Predicts Recovery in Claimants With Work-Related Head Injury.

    Science.gov (United States)

    Colangelo, Annette; Abada, Abigail; Haws, Calvin; Park, Joanne; Niemeläinen, Riikka; Gross, Douglas P

    2016-05-01

    To investigate the predictive validity of the Word Memory Test (WMT), a verbal memory neuropsychological test developed as a performance validity measure to assess memory, effort, and performance consistency. Cohort study with 1-year follow-up. Workers' compensation rehabilitation facility. Participants included workers' compensation claimants with work-related head injury (N=188; mean age, 44y; 161 men [85.6%]). Not applicable. Outcome measures for determining predictive validity included days to suspension of wage replacement benefits during the 1-year follow-up and work status at discharge in claimants undergoing rehabilitation. Analysis included multivariable Cox and logistic regression. Better WMT performance was significantly but weakly correlated with younger age (r=-.30), documented brain abnormality (r=.28), and loss of consciousness at the time of injury (r=.25). Claimants with documented brain abnormalities on diagnostic imaging scans performed better (∼9%) on the WMT than those without brain abnormalities. The WMT predicted days receiving benefits (adjusted hazard ratio, 1.13; 95% confidence interval, 1.04-1.24) and work status outcome at program discharge (adjusted odds ratio, 1.62; 95% confidence interval, 1.13-2.34). Our results provide evidence for the predictive validity of the WMT in workers' compensation claimants. Younger claimants and those with more severe brain injuries performed better on the WMT. It may be that financial incentives or other factors related to the compensation claim affected the performance. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
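
    The adjusted hazard and odds ratios above are reported with 95% confidence intervals of the usual Wald type, computed on the log scale. As a hedged illustration (the coefficient and standard error below are invented numbers, not values from the study), such an interval can be recovered as:

    ```python
    import math

    def wald_ci_ratio(beta, se, z=1.959964):
        """95% Wald confidence interval for a ratio measure (hazard or odds
        ratio) whose coefficient beta was estimated on the log scale."""
        return (math.exp(beta - z * se), math.exp(beta + z * se))

    # Illustrative values only: a log-hazard coefficient and its standard error.
    beta, se = math.log(1.13), 0.045
    lo, hi = wald_ci_ratio(beta, se)
    print(f"HR = {math.exp(beta):.2f}, 95% CI {lo:.2f}-{hi:.2f}")
    ```

    Exponentiating the endpoints guarantees a positive interval, which is why published hazard-ratio intervals are asymmetric around the point estimate.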

  14. Time Interval to Initiation of Contraceptive Methods Following ...

    African Journals Online (AJOL)

    2018-01-30

    Jan 30, 2018 ... interval between a woman's last childbirth and the initiation of contraception. Materials and ..... DF=Degree of freedom; χ2=Chi‑square test ..... practice of modern contraception among single women in a rural and urban ...

  15. The significance test controversy revisited the fiducial Bayesian alternative

    CERN Document Server

    Lecoutre, Bruno

    2014-01-01

    The purpose of this book is not only to revisit the “significance test controversy,” but also to provide a conceptually sounder alternative. As such, it presents a Bayesian framework for a new approach to analyzing and interpreting experimental data. It also prepares students and researchers for reporting on experimental results. Normative aspects: The main views of statistical tests are revisited and the philosophies of Fisher, Neyman-Pearson and Jeffreys are discussed in detail. Descriptive aspects: The misuses of Null Hypothesis Significance Tests are reconsidered in light of Jeffreys’ Bayesian conceptions concerning the role of statistical inference in experimental investigations. Prescriptive aspects: The current effect size and confidence interval reporting practices are presented and seriously questioned. Methodological aspects are carefully discussed and fiducial Bayesian methods are proposed as a more suitable alternative for reporting on experimental results. In closing, basic routine procedures...

  16. Bayes Factor Approaches for Testing Interval Null Hypotheses

    NARCIS (Netherlands)

    Morey, Richard D.; Rouder, Jeffrey N.

    2011-01-01

    Psychological theories are statements of constraint. The role of hypothesis testing in psychology is to test whether specific theoretical constraints hold in data. Bayesian statistics is well suited to the task of finding supporting evidence for constraint, because it allows for comparing evidence
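
    One common form of an interval-null Bayes factor compares the posterior odds of the interval to its prior odds. The sketch below uses a normal-normal conjugate setup with known sigma; this setup and all numbers are illustrative assumptions for exposition, not the authors' exact model:

    ```python
    import math

    def norm_cdf(x, mu=0.0, sd=1.0):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

    def interval_bf(xbar, n, sigma, prior_sd, lo, hi):
        """Bayes factor for H0: theta in [lo, hi] vs H1: theta outside it,
        under a normal(0, prior_sd) prior and a known-sigma normal likelihood."""
        # Conjugate posterior for theta given the sample mean
        post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
        post_mu = post_var * (n * xbar / sigma**2)
        post_sd = math.sqrt(post_var)
        post_in = norm_cdf(hi, post_mu, post_sd) - norm_cdf(lo, post_mu, post_sd)
        prior_in = norm_cdf(hi, 0.0, prior_sd) - norm_cdf(lo, 0.0, prior_sd)
        # Posterior odds of the interval divided by its prior odds
        return (post_in / (1.0 - post_in)) / (prior_in / (1.0 - prior_in))
    ```

    Data concentrated near zero lend support to the interval null (BF > 1), e.g. `interval_bf(0.05, 50, 1.0, 1.0, -0.1, 0.1)`, while a sample mean far outside the interval drives the BF below 1.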

  17. Randomness confidence bands of fractal scaling exponents for financial price returns

    International Nuclear Information System (INIS)

    Ibarra-Valdez, C.; Alvarez, J.; Alvarez-Ramirez, J.

    2016-01-01

    Highlights: • A robust test for randomness of price returns is proposed. • The DFA scaling exponent is contrasted against confidence bands for random sequences. • The size of the band depends on the sequence length. • Crude oil and USA stock markets have rarely been inefficient. - Abstract: The weak-form of the efficient market hypothesis (EMH) establishes that price returns behave as a pure random process and so their outcomes cannot be forecasted. The detrended fluctuation analysis (DFA) has been widely used to test the weak-form of the EMH by checking whether time series of price returns are serially uncorrelated, in which case the DFA scaling exponent does not deviate from the theoretical value of 0.5. This work considers the test of the EMH for DFA implementation on a sliding window, which is an approach intended to monitor the evolution of markets. Under these conditions, the scaling exponent exhibits important variations over the scrutinized period that can offer valuable insights into the behavior of the market, provided the estimated scaling value is subjected to strict statistical tests to verify the presence or absence of serial correlations in the price returns. In this work, the statistical tests are based on comparing the estimated scaling exponent with the values obtained from pure Gaussian sequences of the same length as the real time series. In this way, the presence of serial correlations can be guaranteed only in terms of the confidence bands of a pure Gaussian process. The crude oil (WTI) and the USA stock (DJIA) markets are used to illustrate the methodology.
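
    The test described above can be sketched as follows. This is a generic first-order DFA recipe with a Monte Carlo confidence band from Gaussian noise of matching length; the scales, percentiles, and simulation counts are arbitrary illustrative choices, not the paper's settings:

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
        """First-order detrended fluctuation analysis (DFA) scaling exponent."""
        y = np.cumsum(x - np.mean(x))              # integrated profile
        logs, logF = [], []
        for s in scales:
            n = len(y) // s
            if n < 4:
                continue
            t = np.arange(s)
            ms = []
            for k in range(n):                     # detrend each segment linearly
                seg = y[k * s:(k + 1) * s]
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                ms.append(np.mean((seg - trend) ** 2))
            logs.append(np.log(s))
            logF.append(0.5 * np.log(np.mean(ms)))  # log of the rms fluctuation
        return np.polyfit(logs, logF, 1)[0]         # slope = scaling exponent

    def randomness_band(length, n_sim=200, q=(2.5, 97.5), seed=0):
        """Confidence band of the DFA exponent for pure Gaussian noise of the
        same length, serving as the randomness benchmark."""
        rng = np.random.default_rng(seed)
        sims = [dfa_exponent(rng.standard_normal(length)) for _ in range(n_sim)]
        return np.percentile(sims, q)
    ```

    An exponent falling outside the band computed for the same series length is evidence of serial correlation; values inside it are consistent with the pure-randomness benchmark of roughly 0.5.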

  18. Assessing nonchoosers' eyewitness identification accuracy from photographic showups by using confidence and response times.

    Science.gov (United States)

    Sauerland, Melanie; Sagana, Anna; Sporer, Siegfried L

    2012-10-01

    While recent research has shown that the accuracy of positive identification decisions can be assessed via confidence and decision times, gauging lineup rejections has been less successful. The current study focused on 2 different aspects which are inherent in lineup rejections. First, we hypothesized that decision times and confidence ratings should be postdictive of identification rejections if they refer to a single lineup member only. Second, we hypothesized that dividing nonchoosers according to the reasons they provided for their decisions can serve as a useful postdictor for nonchoosers' accuracy. To test these assumptions, we used (1) 1-person lineups (showups) in order to obtain confidence and response time measures referring to a single lineup member, and (2) asked nonchoosers about their reasons for making a rejection. Three hundred and eighty-four participants were asked to identify 2 different persons after watching 1 of 2 stimulus films. The results supported our hypotheses. Nonchoosers' postdecision confidence ratings were well-calibrated. Likewise, we successfully established optimum time and confidence boundaries for nonchoosers. Finally, combinations of postdictors increased the number of accurate classifications compared with individual postdictors. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  19. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    Science.gov (United States)

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
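
    A hedged sketch of the kind of simulation the abstract describes follows. The rate, interval length, and session length are arbitrary illustrative choices, and the exponential transform is the standard Poisson-based correction rather than necessarily the authors' exact implementation:

    ```python
    import math
    import random

    def simulate_pir(true_rate, interval_s, session_s, rng):
        """Generate a Poisson event stream, then score it with partial-interval
        recording: an interval is scored if it contains at least one event."""
        n_int = int(session_s // interval_s)
        events, t = [], rng.expovariate(true_rate)
        while t < n_int * interval_s:
            events.append(t)
            t += rng.expovariate(true_rate)
        scored = len({int(e // interval_s) for e in events})
        p = scored / n_int
        naive = scored / session_s                  # uncorrected rate estimate
        corrected = -math.log(1 - p) / interval_s if p < 1 else float("inf")
        return len(events) / session_s, naive, corrected

    rng = random.Random(42)
    true_rate_hat, naive, corrected = simulate_pir(0.2, 10.0, 3600.0, rng)
    ```

    Because several events can fall within one interval, the uncorrected estimate is biased low; the correction -ln(1 - p)/d recovers the rate when events are approximately Poisson-distributed.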

  20. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error. All calculations were performed in floating-point interval arithmetic.
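
    As an illustrative sketch of a direct interval method (a generic interval Gaussian elimination in plain Python, not the authors' algorithms, and without the outward rounding a production interval library would apply):

    ```python
    def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
    def isub(a, b): return (a[0] - b[1], a[1] - b[0])
    def imul(a, b):
        p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(p), max(p))
    def idiv(a, b):
        assert not (b[0] <= 0.0 <= b[1]), "divisor interval must not contain zero"
        return imul(a, (1.0 / b[1], 1.0 / b[0]))

    def interval_gauss(A, b):
        """Gaussian elimination without pivoting on an interval matrix and
        right-hand side; the result encloses the solution set (suitable for
        diagonally dominant band matrices such as those from finite differences)."""
        n = len(b)
        A = [row[:] for row in A]
        b = b[:]
        for k in range(n):                      # forward elimination
            for i in range(k + 1, n):
                m = idiv(A[i][k], A[k][k])
                for j in range(k, n):
                    A[i][j] = isub(A[i][j], imul(m, A[k][j]))
                b[i] = isub(b[i], imul(m, b[k]))
        x = [None] * n
        for i in range(n - 1, -1, -1):          # back substitution
            s = b[i]
            for j in range(i + 1, n):
                s = isub(s, imul(A[i][j], x[j]))
            x[i] = idiv(s, A[i][i])
        return x
    ```

    For degenerate (point) intervals this reduces to ordinary elimination; widening any entry widens the enclosure of the solution set accordingly.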

  1. Improvement of risk informed surveillance test interval for the safety related instrument and control system of Ulchin units 3 and 4

    International Nuclear Information System (INIS)

    Jang, Seung Cheol; Lee, Yun Hwan; Lee, Seung Joon; Han, Sang Hoon

    2012-05-01

    The purpose of this research is the development of the various methodologies necessary for licensing the risk-informed surveillance test interval (STI) improvement for the safety-related I and C systems in UCN 3 and 4, for instance the reactor protection system (RPS), engineered safety features actuation system (ESFAS), ESF auxiliary relay cabinet (ARC), and core protection calculator (CPC). The technical adequacy of the methodology was sufficiently verified through its application to the following STI changes:
    o CPC channel functional test (change from 1 month to 3 months, including safety channel and log power test)
    o RPS channel functional test (change from 1 month to 3 months)
    o RPS logic and trip channel test (change from 1 month to 3 months; 1 month for the RPS manual actuation test)
    o ESFAS channel functional test (change from 1 month to 3 months)
    o ESFAS logic and trip channel test (change from 1 month to 3 months)
    o ESF auxiliary relay test (change from 1 month to 3 months with staggered testing; manual actuation at the ESF ARC is added as a backup of ESF actuation signals during emergency operation)

  2. Improvement of risk informed surveillance test interval for the safety related instrumentation and control system of Yonggwang units 3 and 4

    International Nuclear Information System (INIS)

    Jang, Seung Cheol; Lee, Yun Hwan; Lee, Seung Joon; Han, Sang Hoon

    2012-05-01

    The purpose of this research is the development of the various methodologies necessary for licensing the risk-informed surveillance test interval (STI) improvement for the safety-related I and C systems in YGN 3 and 4, for instance the reactor protection system (RPS), engineered safety features actuation system (ESFAS), ESF auxiliary relay cabinet (ARC), and core protection calculator (CPC). The technical adequacy of the methodology was sufficiently verified through its application to the following STI changes:
    o CPC channel functional test (change from 1 month to 3 months, including safety channel and log power test)
    o RPS channel functional test (change from 1 month to 3 months)
    o RPS logic and trip channel test (change from 1 month to 3 months; 1 month for the RPS manual actuation test)
    o ESFAS channel functional test (change from 1 month to 3 months)
    o ESFAS logic and trip channel test (change from 1 month to 3 months)
    o ESF auxiliary relay test (change from 1 month to 3 months with staggered testing; manual actuation at the ESF ARC is added as a backup of ESF actuation signals during emergency operation)

  3. HIV intertest interval among MSM in King County, Washington.

    Science.gov (United States)

    Katz, David A; Dombrowski, Julia C; Swanson, Fred; Buskin, Susan E; Golden, Matthew R; Stekler, Joanne D

    2013-02-01

    The authors examined temporal trends and correlates of HIV testing frequency among men who have sex with men (MSM) in King County, Washington. The authors evaluated data from MSM testing for HIV at the Public Health-Seattle & King County (PHSKC) STD Clinic and Gay City Health Project (GCHP) and testing history data from MSM in PHSKC HIV surveillance. The intertest interval (ITI) was defined as the number of days between the last negative HIV test and the current testing visit or first positive test. Correlates of the log(10)-transformed ITI were determined using generalised estimating equations linear regression. Between 2003 and 2010, the median ITIs among MSM seeking HIV testing at the STD Clinic and GCHP were 215 (IQR: 124-409) and 257 (IQR: 148-503) days, respectively. In multivariate analyses, younger age, having only male partners, and reporting ≥10 male sex partners in the last year were associated with shorter ITIs at both testing sites. Among GCHP attendees, having a regular healthcare provider, seeking a test as part of a regular schedule, and inhaled nitrite use in the last year were also associated with shorter ITIs. ITIs also differed between the two testing sites (median 359 vs 255 days, p=0.02). Although MSM in King County appear to be testing at frequent intervals, further efforts are needed to reduce the time that HIV-infected persons are unaware of their status.
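
    The intertest interval defined above is simply the day count between consecutive tests, which analyses like the regression described would then log-transform. A minimal sketch (the dates are invented for illustration):

    ```python
    from datetime import date
    from statistics import median

    def intertest_intervals(visits):
        """ITIs in days between consecutive HIV tests for one person,
        given a chronologically sorted list of test dates."""
        return [(b - a).days for a, b in zip(visits, visits[1:])]

    # Hypothetical testing history for one person
    tests = [date(2008, 1, 10), date(2008, 8, 1), date(2009, 3, 15)]
    itis = intertest_intervals(tests)
    print(itis, median(itis))
    ```

    Pooling such per-person interval lists across a clinic population yields the medians and interquartile ranges reported in the abstract.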

  4. Risk prediction of cardiovascular death based on the QTc interval

    DEFF Research Database (Denmark)

    Nielsen, Jonas B; Graff, Claus; Rasmussen, Peter V

    2014-01-01

    AIMS: Using a large, contemporary primary care population we aimed to provide absolute long-term risks of cardiovascular death (CVD) based on the QTc interval and to test whether the QTc interval is of value in risk prediction of CVD on an individual level. METHODS AND RESULTS: Digital electrocardiograms from 173 529 primary care patients aged 50-90 years were collected during 2001-11. The Framingham formula was used for heart rate-correction of the QT interval. Data on medication, comorbidity, and outcomes were retrieved from administrative registries. During a median follow-up period of 6...
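
    The Framingham (Sagie) heart-rate correction named in the methods is the linear formula QTc = QT + 0.154(1 - RR), with QT and RR in seconds. A small sketch (the input values are invented for illustration):

    ```python
    def qtc_framingham(qt_ms, rr_s):
        """Framingham-corrected QT: QTc = QT + 0.154 * (1 - RR),
        with QT converted to seconds and RR in seconds; returns milliseconds."""
        return (qt_ms / 1000.0 + 0.154 * (1.0 - rr_s)) * 1000.0

    # At 75 beats per minute the RR interval is 60/75 = 0.8 s.
    print(round(qtc_framingham(400, 0.8)))
    ```

    At 75 bpm a 400 ms QT corrects to about 431 ms; at 60 bpm (RR = 1 s) the correction term vanishes and QTc equals QT.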

  5. Trust, confidence, and the 2008 global financial crisis.

    Science.gov (United States)

    Earle, Timothy C

    2009-06-01

    The 2008 global financial crisis has been compared to a "once-in-a-century credit tsunami," a disaster in which the loss of trust and confidence played key precipitating roles and the recovery from which will require the restoration of these crucial factors. Drawing on the analogy between the financial crisis and environmental and technological hazards, recent research on the role of trust and confidence in the latter is used to provide a perspective on the former. Whereas "trust" and "confidence" are used interchangeably and without explicit definition in most discussions of the financial crisis, this perspective uses the TCC model of cooperation to clearly distinguish between the two and to demonstrate how this distinction can lead to an improved understanding of the crisis. The roles of trust and confidence-both in precipitation and in possible recovery-are discussed for each of the three major sets of actors in the crisis, the regulators, the banks, and the public. The roles of trust and confidence in the larger context of risk management are also examined; trust being associated with political approaches, confidence with technical. Finally, the various stances that government can take with regard to trust-such as supportive or skeptical-are considered. Overall, it is argued that a clear understanding of trust and confidence and a close examination of the specific, concrete circumstances of a crisis-revealing when either trust or confidence is appropriate-can lead to useful insights for both recovery and prevention of future occurrences.

  6. Confidence in Numerical Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, to make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions, and then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  7. 49 CFR 1103.23 - Confidences of a client.

    Science.gov (United States)

    2010-10-01

    49 Transportation, Responsibilities Toward A Client, § 1103.23 Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to the...

  8. Confidence assessment. Site-descriptive modelling SDM-Site Laxemar

    International Nuclear Information System (INIS)

    2009-06-01

    The objective of this report is to assess the confidence that can be placed in the Laxemar site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Laxemar). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations or more usefully by explorations underground made during construction of the repository. Procedures for this assessment have been progressively refined during the course of the site descriptive modelling, and applied to all previous versions of the Forsmark and Laxemar site descriptive models. They include assessment of whether all relevant data have been considered and understood, identification of the main uncertainties and their causes, possible alternative models and their handling, and consistency between disciplines. The assessment then forms the basis for an overall confidence statement. The confidence in the Laxemar site descriptive model, based on the data available at the conclusion of the surface based site investigations, has been assessed by exploring: - Confidence in the site characterization data base, - remaining issues and their handling, - handling of alternatives, - consistency between disciplines and - main reasons for confidence and lack of confidence in the model. Generally, the site investigation database is of high quality, as assured by the quality procedures applied. It is judged that the Laxemar site descriptive model has an overall high level of confidence. Because of the relatively robust geological model that describes the site, the overall confidence in the Laxemar Site Descriptive model is judged to be high, even though details of the spatial variability remain unknown. 
The overall reason for this confidence is the wide spatial distribution of the data and the consistency between

  9. Confidence assessment. Site-descriptive modelling SDM-Site Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    2008-12-15

    The objective of this report is to assess the confidence that can be placed in the Laxemar site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Laxemar). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations or more usefully by explorations underground made during construction of the repository. Procedures for this assessment have been progressively refined during the course of the site descriptive modelling, and applied to all previous versions of the Forsmark and Laxemar site descriptive models. They include assessment of whether all relevant data have been considered and understood, identification of the main uncertainties and their causes, possible alternative models and their handling, and consistency between disciplines. The assessment then forms the basis for an overall confidence statement. The confidence in the Laxemar site descriptive model, based on the data available at the conclusion of the surface based site investigations, has been assessed by exploring: - Confidence in the site characterization data base, - remaining issues and their handling, - handling of alternatives, - consistency between disciplines and - main reasons for confidence and lack of confidence in the model. Generally, the site investigation database is of high quality, as assured by the quality procedures applied. It is judged that the Laxemar site descriptive model has an overall high level of confidence. Because of the relatively robust geological model that describes the site, the overall confidence in the Laxemar Site Descriptive model is judged to be high, even though details of the spatial variability remain unknown. 
The overall reason for this confidence is the wide spatial distribution of the data and the consistency between

  10. Interval Size and Phrase Position: A Comparison between German and Chinese Folksongs

    Directory of Open Access Journals (Sweden)

    Daniel Shanahan

    2012-09-01

    Full Text Available It is well known that the pitch of the voice tends to decline over the course of a spoken utterance. Ladd (2008) showed that there is also a tendency for the pitch range of spoken utterances to shrink as the pitch of the voice declines. Motivated by this work, two studies are reported that test for the existence of “late phrase compression” in music, where the interval size tends to decline toward the end of a phrase. A study of 39,863 phrases from notated Germanic folksongs shows the predicted decline in interval size. However, a second study of 10,985 phrases from Chinese folksongs shows a reverse relationship. In fact, the interval behaviors in Chinese and Germanic folksongs provide marked contrasts: Chinese phrases are dominated by relatively large intervals, but begin with small intervals and end with medium-small intervals. Germanic phrases are dominated by relatively medium intervals, but begin with large intervals and end with small intervals. In short, late phrase interval compression is not evident cross-culturally.

  11. Confidence assessment. Site descriptive modelling SDM-Site Forsmark

    International Nuclear Information System (INIS)

    2008-09-01

    distribution and size-intensity models for fractures at repository depth can only be reduced by data from underground, i.e. from fracture mapping of tunnel walls etc. Specifically, it will be necessary to carry out statistical modelling of fractures in a DFN study at depth during construction work on the access ramp and shafts. Uncertainties in stress magnitude will be reduced by observations and measurements of deformation with back analysis during the construction phase. Underground mapping data from deposition tunnels will allow for a division of the fine-grained granitoid into different rock types. This will enable thermal optimisation of the repository. The next step in confidence building would be to predict conditions and impacts from underground tunnels. Tunnel data will provide information about the fracture size distribution at the relevant depths. The underground excavations will also provide possibilities for short-range interference tests at relevant depth. Uncertainties in understanding chemical processes may be reduced by assessing results from underground monitoring (groundwater chemistry; fracture minerals etc) of the effects of drawdown and inflows during excavation. The hydrogeological DFN fitting parameters for fractures within the repository volume can only be properly constrained by mapping of flowing or potentially open fracture statistics in tunnels. Surface outcrop statistics are not relevant for properties at repository depth. During underground investigations, the flowing fracture frequencies in tunnels and investigations of couplings between rock mechanical properties and fracture transmissivities may give clues to the extent of in-plane flow channelling, which will lead to more reliable models for transport from the repository volume, particularly close to deposition holes where the most important retention and retardation of any released radionuclides may occur in the rock barrier.

  12. High Confidence Software and Systems Research Needs

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This White Paper presents a survey of high confidence software and systems research needs. It has been prepared by the High Confidence Software and Systems...

  13. Confidence in Alternative Dispute Resolution: Experience from Switzerland

    Directory of Open Access Journals (Sweden)

    Christof Schwenkel

    2014-06-01

    Full Text Available Alternative Dispute Resolution plays a crucial role in the justice system of Switzerland. With the unified Swiss Code of Civil Procedure, it is required that each litigation session shall be preceded by an attempt at conciliation before a conciliation authority. However, there has been little research on conciliation authorities and the public's perception of the authorities. This paper looks at public confidence in conciliation authorities and provides results of a survey conducted with more than 3,400 participants. This study found that public confidence in Swiss conciliation authorities is generally high, exceeds the ratings for confidence in cantonal governments and parliaments, but is lower than confidence in courts. Since the institutional models of the conciliation authorities (meaning the organization of the authorities and the selection of the conciliators) differ widely between the 26 Swiss cantons, the influence of the institutional models on public confidence is analyzed. Contrary to assumptions based on New Institutionalism approaches, this study reports that the institutional models do not impact public confidence. Also, the relationship between participation in an election of justices of the peace or conciliators and public confidence in these authorities is found to be at most very limited (and negative). Similar to common findings on courts, the results show that general contacts with conciliation authorities decrease public confidence in these institutions, whereas a positive experience with a conciliation authority leads to more confidence. The study was completed as part of the research project 'Basic Research into Court Management in Switzerland', supported by the Swiss National Science Foundation (SNSF). Christof Schwenkel is a PhD student at the University of Lucerne and a research associate and project manager at Interface Policy Studies. A first version of this article was presented at the 2013 European Group for Public

  14. Replication, falsification, and the crisis of confidence in social psychology.

    Science.gov (United States)

    Earp, Brian D; Trafimow, David

    2015-01-01

    The (latest) crisis in confidence in social psychology has generated much heated discussion about the importance of replication, including how it should be carried out as well as interpreted by scholars in the field. For example, what does it mean if a replication attempt "fails"? Does it mean that the original results, or the theory that predicted them, have been falsified? And how should "failed" replications affect our belief in the validity of the original research? In this paper, we consider the replication debate from a historical and philosophical perspective, and provide a conceptual analysis of both replication and falsification as they pertain to this important discussion. Along the way, we highlight the importance of auxiliary assumptions (for both testing theories and attempting replications), and introduce a Bayesian framework for assessing "failed" replications in terms of how they should affect our confidence in original findings. PMID:26042061

  16. Primary care physicians' perceptions about and confidence in deciding which patients to refer for total joint arthroplasty of the hip and knee.

    Science.gov (United States)

    Waugh, E J; Badley, E M; Borkhoff, C M; Croxford, R; Davis, A M; Dunn, S; Gignac, M A; Jaglal, S B; Sale, J; Hawker, G A

    2016-03-01

    The purpose of this study is to examine the perceptions of primary care physicians (PCPs) regarding indications, contraindications, risks and benefits of total joint arthroplasty (TJA), and their confidence in selecting patients for referral for TJA. PCPs were recruited from among those providing care to participants in an established community cohort with hip or knee osteoarthritis (OA). Self-completed questionnaires were used to collect demographic and practice characteristics and perceptions about TJA. Confidence in referring appropriate patients for TJA was measured on a scale from 1 to 10; respondents scoring in the lowest tertile were considered to have 'low confidence'. Descriptive analyses were conducted, and multiple logistic regression was used to determine key predictors of low confidence. 212 PCPs participated (58% response rate; 65% aged 50+ years, 45% female, 77% with >15 years of practice). Perceptions about TJA were highly variable, but on average PCPs perceived that a typical surgical candidate would have moderate pain and disability, identified few absolute contraindications to TJA, and overestimated both the effectiveness and risks of TJA. On average, PCPs indicated moderate confidence in deciding whom to refer. Independent predictors of low confidence were being female (OR = 2.18, 95% confidence interval (CI): 1.06-4.46) and reporting a 'lack of clarity about surgical indications' (OR = 3.54, 95% CI: 1.87-6.66). Variability in perceptions and lack of clarity about surgical indications underscore the need for decision support tools to inform PCP-patient decision making regarding referral for TJA.
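    The odds ratios above follow the usual logistic-regression convention: exponentiate the coefficient (log-odds) and its Wald limits. A minimal sketch; the coefficient 0.78 and standard error 0.37 are back-derived, approximate values for illustration, not figures taken from the paper:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient (log-odds scale) and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative only: beta ~0.78 with SE ~0.37 reproduces an OR
# close to the reported 2.18 for female physicians.
or_f, lo_f, hi_f = odds_ratio_ci(0.78, 0.37)
```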

  17. The Los Alamos Gap Stick Test

    Science.gov (United States)

    Preston, Daniel; Hill, Larry; Johnson, Carl

    2015-06-01

    In this paper we describe a novel shock sensitivity test, the Gap Stick Test, which is a generalized variant of the ubiquitous Gap Test. Despite the popularity of the Gap Test, it has some disadvantages: multiple tests must be fired to obtain a single metric, and many tests must be fired to obtain its value to high precision and confidence. Our solution is a test wherein multiple gap tests are joined in series to form a rate stick. The complex re-initiation character of the traditional gap test is thereby retained, but the propagation speed is steady when measured at periodic intervals, and initiation delay in individual segments acts to decrement the average speed. We measure the shock arrival time before and after each inert gap, and compute the average detonation speed through the HE alone (discounting the gap thicknesses). We perform tests for a range of gap thicknesses. We then plot the aforementioned propagation speed as a function of gap thickness. The resulting curve has the same basic structure as a Diameter Effect (DE) curve, and (like the DE curve) terminates at a failure point. Comparison between experiment and hydrocode calculations using ALE3D and the Ignition and Growth reactive burn model calibrated for short duration shock inputs in PBX 9501 is discussed.
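    The averaging step described above (detonation speed through the HE alone, discounting the gap thicknesses) reduces to a simple ratio of HE path length to transit time. A hypothetical sketch of that reading, with made-up segment lengths and arrival times, not the paper's actual data reduction:

```python
def average_he_speed(arrival_times_us, he_segment_lengths_mm):
    """Average detonation speed through the explosive (HE) alone:
    total HE path length divided by first-to-last transit time,
    so the inert gap thicknesses are excluded from the distance.
    Units: mm/us, numerically equal to km/s."""
    transit_us = arrival_times_us[-1] - arrival_times_us[0]
    return sum(he_segment_lengths_mm) / transit_us

# Hypothetical stick: four 25 mm HE segments, first-to-last
# arrival spanning 12.5 us -> 100 mm / 12.5 us = 8.0 km/s
speed = average_he_speed([0.0, 12.5], [25.0] * 4)
```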

  18. Electrocardiographic Abnormalities and QTc Interval in Patients Undergoing Hemodialysis.

    Directory of Open Access Journals (Sweden)

    Yuxin Nie

    Full Text Available Sudden cardiac death is one of the primary causes of mortality in chronic hemodialysis (HD) patients. Prolonged QTc interval is associated with an increased rate of sudden cardiac death. The aim of this article is to assess the abnormalities found in electrocardiograms (ECGs) and to explore factors that can influence the QTc interval. A total of 141 conventional HD patients were enrolled in this study. ECG tests were conducted on each patient before a single dialysis session and 15 minutes before the end of the dialysis session (at peak stress). Echocardiography tests were conducted before the dialysis session began. Blood samples were drawn by phlebotomy immediately before and after the dialysis session. Before dialysis, 93.62% of the patients were in sinus rhythm, and approximately 65% of the patients showed a prolonged QTc interval (i.e., a QTc interval above 440 ms in males and above 460 ms in females). A comparison of ECG parameters before dialysis and at peak stress showed increases in heart rate (77.45±11.92 vs. 80.38±14.65 bpm, p = 0.001) and QTc interval (460.05±24.53 ms vs. 470.93±24.92 ms, p<0.001). After dividing patients into two groups according to the QTc interval, lower pre-dialysis serum concentrations of potassium (K+), calcium (Ca2+), and phosphorus, a lower calcium-phosphorus product (Ca*P), and higher concentrations of plasma brain natriuretic peptide (BNP) were found in the group with prolonged QTc intervals. Patients in this group also had a larger left atrial diameter (LAD) and a thicker interventricular septum, and they tended to be older than patients in the other group. Patients were then divided into two groups according to ΔQTc (ΔQTc = QTc at peak stress - QTc pre-HD). When analyzing the patients whose QTc intervals were longer at peak stress than before HD, we found that they had higher concentrations of Ca2+ and phosphorus and lower concentrations of K+, ferritin, UA, and BNP. They were also more likely to be female. In addition, more cardiac construction
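    The prolongation criterion in this study is a sex-specific threshold on QTc. The abstract does not say which heart-rate correction was used; the sketch below assumes Bazett's formula (QTc = QT/√RR), the most common choice, together with the study's cut-offs:

```python
import math

def qtc_bazett(qt_ms, rr_s):
    """Heart-rate-corrected QT (Bazett): QT in ms divided by
    the square root of the RR interval in seconds. (Correction
    formula assumed; not stated in the abstract.)"""
    return qt_ms / math.sqrt(rr_s)

def qtc_prolonged(qtc_ms, is_female):
    """Study's cut-offs: >440 ms for males, >460 ms for females."""
    return qtc_ms > (460.0 if is_female else 440.0)

# Example: QT = 400 ms at 75 bpm (RR = 0.8 s) -> QTc ~447 ms,
# prolonged for a male patient but not for a female patient.
qtc = qtc_bazett(400.0, 0.8)
```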

  19. Ventricular Cycle Length Characteristics Estimative of Prolonged RR Interval during Atrial Fibrillation

    Science.gov (United States)

    CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN

    2014-01-01

    Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
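    The two scatterplot parameters can be computed directly from the 13 most recent RR intervals. A sketch under the definitions given in the abstract; the exact indexing and windowing are assumptions where the description is ambiguous (here rr[0] is the oldest value, T13, and rr[12] the newest, T1):

```python
def scatter_params(rr):
    """Predictor parameters from 13 consecutive RR intervals
    (seconds), oldest first: rr = [T13, ..., T2, T1].
    Returns (RRv, Tm - T1) as described in the abstract."""
    assert len(rr) == 13
    t13_to_t2, t1 = rr[:12], rr[12]
    d1 = [b - a for a, b in zip(t13_to_t2, t13_to_t2[1:])]  # 11 first differences
    d2 = [b - a for a, b in zip(d1, d1[1:])]                # 10 second differences
    rrv = sum(d2) / len(d2)                # parameter 1: RR variability
    tm = sum(t13_to_t2) / len(t13_to_t2)   # mean of T13..T2
    return rrv, tm - t1                    # parameter 2: Tm - T1

# A perfectly regular rhythm gives zero for both parameters
# (up to floating-point noise), so a prolonged beat stands out.
rrv, tm_minus_t1 = scatter_params([0.8] * 13)
```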

  20. Lifetime Estimation of Electrolytic Capacitors in Fuel Cell Power Converter at Various Confidence Levels

    DEFF Research Database (Denmark)

    Zhou, Dao; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    DC capacitors in power electronic converters are a major constraint on improving power density and reliability. In this paper, according to the degradation data of tested capacitors, the lifetime model of the component is analyzed at various confidence levels. Then, the mission-profile-based lifetime expectancy of the individual capacitor and of the capacitor bank is estimated in a fuel cell backup power converter operating in both standby mode and operation mode. The lifetime prediction of the capacitor banks at different confidence levels is also obtained.