WorldWideScience

Sample records for confidence interval tests

  1. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

Full Text Available For the last 50 years of research in the quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on rejection or non-rejection of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and implausible. Consequently, alternatives to NHT such as confidence intervals (CIs) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to one based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems of the NHT approach and can often improve the scientific and contextual relevance of statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power with an easily interpretable three-valued logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic indeterminacy). The demonstration includes a complete example.

  2. Parametric change point estimation, testing and confidence interval ...

    African Journals Online (AJOL)

    In many applications like finance, industry and medicine, it is important to consider that the model parameters may undergo changes at unknown moment in time. This paper deals with estimation, testing and confidence interval of a change point for a univariate variable which is assumed to be normally distributed. To detect ...

  3. Using the confidence interval confidently.

    Science.gov (United States)

    Hazra, Avijit

    2017-10-01

Biomedical research is seldom done with entire populations but rather with samples drawn from a population. Although we work with samples, our goal is to describe and draw inferences regarding the underlying population. It is possible to use a sample statistic and estimates of error in the sample to get a fair idea of the population parameter, not as a single value, but as a range of values. This range is the confidence interval (CI), which is estimated on the basis of a desired confidence level. Calculation of the CI of a sample statistic takes the general form: CI = Point estimate ± Margin of error, where the margin of error is given by the product of a critical value (z) derived from the standard normal curve and the standard error of the point estimate. Calculation of the standard error varies depending on whether the sample statistic of interest is a mean, proportion, odds ratio (OR), and so on. The factors affecting the width of the CI include the desired confidence level, the sample size and the variability in the sample. Although the 95% CI is most often used in biomedical research, a CI can be calculated for any level of confidence. A 99% CI will be wider than a 95% CI for the same sample. Conflict between clinical importance and statistical significance is an important issue in biomedical research. Clinical importance is best inferred by looking at the effect size, that is, how large the actual change or difference is. However, statistical significance in terms of P only suggests whether there is any difference in probability terms. Use of the CI supplements the P value by providing an estimate of actual clinical effect. Of late, clinical trials are being designed specifically as superiority, non-inferiority or equivalence studies. The conclusions from these alternative trial designs are based on CI values rather than the P value from intergroup comparison.
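The general form CI = point estimate ± margin of error described above can be sketched in a few lines of Python. The sample values here are hypothetical, for illustration only:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical sample of measurements (illustrative data only).
sample = [118, 125, 132, 121, 128, 135, 119, 127, 130, 124]

n = len(sample)
point = mean(sample)
se = stdev(sample) / n ** 0.5          # standard error of the mean

# Critical value from the standard normal curve for a 95% CI.
z = NormalDist().inv_cdf(0.975)        # about 1.96

ci = (point - z * se, point + z * se)
print(f"mean = {point:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```

A 99% interval simply substitutes `inv_cdf(0.995)` for the critical value, widening the interval for the same sample, exactly as the abstract notes.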

  4. Binomial confidence intervals for testing non-inferiority or superiority: a practitioner's dilemma.

    Science.gov (United States)

    Pradhan, Vivek; Evans, John C; Banerjee, Tathagata

    2016-08-01

    In testing for non-inferiority or superiority in a single arm study, the confidence interval of a single binomial proportion is frequently used. A number of such intervals are proposed in the literature and implemented in standard software packages. Unfortunately, use of different intervals leads to conflicting conclusions. Practitioners thus face a serious dilemma in deciding which one to depend on. Is there a way to resolve this dilemma? We address this question by investigating the performances of ten commonly used intervals of a single binomial proportion, in the light of two criteria, viz., coverage and expected length of the interval. © The Author(s) 2013.
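The practitioner's dilemma described above is easy to reproduce. This is a minimal sketch of two commonly used intervals for a single binomial proportion, the Wald and Wilson score intervals, showing how they can disagree on the same (hypothetical) counts:

```python
from statistics import NormalDist

def wald_interval(x, n, conf=0.95):
    """Simple Wald interval: p_hat +/- z * sqrt(p_hat(1-p_hat)/n)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    half = z * (p * (1 - p) / n) ** 0.5
    return max(0.0, p - half), min(1.0, p + half)

def wilson_interval(x, n, conf=0.95):
    """Wilson score interval; generally better coverage than Wald."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * ((p * (1 - p) + z ** 2 / (4 * n)) / n) ** 0.5 / denom
    return centre - half, centre + half

# 18 responders out of 20: the two intervals disagree noticeably.
print(wald_interval(18, 20))    # overshoots 1 and must be truncated
print(wilson_interval(18, 20))  # pulled toward 0.5, stays inside (0, 1)
```

With a non-inferiority margin near either lower limit, the two intervals can lead to opposite conclusions, which is precisely the dilemma the paper investigates.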

  5. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  6. Robust misinterpretation of confidence intervals

    NARCIS (Netherlands)

    Hoekstra, Rink; Morey, Richard; Rouder, Jeffrey N.; Wagenmakers, Eric-Jan

    2014-01-01

Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST.

  7. A Note on Confidence Interval for the Power of the One Sample Test

    OpenAIRE

    A. Wong

    2010-01-01

In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for...

  8. A Note on Confidence Interval for the Power of the One Sample Test

    Directory of Open Access Journals (Sweden)

    A. Wong

    2010-01-01

Full Text Available In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for the power of the Student's t-test that detects the difference (μ − μ₀). The calculations require only the density and the cumulative distribution functions of the standard normal distribution. In addition, the methodology presented can also be applied to determine the required sample size when the effect size and the power of a size α test of the mean are given.
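The textbook known-variance case that the note takes as its starting point can be illustrated directly. This sketch computes the power of a one-sided one-sample z-test; it is the elementary case, not the unknown-variance t-test interval the note itself derives:

```python
from statistics import NormalDist

def z_test_power(mu0, mu1, sigma, n, alpha=0.05):
    """Power of the one-sided one-sample z-test (variance known):
    P(reject H0: mu = mu0 | true mean mu1)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)
    shift = abs(mu1 - mu0) * n ** 0.5 / sigma
    return nd.cdf(shift - z_alpha)

# Detecting a half-standard-deviation shift with n = 30:
print(f"power = {z_test_power(0.0, 0.5, 1.0, 30):.3f}")
```

As expected, power grows with the sample size, which is the relationship used when solving for the required n given a target power.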

  9. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    Science.gov (United States)

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem for establishing noninferiority is discussed between a new treatment and a standard (control) treatment with ordinal categorical data. A measure of treatment effect is used and a method of specifying noninferiority margin for the measure is provided. Two Z-type test statistics are proposed where the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of the deviation from nominal level and the power.

  10. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs

    Directory of Open Access Journals (Sweden)

    Xue Fuzhong

    2010-01-01

Full Text Available Abstract Background Genetic association studies are currently the primary vehicle for the identification and characterization of disease-predisposing variant(s), and usually involve multiple single-nucleotide polymorphisms (SNPs). However, SNP-wise association tests raise concerns over multiple testing. Haplotype-based methods have the advantage of being able to account for correlations between neighbouring SNPs, yet assuming Hardy-Weinberg equilibrium (HWE) and a potentially large number of degrees of freedom can harm their statistical power and robustness. Approaches based on principal component analysis (PCA) are preferable in this regard, but their performance varies with the method of extracting principal components (PCs). Results A PCA-based bootstrap confidence interval test (PCA-BCIT), which directly uses the PC scores to assess gene-disease association, was developed and evaluated for three ways of extracting PCs, i.e., cases only (CAES), controls only (COES), and cases and controls combined (CES). Extraction of PCs with COES is preferred to that with CAES or CES. Performance of the test was examined via simulations as well as analyses of data on rheumatoid arthritis and heroin addiction; the test maintains the nominal level under the null hypothesis and shows performance comparable with the permutation test. Conclusions PCA-BCIT is a valid and powerful method for assessing gene-disease association involving multiple SNPs.

  11. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.

  12. Robust misinterpretation of confidence intervals.

    Science.gov (United States)

    Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan

    2014-10-01

    Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students-all in the field of psychology-were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.

  13. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    Science.gov (United States)

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.

  14. Interpretation of Confidence Interval Facing the Conflict

    Science.gov (United States)

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  15. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  16. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    Science.gov (United States)

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items.…

  17. Understanding Confidence Intervals With Visual Representations

    OpenAIRE

    Navruz, Bilgin; Delen, Erhan

    2014-01-01

    In the present paper, we showed how confidence intervals (CIs) are valuable and useful in research studies when they are used in the correct form with correct interpretations. The sixth edition of the APA (2010) Publication Manual strongly recommended reporting CIs in research studies, and it was described as “the best reporting strategy” (p. 34). Misconceptions and correct interpretations of CIs were presented from several textbooks. In addition, limitations of the null hypothesis statistica...

  18. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    Directory of Open Access Journals (Sweden)

    Ionut Bebu

    2016-06-01

Full Text Available For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.
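For context, here is a sketch of the classical Wald interval for an odds ratio from a single 2x2 table, i.e., the standard large-sample baseline against which pivotal and fiducial methods like those above are usually compared. The cell counts are hypothetical:

```python
from math import exp, log
from statistics import NormalDist

def odds_ratio_ci(a, b, c, d, conf=0.95):
    """Classical Wald CI for an odds ratio from a 2x2 table
    (exposed cases a, exposed controls b, unexposed cases c,
    unexposed controls d), computed on the log scale."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    or_hat = (a * d) / (b * c)
    se_log = (1 / a + 1 / b + 1 / c + 1 / d) ** 0.5   # SE of log(OR)
    lo = exp(log(or_hat) - z * se_log)
    hi = exp(log(or_hat) + z * se_log)
    return or_hat, lo, hi

print(odds_ratio_ci(20, 80, 10, 90))
```

With small cell counts this interval can be quite inaccurate, which motivates alternatives of the kind the article develops.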

  19. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    Science.gov (United States)

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
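The idea of reading off sensitivity at a cut-off chosen to achieve a desired specificity can be shown with a purely empirical sketch (hypothetical scores; no smoothing or empirical likelihood machinery involved):

```python
# Hypothetical continuous test scores (higher = more indicative of disease).
controls = [0.8, 1.1, 1.3, 1.5, 1.9, 2.0, 2.2, 2.4, 2.7, 3.0]
cases    = [2.1, 2.5, 2.8, 3.1, 3.3, 3.6, 3.9, 4.2, 4.4, 5.0]

spec = 0.9
# Cut-off: the score exceeded by only a fraction (1 - spec) of the controls.
cutoff = sorted(controls)[int(spec * len(controls)) - 1]
sensitivity = sum(x > cutoff for x in cases) / len(cases)
print(f"cut-off = {cutoff}, sensitivity at specificity {spec} = {sensitivity}")
```

The point estimate alone says nothing about its precision, which is why the article develops confidence intervals for this quantity.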

  20. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a contrived numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large, a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication.
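The anomaly described above, where mean and standard deviation become misleading summaries of a highly uncertain lognormal variable, can be checked numerically. The parameter values here are illustrative:

```python
from math import exp, sqrt
from statistics import NormalDist

mu, sigma = 0.0, 1.5   # lognormal parameters; sigma chosen large on purpose

mean_ = exp(mu + sigma ** 2 / 2)                 # lognormal mean
sd    = mean_ * sqrt(exp(sigma ** 2) - 1)        # lognormal standard deviation
median, mode = exp(mu), exp(mu - sigma ** 2)

# Central 95% probability interval: always positive, unlike mean - sd.
z = NormalDist().inv_cdf(0.975)
ci = (exp(mu - z * sigma), exp(mu + z * sigma))

print(f"mean - sd = {mean_ - sd:.2f}  (negative: impossible for a positive variable)")
print(f"95% interval = ({ci[0]:.3f}, {ci[1]:.1f})")
```

Here "mean minus one standard deviation" lands below zero, an impossible value for a positive quantity, while the probability interval remains strictly positive, illustrating why the communication favours interval statements for large uncertainties.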

  1. Robust Confidence Interval for a Ratio of Standard Deviations

    Science.gov (United States)

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  2. Confidence intervals for correlations when data are not normal.

    Science.gov (United States)

    Bishara, Anthony J; Hittner, James B

    2017-02-01

With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, a nominal 95% confidence interval could have actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoiding the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
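For reference, here is a sketch of the Fisher z' interval itself, the default method whose nonnormality problems the study documents:

```python
from math import atanh, tanh, sqrt
from statistics import NormalDist

def fisher_ci(r, n, conf=0.95):
    """Fisher z' confidence interval for a correlation:
    transform r with atanh, build a normal interval with SE = 1/sqrt(n - 3),
    then back-transform with tanh."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    zr = atanh(r)
    half = z / sqrt(n - 3)
    return tanh(zr - half), tanh(zr + half)

print(fisher_ci(0.45, 50))
```

The interval is asymmetric around r because of the back-transformation; its nominal coverage, however, relies on bivariate normality, which is exactly the assumption the study stresses.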

  3. Confidence Intervals from Normalized Data: A correction to Cousineau (2005)

    Directory of Open Access Journals (Sweden)

    Richard D. Morey

    2008-09-01

Full Text Available Presenting confidence intervals around means is a common method of expressing uncertainty in data. Loftus and Masson (1994) describe confidence intervals for means in within-subjects designs. These confidence intervals are based on the ANOVA mean squared error. Cousineau (2005) presents an alternative to the Loftus and Masson method, but his method produces confidence intervals that are smaller than those of Loftus and Masson. I show why this is the case and offer a simple correction that makes the expected size of Cousineau confidence intervals the same as that of Loftus and Masson confidence intervals.
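The corrected procedure, as usually described, combines Cousineau's subject-centering with a variance correction factor of J/(J − 1) for J conditions. A sketch on hypothetical within-subjects data:

```python
from statistics import mean, stdev

# Hypothetical within-subjects data: 4 subjects x 3 conditions.
data = [
    [10.0, 12.0, 15.0],
    [ 8.0, 11.0, 13.0],
    [12.0, 13.0, 17.0],
    [ 9.0, 12.0, 14.0],
]
n, J = len(data), len(data[0])
grand = mean(v for row in data for v in row)

# Cousineau (2005) normalization: remove between-subject variation
# by subtracting each subject's mean and adding back the grand mean.
normed = [[v - mean(row) + grand for v in row] for row in data]

# Morey (2008) correction: inflate the normalized SE by sqrt(J / (J - 1)).
factor = (J / (J - 1)) ** 0.5
for j in range(J):
    col = [normed[i][j] for i in range(n)]
    se = factor * stdev(col) / n ** 0.5
    print(f"condition {j}: mean = {mean(col):.2f}, corrected SE = {se:.3f}")
```

Note that normalization leaves the condition means unchanged; only the spread used for the error bars is affected, which is why uncorrected Cousineau intervals come out too narrow.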

  4. Learning about confidence intervals with software R

    Directory of Open Access Journals (Sweden)

    Gariela Gonçalves

    2013-08-01

Full Text Available This work studied the feasibility of implementing a teaching method that employs software in a Computational Mathematics course, involving students and teachers through the use of the statistical software R in practical work, as a reinforcement of traditional teaching. Statistical inference, namely the determination of confidence intervals, was the content selected for this experience. The intention was to show, first of all, that it is possible to promote, through the proposed methodology, the acquisition of basic skills in statistical inference and to foster positive relationships between teachers and students. A comparative study is also presented between the methodologies used and their quantitative and qualitative results over two consecutive school years, on several indicators. The data used in the study were obtained from the students' answers to exam questions in the years 2010/2011 and 2011/2012, from the achievement of a working group in 2011/2012, and from the responses to a questionnaire (optional and anonymous) also applied in 2011/2012. In terms of results, we emphasize a better performance of students on the examination questions in 2011/2012, the year in which students used the software R, and a very favorable student perspective about

  5. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    Science.gov (United States)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allowing temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
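The general percentile bootstrap idea behind such interval estimates can be sketched as follows. This is a generic illustration on hypothetical data, not the paper's mixed-model procedure; note how skewed data yield an asymmetric interval with more uncertainty in the upper limit:

```python
import random
from statistics import mean

def percentile_bootstrap_ci(data, stat=mean, n_boot=5000, conf=0.95, seed=1):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic each time, and take empirical quantiles of the results."""
    rng = random.Random(seed)
    stats = sorted(
        stat(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = stats[int((1 - conf) / 2 * n_boot)]
    hi = stats[int((1 + conf) / 2 * n_boot) - 1]
    return lo, hi

skewed = [1, 1, 2, 2, 3, 3, 4, 5, 8, 21]   # right-skewed hypothetical data
lo, hi = percentile_bootstrap_ci(skewed)
print(f"mean = {mean(skewed):.1f}, 95% bootstrap CI = ({lo:.1f}, {hi:.1f})")
```

Because the resampled means inherit the skew of the data, the upper limit sits much further from the point estimate than the lower one, mirroring the asymmetry reported for the sediment loads.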

  6. Effect size, confidence intervals and statistical power in psychological research.

    Directory of Open Access Journals (Sweden)

    Téllez A.

    2015-07-01

    Full Text Available Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, we must account for the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.

  7. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    Science.gov (United States)

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on the CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
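
    As a concrete illustration of the Mann-Whitney approach the abstract singles out, here is a stdlib-Python sketch: the nonparametric AUC estimate with a Wald-type interval built from the Hanley-McNeil (1982) variance approximation. This is one generic choice of CI, not necessarily one of the paper's 29 methods.

```python
import math

def auc_mann_whitney(neg, pos):
    """Nonparametric AUC: the Mann-Whitney probability that a diseased subject's
    marker exceeds a non-diseased subject's marker (ties count one half)."""
    wins = sum(1.0 if y > x else 0.5 if y == x else 0.0 for x in neg for y in pos)
    return wins / (len(neg) * len(pos))

def auc_ci_hanley_mcneil(neg, pos, z=1.96):
    """Wald-type CI for the AUC using the Hanley-McNeil (1982) variance formula,
    truncated to [0, 1]."""
    a = auc_mann_whitney(neg, pos)
    m, n = len(neg), len(pos)
    q1 = a / (2 - a)          # P(two random positives both outrank one negative)
    q2 = 2 * a * a / (1 + a)  # P(one positive outranks two random negatives)
    var = (a * (1 - a) + (n - 1) * (q1 - a * a) + (m - 1) * (q2 - a * a)) / (m * n)
    se = math.sqrt(var)
    return max(0.0, a - z * se), min(1.0, a + z * se)
```

    With very small samples and a high true AUC, exactly the setting the paper studies, this Wald-type interval is known to behave poorly near the boundary, which is one motivation for comparing the 29 alternatives.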

  9. Confidence interval procedures for Monte Carlo transport simulations

    International Nuclear Information System (INIS)

    Pederson, S.P.

    1997-01-01

    The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s² are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP.
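
    The "relative variance of the variance" diagnostic can be written down directly. The sketch below is a simplified stand-alone version of the idea, not MCNP's implementation; MCNP's own statistical checks flag a VOV above roughly 0.1 as a warning sign.

```python
def vov(tallies):
    """Relative variance of the sample variance:
    VOV = sum((x - mean)**4) / (sum((x - mean)**2))**2 - 1/N.
    Large values indicate a heavy right tail, i.e. too few histories sampled
    for the variance estimate itself to be trustworthy."""
    n = len(tallies)
    m = sum(tallies) / n
    s2 = sum((x - m) ** 2 for x in tallies)
    s4 = sum((x - m) ** 4 for x in tallies)
    return s4 / (s2 * s2) - 1.0 / n

# A well-behaved sample versus one dominated by a single large tally.
flat = [1.0, 2.0, 3.0, 4.0]
skewed = [1.0, 1.0, 2.0, 100.0]
```

    The right-skewed sample yields a much larger VOV, which is exactly the signal the paper exploits to predict when confidence-interval coverage will fall below nominal.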

  10. Differentially Private Confidence Intervals for Empirical Risk Minimization

    OpenAIRE

    Wang, Yue; Kifer, Daniel; Lee, Jaewoo

    2018-01-01

    The process of data mining with differential privacy produces results that are affected by two types of noise: sampling noise due to data collection and privacy noise that is designed to prevent the reconstruction of sensitive information. In this paper, we consider the problem of designing confidence intervals for the parameters of a variety of differentially private machine learning models. The algorithms can provide confidence intervals that satisfy differential privacy (as well as the mor...

  11. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three keff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best keff estimator. The method of combining estimators is based on a solid theoretical foundation, namely, the Gauss-Markov Theorem in regard to the least squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered.
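
    The Gauss-Markov combination mentioned here has a compact closed form: for unbiased estimators with covariance matrix Σ, the minimum-variance weights solve Σa = 1 and are normalized to sum to one. The stdlib-Python sketch below illustrates the principle with made-up numbers; it is not MCNP's implementation, and the covariance values are purely illustrative.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def combined_estimate(estimates, cov):
    """Gauss-Markov (generalized least squares) combination of correlated, unbiased
    estimators: weights solve cov a = 1, normalized to sum to one. The combined
    variance 1/sum(a) is never larger than any single estimator's variance."""
    a = solve(cov, [1.0] * len(estimates))
    total = sum(a)
    weights = [ai / total for ai in a]
    combined = sum(w * e for w, e in zip(weights, estimates))
    return combined, 1.0 / total, weights
```

    Because any single estimator corresponds to a feasible weight vector, the combined variance cannot exceed the smallest individual variance, which is why the three-combined estimator dominates the collision, absorption, and track-length estimators taken alone.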

  12. Comparing confidence intervals for Goodman and Kruskal's gamma coefficient

    NARCIS (Netherlands)

    van der Ark, L.A.; van Aert, R.C.M.

    2015-01-01

    This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the

  13. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  14. On Bayesian treatment of systematic uncertainties in confidence interval calculation

    CERN Document Server

    Tegenfeldt, Fredrik

    2005-01-01

    In high energy physics, a widely used method to treat systematic uncertainties in confidence interval calculations is based on combining a frequentist construction of confidence belts with a Bayesian treatment of systematic uncertainties. In this note we present a study of the coverage of this method for the standard Likelihood Ratio (aka Feldman & Cousins) construction for a Poisson process with known background and Gaussian or log-Normal distributed uncertainties in the background or signal efficiency. For uncertainties in the signal efficiency of up to 40% we find over-coverage at the level of 2 to 4%, depending on the size of the uncertainties and the region in signal space. Uncertainties in the background generally have a smaller effect on the coverage. A considerable smoothing of the coverage curves is observed. A software package is presented which allows fast calculation of the confidence intervals for a variety of assumptions on shape and size of systematic uncertainties for different nuisance paramete...

  15. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

  16. Confidence Intervals from Realizations of Simulated Nuclear Data

    Energy Technology Data Exchange (ETDEWEB)

    Younes, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ratkiewicz, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ressler, J. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-28

    Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n,f) and 239Pu(n,f) cross sections.
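
    Technique 1) and 2) together amount to propagating correlated input uncertainty through a model and reading off percentiles of the output. The stdlib-Python sketch below does this for a toy two-input linear response; the model and covariance numbers are invented for illustration and have nothing to do with the report's actual cross-section data.

```python
import random

def correlated_normals(mean, cov, rng):
    """One realization of two correlated normal inputs via a 2x2 Cholesky factor."""
    l11 = cov[0][0] ** 0.5
    l21 = cov[1][0] / l11
    l22 = (cov[1][1] - l21 ** 2) ** 0.5
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return mean[0] + l11 * z1, mean[1] + l21 * z1 + l22 * z2

def mc_confidence_interval(model, mean, cov, n=20000, level=0.95, seed=1):
    """Percentile interval of model(x1, x2) over random realizations of its inputs."""
    rng = random.Random(seed)
    outs = sorted(model(*correlated_normals(mean, cov, rng)) for _ in range(n))
    return outs[int(n * (1 - level) / 2)], outs[int(n * (1 + level) / 2) - 1]

# Toy stand-in for a k_eff-like response that is linear in two uncertain cross sections.
model = lambda s1, s2: 0.6 * s1 + 0.4 * s2
lo, hi = mc_confidence_interval(model, [1.0, 1.0], [[0.01, 0.004], [0.004, 0.02]])
```

    The resampling constraint of technique 3) would additionally reweight or reject realizations that violate the extra information, narrowing the interval.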

  17. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    Science.gov (United States)

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.

  18. Estimating the Confidence Interval of Composite Reliability of a Multidimensional Test With the Delta Method

    Institute of Scientific and Technical Information of China (English)

    Ye, Baojuan; Wen, Zhonglin

    2012-01-01

    Reliability is very important in evaluating the quality of a test. Based on confirmatory factor analysis, composite reliability is a good index to estimate the test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error of a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the
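
    The point estimate around which the paper builds its intervals is, in the unidimensional case, the familiar composite reliability (McDonald's omega) computed from a CFA solution. The sketch below shows only that point estimate; the paper's actual contribution, the Delta-method standard error and its multidimensional extension, is not reproduced here.

```python
def composite_reliability(loadings, uniquenesses):
    """Composite reliability (McDonald's omega) for a single factor:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses),
    with standardized factor loadings and error (uniqueness) variances."""
    s = sum(loadings)
    return s * s / (s * s + sum(uniquenesses))
```

    For example, three items with standardized loadings 0.7, 0.6, 0.8 (uniquenesses 0.51, 0.64, 0.36) give omega = 4.41 / 5.92 ≈ 0.745; the Delta method then attaches an approximate standard error to this quantity to form the confidence interval.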

  19. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    Science.gov (United States)

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interactions in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of the crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interactions requires sample sizes of more than 500 to provide confidence intervals narrow enough to identify the location of the crossover point. (c) 2015 APA, all rights reserved.

  20. On a linear method in bootstrap confidence intervals

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order Op(n-3/2) and Op(n-2), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.

  1. Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Roberto S. Flowers-Cano

    2018-02-01

    Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared the coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP), bias-corrected bootstrap (BC), accelerated bias-corrected bootstrap (BCA) and a modified version of the standard bootstrap (MSB). Different simulation scenarios were analyzed. In some cases, the mother distribution function was fit to the random samples that were generated. In other cases, a distribution function different from the mother distribution was fit to the samples. When the fitted distribution had three parameters, and was the same as the mother distribution, the intervals constructed with the four techniques had acceptable coverage. However, the bootstrap techniques failed in several of the cases in which the fitted distribution had two parameters.
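
    Two of the four interval types compared above are easy to sketch in stdlib Python: the percentile (BP) interval reads quantiles of the bootstrap replicates directly, and the basic interval reflects them around the point estimate. The BC and BCA corrections additionally need bias and acceleration terms and are omitted here; the data and statistic below are illustrative, not the paper's hydrological samples.

```python
import random

def bootstrap_cis(data, stat, n_boot=5000, level=0.95, seed=7):
    """Percentile (BP) and basic bootstrap intervals for stat(data)."""
    rng = random.Random(seed)
    theta = stat(data)
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    a = (1 - level) / 2
    lo, hi = reps[int(n_boot * a)], reps[int(n_boot * (1 - a)) - 1]
    # Basic interval: reflect the percentile endpoints around the point estimate.
    return {"percentile": (lo, hi), "basic": (2 * theta - hi, 2 * theta - lo)}

def mean(values):
    return sum(values) / len(values)

data = [12.1, 9.8, 14.3, 10.5, 11.7, 13.9, 8.6, 15.2, 10.9, 12.4]
cis = bootstrap_cis(data, mean)
```

    For heavily skewed statistics such as high return-period quantiles, the two intervals can differ substantially, which is one reason coverage comparisons like the paper's are needed.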

  2. Surveillance test interval optimization

    International Nuclear Information System (INIS)

    Cepin, M.; Mavko, B.

    1995-01-01

    Technical specifications have been developed on the basis of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. This approach consists of three main levels. The first level is the component level, which serves as a rough estimation of the optimal STI and can be calculated analytically by differentiating an equation for mean unavailability. The second and third levels give more representative results. They take into account the results of probabilistic risk assessment (PRA) calculated by a personal computer (PC) based code and are based on system unavailability at the system level and on core damage frequency at the plant level.

  3. A note on Nonparametric Confidence Interval for a Shift Parameter ...

    African Journals Online (AJOL)

    The method is illustrated using the Cauchy distribution as a location model. The kernel-based method is found to have a shorter interval for the shift parameter between two Cauchy distributions than the one based on the Mann-Whitney test statistic. Keywords: Best Asymptotic Normal; Cauchy distribution; Kernel estimates; ...

  4. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

    MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov Theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the individual estimator with the smallest variance. The importance of MCNP's batch statistics is demonstrated by an investigation of the effects of individual estimator variance bias on the combination of estimators, both heuristically with the analytical study and empirically with MCNP.

  5. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

    Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms by Fisher's LDF and a quadratic discriminant function (QDF). Our four-year research was inferior to the decision tree logic developed by the medical doctor. After this experience, we discriminated many datasets and found four problems with discriminant analysis. A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficient. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.
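
    The k-fold idea, treating the fold-wise error rates as a sample and building a CI from their spread, can be sketched in stdlib Python. The classifier below is a deliberately trivial threshold-at-midpoint rule (it assumes the class-1 mean exceeds the class-0 mean), standing in for the paper's Revised IP-OLDF; the data are invented.

```python
import random

def kfold_error_rates(xs, ys, k=5, seed=3):
    """Error rate on each of k folds for a toy threshold-at-midpoint classifier
    (predict class 1 when x exceeds the midpoint of the training class means)."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    rates = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        mean0 = sum(xs[i] for i in train if ys[i] == 0) / sum(1 for i in train if ys[i] == 0)
        mean1 = sum(xs[i] for i in train if ys[i] == 1) / sum(1 for i in train if ys[i] == 1)
        thr = (mean0 + mean1) / 2
        errors = sum(1 for i in fold if (xs[i] > thr) != (ys[i] == 1))
        rates.append(errors / len(fold))
    return rates

def normal_ci(rates, z=1.96):
    """Normal-approximation CI for the mean fold error rate (with only a handful
    of folds, a t quantile would be more defensible than z = 1.96)."""
    k = len(rates)
    m = sum(rates) / k
    sd = (sum((r - m) ** 2 for r in rates) / (k - 1)) ** 0.5
    return m - z * sd / k ** 0.5, m + z * sd / k ** 0.5

# Toy data: class 0 clustered near 0, class 1 near 2.
xs = [0.1, 0.3, -0.2, 0.5, 0.0, 0.2, -0.1, 0.4, 0.3, 0.1,
      1.8, 2.1, 2.3, 1.9, 2.5, 1.7, 2.0, 2.2, 1.6, 2.4]
ys = [0] * 10 + [1] * 10
rates = kfold_error_rates(xs, ys)
lo, hi = normal_ci(rates)
```

    The same resampling gives a distribution of discriminant coefficients across folds, from which their CIs follow in the same way.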

  6. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model assisted local polynomial regression model.

  7. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

    The Monte Carlo code MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the estimator with the smallest variance. Empirically, MCNP examples for several physical systems demonstrate the three-combined estimator's superiority over each of the three individual estimators and its correct coverage rates. Additionally, the importance of MCNP's statistical checks is demonstrated.

  8. Secure and Usable Bio-Passwords based on Confidence Interval

    Directory of Open Access Journals (Sweden)

    Aeyoung Kim

    2017-02-01

    The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor, something you know, with the second factor being something you have. However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and log in. In this paper a bio-password-based scheme is proposed as a unique authentication method, which uses biometrics and confidence interval sets to enhance the security of the log-in process and make it easier as well. The method offers a user-friendly solution for creating and registering strong passwords without the user having to memorize them. Here we also show the results of our experiments, which demonstrate the efficiency of this method and how it can be used to protect against a variety of malicious attacks.
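
    The paper's exact scheme is not reproduced here; the stdlib-Python sketch below only illustrates the general idea of interval-based biometric matching that the abstract alludes to: build per-feature acceptance intervals from enrolment samples, then accept a login attempt whose measured features fall inside them. All function names and thresholds are hypothetical.

```python
def enrol(samples_per_feature, z=1.96):
    """Per-feature acceptance intervals [mean - z*sd, mean + z*sd] built from
    repeated enrolment measurements of each biometric feature."""
    intervals = []
    for samples in samples_per_feature:
        n = len(samples)
        m = sum(samples) / n
        sd = (sum((x - m) ** 2 for x in samples) / (n - 1)) ** 0.5
        intervals.append((m - z * sd, m + z * sd))
    return intervals

def authenticate(measurement, intervals, min_hits=None):
    """Accept when enough measured features fall inside their enrolment intervals."""
    hits = sum(lo <= x <= hi for x, (lo, hi) in zip(measurement, intervals))
    return hits >= (len(intervals) if min_hits is None else min_hits)
```

    Widening z trades false rejections for false acceptances; a real system would additionally derive the password material from the interval indices rather than compare raw features.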

  9. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.

  10. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentration of analytes is known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to "characterize" the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes, the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
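
    The relative half-width statistic is simple to compute. The sketch below uses the normal quantile z = 1.96 for brevity (a report like this would typically use t quantiles at small n); the 1/sqrt(n) factor is what drives the dramatic shrinkage with more core samples.

```python
import math

def relative_half_width(samples, z=1.96):
    """Half-width of the normal-approximation 95% CI for the mean, expressed
    as a fraction of the observed mean: RHW = z * sd / (sqrt(n) * mean)."""
    n = len(samples)
    m = sum(samples) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in samples) / (n - 1))
    return z * sd / (math.sqrt(n) * m)

concentrations = [10.0, 12.0, 8.0, 11.0, 9.0]  # illustrative analyte values
```

    Quadrupling the number of samples roughly halves the RHW, so pushing it from ~50% down to 10% of the mean requires a many-fold increase in core samples, matching the report's conclusion.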

  11. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    Science.gov (United States)

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  12. Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

    Directory of Open Access Journals (Sweden)

    Manuel G Scotto

    2003-12-01

    This essay reviews some statistical concepts frequently used in public health research that are commonly misinterpreted. These include point estimates, confidence intervals, and hypothesis tests. By comparing them using the classical and the Bayesian perspectives, their interpretation becomes clearer.

  13. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2003-01-01

    Methods to find a confidence interval for Poisson distributed variables are examined, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected by background, and for their ratios, is given. (orig.)
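
    The 'central' interval mentioned here has an exact (Garwood-style) construction for the background-free Poisson case, which can be computed with nothing but the Poisson CDF and bisection. The sketch below handles only that background-free case; the background-subtracted intervals and ratios that the paper treats are not reproduced.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by direct summation of the pmf."""
    term, total = math.exp(-lam), math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def poisson_central_ci(k, alpha=0.05):
    """Exact central interval for the Poisson mean given k observed counts:
    the lower limit solves P(X >= k; lam) = alpha/2 and the upper limit solves
    P(X <= k; lam) = alpha/2. Background subtraction is NOT handled here."""
    def bisect(f, lo, hi):
        # Invariant: f(lo) > 0 and f(hi) < 0 (the bounds need not be ordered).
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    big = 10.0 * k + 20.0
    upper = bisect(lambda lam: poisson_cdf(k, lam) - alpha / 2, 0.0, big)
    lower = 0.0 if k == 0 else bisect(
        lambda lam: (1.0 - poisson_cdf(k - 1, lam)) - alpha / 2, big, 0.0)
    return lower, upper
```

    For zero observed counts this gives the familiar one-sided upper limit -ln(0.025) ≈ 3.69; at low count rates the central interval is noticeably asymmetric, which is where it diverges from the highest-probability-density alternative the abstract compares it against.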

  14. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2002-07-01

    Methods to find a confidence interval for Poisson distributed variables are examined, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected by background, and for their ratios, is given. (orig.)

  15. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

    Given levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction-error samples are obtained by this price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (Radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (Genetically Modified Organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system.

  16. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    Science.gov (United States)

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  17. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
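The distinction the abstract draws between uncertainty in the mean and variation among individuals can be sketched with a toy simple linear regression (all data and variable names here are synthetic, not the authors' sugar maple data): the confidence interval uses only the standard error of the fitted mean, while the prediction interval adds the residual variance, so it is always wider.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical allometric-style data: y (foliage mass) vs x (tree diameter).
x = rng.uniform(10, 50, 40)
y = 2.0 + 0.5 * x + rng.normal(0, 2.0, 40)

n = len(x)
b1, b0 = np.polyfit(x, y, 1)            # slope, intercept
resid = y - (b0 + b1 * x)
s2 = resid @ resid / (n - 2)            # residual variance
sxx = ((x - x.mean()) ** 2).sum()
t = stats.t.ppf(0.975, n - 2)

x0 = 30.0
yhat = b0 + b1 * x0
se_mean = np.sqrt(s2 * (1 / n + (x0 - x.mean()) ** 2 / sxx))  # uncertainty in the mean
se_pred = np.sqrt(s2 + se_mean ** 2)                          # adds individual variation

ci = (yhat - t * se_mean, yhat + t * se_mean)  # confidence interval for the mean
pi = (yhat - t * se_pred, yhat + t * se_pred)  # prediction interval for an individual
```

Propagating `se_pred` for every tree on a large plot overstates the plot-level uncertainty, which is the error mode the abstract describes.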

  18. Bootstrap confidence intervals for three-way methods

    NARCIS (Netherlands)

    Kiers, Henk A.L.

    Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special

  19. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
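The asymmetry of the distribution of the product can be made concrete with a Monte Carlo sketch (the coefficient estimates and standard errors below are invented for illustration): draw from the normal sampling distributions of the two coefficients, multiply, and take percentiles, then compare against the symmetric normal-theory (Sobel-type) interval.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical estimates from two regressions: a (X -> M) and b (M -> Y given X).
a_hat, se_a = 0.40, 0.10
b_hat, se_b = 0.30, 0.12

# Simulate the distribution of the product a*b; its skewness and kurtosis are
# what drive the coverage and imbalance effects the abstract describes.
draws = rng.normal(a_hat, se_a, 100_000) * rng.normal(b_hat, se_b, 100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])

# Normal-theory interval for comparison, using the first-order (Sobel) SE.
se_ab = np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
sobel = (a_hat * b_hat - 1.96 * se_ab, a_hat * b_hat + 1.96 * se_ab)
```

The Monte Carlo limits are typically asymmetric around the point estimate `a_hat * b_hat`, whereas the Sobel limits are symmetric by construction.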

  20. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

    Science.gov (United States)

    Williams, Immanuel James; Williams, Kelley Kim

    2018-01-01

    Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…

  1. Estimating confidence intervals in predicted responses for oscillatory biological models.

    Science.gov (United States)

    St John, Peter C; Doyle, Francis J

    2013-07-29

    The dynamics of gene regulation play a crucial role in cellular control, allowing the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics, such as oscillations, must be fit with computationally expensive global optimization routines and cannot take advantage of existing measures of identifiability. Despite their difficulty to model mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently

  2. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

    Science.gov (United States)

    Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

    2017-06-01

    There is a lack of reporting of effect sizes and confidence intervals in the current biomedical literature. The objective of this article is to present a discussion of the recent paradigm shift encouraging the reporting of effect sizes and confidence intervals. Although P values help to inform us about whether an observed effect is likely due to chance, effect sizes inform us about the magnitude of the effect (clinical significance), and confidence intervals inform us about the range of plausible estimates for the general population mean (precision). Reporting effect sizes and confidence intervals is a necessary addition to the biomedical literature, and these concepts are reviewed in this article.
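A minimal sketch of reporting an effect size with a confidence interval rather than a bare P value: Cohen's d for two independent groups with an approximate large-sample interval based on the Hedges-Olkin variance formula. The group summaries below are invented for illustration.

```python
import math

def cohens_d_ci(m1, m2, s1, s2, n1, n2, z=1.96):
    """Cohen's d with an approximate large-sample 95% CI
    (normal approximation using the Hedges-Olkin variance of d)."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Approximate standard error of d.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical two-group comparison.
d, (lo, hi) = cohens_d_ci(m1=10.0, m2=8.5, s1=3.0, s2=3.2, n1=40, n2=40)
```

Reporting "d = 0.48, 95% CI [0.04, 0.93]" conveys both magnitude and precision, exactly the pairing the article advocates.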

  3. Graphing within-subjects confidence intervals using SPSS and S-Plus.

    Science.gov (United States)

    Wright, Daniel B

    2007-02-01

    Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
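The article's code targets SPSS and S-Plus; a hedged Python sketch of a closely related approach (the Cousineau normalization with Morey's correction, not necessarily identical to the Loftus-Masson procedure the article implements) looks like this, with entirely made-up data:

```python
import numpy as np
from scipy import stats

# Hypothetical repeated-measures data: rows = subjects, columns = conditions.
data = np.array([[5.0, 6.0, 7.5],
                 [6.5, 7.0, 8.0],
                 [4.0, 5.5, 6.0],
                 [7.0, 7.5, 9.0]])
n, c = data.shape

# Remove between-subject variation: subtract each subject's mean,
# add back the grand mean (condition means are unchanged).
norm = data - data.mean(axis=1, keepdims=True) + data.mean()
sem = norm.std(axis=0, ddof=1) / np.sqrt(n)
sem *= np.sqrt(c / (c - 1))            # Morey (2008) bias correction
t = stats.t.ppf(0.975, n - 1)
half_widths = t * sem                  # within-subjects 95% CI half-widths
```

These half-widths are what one would draw as error bars on a plot of condition means.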

  4. Energy Performance Certificate of building and confidence interval in assessment: An Italian case study

    International Nuclear Information System (INIS)

    Tronchin, Lamberto; Fabbri, Kristian

    2012-01-01

    The Directive 2002/91/CE introduced the Energy Performance Certificate (EPC), an energy policy tool. The aim of the EPC is to inform building buyers about the energy performance and energy costs of buildings. EPCs represent a specific energy policy tool to orient the building sector and real-estate markets toward higher energy efficiency buildings. The effectiveness of the EPC depends on two factors: the accuracy of the energy performance evaluation made by independent experts, and the capability of the energy classification and of the scale of energy performance to control energy index fluctuations. In this paper, the results of a case study located in Italy are shown, in which 162 independent technicians evaluated the energy performance of the same building. The results reveal which part of the confidence intervals is attributable to software misunderstanding, and that the energy classification ranges are able to tolerate the fluctuation of energy indices. The example was chosen in accordance with the legislation of the Emilia-Romagna Region on Energy Efficiency of Buildings. Following these results, some thermo-economic evaluations related to building and energy labelling are illustrated, as the EPC is an energy policy tool for the real-estate market and building sector to find a way to build or retrofit an energy-efficient building. Highlights: evaluation of the accuracy of energy performance of buildings in relation to the knowledge of independent experts; round robin test based on 162 case studies on the confidence intervals expressed by independent experts; statistical considerations on the confidence intervals expressed by independent experts and energy simulation software; relation between the "proper class" in the energy classification of buildings and the confidence intervals of independent experts.

  5. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
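The accuracy-in-parameter-estimation (AIPE) idea of planning sample size for a target confidence interval width can be illustrated for the simplest case, a mean with an assumed standard deviation; this is a generic sketch of the approach, not the authors' reliability-coefficient-specific method.

```python
import math
from scipy import stats

def n_for_ci_width(sd, target_width, conf=0.95):
    """Smallest n whose two-sided t-based CI for a mean has
    expected width <= target_width, given an assumed sd."""
    n = 2
    while True:
        t = stats.t.ppf(0.5 + conf / 2, n - 1)
        if 2 * t * sd / math.sqrt(n) <= target_width:
            return n
        n += 1

# Assumed sd of 1.0; we want the 95% CI to be at most 0.5 units wide.
n = n_for_ci_width(sd=1.0, target_width=0.5)
```

The authors' second method additionally requires the width to be achieved with a stated assurance probability, which would replace the expected-width criterion here with a quantile of the width distribution.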

  6. Confidence intervals for the first crossing point of two hazard functions.

    Science.gov (United States)

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. The proposed procedure and the one based on Cox proportional hazard modeling with Box-Cox transformation of the time to event are both evaluated by Monte-Carlo simulations and applied to two clinical trial datasets.

  7. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  8. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
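The recovered-variance idea can be sketched for one of the functions the abstract names, the upper Bland-Altman limit of agreement (mean + 1.96*sd): form separate CIs for the mean (t-based) and for 1.96*sd (chi-square based), then combine them with a MOVER-style square-and-add step. This is an illustrative reconstruction under stated assumptions, not a verbatim transcription of the paper's formulas.

```python
import math
from scipy import stats

def upper_loa_ci(mean, sd, n, alpha=0.05):
    """Closed-form MOVER-style CI for theta = mean + 1.96*sd,
    built from separate CIs for the mean and for 1.96*sd."""
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    l1, u1 = mean - t * sd / math.sqrt(n), mean + t * sd / math.sqrt(n)
    # Chi-square CI for sd, scaled by 1.96.
    chi_hi = stats.chi2.ppf(1 - alpha / 2, n - 1)
    chi_lo = stats.chi2.ppf(alpha / 2, n - 1)
    l2 = 1.96 * sd * math.sqrt((n - 1) / chi_hi)
    u2 = 1.96 * sd * math.sqrt((n - 1) / chi_lo)
    theta = mean + 1.96 * sd
    # Square-and-add the recovered distances from each point estimate.
    L = theta - math.sqrt((mean - l1) ** 2 + (1.96 * sd - l2) ** 2)
    U = theta + math.sqrt((u1 - mean) ** 2 + (u2 - 1.96 * sd) ** 2)
    return theta, (L, U)

# Hypothetical difference data summary: mean 0.5, sd 2.0, n = 30 pairs.
theta, (L, U) = upper_loa_ci(mean=0.5, sd=2.0, n=30)
```

Everything is closed-form: no simulation or iteration is needed, which is the practical appeal the abstract emphasizes.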

  9. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of this paper was to present the usefulness of the binomial distribution in the study of contingency tables, and the problems of approximating the binomial distribution to normality (its limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in contingency table units based on their mathematical expressions reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of extracting different information from the confidence interval computed by a specified method (the confidence interval boundaries, the percentage of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved by implementing original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for a two-variable expression was proposed and implemented. The graphical representation of an expression of two binomial variables, in which the variation domain of one variable depends on the other, posed a real problem because most software uses interpolation in graphical representation, producing quadratic rather than triangular surface maps. Based on an original algorithm, a module was implemented in PHP to represent triangular surface plots graphically. All the implementations described above were used to compute the confidence intervals and to estimate their performance across binomial distribution sample sizes and variables.
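A standard alternative to the normal (Wald) approximation the abstract criticizes is the Wilson score interval, which stays inside [0, 1] and behaves well for small samples and extreme proportions. A compact sketch (this is the textbook formula, not the paper's PHP implementation):

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion x/n; behaves
    better than the Wald interval near 0 or 1 and for small n."""
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 8 successes out of 10 trials.
lo, hi = wilson_ci(8, 10)
```

Unlike the Wald interval for 8/10, which would extend close to 1.05 at a higher observed proportion, the Wilson limits here are roughly (0.49, 0.94).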

  10. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    Science.gov (United States)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.

  11. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available The assessment of a controlled clinical trial involves interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the treatment outcomes are dichotomous variables. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter that allows the number needed to treat to be computed. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence interval, the method used is the asymptotic one, even though it is well known that this may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. The comparison of methods uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
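One well-known improvement over the asymptotic (Wald) interval for the absolute risk reduction is Newcombe's hybrid score method, which combines the two groups' Wilson intervals with a square-and-add step. A sketch with invented trial counts (this is Newcombe's standard construction, not one of the paper's ADAC methods):

```python
import math

def wilson(x, n, z=1.96):
    """Wilson score interval for a single proportion x/n."""
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def arr_newcombe(xc, nc, xt, nt, z=1.96):
    """ARR = control risk - treatment risk, with Newcombe's hybrid
    score CI built from the two Wilson intervals."""
    pc, pt = xc / nc, xt / nt
    lc, uc = wilson(xc, nc, z)
    lt, ut = wilson(xt, nt, z)
    arr = pc - pt
    L = arr - math.sqrt((pc - lc) ** 2 + (ut - pt) ** 2)
    U = arr + math.sqrt((uc - pc) ** 2 + (pt - lt) ** 2)
    return arr, (L, U)

# Hypothetical trial: 20/100 events under control, 10/100 under treatment.
arr, (L, U) = arr_newcombe(20, 100, 10, 100)
```

The resulting interval respects the [-1, 1] range of a risk difference, which the plain Wald interval does not guarantee.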

  12. Comparing confidence intervals for Goodman and Kruskal’s gamma coefficient

    NARCIS (Netherlands)

    van der Ark, L.A.; van Aert, R.C.M.

    2015-01-01

    This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman–Kruskal CI, the Cliff-consistent CI, the

  13. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    Science.gov (United States)

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
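The same computation the paper walks through in Excel (SE = STDEV.S(range)/SQRT(COUNT(range)); CI = mean plus or minus T.INV.2T(0.05, n-1) * SE) can be mirrored in a few lines of Python; the data values are arbitrary placeholders.

```python
import math
from statistics import mean, stdev
from scipy import stats

# Arbitrary sample data.
data = [4.1, 5.2, 6.3, 5.8, 4.9, 5.5, 6.1, 5.0]
n = len(data)

se = stdev(data) / math.sqrt(n)        # standard error of the mean
t = stats.t.ppf(0.975, n - 1)          # two-tailed 95% critical value
ci = (mean(data) - t * se, mean(data) + t * se)
```

The `t` value here matches Excel's `T.INV.2T(0.05, n-1)`, so the two routes give identical limits.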

  14. The confidence-accuracy relationship for eyewitness identification decisions: Effects of exposure duration, retention interval, and divided attention.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-03-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three, forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relation for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.

  15. Methods for confidence interval estimation of a ratio parameter with application to location quotients

    Directory of Open Access Journals (Sweden)

    Beyene Joseph

    2005-10-01

    Full Text Available Abstract. Background: The location quotient (LQ) ratio, a measure designed to quantify and benchmark the degree of relative concentration of an activity in the analysis of area localization, has received considerable attention in the geographic and economics literature. This index can also naturally be applied in the context of population health to quantify and compare health outcomes across spatial domains. However, one commonly observed limitation of the LQ is its widespread use as only a point estimate, without an accompanying confidence interval. Methods: In this paper we present statistical methods that can be used to construct confidence intervals for location quotients. The delta and Fieller's methods are generic approaches for a ratio parameter, and the generalized linear modelling framework is a useful re-parameterization particularly helpful for generating profile-likelihood-based confidence intervals for the location quotient. A simulation experiment is carried out to assess the performance of each of the analytic approaches, and a health utilization data set is used for illustration. Results: Both the simulation results and the findings from the empirical data show that the different analytical methods produce very similar confidence limits for location quotients. When the incidence of the outcome is not rare and sample sizes are large, the confidence limits are almost indistinguishable. The confidence limits from the generalized linear model approach might be preferable in small-sample situations. Conclusion: The LQ is a useful measure that allows quantification and comparison of health and other outcomes across defined geographical regions. It is a very simple index to compute and has a straightforward interpretation. Reporting this estimate with appropriate confidence limits, using the methods presented in this paper, will make the measure particularly attractive for policy and decision makers.
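A delta-method interval of the kind the abstract mentions can be sketched by treating the LQ as a ratio of two independent binomial proportions and working on the log scale (a Katz-type construction; the counts below are invented, and this is an illustration rather than the paper's exact derivation):

```python
import math

def lq_ci(x1, n1, x2, n2, z=1.96):
    """Delta-method 95% CI for a location quotient on the log scale,
    treating LQ = (x1/n1)/(x2/n2) as a ratio of independent proportions."""
    lq = (x1 / n1) / (x2 / n2)
    # Approximate variance of log(LQ) from the delta method.
    se_log = math.sqrt(1 / x1 - 1 / n1 + 1 / x2 - 1 / n2)
    return lq, (lq * math.exp(-z * se_log), lq * math.exp(z * se_log))

# Hypothetical: 30 events among 500 in a region vs 400 among 10000 nationally.
lq, (lo, hi) = lq_ci(30, 500, 400, 10000)
```

Exponentiating the log-scale limits keeps the interval strictly positive and asymmetric around the point estimate, as a ratio interval should be.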

  16. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  17. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Directory of Open Access Journals (Sweden)

    Tak Fung

    Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  18. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

    In this paper, we derived an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under very realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation at a confidence interval reliability of 95%. We see that as the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
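The interval the abstract describes is, in the simplest case, a normal-approximation CI on the simulated error rate; its half-width shrinks like 1/sqrt(N), which is why more simulation runs pull the estimate toward the true rate. A minimal sketch (the error and run counts are illustrative):

```python
import math

def ber_ci(errors, runs, z=1.96):
    """Normal-approximation 95% CI for a simulated error rate.
    Half-width shrinks like 1/sqrt(runs)."""
    p = errors / runs
    half = z * math.sqrt(p * (1 - p) / runs)
    return p, (p - half, p + half)

# 100 bit errors observed over 10,000 simulated bits.
p, (lo, hi) = ber_ci(100, 10_000)
```

A common stopping rule in such simulations is to keep adding runs until the half-width falls below a target fraction of the estimated rate.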

  19. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

    Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.

  20. Planning an Availability Demonstration Test with Consideration of Confidence Level

    Directory of Open Access Journals (Sweden)

    Frank Müller

    2017-08-01

    Full Text Available The full service life of a technical product or system is usually not completed after an initial failure. With appropriate measures, the system can be returned to a functional state. Availability is an important parameter for evaluating such repairable systems: Failure and repair behaviors are required to determine this availability. These data are usually given as mean value distributions with a certain confidence level. Consequently, the availability value also needs to be expressed with a confidence level. This paper first highlights the bootstrap Monte Carlo simulation (BMCS for availability demonstration and inference with confidence intervals based on limited failure and repair data. The BMCS enables point-, steady-state and average availability to be determined with a confidence level based on the pure samples or mean value distributions in combination with the corresponding sample size of failure and repair behavior. Furthermore, the method enables individual sample sizes to be used. A sample calculation of a system with Weibull-distributed failure behavior and a sample of repair times is presented. Based on the BMCS, an extended, new procedure is introduced: the “inverse bootstrap Monte Carlo simulation” (IBMCS to be used for availability demonstration tests with consideration of confidence levels. The IBMCS provides a test plan comprising the required number of failures and repair actions that must be observed to demonstrate a certain availability value. The concept can be applied to each type of availability and can also be applied to the pure samples or distribution functions of failure and repair behavior. It does not require special types of distribution. In other words, for example, a Weibull, a lognormal or an exponential distribution can all be considered as distribution functions of failure and repair behavior. After presenting the IBMCS, a sample calculation will be carried out and the potential of the BMCS and the IBMCS
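
    The Monte Carlo core of such an availability calculation can be sketched as follows. This is not the BMCS or IBMCS procedure itself: the distribution choices (Weibull failure times, lognormal repair times) and all parameters are illustrative, and the percentile interval is taken over plain Monte Carlo replications rather than bootstrap resamples of observed data.

    ```python
    import random

    def average_availability(rng, n_cycles=200, shape=1.5, scale=1000.0,
                             mu=2.0, sigma=0.5):
        """One Monte Carlo replication: availability = uptime / total time."""
        up = sum(rng.weibullvariate(scale, shape) for _ in range(n_cycles))
        down = sum(rng.lognormvariate(mu, sigma) for _ in range(n_cycles))
        return up / (up + down)

    rng = random.Random(42)
    reps = sorted(average_availability(rng) for _ in range(1000))
    lo, hi = reps[25], reps[974]          # ~95% percentile interval
    print(f"availability ~ {sum(reps) / len(reps):.4f}, "
          f"95% interval ({lo:.4f}, {hi:.4f})")
    ```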

  1. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

    1999-01-01

    Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of 99mTc-DTPA organ imaging agent based on uptake data from 19 subjects is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
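
    The percentile-bootstrap idea at the heart of the method can be sketched in a few lines. The data below are hypothetical stand-ins for the 23-subject measurements, not the paper's values.

    ```python
    import random
    import statistics

    def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for a statistic."""
        rng = random.Random(seed)
        n = len(data)
        # resample the data with replacement, recomputing the statistic each time
        reps = sorted(stat([rng.choice(data) for _ in range(n)])
                      for _ in range(n_boot))
        lo = reps[int(n_boot * alpha / 2)]
        hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
        return lo, hi

    # hypothetical uptake-like values for 23 subjects (not the paper's data)
    sample = [1.8, 2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.3, 1.7, 2.5, 2.1,
              2.2, 1.9, 2.8, 2.0, 2.4, 2.3, 1.6, 2.7, 2.1, 2.2, 2.0, 2.5]
    print(bootstrap_ci(sample))
    ```

    No distributional assumption is needed: the spread of the recomputed statistic across resamples directly approximates its sampling variability.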

  2. Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    4/2010, č. 3 (2010), s. 236-250 ISSN 1802-4696 R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965 Grant - others:GA UK(CZ) 118310 Institutional research plan: CEZ:AV0Z10750506 Keywords : rescaled range analysis * detrended fluctuation analysis * Hurst exponent * long-range dependence Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf

  3. A NEW METHOD FOR CONSTRUCTING CONFIDENCE INTERVAL FOR CPM BASED ON FUZZY DATA

    Directory of Open Access Journals (Sweden)

    Bahram Sadeghpour Gildeh

    2011-06-01

    Full Text Available A measurement control system ensures that measuring equipment and measurement processes are fit for their intended use, which is important in achieving product quality objectives. In most real-life applications, the observations are fuzzy. In some cases the specification limits (SLs) are not precise numbers and are expressed in fuzzy terms, so that the classical capability indices cannot be applied. In this paper we obtain a 100(1 - α)% fuzzy confidence interval for the Cpm fuzzy process capability index, where, instead of precise values, the specification limits are given by two membership functions.

  4. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  5. Confidence interval of intrinsic optimum temperature estimated using thermodynamic SSI model

    Institute of Scientific and Technical Information of China (English)

    Takaya Ikemoto; Issei Kurahashi; Pei-Jian Shi

    2013-01-01

    The intrinsic optimum temperature for the development of ectotherms is one of the most important factors not only for their physiological processes but also for ecological and evolutionary processes. The Sharpe-Schoolfield-Ikemoto (SSI) model succeeded in defining the temperature that can thermodynamically meet the condition that at a particular temperature the probability of an active enzyme reaching its maximum activity is realized. Previously, an algorithm was developed by Ikemoto (Tropical malaria does not mean hot environments. Journal of Medical Entomology, 45, 963-969) to estimate model parameters, but that program was computationally very time consuming. Now, investigators can use the SSI model more easily because a fully automatic computer program was designed by Shi et al. (A modified program for estimating the parameters of the SSI model. Environmental Entomology, 40, 462-469). However, the statistical significance of the point estimate of the intrinsic optimum temperature for each ectotherm has not yet been determined. Here, we provide a new method for calculating the confidence interval of the estimated intrinsic optimum temperature by modifying the approximate bootstrap confidence intervals method. For this purpose, it was necessary to develop a new program for a faster estimation of the parameters in the SSI model, which we have also done.

  6. Confidence intervals for modeling anthocyanin retention in grape pomace during nonisothermal heating.

    Science.gov (United States)

    Mishra, D K; Dolan, K D; Yang, L

    2008-01-01

    Degradation of nutraceuticals in low- and intermediate-moisture foods heated at high temperature (>100 degrees C) is difficult to model because of the nonisothermal condition. Isothermal experiments above 100 degrees C are difficult to design because they require high pressure and small sample size in sealed containers. Therefore, a nonisothermal method was developed to estimate the thermal degradation kinetic parameters of nutraceuticals and determine the confidence intervals for the parameters and the predicted Y (concentration). Grape pomace at 42% moisture content (wb) was heated in sealed 202 x 214 steel cans in a steam retort at 126.7 degrees C for > 30 min. Can center temperature was measured by thermocouple and predicted using Comsol software. Thermal conductivity (k) and specific heat (Cp) were estimated as quadratic functions of temperature using Comsol and nonlinear regression. The k and Cp functions were then used to predict temperature inside the grape pomace during retorting. Similar heating experiments were run at different time-temperature treatments from 8 to 25 min for kinetic parameter estimation. Anthocyanin concentration in the grape pomace was measured using HPLC. The degradation rate constant (k(110 degrees C)) and activation energy (E(a)) were estimated using nonlinear regression. The thermophysical property estimates at 100 degrees C were k = 0.501 W/m degrees C and Cp = 3600 J/kg, and the kinetic parameters were k(110 degrees C) = 0.0607/min and E(a) = 65.32 kJ/mol. The 95% confidence intervals for the parameters and the confidence bands and prediction bands for anthocyanin retention were plotted. These methods are useful for thermal processing design for nutraceutical products.
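
    Given the reported estimates (k(110 degrees C) = 0.0607/min, E(a) = 65.32 kJ/mol), first-order Arrhenius kinetics lets one predict isothermal retention at other temperatures; the sketch below is illustrative only, since the paper's estimation was carried out under nonisothermal conditions.

    ```python
    import math

    R = 8.314          # gas constant, J/(mol K)
    K_REF = 0.0607     # reported rate constant (1/min) at 110 degrees C
    EA = 65_320.0      # reported activation energy, J/mol
    T_REF = 110 + 273.15

    def rate_constant(temp_c):
        """Arrhenius rate constant at temp_c (deg C), referenced to 110 deg C."""
        t = temp_c + 273.15
        return K_REF * math.exp(-EA / R * (1 / t - 1 / T_REF))

    def retention(temp_c, minutes):
        """First-order fraction of anthocyanin remaining after an isothermal hold."""
        return math.exp(-rate_constant(temp_c) * minutes)

    print(f"k(126.7 C) = {rate_constant(126.7):.4f}/min")
    print(f"retention after 10 min at 126.7 C: {retention(126.7, 10):.1%}")
    ```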

  7. Statistical variability and confidence intervals for planar dose QA pass rates

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States) and Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Department of Biostatistics, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Molecular and Cellular Biophysics and Biochemistry, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States) and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

    2011-11-15

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization

  8. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    Science.gov (United States)

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving precision of effect size estimation.

  9. Assessing a disaggregated energy input: using confidence intervals around translog elasticity estimates

    International Nuclear Information System (INIS)

    Hisnanick, J.J.; Kyer, B.L.

    1995-01-01

    The role of energy in the production of manufacturing output has been debated extensively in the literature, particularly its relationship with capital and labor. In an attempt to provide some clarification in this debate, a two-step methodology was used. First under the assumption of a five-factor production function specification, we distinguished between electric and non-electric energy and assessed each component's relationship with capital and labor. Second, we calculated both the Allen and price elasticities and constructed 95% confidence intervals around these values. Our approach led to the following conclusions: that the disaggregation of the energy input into electric and non-electric energy is justified; that capital and electric energy and capital and non-electric energy are substitutes, while labor and electric energy and labor and non-electric energy are complements in production; and that capital and energy are substitutes, while labor and energy are complements. (author)

  10. An SPSS Macro to Compute Confidence Intervals for Pearson’s Correlation

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-04-01

    Full Text Available In many disciplines, including psychology, medical research, epidemiology and public health, authors are required, or at least encouraged, to report confidence intervals (CIs) along with effect size estimates. Many students and researchers in these areas use IBM-SPSS for statistical analysis. Unfortunately, the CORRELATIONS procedure in SPSS does not provide CIs in the output. Various work-around solutions have been suggested for obtaining CIs for rho with SPSS, but most of them have been sub-optimal. Since release 18, it has been possible to compute bootstrap CIs, but only if users have the optional bootstrap module. The !rhoCI macro described in this article is accessible to all SPSS users with release 14 or later. It directs output from the CORRELATIONS procedure to another dataset, restructures that dataset to have one row per correlation, computes a CI for each correlation, and displays the results in a single table. Because the macro uses the CORRELATIONS procedure, it allows users to specify a list of two or more variables to include in the correlation matrix, to choose a confidence level, and to select either listwise or pairwise deletion. Thus, it offers substantial improvements over previous solutions to the problem of how to compute CIs for rho with SPSS.
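
    The macro's internal computation is not shown in the abstract; the standard Fisher z-transform interval, which is presumably what such a macro implements for each correlation, can be sketched as follows.

    ```python
    import math

    def pearson_ci(r, n, conf=0.95):
        """Fisher z-transform confidence interval for a Pearson correlation."""
        if not -1 < r < 1 or n < 4:
            raise ValueError("need -1 < r < 1 and n >= 4")
        z = math.atanh(r)                  # Fisher r -> z
        se = 1 / math.sqrt(n - 3)          # approximate standard error of z
        crit = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[conf]
        lo, hi = z - crit * se, z + crit * se
        return math.tanh(lo), math.tanh(hi)   # back-transform to the r scale

    print(pearson_ci(0.5, 50))
    ```

    The interval is asymmetric around r, as it should be: the transform respects the bounded [-1, 1] scale.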

  11. Computing confidence and prediction intervals of industrial equipment degradation by bootstrapped support vector regression

    International Nuclear Information System (INIS)

    Lins, Isis Didier; Droguett, Enrique López; Moura, Márcio das Chagas; Zio, Enrico; Jacinto, Carlos Magno

    2015-01-01

    Data-driven learning methods for predicting the evolution of the degradation processes affecting equipment are becoming increasingly attractive in reliability and prognostics applications. Among these, we consider here Support Vector Regression (SVR), which has provided promising results in various applications. Nevertheless, the predictions provided by SVR are point estimates whereas in order to take better informed decisions, an uncertainty assessment should be also carried out. For this, we apply bootstrap to SVR so as to obtain confidence and prediction intervals, without having to make any assumption about probability distributions and with good performance even when only a small data set is available. The bootstrapped SVR is first verified on Monte Carlo experiments and then is applied to a real case study concerning the prediction of degradation of a component from the offshore oil industry. The results obtained indicate that the bootstrapped SVR is a promising tool for providing reliable point and interval estimates, which can inform maintenance-related decisions on degrading components. - Highlights: • Bootstrap (pairs/residuals) and SVR are used as an uncertainty analysis framework. • Numerical experiments are performed to assess accuracy and coverage properties. • More bootstrap replications does not significantly improve performance. • Degradation of equipment of offshore oil wells is estimated by bootstrapped SVR. • Estimates about the scale growth rate can support maintenance-related decisions
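
    The pairs-bootstrap scheme can be illustrated compactly. To stay self-contained, the sketch below substitutes an ordinary least-squares line for the SVR model (which would normally come from a library such as scikit-learn); the degradation-like data are synthetic.

    ```python
    import random

    def fit_line(xs, ys):
        """Ordinary least-squares slope/intercept (stand-in for the SVR model)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        return b, my - b * mx

    def pairs_bootstrap_interval(xs, ys, x_new, n_boot=1000, seed=0):
        """Percentile interval for the model estimate at x_new via pairs bootstrap."""
        rng = random.Random(seed)
        n = len(xs)
        preds = []
        for _ in range(n_boot):
            idx = [rng.randrange(n) for _ in range(n)]   # resample (x, y) pairs
            b, a = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
            preds.append(a + b * x_new)
        preds.sort()
        return preds[int(0.025 * n_boot)], preds[int(0.975 * n_boot) - 1]

    # synthetic degradation-like data: damage grows roughly linearly with time
    xs = list(range(10))
    ys = [0.1, 0.9, 2.2, 2.8, 4.1, 5.2, 5.8, 7.1, 8.0, 9.2]
    print(pairs_bootstrap_interval(xs, ys, x_new=12))
    ```

    As in the paper, no distributional assumption is made: the interval comes entirely from refitting the model on resampled pairs.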

  12. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    Science.gov (United States)

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Parameter estimation with confidence intervals and the testing of statistical hypotheses are used in statistical analysis to draw conclusions about a population from an extracted sample. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and hypothesis testing. Whereas statistical hypothesis testing only gives a "yes" or "no" answer to certain questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and places findings in the "marginally significant" or "almost significant" range (p very close to 0.05).

  14. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

    Directory of Open Access Journals (Sweden)

    Eleanor S Devenish Nelson

    Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.

  15. Bayesian-statistical decision threshold, detection limit, and confidence interval in nuclear radiation measurement

    International Nuclear Information System (INIS)

    Weise, K.

    1998-01-01

    When a contribution of a particular type of nuclear radiation is to be detected, for instance a spectral line of interest for some purpose of radiation protection, and when quantities and their uncertainties must be taken into account which, like influence quantities, cannot be determined by repeated measurements or by counting nuclear radiation events, then the conventional statistics of event frequencies is not sufficient for defining the decision threshold, the detection limit, and the limits of a confidence interval. These characteristic limits are therefore redefined on the basis of Bayesian statistics for a wider applicability and in such a way that the usual practice remains as far as possible unaffected. The principle of maximum entropy is applied to establish probability distributions from available information. Quantiles of these distributions are used for defining the characteristic limits. But such a distribution must not be interpreted as a distribution of event frequencies such as the Poisson distribution. It rather expresses the actual state of incomplete knowledge of a physical quantity. The different definitions and interpretations and their quantitative consequences are presented and discussed with two examples. The new approach provides a theoretical basis for the DIN 25482-10 standard presently in preparation for general applications of the characteristic limits. (orig.) [de

  16. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

    Science.gov (United States)

    Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

    2014-03-01

    Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Existence test for asynchronous interval iterations

    DEFF Research Database (Denmark)

    Madsen, Kaj; Caprani, O.; Stauning, Ole

    1997-01-01

    In the search for regions that contain fixed points of a real function of several variables, tests based on interval calculations can be used to establish existence or non-existence of fixed points in regions that are examined in the course of the search. The search can e.g. be performed ... as a synchronous (sequential) interval iteration: In each iteration step all components of the iterate are calculated based on the previous iterate. In this case it is straightforward to base simple interval existence and non-existence tests on the calculations done in each step of the iteration. The search can also ... on the componentwise calculations done in the course of the iteration. These componentwise tests are useful for parallel implementation of the search, since the tests can then be performed local to each processor and only when a test is successful does a processor communicate this result to other processors.

  18. R package to estimate intracluster correlation coefficient with confidence interval for binary data.

    Science.gov (United States)

    Chakraborty, Hrishikesh; Hossain, Akhtar

    2018-03-01

    The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. Several types of ICC estimators and associated confidence intervals (CIs) have been suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators as well as their CIs for binary data and suggested situations where one is advantageous in practical research. The commonly used statistical computing systems currently facilitate estimation of only a very few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways including analysis of variance methods, moments based estimation, direct probabilistic methods, correlation based estimation, and resampling method. The CI of the ICC is estimated using 5 different methods. It also generates cluster binary data using an exchangeable correlation structure. The ICCbin package provides two functions for users. The function rcbin() generates cluster binary data and the function iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection of estimates in the outputs. The R package ICCbin presents very flexible and easy to use ways to generate cluster binary data and to estimate the ICC and its CI for binary responses using different methods. The package ICCbin is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.
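
    One of the classical estimators such a package covers, the one-way ANOVA estimator, can be sketched outside R as well. The fragment below is a generic illustration in Python, not ICCbin's implementation.

    ```python
    def anova_icc(clusters):
        """One-way ANOVA estimator of the ICC for clustered binary data.

        `clusters` is a list of lists of 0/1 responses, one inner list
        per cluster.
        """
        k = len(clusters)
        sizes = [len(c) for c in clusters]
        n_total = sum(sizes)
        grand = sum(sum(c) for c in clusters) / n_total
        # between- and within-cluster sums of squares
        ssb = sum(m * (sum(c) / m - grand) ** 2 for c, m in zip(clusters, sizes))
        ssw = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters)
        msb = ssb / (k - 1)
        msw = ssw / (n_total - k)
        # adjusted average cluster size for unequal cluster sizes
        n0 = (n_total - sum(m ** 2 for m in sizes) / n_total) / (k - 1)
        return (msb - msw) / (msb + (n0 - 1) * msw)

    clusters = [[1, 1, 1, 0], [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
    print(round(anova_icc(clusters), 3))
    ```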

  19. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    Science.gov (United States)

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
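
    For the one-sample case the abstract refers to, the adjusted Wald (Agresti-Coull) interval simply adds z²/2 pseudo-successes and z²/2 pseudo-failures before applying the usual Wald formula. The paired-proportion interval proposed in the article is not reproduced here; the sketch below covers only the established one-sample version.

    ```python
    import math

    def adjusted_wald(successes, n, z=1.96):
        """One-sample adjusted Wald (Agresti-Coull) interval for a proportion."""
        n_adj = n + z ** 2                         # add z^2 pseudo-observations
        p_adj = (successes + z ** 2 / 2) / n_adj   # half successes, half failures
        half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
        return max(0.0, p_adj - half), min(1.0, p_adj + half)

    print(adjusted_wald(0, 20))   # remains sensible even at the boundary
    ```

    Unlike the plain Wald interval, this one does not collapse to zero width when all observations are successes or failures, which is why it is recommended for introductory courses.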

  20. Confidence Testing for Knowledge-Based Global Communities

    Science.gov (United States)

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Shymansky, James A.

    2009-01-01

    This proposal advocates the position that the use of confidence wagering (CW) during testing can predict the accuracy of a student's test answer selection during between-subject assessments. Data revealed female students were more favorable to taking risks when making CW and less inclined toward risk aversion than their male counterparts. Student…

  1. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith
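
    The one-way ANOVA estimator of the intraclass correlation coefficient that underlies this record is straightforward to compute for binary outcomes; a minimal sketch (Smith's standard error and the sub-cluster splitting step are not reproduced here):

```python
import numpy as np

def icc_anova(clusters):
    """One-way ANOVA (analysis of variance) estimator of the ICC.

    `clusters` is a list of 1-D arrays of 0/1 outcomes, one per cluster.
    This is only the point estimator; the confidence interval methods
    compared in the study are not reproduced here.
    """
    k = len(clusters)
    n_i = np.array([len(c) for c in clusters], dtype=float)
    N = n_i.sum()
    grand = np.concatenate(clusters).mean()
    means = np.array([c.mean() for c in clusters])
    # Between- and within-cluster mean squares
    msb = (n_i * (means - grand) ** 2).sum() / (k - 1)
    msw = sum(((c - m) ** 2).sum() for c, m in zip(clusters, means)) / (N - k)
    # Average cluster size adjusted for imbalance
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)
```

    With perfectly homogeneous clusters the estimator returns 1; with within-cluster variation only, it goes slightly negative, as expected for an ANOVA estimator.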

  2. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
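
    The Fisher r → Z transform used here to obtain the confidence interval of r is standard and easy to reproduce; a minimal sketch with illustrative values (not the article's data):

```python
from math import atanh, tanh, sqrt
from statistics import NormalDist

def fisher_r_ci(r, n, conf=0.95):
    """CI for a correlation coefficient via the Fisher r -> Z transform:
    Z = atanh(r) is approximately normal with standard error 1/sqrt(n - 3)."""
    z = atanh(r)
    se = 1.0 / sqrt(n - 3)
    zc = NormalDist().inv_cdf((1 + conf) / 2)
    return tanh(z - zc * se), tanh(z + zc * se)
```

    For example, `fisher_r_ci(0.8, 50)` gives an interval of roughly (0.67, 0.88); note the asymmetry around r = 0.8, which the back-transform produces automatically.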

  3. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    Science.gov (United States)

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  4. Coverage probability of bootstrap confidence intervals in heavy-tailed frequency models, with application to precipitation data

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan

    2010-01-01

    Vol. 101, 3-4 (2010), p. 345-361 ISSN 0177-798X R&D Projects: GA AV ČR KJB300420801 Institutional research plan: CEZ:AV0Z30420517 Keywords: bootstrap * extreme value analysis * confidence intervals * heavy-tailed distributions * precipitation amounts Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 1.684, year: 2010

  5. A computer program (COSTUM) to calculate confidence intervals for in situ stress measurements. V. 1

    International Nuclear Information System (INIS)

    Dzik, E.J.; Walker, J.R.; Martin, C.D.

    1989-03-01

    The state of in situ stress is one of the parameters required both for the design and analysis of underground excavations and for the evaluation of numerical models used to simulate underground conditions. To account for the variability and uncertainty of in situ stress measurements, it is desirable to apply confidence limits to measured stresses. Several measurements of the state of stress along a borehole are often made to estimate the average state of stress at a point. Since stress is a tensor, calculating the mean stress and confidence limits using scalar techniques is inappropriate and incorrect. A computer program has been written to calculate and present the mean principal stresses and the confidence limits for the magnitudes and directions of the mean principal stresses. This report describes the computer program, COSTUM.
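
    The point the report makes, that a stress tensor must be averaged component-wise rather than through its scalar principal values, can be illustrated briefly (a sketch, not the COSTUM program itself):

```python
import numpy as np

def mean_principal_stresses(tensors):
    """Average a set of 3x3 stress tensors component-wise, then take the
    eigenvalues/eigenvectors of the mean tensor to get the mean principal
    stresses and their directions.  Averaging principal-stress magnitudes
    directly ignores orientation, which is the scalar mistake the report
    warns against."""
    mean_t = np.mean(tensors, axis=0)     # component-wise tensor mean
    vals, vecs = np.linalg.eigh(mean_t)   # principal magnitudes and directions
    order = np.argsort(vals)[::-1]        # sigma1 >= sigma2 >= sigma3
    return vals[order], vecs[:, order]
```

    Two measurements diag(3, 2, 1) and diag(1, 2, 3) average to a hydrostatic state diag(2, 2, 2), whereas scalar averaging of sorted principal values would wrongly give (3, 2, 1) again.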

  6. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  7. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper. PMID:25878958

  8. Confidence intervals for effect sizes: compliance and clinical significance in the Journal of Consulting and Clinical Psychology.

    Science.gov (United States)

    Odgaard, Eric C; Fowler, Robert L

    2010-06-01

    In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R², and multivariate modeling). By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests ranging from eta² = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Keselman, 2003). Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.

  9. The best confidence interval of the failure rate and unavailability per demand when few experimental data are available

    International Nuclear Information System (INIS)

    Goodman, J.

    1985-01-01

    Using a few available data the likelihood functions for the failure rate and unavailability per demand are constructed. These likelihood functions are used to obtain likelihood density functions for the failure rate and unavailability per demand. The best (or shortest) confidence intervals for these functions are provided. The failure rate and unavailability per demand are important characteristics needed for reliability and availability analysis. The methods of estimation of these characteristics when plenty of observed data are available are well known. However, on many occasions when we deal with rare failure modes or with new equipment or components for which sufficient experience has not accumulated, we have scarce data where few or zero failures have occurred. In these cases, a technique which reflects exactly our state of knowledge is required. This technique is based on likelihood density function or Bayesian methods depending on the available prior distribution. To extract the maximum amount of information from the data the best confidence interval is determined
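
    The "best" (shortest) interval described here can be sketched for the failure-rate case: with k observed failures in total exposure time T, the normalized likelihood for the rate is a Gamma(k + 1, scale 1/T) density, and the shortest interval at a given confidence level is its highest-density region. A grid-search illustration under those standard assumptions, not the author's exact procedure:

```python
import numpy as np
from scipy.stats import gamma

def shortest_ci_rate(k, T, conf=0.95):
    """Shortest (highest-density) interval for a failure rate lambda,
    given k failures in total exposure time T.  Every interval
    [ppf(a), ppf(a + conf)] has the required coverage; we pick the
    narrowest one by scanning the lower tail probability a."""
    dist = gamma(k + 1, scale=1.0 / T)
    alphas = np.linspace(0.0, 1.0 - conf, 2001)
    lo = dist.ppf(alphas)
    hi = dist.ppf(alphas + conf)
    i = np.argmin(hi - lo)
    return lo[i], hi[i]
```

    With zero failures the likelihood is exponential and the shortest interval starts at zero, i.e. it is one-sided, which matches the intuition for scarce data with no observed failures.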

  10. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    Science.gov (United States)

    Harari, Gil

    2014-01-01

    Statistic significance, also known as p-value, and CI (Confidence Interval) are common statistics measures and are essential for the statistical analysis of studies in medicine and life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article is intended to describe the methodologies, compare between the methods, assert their suitability for the different needs of study results analysis and to explain situations in which each method should be used.

  11. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    Science.gov (United States)

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
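
    The Monte Carlo procedure described, refitting the model to many "virtual" data sets generated from the fitted curve plus noise and reading confidence intervals off the percentiles, can be sketched in Python. The model and data below are hypothetical, standing in for the Excel/SOLVER workflow:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def model(x, a, b):
    """Hypothetical saturating-growth model (not one of the paper's)."""
    return a * (1.0 - np.exp(-b * x))

# Hypothetical experimental data
x = np.linspace(0.1, 10, 30)
y = model(x, 5.0, 0.6) + rng.normal(0, 0.2, x.size)

phat, _ = curve_fit(model, x, y, p0=[1.0, 1.0])
sigma = (y - model(x, *phat)).std(ddof=2)   # residual noise estimate

# Refit 200 'virtual' data sets, mirroring the SOLVER-based procedure
boot = np.array([curve_fit(model, x,
                           model(x, *phat) + rng.normal(0, sigma, x.size),
                           p0=phat)[0]
                 for _ in range(200)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5], axis=0)  # 95% CI per parameter
```

    The percentile spread of the refitted parameters also yields their correlation (via `np.corrcoef(boot.T)`), the other quantity the plain Excel fit could not return.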

  12. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    Science.gov (United States)

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution-of-the-product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended; in contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
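
    Among the interval approaches compared, the bootstrapped percentile CI for the indirect effect a·b is simple to reproduce; a sketch on simulated mediation data (illustrative only, not the simulation design of the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical mediation data X -> M -> Y; true indirect effect = 0.5 * 0.4
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                      # path a: M regressed on X
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]     # path b: Y on M, given X
    return a * b

boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)                     # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])           # percentile CI for a*b
```

    An interval excluding zero is the usual decision rule for a nonzero indirect effect; the BCa variant criticized above differs only in how the percentiles are adjusted.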

  13. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-11

    This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.
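
    The Chinese restaurant process at the heart of the model is easy to sketch: each read joins an existing genome "table" with probability proportional to its current count, or opens a new one with probability proportional to a concentration parameter. A generic CRP sampler (not MetaQuant's implementation):

```python
import random

def crp_assignments(n, alpha, seed=7):
    """Draw table assignments for n customers (reads) from a Chinese
    restaurant process with concentration alpha.  Customer i joins
    table t with probability count[t] / (i + alpha), or a new table
    with probability alpha / (i + alpha).  The rich-get-richer dynamic
    yields the sparse solutions described above."""
    rng = random.Random(seed)
    counts, labels = [], []
    for i in range(n):
        r = rng.random() * (i + alpha)
        acc = 0.0
        for t, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[t] += 1
                labels.append(t)
                break
        else:
            counts.append(1)               # open a new table
            labels.append(len(counts) - 1)
    return labels, counts
```

    With alpha = 1 the expected number of occupied tables grows only logarithmically in n, which is why nearly equivalent reference genomes tend to collapse onto a single representative.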

  14. Technical Report: Benchmarking for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-22

    The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.

  15. Including test errors in evaluating surveillance test intervals

    International Nuclear Information System (INIS)

    Kim, I.S.; Samanta, P.K.; Martorell, S.; Vesely, W.E.

    1991-01-01

    Technical Specifications require surveillance testing to assure that the standby systems important to safety will start and perform their intended functions in the event of a plant abnormality. However, as evidenced by operating experience, surveillance tests may adversely impact safety because of undesirable side effects, such as the initiation of plant transients during testing or the wearing out of safety systems due to testing. This paper first defines the concerns, i.e., the potential adverse effects of surveillance testing, from a risk perspective. Then, we present a methodology to evaluate the risk impact of those adverse effects, focusing on two important kinds: (1) the risk impact of test-caused trips and (2) the risk impact of test-caused equipment wear. The quantitative risk methodology is demonstrated with several surveillance tests conducted at boiling water reactors, such as the tests of the main steam isolation valves, the turbine overspeed protection system, and the emergency diesel generators. We present the results of the risk-effectiveness evaluation of surveillance test intervals, which compares the adverse risk impact of testing with its beneficial risk impact from potential failure detection, along with insights from sensitivity studies.

  16. Weighted profile likelihood-based confidence interval for the difference between two proportions with paired binomial data.

    Science.gov (United States)

    Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C

    2014-07-30

    Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth abbreviated as CIs) for the difference between two proportions in the paired binomial setting using the method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use a weighted likelihood that essentially makes adjustments to the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performance of the proposed CIs with that of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and the Jeffreys interval (cf. Tang et al.) are superior in terms of achieving the nominal level, and in terms of expected lengths they are competitive. Finally, we illustrate the use of the proposed CIs with real-life examples. Copyright © 2014 John Wiley & Sons, Ltd.
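
    The Agresti and Min (2005) style adjustment mentioned here, adding 0.5 to each cell of the 2 × 2 table before a Wald interval for the difference of paired proportions, is short to write down. A sketch of that adjusted Wald interval (not the weighted profile likelihood method the article proposes):

```python
import math
from statistics import NormalDist

def agresti_min_ci(n11, n10, n01, n00, conf=0.95):
    """Adjusted Wald CI for the difference of paired proportions,
    adding 0.5 to each cell of the 2x2 table in the spirit of
    Agresti & Min (2005).  n10 and n01 are the discordant cells."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    a, b, c, d = (v + 0.5 for v in (n11, n10, n01, n00))
    n = a + b + c + d
    p10, p01 = b / n, c / n
    diff = p10 - p01                       # estimate of p1 - p2
    se = math.sqrt((p10 + p01 - diff ** 2) / n)
    return diff - z * se, diff + z * se
```

    Only the discordant cells drive the estimate; a symmetric table (n10 = n01) yields an interval centered exactly at zero.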

  17. The Metamemory Approach to Confidence: A Test Using Semantic Memory

    Science.gov (United States)

    Brewer, William F.; Sampaio, Cristina

    2012-01-01

    The metamemory approach to memory confidence was extended and elaborated to deal with semantic memory tasks. The metamemory approach assumes that memory confidence is based on the products and processes of a completed memory task, as well as metamemory beliefs that individuals have about how their memory products and processes relate to memory…

  18. Testing and qualification of confidence in statistical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Hammouda, N. [Canadian Nuclear Safety Commission (Canada); O' Hagan, A. [Sheffield Univ. (United Kingdom)

    2014-07-01

    tests, but targeted to the context of the particular application and aimed at identifying the domain of validity of the proposed tolerance limit method and algorithm, might provide the necessary confidence in the proposed statistical procedure. The Ontario Power Generation, Bruce Power and AMEC-NSS have supported this work and contributed to the development and execution of the test cases. Their statistical method and results are not, however, discussed in this paper. (author)

  19. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

    Science.gov (United States)

    Brown, Angus M

    2010-04-01

    The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
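
    The workflow described, an overall ANOVA F test followed by pairwise 95% CIs for all differences between means, can be sketched with a Bonferroni adjustment (one of the simpler multiple-comparison options; the spreadsheet template offers several others):

```python
import numpy as np
from itertools import combinations
from scipy import stats

def pairwise_cis(groups, conf=0.95):
    """One-way ANOVA followed by Bonferroni-adjusted pairwise CIs for
    all differences between group means.  `groups` is a list of 1-D
    numpy arrays.  Returns (F, p, {(i, j): (lo, hi)})."""
    F, p = stats.f_oneway(*groups)
    k = len(groups)
    N = sum(len(g) for g in groups)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)
    m = k * (k - 1) // 2                        # number of pairwise comparisons
    tcrit = stats.t.ppf(1 - (1 - conf) / (2 * m), N - k)
    cis = {}
    for i, j in combinations(range(k), 2):
        d = groups[i].mean() - groups[j].mean()
        se = np.sqrt(msw * (1 / len(groups[i]) + 1 / len(groups[j])))
        cis[(i, j)] = (d - tcrit * se, d + tcrit * se)
    return F, p, cis
```

    An interval excluding zero flags that pair of means as significantly different at the family-wise level; Tukey's HSD would give slightly narrower intervals at the same overall error rate.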

  20. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    Science.gov (United States)

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in qualification of biomarker platforms. In recent years, there have been several new methods proposed for construction of CIs for the CCC, but their comprehensive comparison has not been attempted. The methods consisted of the delta method and jackknifing with and without Fisher's Z-transformation, respectively, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from multivariate normal as well as heavier tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning CI to the CCC. When the data are normally distributed, the jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to the nominal in contrast to the others which yielded overly liberal coverage. Approaches based upon the delta method and Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for treatment of insomnia.
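
    The best-performing method in these simulations, jackknifing on Fisher's Z scale ("JZ"), can be sketched for Lin's CCC as follows (a minimal implementation of the usual pseudovalue formulation, for the two-rater case):

```python
import numpy as np
from math import atanh, tanh, sqrt
from scipy import stats

def ccc(x, y):
    """Lin's concordance correlation coefficient (population moments)."""
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (x.var() + y.var() + (mx - my) ** 2)

def ccc_jackknife_z_ci(x, y, conf=0.95):
    """Jackknife CI for the CCC on the Fisher Z scale: leave-one-out
    CCCs are Z-transformed, turned into pseudovalues, and a t-interval
    on the pseudovalue mean is back-transformed."""
    n = len(x)
    z_all = atanh(ccc(x, y))
    z_loo = np.array([atanh(ccc(np.delete(x, i), np.delete(y, i)))
                      for i in range(n)])
    pseudo = n * z_all - (n - 1) * z_loo       # jackknife pseudovalues
    zbar, se = pseudo.mean(), pseudo.std(ddof=1) / sqrt(n)
    tcrit = stats.t.ppf((1 + conf) / 2, n - 1)
    return tanh(zbar - tcrit * se), tanh(zbar + tcrit * se)
```

    The back-transform keeps the interval inside (-1, 1), which is part of why the Z-scale variant achieves better coverage than a raw-scale jackknife.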

  1. Incidence of interval cancers in faecal immunochemical test colorectal screening programmes in Italy.

    Science.gov (United States)

    Giorgi Rossi, Paolo; Carretta, Elisa; Mangone, Lucia; Baracco, Susanna; Serraino, Diego; Zorzi, Manuel

    2018-03-01

    Objective: In Italy, colorectal screening programmes using the faecal immunochemical test from ages 50 to 69 every two years have been in place since 2005. We aimed to measure the incidence of interval cancers in the two years after a negative faecal immunochemical test, and to compare this with the pre-screening incidence of colorectal cancer. Methods: Using data on colorectal cancers diagnosed in Italy from 2000 to 2008 collected by cancer registries in areas with active screening programmes, we identified cases that occurred within 24 months of negative screening tests. We used the number of tests with a negative result as a denominator, grouped by age and sex. Proportional incidence was calculated for the first and second year after screening. Results: Among 579,176 and 226,738 persons with negative test results followed up at 12 and 24 months, respectively, we identified 100 interval cancers in the first year and 70 in the second year. The proportional incidence was 13% (95% confidence interval 10-15) and 23% (95% confidence interval 18-25), respectively. The estimate for the two-year incidence is 18%, which was slightly higher in females (22%; 95% confidence interval 17-26) and for the proximal colon (22%; 95% confidence interval 16-28). Conclusion: The incidence of interval cancers in the two years after a negative faecal immunochemical test in routine population-based colorectal cancer screening was less than one-fifth of the expected incidence. This is direct evidence that the faecal immunochemical test-based screening programme protocol has high sensitivity for cancers that will become symptomatic.
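
    The proportional incidence reported here is an observed/expected ratio; a standard way to attach a CI (the paper does not state its exact method) is an exact Poisson interval for the observed count, illustrated with the first-year figures from the abstract:

```python
from scipy.stats import chi2

def proportional_incidence_ci(observed, expected, conf=0.95):
    """Observed/expected ratio with an exact Poisson CI for the
    observed count, via the chi-square link to the Poisson tails."""
    a = 1 - conf
    lo = chi2.ppf(a / 2, 2 * observed) / 2 if observed > 0 else 0.0
    hi = chi2.ppf(1 - a / 2, 2 * (observed + 1)) / 2
    return observed / expected, lo / expected, hi / expected

# First-year figures from the abstract: 100 interval cancers against an
# expected count of roughly 100 / 0.13 ≈ 769 (back-calculated, hypothetical)
```

    With 100 observed against an expected 769, the ratio is 13% with an exact interval of roughly 11-16%, consistent in magnitude with the published 10-15%.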

  2. User guide to the UNC1NLI1 package and three utility programs for computation of nonlinear confidence and prediction intervals using MODFLOW-2000

    DEFF Research Database (Denmark)

    Christensen, Steen; Cooley, R.L.

    a model (for example when using the Parameter-Estimation Process of MODFLOW-2000) it is advantageous to also use regression-based methods to quantify uncertainty. For this reason the UNC Process computes (1) confidence intervals for parameters of the Parameter-Estimation Process and (2) confidence...

  3. Determination and Interpretation of Characteristic Limits for Radioactivity Measurements: Decision Threshold, Detection Limit and Limits of the Confidence Interval

    International Nuclear Information System (INIS)

    2017-01-01

    Since 2004, the environment programme of the IAEA has included activities aimed at developing a set of procedures for analytical measurements of radionuclides in food and the environment. Reliable, comparable and fit for purpose results are essential for any analytical measurement. Guidelines and national and international standards for laboratory practices to fulfil quality assurance requirements are extremely important when performing such measurements. The guidelines and standards should be comprehensive, clearly formulated and readily available to both the analyst and the customer. ISO 11929:2010 is the international standard on the determination of the characteristic limits (decision threshold, detection limit and limits of the confidence interval) for measuring ionizing radiation. For nuclear analytical laboratories involved in the measurement of radioactivity in food and the environment, robust determination of the characteristic limits of radioanalytical techniques is essential with regard to national and international regulations on permitted levels of radioactivity. However, characteristic limits defined in ISO 11929:2010 are complex, and the correct application of the standard in laboratories requires a full understanding of various concepts. This publication provides additional information to Member States in the understanding of the terminology, definitions and concepts in ISO 11929:2010, thus facilitating its implementation in Member State laboratories.
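
    For the simple gross/background counting measurement, the ISO 11929 decision threshold and detection limit reduce to short formulas, with the detection limit found by iteration. A textbook-style sketch (an illustration of the concepts, not a substitute for the standard):

```python
import math

def characteristic_limits(n0, t0, tg, k=1.645):
    """Decision threshold y* and detection limit y# for a net count
    rate from a gross/background counting measurement, following the
    ISO 11929 scheme with k = k_{1-alpha} = k_{1-beta} (1.645 for 5%
    error probabilities of the first and second kind).

    n0: background counts, t0: background counting time,
    tg: gross counting time."""
    r0 = n0 / t0                                  # background count rate
    u0 = math.sqrt(r0 / tg + r0 / t0)             # u(y) at true net rate y = 0
    y_star = k * u0                               # decision threshold
    y = 2 * y_star                                # detection limit: fixed-point
    for _ in range(50):                           # y# = y* + k * u(y#)
        y = y_star + k * math.sqrt((y + r0) / tg + r0 / t0)
    return y_star, y
```

    Because the uncertainty grows with the true net rate, the detection limit always comes out somewhat larger than twice the decision threshold.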

  4. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    Science.gov (United States)

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
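
    The bootstrap SNR-CI idea, resampling trials, averaging, and taking the lower percentile of the resulting SNR distribution, can be sketched as follows (the exact SNR definition below is an assumption for illustration, not necessarily the authors'):

```python
import numpy as np

rng = np.random.default_rng(3)

def snr_lower_bound(trials, sig, base, n_boot=1000, conf=0.95):
    """Bootstrap lower confidence bound on ERP signal-to-noise.

    trials: (n_trials, n_samples) array; sig and base are slices for
    the signal window and pre-stimulus baseline.  SNR here is the RMS
    of the averaged signal window over the SD of the averaged baseline
    (a hypothetical definition for this sketch)."""
    n = trials.shape[0]
    snrs = np.empty(n_boot)
    for b in range(n_boot):
        erp = trials[rng.integers(0, n, n)].mean(axis=0)   # resampled average
        snrs[b] = np.sqrt((erp[sig] ** 2).mean()) / erp[base].std()
    return np.percentile(snrs, 100 * (1 - conf))           # the SNR lower bound
```

    Excluding a subject then reduces to a single comparison of this lower bound against a pre-registered SNR criterion, replacing visual inspection.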

  5. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions

    Directory of Open Access Journals (Sweden)

    Rik Crutzen

    2017-07-01

    Full Text Available When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

  6. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions.

    Science.gov (United States)

    Crutzen, Rik; Peters, Gjalt-Jorn Ygram; Noijen, Judith

    2017-01-01

    When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

  7. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

    Science.gov (United States)

    Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

    2011-10-01

    Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. One possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is passive microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized, rectangular waveguide antenna operating within the 1-4 GHz range and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration of the five microwave receivers, (2) a measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, the difference between the temperature estimated by this system and that measured by a thermocouple at a depth of 5 cm, was about 2°C. This result is not yet satisfactory for clinical application, where both precision and accuracy at a depth of 5 cm must be better than 1°C. Since a couple of possible causes for this inaccuracy have been identified, we believe the system can take a step closer to the clinical application of MWR for hypothermic rescue treatment.

  8. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Science.gov (United States)

    Steve P. Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations. To develop these confidence bounds and test, we first establish that estimators based on Newton steps from n-...

  10. Evaluation of test intervals strategies with a risk monitor

    International Nuclear Information System (INIS)

    Soerman, J.

    2005-01-01

    The Swedish nuclear power utility Oskarshamn Power Group (OKG) is investigating how the use of a risk monitor can facilitate and improve risk-informed decision-making at its nuclear power plants. The intent is to evaluate whether risk-informed decision-making can be accepted. A pilot project was initiated and carried out in 2004. The project included investigating whether a risk monitor can be used for optimising test intervals for diesel and gas turbine generators with regard to risk level. The Oskarshamn 2 (O2) PSA Level 1 model was converted into a risk monitor using the RiskSpectrum RiskWatcher (RSRW) software. The converted PSA model included the complete PSA model for the power operation mode. RSRW then performs a complete requantification for every analysis. Time-dependent reliability data are taken into account, i.e., a shorter test interval increases a component's availability (e.g., the probability to start on demand). The converted O2 model was then used to investigate whether it would be possible to balance longer test intervals for the diesel generators, gas turbine generators and high pressure injection system against shorter test intervals for the low pressure injection system, while maintaining a low risk level at the plant. The results show that a new mixture of test intervals can be implemented with only marginal changes in the risk calculated with the risk monitor model. The results indicate that the total number of test activities for the systems included in the pilot study could be reduced by 20% with a maintained level of risk. A risk monitor that takes into account the impact of test intervals in availability calculations for components is well suited for evaluating test interval strategies. It also enables the analyst to evaluate the risk level over a period of time, including the impact the actual status of the plant may have on the risk level. (author)

  11. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

    International Nuclear Information System (INIS)

    Veglia, A.

    1981-08-01

    In cases where sets of data are obviously not normally distributed, a nonparametric method for estimating a confidence interval for the mean seems more suitable than other methods because it requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n ≥ 10
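    The binomial-based step can be illustrated with the classical distribution-free interval built from order statistics. A minimal sketch follows (the report's exact procedure may differ, and the order-statistic interval strictly targets the median rather than the mean):

    ```python
    import math

    def binom_cdf(k, n, p=0.5):
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    def median_ci(data, conf=0.95):
        """Distribution-free confidence interval for the population median,
        built from order statistics and the Binomial(n, 1/2) distribution."""
        x = sorted(data)
        n = len(x)
        alpha = 1.0 - conf
        # smallest k such that P(Bin(n, 1/2) <= k) exceeds alpha/2;
        # the interval (x_(k), x_(n-k+1)) then covers the median with
        # probability 1 - 2*P(Bin(n, 1/2) <= k - 1) >= conf
        k = 0
        while binom_cdf(k, n) <= alpha / 2:
            k += 1
        if k == 0:
            raise ValueError("sample too small for the requested confidence level")
        coverage = 1.0 - 2.0 * binom_cdf(k - 1, n)
        return x[k - 1], x[n - k], coverage
    ```

    For n = 10 this yields the interval between the 2nd-smallest and 2nd-largest observations, with 97.85% coverage, which is why such methods require samples of at least moderate size.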

  12. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

    Science.gov (United States)

    Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-01-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

  13. Development and Evaluation of a Confidence-Weighting Computerized Adaptive Testing

    Science.gov (United States)

    Yen, Yung-Chin; Ho, Rong-Guey; Chen, Li-Ju; Chou, Kun-Yi; Chen, Yan-Lin

    2010-01-01

    The purpose of this study was to examine whether the efficiency, precision, and validity of computerized adaptive testing (CAT) could be improved by assessing confidence differences in knowledge that examinees possessed. We proposed a novel polytomous CAT model called the confidence-weighting computerized adaptive testing (CWCAT), which combined a…

  14. 46 CFR 57.06-2 - Production test plate interval of testing.

    Science.gov (United States)

    2010-10-01

    46 CFR 57.06-2 (Shipping, vol. 2, revised 2010-10-01), Welding and Brazing, Production Tests: Production test plate interval of testing. (a) At least one... (1) When the extent of welding on a single vessel exceeds 50 lineal feet of either or both...

  15. Effects of Forgetting Phenomenon on Surveillance Test Interval

    International Nuclear Information System (INIS)

    Lee, Ho-Joong; Jang, Seung-Cheol

    2007-01-01

    Technical Specifications (TS) requirements for nuclear power plants (NPPs) define Surveillance Requirements (SRs) to assure safety during operation. SRs include surveillance test intervals (STIs), and the optimization of STIs is one of the main issues in risk-informed applications. Surveillance tests are required in NPPs to detect failures in standby equipment and thereby assure its availability in an accident. However, operating experience suggests that, in addition to the beneficial effect of detecting latent faults, the tests may also have adverse effects on plant operation or equipment, e.g., plant transients caused by the test and wear-out of safety system equipment due to repeated testing. Recent studies have quantitatively evaluated both the beneficial and adverse effects of testing to decide on an acceptable test interval. The purpose of this research is to investigate the effects of the forgetting phenomenon on the STI. It is a fundamental human characteristic that a person engaged in a repetitive task will improve his performance over time. This learning phenomenon is observed as a decrease in operation time per unit as operators gain experience by performing additional tasks. However, once there is a break of sufficient length, forgetting starts to take place. In surveillance tests, the most common factor determining the amount of forgetting is the length of the STI: the longer the STI, the greater the amount of forgetting

  16. Perpetrator admissions and earwitness renditions: the effects of retention interval and rehearsal on accuracy of and confidence in memory for criminal accounts

    OpenAIRE

    Boydell, Carroll

    2008-01-01

    While much research has explored how well earwitnesses can identify the voice of a perpetrator, little research has examined how well they can recall details from a perpetrator’s confession. This study examines the accuracy-confidence correlation for memory for details from a perpetrator’s verbal account of a crime, as well as the effects of two variables commonly encountered in a criminal investigation (rehearsal and length of retention interval) on that correlation. Results suggest that con...

  17. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

    Science.gov (United States)

    Traversaro, Francisco; O. Redelico, Francisco

    2018-04-01

    In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. However, there is still no known method to determine the accuracy of this measure, and there has been little research on the statistical properties of this quantity as a characterization of time series. The literature describes some resampling methods for quantities used in nonlinear dynamics, such as the largest Lyapunov exponent, but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes, the 1/fα noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been used extensively in epilepsy research to detect dynamical changes in the electroencephalogram (EEG) signal, with no consideration of the variability of this complexity measure. As an application, the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
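    A parametric bootstrap of this kind can be sketched as follows (a simplified illustration: ordinal patterns are resampled from their estimated multinomial frequencies, which ignores the serial dependence of overlapping windows; the paper's actual procedure may differ):

    ```python
    import math
    import random
    from collections import Counter

    def ordinal_patterns(x, m):
        """Ordinal pattern (rank order) of each length-m window of x."""
        return [tuple(sorted(range(m), key=lambda j: x[i + j]))
                for i in range(len(x) - m + 1)]

    def _norm_entropy(counts, n, m):
        h = -sum(c / n * math.log(c / n) for c in counts.values())
        return h / math.log(math.factorial(m))   # normalised to [0, 1]

    def perm_entropy(x, m=3):
        pats = ordinal_patterns(x, m)
        return _norm_entropy(Counter(pats), len(pats), m)

    def perm_entropy_ci(x, m=3, reps=1000, conf=0.95, seed=0):
        """Parametric bootstrap CI: resample symbol sequences from the
        estimated pattern frequencies and take percentiles of the
        Permutation Entropy estimator."""
        rng = random.Random(seed)
        pats = ordinal_patterns(x, m)
        n = len(pats)
        freq = Counter(pats)
        symbols, weights = list(freq), list(freq.values())
        stats = sorted(
            _norm_entropy(Counter(rng.choices(symbols, weights=weights, k=n)), n, m)
            for _ in range(reps))
        return stats[int((1 - conf) / 2 * reps)], stats[int((1 + conf) / 2 * reps) - 1]
    ```

    A monotone series has a single ordinal pattern and hence zero entropy; noisy series give values closer to 1, and the bootstrap interval quantifies the estimator's variability.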

  18. Risk based test interval and maintenance optimisation - Application and uses

    International Nuclear Information System (INIS)

    Sparre, E.

    1999-10-01

    The project is part of an IAEA Co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk-based optimisation of the technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, those optimisations did not include any in-depth check of the result's sensitivity with regard to methods, model completeness, etc. Four different test intervals have been investigated in this study. Aside from the original, nominal optimisation, a set of sensitivity analyses has been performed and the results from these analyses have been compared to the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any firm conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration. Deterministic uncertainties also seem to have a large effect on the result of an optimisation. The sensitivity to failure data uncertainties is important to investigate in detail, since the methodology is based on the assumption that the unavailability of a component depends on the length of the test interval

  19. Confidence Testing of Shell 405 and S-405 Catalysts in a Monopropellant Hydrazine Thruster

    Science.gov (United States)

    McRight, Patrick; Popp, Chris; Pierce, Charles; Turpin, Alicia; Urbanchock, Walter; Wilson, Mike

    2005-01-01

    As part of the transfer of catalyst manufacturing technology from Shell Chemical Company (Shell 405 catalyst manufactured in Houston, Texas) to Aerojet (S-405 manufactured in Redmond, Washington), Aerojet demonstrated the equivalence of S-405 and Shell 405 at beginning of life. Some US aerospace users expressed a desire to conduct a preliminary confidence test to assess end-of-life characteristics for S-405. NASA Marshall Space Flight Center (MSFC) and Aerojet entered a contractual agreement in 2004 to conduct a confidence test using a pair of 0.2-lbf MR-103G monopropellant hydrazine thrusters, comparing S-405 and Shell 405 side by side. This paper summarizes the formulation of this test program, explains the test matrix, describes the progress of the test, and analyzes the test results. This paper also includes a discussion of the limitations of this test and the ramifications of the test results for assessing the need for future qualification testing in particular hydrazine thruster applications.

  20. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

    Science.gov (United States)

    Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

    2015-09-01

    This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.
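    The ensemble idea behind eSVR can be illustrated generically, substituting a plain least-squares line for the SVR base model (a sketch only; the paper's boosting-type ensemble differs in detail):

    ```python
    import random

    def fit_line(xs, ys):
        """Ordinary least-squares fit of y = a + b*x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return my - b * mx, b

    def ensemble_interval(xs, ys, x_new, reps=500, conf=0.95, seed=0):
        """Percentile interval for the prediction at x_new, obtained by
        refitting the base model on bootstrap resamples of the data."""
        rng = random.Random(seed)
        n = len(xs)
        preds = []
        for _ in range(reps):
            idx = [rng.randrange(n) for _ in range(n)]
            a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
            preds.append(a + b * x_new)
        preds.sort()
        return (preds[int((1 - conf) / 2 * reps)],
                preds[int((1 + conf) / 2 * reps) - 1])
    ```

    The spread of the ensemble's predictions at a new point serves as its confidence interval; with SVR as the base model this is the eSVR construction in outline.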

  1. Does interaction matter? Testing whether a confidence heuristic can replace interaction in collective decision-making.

    Science.gov (United States)

    Bang, Dan; Fusaroli, Riccardo; Tylén, Kristian; Olsen, Karsten; Latham, Peter E; Lau, Jennifer Y F; Roepstorff, Andreas; Rees, Geraint; Frith, Chris D; Bahrami, Bahador

    2014-05-01

    In a range of contexts, individuals arrive at collective decisions by sharing confidence in their judgements. This tendency to evaluate the reliability of information by the confidence with which it is expressed has been termed the 'confidence heuristic'. We tested two ways of implementing the confidence heuristic in the context of a collective perceptual decision-making task: either directly, by opting for the judgement made with higher confidence, or indirectly, by opting for the faster judgement, exploiting an inverse correlation between confidence and reaction time. We found that the success of these heuristics depends on how similar individuals are in terms of the reliability of their judgements and, more importantly, that for dissimilar individuals such heuristics are dramatically inferior to interaction. Interaction allows individuals to alleviate, but not fully resolve, differences in the reliability of their judgements. We discuss the implications of these findings for models of confidence and collective decision-making. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Optimization of Allowed Outage Time and Surveillance Test Intervals

    Energy Technology Data Exchange (ETDEWEB)

    Al-Dheeb, Mujahed; Kang, Sunkoo; Kim, Jonghyun [KEPCO international nuclear graduate school, Ulsan (Korea, Republic of)

    2015-10-15

    The primary purpose of surveillance testing is to assure that the components of standby safety systems will be operable when they are needed in an accident. By testing these components, failures can be detected that may have occurred since the last test or the time when the equipment was last known to be operational. The probability that a system or component performs a specified function or mission under given conditions at a prescribed time is called availability (A). Unavailability (U), as a risk measure, is simply the complementary probability to A(t); an increase in U means the risk is increased as well. The allowed outage time (D) and the surveillance test interval (T) have an important impact on component, and hence system, unavailability. Extending D lengthens the maintenance duration distributions for at-power operations, which in turn increases the unavailability due to maintenance in the systems analysis. As for T, overly frequent surveillances can result in high system unavailability, because the system may be taken out of service often due to the surveillance itself and due to the repair of test-caused failures of the component; the test-caused failures include those incurred by wear and tear of the component due to the surveillances. On the other hand, as the surveillance interval increases, the component's unavailability grows because of increased occurrences of time-dependent random failures; the component then cannot be relied upon, and accordingly the system unavailability increases. Thus, there should be an optimal component surveillance interval in terms of the corresponding system availability. This paper aims at finding the optimal T and D which result in minimum unavailability, which in turn reduces the risk. The methodology in section 2 is applied to find the optimal values of T and D for two components, i.e., the safety injection pump (SIP) and the turbine-driven auxiliary feedwater pump (TDAFP). Section 4 addresses the interaction between D and T. In general
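    The trade-off described in the abstract, test-caused unavailability versus undetected random failures, can be illustrated with a textbook-style model (hypothetical parameters, not the paper's model):

    ```python
    import math

    # Hypothetical illustration of the trade-off, not the paper's model.
    lam = 1e-5   # rate of time-dependent random failures per hour (assumed)
    tau = 2.0    # downtime per test, in hours (assumed)

    def unavailability(T):
        """Time-averaged unavailability for test interval T (hours):
        lam*T/2 from failures that remain undetected between tests,
        tau/T from taking the component out of service for the test."""
        return lam * T / 2 + tau / T

    # Setting du/dT = lam/2 - tau/T**2 = 0 gives the optimal interval:
    T_opt = math.sqrt(2 * tau / lam)   # ≈ 632 hours for these assumed values
    ```

    Too-frequent testing is dominated by the tau/T term and too-rare testing by the lam*T/2 term, which is exactly why an optimum T exists.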

  4. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives: Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods: The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the score CI (as implemented in StatXact, SAS, and the R PropCIs package), and (4) the bootstrapping technique. Results: The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0, 0.073), StatXact (0, 0.064), SAS score method (0, 0.057), R PropCIs (0, 0.062), and bootstrap (0, 0.048). Similar trends were observed for other sample sizes. Conclusions: When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
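    The described procedure can be sketched in a few lines (an illustrative re-implementation with assumed details, not the bootLR package itself):

    ```python
    import random

    def neg_lr_bootstrap_ci(n_disease, n_healthy, specificity, reps=5000,
                            conf=0.95, seed=42):
        """Bootstrap CI for the negative likelihood ratio when observed
        sensitivity is 100%. A sketch of the approach described above;
        the published bootLR R package should be preferred in practice."""
        rng = random.Random(seed)
        # Lowest population sensitivity whose median sample result is 100%
        # sensitivity: solve p**n = 0.5. For n = 100 this gives 99.31%,
        # matching the figure quoted in the abstract.
        p_sens = 0.5 ** (1.0 / n_disease)
        lrs = []
        for _ in range(reps):
            tp = sum(rng.random() < p_sens for _ in range(n_disease))
            tn = sum(rng.random() < specificity for _ in range(n_healthy))
            sens = tp / n_disease
            spec = max(tn / n_healthy, 1e-9)   # guard against zero specificity
            lrs.append((1.0 - sens) / spec)
        lrs.sort()
        lo = lrs[int((1 - conf) / 2 * reps)]
        hi = lrs[min(reps - 1, int((1 + conf) / 2 * reps))]
        return lo, hi
    ```

    Because roughly half of the resamples still show 100% sensitivity, the lower bound is 0 and the interval's information is carried by the upper bound.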

  5. The impact of communication barriers on diagnostic confidence and ancillary testing in the emergency department.

    Science.gov (United States)

    Garra, Gregory; Albino, Hiram; Chapman, Heather; Singer, Adam J; Thode, Henry C

    2010-06-01

    Communication barriers (CBs) compromise the diagnostic power of the medical interview and may result in increased reliance on diagnostic tests or incorrect test ordering. The prevalence and degree to which these barriers affect diagnosis, testing, and treatment are unknown. Our objective was to quantify and characterize CBs encountered in the Emergency Department (ED), and to assess the effect of CBs on initial diagnosis and perceived reliance on ancillary testing. This was a prospective survey completed by emergency physicians after initial adult patient encounters. CB severity, diagnostic confidence, and reliance on ancillary testing were quantified on a 100-mm Visual Analog Scale (VAS) from least (0) to most (100). Data were collected on 417 ED patient encounters. CBs were reported in 46%, with a mean severity of 50 mm on a 100-mm VAS with endpoints of "perfect communication" and "no communication." Language was the most commonly reported form of CB (28%). More than one CB was identified in 6%. The 100-mm VAS rating of diagnostic confidence was lower in patients with perceived CBs (64 mm) vs. those without CBs (80 mm). Communication barriers in our ED setting were common, and resulted in lower diagnostic confidence and an increased perception that ancillary tests are needed to narrow the diagnosis. Copyright 2010 Elsevier Inc. All rights reserved.

  6. Integration testing through reusing representative unit test cases for high-confidence medical software.

    Science.gov (United States)

    Shin, Youngsul; Choi, Yunja; Lee, Woo Jin

    2013-06-01

    As medical software grows larger, more complex, and more connected with other devices, finding faults in integrated software modules becomes more difficult and time consuming. Existing integration testing typically takes a black-box approach, which treats the target software as a black box and selects test cases without considering the internal behavior of each software module. Though it can be cost-effective, this black-box approach cannot thoroughly test interaction behavior among integrated modules and might leave critical faults undetected, which should not happen in safety-critical systems such as medical software. This work argues that information on internal behavior is necessary even for integration testing to define thorough test cases for critical software, and proposes a new integration testing method that reuses test cases from unit testing. The goal is to provide a cost-effective method to detect subtle interaction faults at the integration testing phase by reusing the knowledge obtained from the unit testing phase. The suggested approach notes that the test cases for unit testing include knowledge on the internal behavior of each unit, and extracts test cases for integration testing from the unit test cases for given test criteria. The extracted representative test cases are connected with the functions under test using the state domain, and a single test sequence covering the test cases is produced. By reusing unit test cases, the tester has effective test cases to examine diverse execution paths and find interaction faults without analyzing complex modules. The produced test sequence can achieve test coverage as high as the unit testing coverage, and its length is close to that of optimal test sequences. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Limited test data: The choice between confidence limits and inverse probability

    International Nuclear Information System (INIS)

    Nichols, P.

    1975-01-01

    For a unit which has been successfully designed to a high standard of reliability, any test programme of reasonable size will result in only a small number of failures. In these circumstances the failure rate estimated from the tests will depend on the statistical treatment applied. When a large number of units is to be manufactured, an unexpectedly high failure rate will certainly result in a large number of failures, so it is necessary to guard against optimistic, unrepresentative test results by using a confidence limit approach. If only a small number of production units is involved, failures may not occur even with a higher than expected failure rate, and so one may be able to accept a method which allows for the possibility of either optimistic or pessimistic test results; in this case an inverse probability approach, based on Bayes' theorem, might be used. The paper first draws attention to an apparently significant difference in the numerical results from the two methods, particularly for the overall probability of several units arranged in redundant logic. It then discusses a possible objection to the inverse method, followed by a demonstration that, for a large population and a very reasonable choice of prior probability, the inverse probability and confidence limit methods give the same numerical result. Finally, it is argued that a confidence limit approach is over-pessimistic when a small number of production units is involved, and that both methods give the same answer for a large population. (author)
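    The contrast between the two treatments can be illustrated for the simplest zero-failure case (a sketch under assumed models, not the paper's calculations):

    ```python
    # Zero-failure reliability demonstration: n units tested, none fail.
    def classical_lower_bound(n, conf=0.95):
        """Classical lower confidence bound on reliability R:
        P(all n succeed | R) = R**n, so solve R**n = 1 - conf."""
        return (1 - conf) ** (1.0 / n)

    def bayes_lower_bound(n, conf=0.95):
        """Bayesian bound with a uniform prior on R: the posterior after
        n successes is Beta(n + 1, 1), whose CDF is R**(n + 1)."""
        return (1 - conf) ** (1.0 / (n + 1))

    print(classical_lower_bound(20))   # ≈ 0.861
    print(bayes_lower_bound(20))       # ≈ 0.867
    ```

    With a uniform prior the Bayesian bound is only slightly less pessimistic, and the two bounds converge as n grows, echoing the paper's demonstration that the approaches agree for large populations.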

  8. Downside Business Confidence Spillovers in Europe: Evidence from Causality-in-Risk Tests

    OpenAIRE

    Atukeren, Erdal; Cevik, Emrah Ismail; Korkmaz, Turhan

    2015-01-01

    This paper employs Hong et al.’s (2009) extreme risk spillovers test to investigate the bilateral business confidence spillovers between Greece, Italy, Spain, Portugal, France, and Germany. After controlling for domestic economic developments in each country and common international factors, downside risk spillovers are detected as a causal feedback between Spain and Portugal and unilaterally from Spain to Italy. Extremely low business sentiments in France, Germany, and Greece are mostly due ...

  9. Optimal Testing Intervals in the Squatting Test to Determine Baroreflex Sensitivity

    OpenAIRE

    Ishitsuka, S.; Kusuyama, N.; Tanaka, M.

    2014-01-01

    The recently introduced “squatting test” (ST) utilizes a simple postural change to perturb the blood pressure and to assess baroreflex sensitivity (BRS). In our study, we estimated the reproducibility of and the optimal testing interval between the STs in healthy volunteers. Thirty-four subjects free of cardiovascular disorders and taking no medication were instructed to perform the repeated ST at 30-sec, 1-min, and 3-min intervals in duplicate in a random sequence, while the systolic blood p...

  10. AlphaCI: un programa de cálculo de intervalos de confianza para el coeficiente alfa de Cronbach AlphaCI: a computer program for computing confidence intervals around Cronbach's alfa coefficient

    Directory of Open Access Journals (Sweden)

    Rubén Ledesma

    2004-06-01

    Full Text Available El coeficiente alfa de Cronbach es el modo más habitual de estimar la fiabilidad de pruebas basadas en Teoría Clásica de los Test. En dicha estimación, los investigadores usualmente omiten informar intervalos de confianza para el coeficiente, un aspecto no solo recomendado por los especialistas, sino también requerido explícitamente en las normas editoriales de algunas revistas especializadas. Esta situación puede atribuirse a que los métodos de estimación de intervalos de confianza son poco conocidos, además de no estar disponibles al usuario en los programas estadísticos más populares. Así, en este trabajo se presenta un programa desarrollado dentro del sistema estadístico ViSta que permite calcular intervalos de confianza basados en el enfoque clásico y mediante la técnica bootstrap. Se espera promover la inclusión de intervalos de confianza para medidas de fiabilidad, facilitando el acceso a las herramientas necesarias para su aplicación. El programa es gratuito y puede obtenerse enviando un mail de solicitud al autor del trabajo. Cronbach's alpha coefficient is the most popular way of estimating reliability in measurement scales based on Classical Test Theory. When estimating it, researchers usually omit to report confidence intervals for this coefficient, even though doing so is not only recommended by experts but also explicitly required by some journals' guidelines. This situation arises because the different methods of estimating confidence intervals are not well known to researchers and are not available in the most popular statistical packages. This paper therefore describes a computer program integrated into the ViSta statistical system which allows computing confidence intervals based on the classical approach and using the bootstrap technique. It is hoped that this work promotes the inclusion of confidence intervals for reliability measures by increasing the availability of the required computer tools. The program is free and
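    The bootstrap approach the program implements can be sketched in plain Python (an illustration, not the ViSta/AlphaCI code; `items` holds one score list per item, one entry per respondent):

    ```python
    import random

    def cronbach_alpha(items):
        """Cronbach's alpha from a list of item-score lists."""
        k, n = len(items), len(items[0])
        def var(xs):
            m = sum(xs) / len(xs)
            return sum((v - m) ** 2 for v in xs) / (len(xs) - 1)
        total = [sum(it[j] for it in items) for j in range(n)]
        return k / (k - 1) * (1 - sum(var(it) for it in items) / var(total))

    def alpha_bootstrap_ci(items, reps=2000, conf=0.95, seed=0):
        """Percentile bootstrap CI for alpha, resampling respondents."""
        rng = random.Random(seed)
        n = len(items[0])
        stats = []
        for _ in range(reps):
            idx = [rng.randrange(n) for _ in range(n)]
            stats.append(cronbach_alpha([[it[j] for j in idx] for it in items]))
        stats.sort()
        return stats[int((1 - conf) / 2 * reps)], stats[int((1 + conf) / 2 * reps) - 1]
    ```

    Perfectly parallel items give alpha = 1; the bootstrap interval conveys how much that estimate would vary across resampled respondents.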

  11. Factorial-based response-surface modeling with confidence intervals for optimizing thermal-optical transmission analysis of atmospheric black carbon

    International Nuclear Information System (INIS)

    Conny, J.M.; Norris, G.A.; Gould, T.R.

    2009-01-01

    Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert law. We determined the apparent specific absorption cross sections for pyrolyzed OC (σChar) and BC (σBC), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for σBC, which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the σBC and σChar surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence-interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830 °C and 850 °C as most suitable for the helium high-temperature step lasting 150 s. However, a temperature as low as 750 °C could not be rejected statistically

  12. "Normality of Residuals Is a Continuous Variable, and Does Seem to Influence the Trustworthiness of Confidence Intervals: A Response to, and Appreciation of, Williams, Grajales, and Kurkiewicz (2013)"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2013-09-01

    Full Text Available Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed response or predictor variables. They go on to discuss estimate bias and provide a helpful summary of the assumptions of multiple regression when using ordinary least squares. While we were not as precise as we could have been when discussing assumptions of normality, the critical issue of the 2002 paper remains: researchers often do not check on or report on the assumptions of their statistical methods. This response expands on the points made by Williams, advocates a thorough examination of data prior to reporting results, and provides an example of how incremental improvements in meeting the assumption of normality of residuals incrementally improve the accuracy of confidence intervals.
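
    The link between normality of errors and confidence-interval accuracy can be illustrated with a small coverage simulation (a sketch, not the paper's analysis; the sample size, distributions, and trial count are arbitrary illustrative choices):

```python
import random
import statistics

def ci_coverage(sample_gen, n=30, trials=2000, true_mean=0.0, z=1.96):
    """Fraction of trials in which a z-based 95% CI for the sample mean
    covers the true mean."""
    hits = 0
    for _ in range(trials):
        xs = [sample_gen() for _ in range(n)]
        m = statistics.fmean(xs)
        se = statistics.stdev(xs) / n ** 0.5
        if m - z * se <= true_mean <= m + z * se:
            hits += 1
    return hits / trials

random.seed(0)
# Normal errors: observed coverage sits close to the nominal 95%.
cov_normal = ci_coverage(lambda: random.gauss(0.0, 1.0))
# Skewed errors (shifted exponential with mean 0): coverage tends to drift lower.
cov_skewed = ci_coverage(lambda: random.expovariate(1.0) - 1.0)
```

    Comparing `cov_normal` and `cov_skewed` across repeated runs shows the degradation in CI trustworthiness as the error distribution departs from normality.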

  13. Qualification Testing Versus Quantitative Reliability Testing of PV - Gaining Confidence in a Rapidly Changing Technology: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Repins, Ingrid L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hacke, Peter L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jordan, Dirk [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Kempe, Michael D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Whitfield, Kent [Underwriters Laboratories; Phillips, Nancy [DuPont; Sample, Tony [European Commission; Monokroussos, Christos [TUV Rheinland; Hsi, Edward [Swiss RE; Wohlgemuth, John [PowerMark Corporation; Seidel, Peter [First Solar; Jahn, Ulrike [TUV Rheinland; Tanahashi, Tadanori [National Institute of Advanced Industrial Science and Technology; Chen, Yingnan [China General Certification Center; Jaeckel, Bengt [Underwriters Laboratories; Yamamichi, Masaaki [RTS Corporation

    2017-10-05

    Continued growth of PV system deployment would be enhanced by quantitative, low-uncertainty predictions of the degradation and failure rates of PV modules and systems. The intended product lifetime (decades) far exceeds the product development cycle (months), limiting our ability to reduce the uncertainty of the predictions for this rapidly changing technology. Yet, business decisions (setting insurance rates, analyzing return on investment, etc.) require quantitative risk assessment. Moving toward more quantitative assessments requires consideration of many factors, including the intended application, the consequence of a possible failure, variability in manufacturing, installation, and operation, as well as uncertainty in the measured acceleration factors, which provide the basis for predictions based on accelerated tests. As the industry matures, it is useful to periodically assess the overall strategy for standards development and prioritization of research to provide a technical basis both for the standards and for the analysis related to their application. To this end, this paper suggests a tiered approach to creating risk assessments. Recent and planned potential improvements in international standards are also summarized.

  14. Bayes Factor Approaches for Testing Interval Null Hypotheses

    NARCIS (Netherlands)

    Morey, Richard D.; Rouder, Jeffrey N.

    2011-01-01

    Psychological theories are statements of constraint. The role of hypothesis testing in psychology is to test whether specific theoretical constraints hold in data. Bayesian statistics is well suited to the task of finding supporting evidence for constraint, because it allows for comparing evidence

  15. Local confidence limits for IMRT and VMAT techniques: a study based on TG119 test suite

    International Nuclear Information System (INIS)

    Thomas, M.; Chandroth, M.

    2014-01-01

    The aim of this study was to generate a local confidence limit (CL) for the intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) techniques used at Waikato Regional Cancer Centre. This work was carried out based on the American Association of Physicists in Medicine (AAPM) Task Group (TG) 119 report. The AAPM TG 119 report recommends CLs as a benchmark for IMRT commissioning and delivery, based on its multi-institution planning and dosimetry comparisons. In this study, the locally obtained CLs were compared to the TG 119 benchmarks. Furthermore, the same benchmark was used to test the capabilities and quality of the VMAT technique in our clinic. The TG 119 test suite consists of two primary and four clinical tests for evaluating the accuracy of IMRT planning and dose delivery systems. Predefined structure sets contoured on computed tomography images were downloaded from the AAPM website and were transferred to a locally designed phantom. For each test case, two plans were generated using IMRT and VMAT optimisation. Dose prescriptions and planning objectives recommended by the TG 119 report were followed to generate the test plans in the Eclipse Treatment Planning System. For each plan, point dose measurements were made using an ion chamber in the high dose and low dose regions. The planar dose distribution was analysed for the percentage of points passing the gamma criteria of 3 %/3 mm, for both the composite plan and the individual fields of each plan. The CLs were generated from the results of the gamma analysis and the point dose measurements. For IMRT plans, the CLs obtained were (1) from point dose measurements: 2.49 % in the high dose region and 2.95 % in the low dose region; (2) from gamma analysis: 2.12 % for individual fields and 5.9 % for the composite plan. For VMAT plans, the CLs obtained were (1) from point dose measurements: 2.56 % in the high dose region and 2.6 % in the low dose region; (2) from gamma analysis: 1.46 % for individual fields and 0
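
    TG 119 defines the confidence limit as |mean| + 1.96·SD of the observed deviations. A minimal sketch of that calculation (the sample deviations are made-up illustrative values, and the sample standard deviation is assumed here since TG 119 does not pin down the estimator):

```python
import statistics

def tg119_confidence_limit(deviations):
    """AAPM TG 119 confidence limit: |mean| + 1.96 * SD of a set of
    deviations, e.g. percent point-dose differences per test case,
    or (100 - gamma pass rate) per field."""
    return abs(statistics.fmean(deviations)) + 1.96 * statistics.stdev(deviations)

# Illustrative point-dose percent differences for a set of test plans:
diffs = [0.8, -1.2, 1.5, 0.3, -0.6, 1.1]
cl = tg119_confidence_limit(diffs)
```

    A locally derived CL below the TG 119 benchmark for the corresponding test indicates that the planning and delivery chain meets the multi-institution standard.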

  16. Easy and Informative: Using Confidence-Weighted True-False Items for Knowledge Tests in Psychology Courses

    Science.gov (United States)

    Dutke, Stephan; Barenberg, Jonathan

    2015-01-01

    We introduce a specific type of item for knowledge tests, confidence-weighted true-false (CTF) items, and review experiences of its application in psychology courses. A CTF item is a statement about the learning content to which students respond whether the statement is true or false, and they rate their confidence level. Previous studies using…

  17. nigerian students' self-confidence in responding to statements

    African Journals Online (AJOL)

    Temechegn

    Altogether the test is made up of 40 items covering students' ability to recall definitions ... confidence interval within which students have confidence in their choice of the .... is mentioned these equilibrium systems come to the memory of the learner.

  18. Optimal test intervals for shutdown systems for the Cernavoda nuclear power station

    International Nuclear Information System (INIS)

    Negut, Gh.; Laslau, F.

    1993-01-01

    The Cernavoda nuclear power station required a complete PSA study. As part of this study, an important goal for enhancing the effectiveness of plant operation was to establish optimal test intervals for the important engineered safety systems. The paper briefly presents the current methods for optimizing test intervals. Vesely's method was used to establish optimal test intervals, and the FRANTIC code was used to survey the influence of the test intervals on system availability. The applications were performed on Shutdown System No. 1, a shutdown system provided with solid rods, and on Shutdown System No. 2, provided with poison injection. The shutdown systems receive nine totally independent scram signals that dictate the test interval. Fault trees for both safety systems were developed. For the fault tree solutions, an original code developed in our Institute was used. The results, intended to be implemented in the technical specifications for testing and operation of the Cernavoda NPS, are presented

  19. Some aspects of the medical-demographic situation in the regions adjacent to the former Semipalatinsk test site

    International Nuclear Information System (INIS)

    Slazhneva, T.I.; Korchevskij, A.A.; Tret'yakova, S.N.; Pozdnyakova, A.P.

    1993-01-01

    Data on mortality indices and average future life span (AFLS), divided into sex and age groups, were analysed for the Pavlodar region (Kazakhstan) for 1970, 1979 and 1989, and compared with the Semipalatinsk region (Kazakhstan) and the former Soviet Union. Peculiarities of the demographic index dynamics over the last decades were discovered: a fall in the average life span of the population from 1970 to 1979, with a subsequent increase by 1989. In the Semipalatinsk region the AFLS of men decreased by 2.19 years and that of women by 1.24 years; in the Pavlodar region the AFLS of men decreased by 3.87 years and that of women by 4.3 years. A relative compensation of this effect was noted by 1989: from 1979 to 1989 the AFLS of men in the Pavlodar region increased by 2.93 years and that of women by 1.83 years. Similar oscillations were observed for all age groups. Special attention is drawn to the dynamics of infant mortality in the regions adjacent to the Semipalatinsk test site. A sharp rise in infant mortality in the period 1970-1983 led to exceptionally unfavourable indices (71.9 per 1,000 births in 1975). The analysis confirmed the informativeness of demographic indices as integral characteristics of population health levels and of the degree of ecological equilibrium in the regions

  20. A Methodology for Evaluation of Inservice Test Intervals for Pumps and Motor Operated Valves

    International Nuclear Information System (INIS)

    McElhaney, K.L.

    1999-01-01

    The nuclear industry has begun efforts to reevaluate inservice tests (ISTs) for key components such as pumps and valves. At issue are two important questions--What kinds of tests provide the most meaningful information about component health, and what periodic test intervals are appropriate? In the past, requirements for component testing were prescribed by the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code. The tests and test intervals specified in the Code were generic in nature and test intervals were relatively short. Operating experience has shown, however, that performance and safety improvements and cost savings could be realized by tailoring IST programs to similar components with comparable safety importance and service conditions. In many cases, test intervals may be lengthened, resulting in cost savings for utilities and their customers

  1. Nonverbal Communication of Confidence in Soccer Referees: An Experimental Test of Darwin's Leakage Hypothesis.

    Science.gov (United States)

    Furley, Philip; Schweizer, Geoffrey

    2016-12-01

    The goal of the present paper was to investigate whether soccer referees' nonverbal behavior (NVB) differed based on the difficulty of their decisions and whether perceivers could detect these systematic variations. On the one hand, communicating confidence via NVB is emphasized in referee training. On the other hand, it seems feasible from a theoretical point of view that particularly following relatively difficult decisions referees have problems controlling their NVB. We conducted three experiments to investigate this question. Experiment 1 (N = 40) and Experiment 2 (N = 60) provided evidence that perceivers regard referees' NVB as less confident following ambiguous decisions as compared with following unambiguous decisions. Experiment 3 (N = 58) suggested that perceivers were more likely to debate with the referee when referees nonverbally communicated less confidence. We discuss consequences for referee training.

  2. Optimal test intervals of standby components based on actual plant-specific data

    International Nuclear Information System (INIS)

    Jones, R.B.; Bickel, J.H.

    1987-01-01

    Based on standard reliability analysis techniques, both undertesting and overtesting affect the availability of standby components. If tests are performed too often, unavailability is increased since the equipment is being exercised excessively. Conversely, if testing is performed too infrequently, the likelihood of component unavailability is also increased due to the formation of rust, heat or radiation damage, dirt infiltration, etc. Thus, from a physical perspective, an optimal test interval should exist which minimizes unavailability. This paper illustrates the application of an unavailability model that calculates optimal testing intervals for components with a failure database. (orig./HSCH)
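
    The trade-off described here can be sketched with the textbook single-component model (not necessarily the paper's exact model): time-average unavailability q(T) ≈ ρ + λT/2 + τ/T, where ρ is the per-demand failure probability, λ the standby failure rate, τ the outage time per test, and T the test interval. Minimizing the T-dependent terms gives T* = sqrt(2τ/λ):

```python
import math

def unavailability(T, lam, tau, rho=0.0):
    """Time-average unavailability of a periodically tested standby
    component: per-demand failures (rho), standby failures accrued
    between tests (lam*T/2), and outage during the test itself (tau/T)."""
    return rho + lam * T / 2.0 + tau / T

def optimal_interval(lam, tau):
    """Interval minimizing lam*T/2 + tau/T (derivative set to zero)."""
    return math.sqrt(2.0 * tau / lam)

lam = 1e-5   # standby failure rate per hour (illustrative)
tau = 2.0    # hours out of service per test (illustrative)
T_opt = optimal_interval(lam, tau)
```

    Testing either twice as often or half as often as T_opt strictly increases the modeled unavailability, which is the physical optimum the abstract refers to.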

  3. The construction of categorization judgments: using subjective confidence and response latency to test a distributed model.

    Science.gov (United States)

    Koriat, Asher; Sorka, Hila

    2015-01-01

    The classification of objects to natural categories exhibits cross-person consensus and within-person consistency, but also some degree of between-person variability and within-person instability. What is more, the variability in categorization is also not entirely random but discloses systematic patterns. In this study, we applied the Self-Consistency Model (SCM, Koriat, 2012) to category membership decisions, examining the possibility that confidence judgments and decision latency track the stable and variable components of categorization responses. The model assumes that category membership decisions are constructed on the fly depending on a small set of clues that are sampled from a commonly shared population of pertinent clues. The decision and confidence are based on the balance of evidence in favor of a positive or a negative response. The results confirmed several predictions derived from SCM. For each participant, consensual responses to items were more confident than non-consensual responses, and for each item, participants who made the consensual response tended to be more confident than those who made the nonconsensual response. The difference in confidence between consensual and nonconsensual responses increased with the proportion of participants who made the majority response for the item. A similar pattern was observed for response speed. The pattern of results obtained for cross-person consensus was replicated by the results for response consistency when the responses were classified in terms of within-person agreement across repeated presentations. These results accord with the sampling assumption of SCM, that confidence and response speed should be higher when the decision is consistent with what follows from the entire population of clues than when it deviates from it. Results also suggested that the context for classification can bias the sample of clues underlying the decision, and that confidence judgments mirror the effects of context on
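
    The sampling assumption of the SCM can be illustrated with a toy simulation (a sketch of the general idea, not Koriat's implementation; the pool proportion, clue count, and respondent count are arbitrary): each simulated respondent draws a few clues from a shared pool that mostly favors one response, answers by majority of sampled clues, and reports confidence as the balance of evidence. Consensual responders then show higher mean confidence.

```python
import random
import statistics

random.seed(1)
P_POSITIVE = 0.7   # share of clues in the shared pool favoring "yes" (illustrative)
N_CLUES = 7        # clues sampled per respondent
N_RESP = 5000

consensual_conf, nonconsensual_conf = [], []
for _ in range(N_RESP):
    k = sum(random.random() < P_POSITIVE for _ in range(N_CLUES))
    response_yes = k > N_CLUES / 2          # majority vote of the sampled clues
    confidence = abs(k - N_CLUES / 2) / (N_CLUES / 2)  # balance of evidence
    (consensual_conf if response_yes else nonconsensual_conf).append(confidence)

mean_consensual = statistics.fmean(consensual_conf)
mean_nonconsensual = statistics.fmean(nonconsensual_conf)
```

    Because a nonconsensual ("no") response requires an unrepresentative sample of clues, such samples tend to be closer to an even split, producing lower confidence, which mirrors the consensuality effect reported in the abstract.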

  4. Solar Alpha Rotary Joint (SARJ) Lubrication Interval Test and Evaluation (LITE). Post-Test Grease Analysis

    Science.gov (United States)

    Golden, Johnny L.; Martinez, James E.; Devivar, Rodrigo V.

    2015-01-01

    The Solar Alpha Rotary Joint (SARJ) is a mechanism of the International Space Station (ISS) that orients the solar power generating arrays toward the sun as the ISS orbits our planet. The orientation with the sun must be maintained to fully charge the ISS batteries and maintain all the other ISS electrical systems operating properly. In 2007, just a few months after full deployment, the starboard SARJ developed anomalies that warranted a full investigation including ISS Extravehicular Activity (EVA). The EVA uncovered unexpected debris that was due to degradation of a nitride layer on the SARJ bearing race. ISS personnel identified the failure root-cause and applied an aerospace grease to lubricate the area associated with the anomaly. The corrective action allowed the starboard SARJ to continue operating within the specified engineering parameters. The SARJ LITE (Lubrication Interval Test and Evaluation) program was initiated by NASA, Lockheed Martin, and Boeing to simulate the operation of the ISS SARJ for an extended time. The hardware was designed to test and evaluate the exact material components used aboard the ISS SARJ, but in a controlled area where engineers could continuously monitor the performance. After running the SARJ LITE test for an equivalent of 36+ years of continuous use, the test was opened to evaluate the metallography and lubrication. We have sampled the SARJ LITE rollers and plate to fully assess the grease used for lubrication. Chemical and thermal analysis of these samples has generated information that has allowed us to assess the location, migration, and current condition of the grease. The collective information will be key toward understanding and circumventing any performance deviations involving the ISS SARJ in the years to come.

  5. Analysis of unavailability related to demand failures as a function of the testing interval

    International Nuclear Information System (INIS)

    Carretero, J.A.; Pereira, M.B.; Perez Lobo, E.M.

    1998-01-01

    The unavailability related to the demand failure of a component is the sum of the contributions of failures in demand and in waiting. An important point in PSAs is the calculation of unavailabilities of the basic events of demand failure. Several criteria are used for this, with the objective of simplifying said quantification. The information available from two nuclear power plants was analysed in order to determine the tendency in the in-demand and in-waiting models as a function of the test intervals, and the following conclusions were obtained: (1) there is a clear tendency for the probability of failure in demand to increase as the interval between tests increases; (2) the test intervals considered in PSAs are not always coherent with the estimates of real demand, which implies a penalty when using the in-waiting model, due to the underlying conservatism. Therefore, increasing the intervals between tests over time (a tendency studied in nuclear power plants) could cause demand due to tests to be significantly less than that due to real actuations. This implies a need to apply test intervals based on historic demands and not on those due to historic tests, in order to avoid conservatism. (Author)

  6. Predictors of Willingness to Read in English: Testing a Model Based on Possible Selves and Self-Confidence

    Science.gov (United States)

    Khajavy, Gholam Hassan; Ghonsooly, Behzad

    2017-01-01

    The aim of the present study is twofold. First, it tests a model of willingness to read (WTR) based on L2 motivation and communication confidence (communication anxiety and perceived communicative competence). Second, it applies the recent theory of L2 motivation proposed by Dörnyei [2005. "The Psychology of Language Learner: Individual…

  7. Effects of human errors on the determination of surveillance test interval

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Koo, Bon Hyun

    1990-01-01

    This paper incorporates the effects of human error relevant to the periodic test on the unavailability of the safety system as well as the component unavailability. Two types of possible human error during the test are considered. One is the possibility that a good safety system is inadvertently left in a bad state after the test (Type A human error), and the other is the possibility that a bad safety system goes undetected upon the test (Type B human error). An event tree model is developed for the steady-state unavailability of the safety system to determine the effects of human errors on the component unavailability and the test interval. We perform a reliability analysis of the safety injection system (SIS) by applying the aforementioned two types of human error to the safety injection pumps. Results of various sensitivity analyses show that: 1) the appropriate test interval decreases and steady-state unavailability increases as the probabilities of both types of human error increase, and both are far more sensitive to Type A human error than to Type B; and 2) the SIS unavailability increases slightly as the probability of Type B human error increases, and significantly as the probability of Type A human error increases. Therefore, to avoid underestimation, the effects of human error should be incorporated in system reliability analyses that aim at relaxation of surveillance test intervals, and Type A human error has the more important effect on the unavailability and the surveillance test interval

  8. Exploration of analysis methods for diagnostic imaging tests: problems with ROC AUC and confidence scores in CT colonography.

    Science.gov (United States)

    Mallett, Susan; Halligan, Steve; Collins, Gary S; Altman, Doug G

    2014-01-01

    Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity with area under the Receiver Operating Characteristic Curve (ROC AUC) for the evaluation of CT colonography for the detection of polyps, either with or without computer assisted detection. In a multireader multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to statistical methods. Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores; in assigning scores to all cases; in use of zero scores when no polyps were identified; the bimodal non-normal distribution of scores; fitting ROC curves due to extrapolation beyond the study data; and the undue influence of a few false positive results. Variation due to use of different ROC methods exceeded differences between test results for ROC AUC. The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity were a more reliable and clinically appropriate method to compare diagnostic tests.

  9. Reactor safety impact of functional test intervals: an application of Bayesian decision theory

    International Nuclear Information System (INIS)

    Buoni, F.B.

    1978-01-01

    Functional test intervals for important nuclear reactor systems can be obtained by viewing safety assessment as a decision process and functional testing as a Bayesian learning or information process. A preposterior analysis is used as the analytical model to find the preposterior expected reliability of a system as a function of test intervals. Persistent and transitory failure models are shown to yield different results. Functional tests of systems subject to persistent failure are effective in maintaining system reliability goals. Functional testing is not effective for systems subject to transitory failure; preventive maintenance must be used. A Bayesian posterior analysis of testing data can discriminate between persistent and transitory failure. The role of functional testing is seen to be an aid in assessing the future performance of reactor systems
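
    The Bayesian learning step central to such an analysis can be sketched with a conjugate Beta-binomial update of a per-demand failure probability (an illustrative prior, not the paper's preposterior model):

```python
def beta_update(a, b, failures, tests):
    """Posterior Beta parameters after observing `failures` in `tests`
    functional tests, starting from a Beta(a, b) prior on the
    per-demand failure probability."""
    return a + failures, b + tests - failures

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Weakly informative prior centered near a failure probability of 1e-2
# (illustrative numbers).
a0, b0 = 0.5, 49.5
a1, b1 = beta_update(a0, b0, failures=0, tests=50)
```

    Here fifty successful tests halve the estimated failure probability from 0.01 to 0.005; a preposterior analysis averages such posterior quantities over the outcomes one expects to observe under each candidate test interval.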

  10. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    Science.gov (United States)

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure for testing independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
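
    The core idea of scoring concordance when observations are intervals can be sketched as follows (a simplified illustration, not the authors' score-based statistic): a pair is counted as concordant or discordant only when both coordinates' intervals are disjoint, so their ordering is unambiguous; ambiguous pairs contribute zero.

```python
def interval_sign(a, b):
    """+1 if interval a lies entirely above interval b, -1 if entirely
    below, 0 if the intervals overlap (ordering is ambiguous)."""
    if a[0] > b[1]:
        return 1
    if a[1] < b[0]:
        return -1
    return 0

def interval_kendall_tau(xs, ys):
    """Kendall's-tau-like statistic for interval-valued observations:
    average the product of pairwise orderings over all pairs; any pair
    with overlapping intervals in either coordinate contributes 0."""
    n = len(xs)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += interval_sign(xs[i], xs[j]) * interval_sign(ys[i], ys[j])
    return 2.0 * s / (n * (n - 1))

xs = [(0, 1), (2, 3), (4, 5)]
ys = [(0, 2), (3, 4), (5, 6)]
tau = interval_kendall_tau(xs, ys)  # perfectly concordant: tau == 1.0
```

    The authors' modification instead uses expected concordance counts under the censoring model, which recovers information from the ambiguous pairs that this naive version discards.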

  11. Browns Ferry Nuclear Plant: variation in test intervals for high-pressure coolant injection (HPCI) system

    International Nuclear Information System (INIS)

    Christie, R.F.; Stetkar, J.W.

    1985-01-01

    The change in availability of the high-pressure coolant injection system (HPCIS) due to a change in pump and valve test interval from monthly to quarterly was analyzed. This analysis started from the HPCIS base line evaluation produced as part of the Browns Ferry Nuclear Plant (BFN) Probabilistic Risk Assessment (PRA). The base line evaluation showed that the dominant contributors to the unavailability of the HPCI system are hardware failures and the resultant downtime for unscheduled maintenance. The effect of changing the pump and valve test interval from monthly to quarterly was analyzed by considering the system unavailability due to hardware failures, the unavailability due to testing, and the unavailability due to human errors that could potentially occur during testing. The magnitudes of the changes in unavailability caused by the change in test interval are discussed. The analysis showed a small increase in the availability of the HPCIS to respond to loss of coolant accidents (LOCAs) and a small decrease in the availability of the HPCIS to respond to transients which require HPCIS actuation. In summary, the increase in test interval from monthly to quarterly does not significantly impact the overall HPCIS availability

  12. Ventral striatal activity correlates with memory confidence for old- and new-responses in a difficult recognition test.

    Directory of Open Access Journals (Sweden)

    Ulrike Schwarze

    Full Text Available Activity in the ventral striatum has frequently been associated with retrieval success, i.e., it is higher for hits than correct rejections. Based on the prominent role of the ventral striatum in the reward circuit, its activity has been interpreted to reflect the higher subjective value of hits compared to correct rejections in standard recognition tests. This hypothesis was supported by a recent study showing that ventral striatal activity is higher for correct rejections than hits when the value of rejections is increased by external incentives. These findings imply that the striatal response during recognition is context-sensitive and modulated by the adaptive significance of "oldness" or "newness" to the current goals. The present study is based on the idea that not only external incentives, but also other deviations from standard recognition tests which affect the subjective value of specific response types should modulate striatal activity. Therefore, we explored ventral striatal activity in an unusually difficult recognition test that was characterized by low levels of confidence and accuracy. Based on the human uncertainty aversion, in such a recognition context, the subjective value of all high confident decisions is expected to be higher than usual, i.e., also rejecting items with high certainty is deemed rewarding. In an accompanying behavioural experiment, participants rated the pleasantness of each recognition response. As hypothesized, ventral striatal activity correlated in the current unusually difficult recognition test not only with retrieval success, but also with confidence. Moreover, participants indicated that they were more satisfied by higher confidence in addition to perceived oldness of an item. Taken together, the results are in line with the hypothesis that ventral striatal activity during recognition codes the subjective value of different response types that is modulated by the context of the recognition test.

  13. The patients' perspective of international normalized ratio self-testing, remote communication of test results and confidence to move to self-management.

    Science.gov (United States)

    Grogan, Anne; Coughlan, Michael; Prizeman, Geraldine; O'Connell, Niamh; O'Mahony, Nora; Quinn, Katherine; McKee, Gabrielle

    2017-12-01

    To elicit the perceptions of patients, who self-tested their international normalized ratio and communicated their results via a text or phone messaging system, to determine their satisfaction with the education and support that they received and to establish their confidence to move to self-management. Self-testing of international normalized ratio has been shown to be reliable and is fast becoming common practice. As innovations are introduced to point of care testing, more research is needed to elicit patients' perceptions of the self-testing process. This three site study used a cross-sectional prospective descriptive survey. Three hundred and thirty patients who were prescribed warfarin and using international normalized ratio self-testing were invited to take part in the study. The anonymous survey examined patient profile, patients' usage, issues, perceptions, confidence and satisfaction with using the self-testing system and their preparedness for self-management of warfarin dosage. The response rate was 57% (n = 178). Patients' confidence in self-testing was high (90%). Patients expressed a high level of satisfaction with the support received, but expressed the need for more information on support groups, side effects of warfarin, dietary information and how to dispose of needles. When asked if they felt confident to adjust their own warfarin levels 73% agreed. Chi-squared tests for independence revealed that none of the patient profile factors examined influenced this confidence. The patients cited the greatest advantages of the service were reduced burden, more autonomy, convenience and ease of use. The main disadvantages cited were cost and communication issues. Patients were satisfied with self-testing. The majority felt they were ready to move to self-management. The introduction of innovations to remote point of care testing, such as warfarin self-testing, needs to have support at least equal to that provided in a hospital setting. © 2017 John

  14. Hypothesis Testing of Inclusion of the Tolerance Interval for the Assessment of Food Safety.

    Directory of Open Access Journals (Sweden)

    Hungyen Chen

    Full Text Available In the testing of food quality and safety, we contrast the contents of a newly proposed food (genetically modified food) against those of conventional foods. Because the contents vary largely between crop varieties and production environments, we propose a two-sample test of substantial equivalence that examines the inclusion of the tolerance intervals of the two populations: the population of the contents of the proposed food, which we call the target population, and the population of the contents of the conventional food, which we call the reference population. Rejection of the test hypothesis guarantees that the contents of the proposed food essentially do not include outliers in the population of the contents of the conventional food. The existing tolerance interval (TI0) is constructed to have at least a pre-specified level of the coverage probability. Here, we newly introduce the complementary tolerance interval (TI1), which is guaranteed to have at most a pre-specified level of the coverage probability. By applying TI0 and TI1 to the samples from the target population and the reference population, respectively, we construct a test statistic for testing inclusion of the two tolerance intervals. To examine the performance of the testing procedure, we conducted a simulation that reflects the effects of gene and environment, and residual variation, from a crop experiment. As a case study, we applied the hypothesis testing to test whether the distribution of the protein content of rice in the Kyushu area is included in the distribution of the protein content in the other areas of Japan.
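
    For context, a standard two-sided normal tolerance interval of the TI0 kind can be computed with Howe's approximate factor k, so that mean ± k·SD covers a fraction p of the population with a stated confidence. The sketch below is stdlib-only and uses the Wilson-Hilferty approximation for the chi-square quantile (an approximation, not the authors' construction):

```python
import statistics

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = statistics.NormalDist().inv_cdf(p)
    h = 2.0 / (9.0 * df)
    return df * (1.0 - h + z * h ** 0.5) ** 3

def tolerance_factor(n, coverage=0.95, confidence=0.95):
    """Howe's approximate two-sided normal tolerance factor k: the
    interval mean +/- k*SD covers `coverage` of the population with
    probability `confidence`, for a sample of size n."""
    z = statistics.NormalDist().inv_cdf((1.0 + coverage) / 2.0)
    df = n - 1
    chi2 = chi2_quantile(1.0 - confidence, df)
    return z * (df * (1.0 + 1.0 / n) / chi2) ** 0.5

k = tolerance_factor(10)  # roughly 3.38 for a 95%/95% interval at n = 10
```

    As n grows, k shrinks toward the plain normal quantile 1.96, reflecting the vanishing estimation uncertainty; the paper's test then asks whether the target population's interval is contained in the reference population's.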

  15. Confidence in Nuclear Weapons as Numbers Decrease and Time Since Testing Increases

    Science.gov (United States)

    Adams, Marvin

    2011-04-01

    As numbers and types of nuclear weapons are reduced, the U.S. objective is to maintain a safe, secure and effective nuclear deterrent without nuclear-explosive testing. A host of issues combine to make this a challenge. An evolving threat environment may prompt changes to security systems. Aging of weapons has led to ``life extension programs'' that produce weapons that differ in some ways from the originals. Outdated and changing facilities pose difficulties for life-extension, surveillance, and dismantlement efforts. A variety of factors can make it a challenge to recruit, develop, and retain outstanding people with the skills and experience that are needed to form the foundation of a credible deterrent. These and other issues will be discussed in the framework of proposals to reduce and perhaps eliminate nuclear weapons.

  16. Sex differences in mathematical achievement: Grades, national test, and self-confidence

    Directory of Open Access Journals (Sweden)

    Egorova, Marina S.

    2016-09-01

    Full Text Available Academic achievement, which is inherently an indicator of progress in the curriculum, can also be viewed as an indirect measure of cognitive development, social adaptation, and motivational climate characteristics. In addition to its direct application, academic achievement is used as a mediating factor in the study of various phenomena, from the etiology of learning disabilities to social inequality. Analysis of sex differences in mathematical achievement is considered particularly important for exploring academic achievement, since creating an adequate educational environment with equal opportunities for boys and girls serves as a prerequisite for improving the overall mathematical and technical literacy that is crucial for modern society, creates balanced professional opportunities, and destroys traditional stereotypes about the roles of men and women in society. The objective of our research was to analyze sex differences in mathematical achievement among high school students and to compare various methods for diagnosing academic performance, such as school grades, test scores, and self-concept. The results were obtained through two population studies whose samples are representative of the Russian population in the relevant age group. Study 1 looked at sex differences in math grades among twins (n = 1,234 pairs) and singletons (n = 2,227) attending high school. The sample of Study 2 comprised all twins who took the Unified State Examination in 2010–2012. The research analyzed sex differences in USE math scores across the entire sample and within the extreme subgroups. It also explored differences between boys and girls in opposite-sex dizygotic (DZ) twin pairs. The key results were as follows. No difference in mathematical achievement was observed between twins and singletons. Sex differences were found in all measures of mathematical achievement. Girls had higher school grades in math than boys, while boys outperformed girls in USE math

  17. Bootstrap confidence intervals for principal response curves

    NARCIS (Netherlands)

    Timmerman, Marieke E.; Ter Braak, Cajo J. F.

    2008-01-01

    The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

  18. Bootstrap Confidence Intervals for Principal Response Curves

    NARCIS (Netherlands)

    Timmerman, M.E.; Braak, ter C.J.F.

    2008-01-01

    The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the
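
Both records above are truncated mid-sentence, but the underlying technique generalizes: a percentile bootstrap resamples the data with replacement and reads the confidence interval off the empirical quantiles of the recomputed statistic. An illustrative stdlib-only sketch (not the authors' PRC-specific resampling scheme):

```python
import random

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(statistic([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

rng = random.Random(7)
sample = [rng.gauss(10, 2) for _ in range(50)]
lo, hi = bootstrap_ci(sample, lambda xs: sum(xs) / len(xs))  # CI for the mean
```

The same function works for any statistic that maps a resampled list to a number, which is what makes the bootstrap attractive for functional quantities such as PRCs.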

  19. Consumers report lower confidence in their genetics knowledge following direct-to-consumer personal genomic testing.

    Science.gov (United States)

    Carere, Deanna Alexis; Kraft, Peter; Kaphingst, Kimberly A; Roberts, J Scott; Green, Robert C

    2016-01-01

    The aim of this study was to measure changes to genetics knowledge and self-efficacy following personal genomic testing (PGT). New customers of 23andMe and Pathway Genomics completed a series of online surveys. We measured genetics knowledge (nine true/false items) and genetics self-efficacy (five Likert-scale items) before receipt of results and 6 months after results and used paired methods to evaluate change over time. Correlates of change (e.g., decision regret) were identified using linear regression. 998 PGT customers (59.9% female; 85.8% White; mean age 46.9 ± 15.5 years) were included in our analyses. Mean genetics knowledge score was 8.15 ± 0.95 (out of 9) at baseline and 8.25 ± 0.92 at 6 months (P = 0.0024). Mean self-efficacy score was 29.06 ± 5.59 (out of 35) at baseline and 27.7 ± 5.46 at 6 months (P reported lower self-efficacy following PGT. Change in self-efficacy was positively associated with health-care provider consultation (P = 0.0042), impact of PGT on perceived control over one's health (P consumers in response to receiving complex genetic information.Genet Med 18 1, 65-72.

  20. Long-term maintenance of immediate or delayed extinction is determined by the extinction-test interval

    OpenAIRE

    Johnson, Justin S.; Escobar, Martha; Kimble, Whitney L.

    2010-01-01

    Short acquisition-extinction intervals (immediate extinction) can lead to either more or less spontaneous recovery than long acquisition-extinction intervals (delayed extinction). Using rat subjects, we observed less spontaneous recovery following immediate than delayed extinction (Experiment 1). However, this was the case only if a relatively long extinction-test interval was used; a relatively short extinction-test interval yielded the opposite result (Experiment 2). Previous data appear co...

  1. Test interval optimization of safety systems of nuclear power plant using fuzzy-genetic approach

    International Nuclear Information System (INIS)

    Durga Rao, K.; Gopika, V.; Kushwaha, H.S.; Verma, A.K.; Srividya, A.

    2007-01-01

    Probabilistic safety assessment (PSA) is the most effective and efficient tool for safety and risk management in nuclear power plants (NPP). PSA studies not only evaluate risk/safety of systems but also their results are very useful in safe, economical and effective design and operation of NPPs. The latter application is popularly known as 'Risk-Informed Decision Making'. Evaluation of technical specifications is one such important application of Risk-Informed decision making. Deciding test interval (TI), one of the important technical specifications, with the given resources and risk effectiveness is an optimization problem. Uncertainty is inherently present in the availability parameters such as failure rate and repair time due to the limitation in assessing these parameters precisely. This paper presents a solution to test interval optimization problem with uncertain parameters in the model with fuzzy-genetic approach along with a case of application from a safety system of Indian pressurized heavy water reactor (PHWR)
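
The trade-off behind test interval optimization can be seen in the standard point-estimate unavailability model U(T) ≈ λT/2 + τ/T, whose minimum lies at T* = √(2τ/λ). A sketch with assumed parameter values (this textbook approximation stands in for, and is far simpler than, the paper's fuzzy-genetic formulation):

```python
import math

FAILURE_RATE = 1e-5   # standby failure rate, per hour (assumed value)
TEST_DOWNTIME = 2.0   # hours of unavailability per test (assumed value)

def mean_unavailability(t):
    # lambda*T/2: undetected standby failures; tau/T: downtime during testing.
    return FAILURE_RATE * t / 2 + TEST_DOWNTIME / t

# Brute-force search over candidate test intervals, in 24-hour steps.
best_ti = min(range(24, 20001, 24), key=mean_unavailability)
analytic_ti = math.sqrt(2 * TEST_DOWNTIME / FAILURE_RATE)  # closed-form optimum
```

Testing too often dominates via the τ/T downtime term, testing too rarely via the λT/2 term; the paper's contribution is handling the uncertainty in λ and τ rather than treating them as point values as done here.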

  2. Development and interval testing of a naturalistic driving methodology to evaluate driving behavior in clinical research.

    Science.gov (United States)

    Babulal, Ganesh M; Addison, Aaron; Ghoshal, Nupur; Stout, Sarah H; Vernon, Elizabeth K; Sellan, Mark; Roe, Catherine M

    2016-01-01

    Background : The number of older adults in the United States will double by 2056. Additionally, the number of licensed drivers will increase along with extended driving-life expectancy. Motor vehicle crashes are a leading cause of injury and death in older adults. Alzheimer's disease (AD) also negatively impacts driving ability and increases crash risk. Conventional methods to evaluate driving ability are limited in predicting decline among older adults. Innovations in GPS hardware and software can monitor driving behavior in the actual environments people drive in. Commercial off-the-shelf (COTS) devices are affordable, easy to install and capture large volumes of data in real-time. However, adapting these methodologies for research can be challenging. This study sought to adapt a COTS device and determine an interval that produced accurate data on the actual route driven for use in future studies involving older adults with and without AD.  Methods : Three subjects drove a single course in different vehicles at different intervals (30, 60 and 120 seconds), at different times of day, morning (9:00-11:59AM), afternoon (2:00-5:00PM) and night (7:00-10pm). The nine datasets were examined to determine the optimal collection interval. Results : Compared to the 120-second and 60-second intervals, the 30-second interval was optimal in capturing the actual route driven along with the lowest number of incorrect paths and affordability weighing considerations for data storage and curation. Discussion : Use of COTS devices offers minimal installation efforts, unobtrusive monitoring and discreet data extraction.  However, these devices require strict protocols and controlled testing for adoption into research paradigms.  After reliability and validity testing, these devices may provide valuable insight into daily driving behaviors and intraindividual change over time for populations of older adults with and without AD.  Data can be aggregated over time to look at changes

  3. A Methodology for Evaluation of Inservice Test Intervals for Pumps and Motor-Operated Valves

    International Nuclear Information System (INIS)

    Cox, D.F.; Haynes, H.D.; McElhaney, K.L.; Otaduy, P.J.; Staunton, R.H.; Vesely, W.E.

    1999-01-01

    Recent nuclear industry reevaluation of component inservice testing (IST) requirements is resulting in requests for IST interval extensions and changes to traditional IST programs. To evaluate these requests, long-term component performance and the methods for mitigating degradation need to be understood. Determining the appropriate IST intervals, along with component testing, monitoring, trending, and maintenance effects, has become necessary. This study provides guidelines to support the evaluation of IST intervals for pumps and motor-operated valves (MOVs). It presents specific engineering information pertinent to the performance and monitoring/testing of pumps and MOVs, provides an analytical methodology for assessing the bounding effects of aging on component margin behavior, and identifies basic elements of an overall program to help ensure component operability. Guidance for assessing probabilistic methods and the risk importance and safety consequences of the performance of pumps and MOVs has not been specifically included within the scope of this report, but these elements may be included in licensee change requests

  4. Decomposing the interaction between retention interval and study/test practice: the role of retrievability.

    Science.gov (United States)

    Jang, Yoonhee; Wixted, John T; Pecher, Diane; Zeelenberg, René; Huber, David E

    2012-01-01

    Even without feedback, test practice enhances delayed performance compared to study practice, but the size of the effect is variable across studies. We investigated the benefit of testing, separating initially retrievable items from initially nonretrievable items. In two experiments, an initial test determined item retrievability. Retrievable or nonretrievable items were subsequently presented for repeated study or test practice. Collapsing across items, in Experiment 1, we obtained the typical cross-over interaction between retention interval and practice type. For retrievable items, however, the cross-over interaction was quantitatively different, with a small study benefit for an immediate test and a larger testing benefit after a delay. For nonretrievable items, there was a large study benefit for an immediate test, but one week later there was no difference between the study and test practice conditions. In Experiment 2, initially nonretrievable items were given additional study followed by either an immediate test or even more additional study, and one week later performance did not differ between the two conditions. These results indicate that the effect size of study/test practice is due to the relative contribution of retrievable and nonretrievable items.

  5. Oscillatory dynamics of an intravenous glucose tolerance test model with delay interval

    Science.gov (United States)

    Shi, Xiangyun; Kuang, Yang; Makroglou, Athena; Mokshagundam, Sriprakash; Li, Jiaxu

    2017-11-01

    Type 2 diabetes mellitus (T2DM) has become a prevalent pandemic disease in view of the modern lifestyle. Both the diabetic population and health expenses are growing rapidly, according to the American Diabetes Association. Detecting the potential onset of T2DM is an essential focal point in diabetes mellitus research. The intravenous glucose tolerance test (IVGTT) is an effective protocol to determine insulin sensitivity, glucose effectiveness, and pancreatic β-cell functionality through the analysis and parameter estimation of a proper differential equation model. Delay differential equations have been used to study complex physiological phenomena, including glucose and insulin regulation. In this paper, we propose a novel approach to modeling the time delay in IVGTT modeling. This novel approach uses two parameters to simulate not only both discrete time delay and time delay distributed over the past interval, but also time delay distributed over a past sub-interval. Normally, a larger time delay, either discrete or distributed, will destabilize the system. However, we find that time delay over a sub-interval might not. We present analytically some basic model properties, which are desirable biologically and mathematically. We show that this relatively simple model provides a good fit to fluctuating patient data sets and reveals some intriguing dynamics. Moreover, our numerical simulation results indicate that our model may remove the defect in the well-known Minimal Model, which often overestimates the glucose effectiveness index.
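
A discrete time delay of the kind compared above can be simulated by indexing into the solution history. The following toy glucose-insulin system (arbitrary parameters and a plain forward-Euler step, not the paper's IVGTT model) shows the mechanics:

```python
dt, tau = 0.01, 5.0    # step size and discrete delay (arbitrary time units)
steps = 6000
lag = int(tau / dt)    # delay expressed in whole steps
G, I = [200.0], [10.0]  # elevated initial glucose, initial insulin (illustrative)
for n in range(steps):
    g_lag = G[n - lag] if n >= lag else G[0]       # constant history before t = 0
    dG = -0.01 * G[n] - 0.0005 * G[n] * I[n] + 1.0  # clearance, insulin action, input
    dI = 0.002 * g_lag - 0.05 * I[n]                # delayed secretion, degradation
    G.append(G[n] + dt * dG)
    I.append(I[n] + dt * dI)
# Glucose relaxes from its elevated initial value toward a positive steady state;
# larger tau makes the delayed secretion term more prone to oscillation.
```

This is only a demonstration of the history-buffer technique; a distributed delay would instead replace `g_lag` with a weighted sum over a window of past values.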

  6. A Comparative Risk Assessment of Extended Integrated Leak Rate Testing Intervals

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Ji Yong; Hwang, Seok Won; Lee, Byung Sik [Korea Hydro and Nuclear Power Co., Daejeon (Korea, Republic of)

    2009-10-15

    This paper presents the risk impacts of extending the Integrated Leak Rate Testing (ILRT) intervals (from five years to ten years) of Yonggwang (YGN) Units 1 and 2. These risk impacts depended on the annual variances of meteorological data and resident population. The main comparisons were performed between the initial risk assessment (2005), carried out for the purpose of extending the ILRT interval, and the risk reassessment (2009), in which the changed plant internal configurations (core inventory and radioisotope release fraction) and plant external alterations (wind directions, rainfall and population distributions) were monitored. The reassessment showed a negligible risk increase when the ILRT interval was extended, compared to the initial risk assessment. In addition, the increased value of the Large Early Release Frequency (LERF) also satisfied the acceptance guideline proposed in Reg. Guide 1.174. The MACCS II code was used for the offsite consequence analysis. The Probabilistic Population Dose (PPD), considering early effects within 80 km, was used as the primary risk index. The Probabilistic Safety Assessment (PSA) of YGN 1 and 2 was applied to evaluate the accident frequency of each source term category, and the PSA scope was limited to internal events.

  7. Prognostic value of QTc interval dispersion changes during exercise testing in hypertensive men

    Directory of Open Access Journals (Sweden)

    Đorđević Dragan

    2008-01-01

    Full Text Available INTRODUCTION The prognostic significance of QTc dispersion changes during exercise testing (ET) in patients with left ventricular hypertrophy is not clear. OBJECTIVE The aim was to study the dynamics of QTc interval dispersion (QTcd) in patients (pts) with left ventricular hypertrophy (LVH) during exercise testing and its prognostic significance. METHOD In the study we included 55 men (aged 53 years) with hypertensive left ventricular hypertrophy and a negative ET (LVH group), 20 men (aged 58 years) with a positive ET (ILVH group) and 20 healthy men (aged 55 years). There was no statistically significant difference in the left ventricular mass index (LVMI) between the LVH group and the ILVH group (160.9±14.9 g/m2 and 152.8±22.7 g/m2). The first ECG was done before the ET and the second one was done during the first minute of recovery, with calculation of QTc dispersion. The patients were followed during five years for new cardiovascular events. RESULTS During the ET, the QTcd significantly increased in the LVH group (from 56.8±18.0 to 76.7±22.6 ms; p<0.001). A statistically significant correlation was found between the amount of ST segment depression at the end of ET and QTc dispersion at the beginning and at the end of ET (r=0.673 and r=0.698; p<0.01). The QTc dispersion was increased in 35 (63.6%) patients and decreased in 20 (36.4%) patients during the ET. Three patients (5.4%) in the first group had adverse cardiovascular events during the five-year follow-up. A multiple stepwise regression model was formed by including age, LVMI, QTc interval, QTc dispersion and change of QTc dispersion during the ET. There was no prognostic significance of QTc interval and QTc dispersion during five-year follow-up in regard to adverse cardiovascular events, but prognostic value was found for LVMI (coefficient β=0.480; p<0.001). CONCLUSION The increase of QTc interval dispersion is common in men with positive ET for myocardial ischemia and there is a correlation between QTc dispersion and

  8. A study on assessment methodology of surveillance test interval and allowed outage time

    International Nuclear Information System (INIS)

    Che, Moo Seong; Cheong, Chang Hyeon; Lee, Byeong Cheol

    1996-07-01

    The objective of this study is the development of a methodology for optimizing the Surveillance Test Interval (STI) and Allowed Outage Time (AOT) using PSA methods that can supplement the current deterministic methods, and the improvement of the safety of Korean nuclear power plants. In the first year of this study, a survey of the assessment methodologies, modeling and results of domestic and international research was performed as the basic step before developing the assessment methodology of this study. An assessment methodology that addresses the problems revealed in many other studies is presented, and its application to an example system confirms the feasibility of the method.

  9. A study on assessment methodology of surveillance test interval and allowed outage time

    Energy Technology Data Exchange (ETDEWEB)

    Che, Moo Seong; Cheong, Chang Hyeon; Lee, Byeong Cheol [Seoul Nationl Univ., Seoul (Korea, Republic of)] (and others)

    1996-07-15

    The objective of this study is the development of a methodology for optimizing the Surveillance Test Interval (STI) and Allowed Outage Time (AOT) using PSA methods that can supplement the current deterministic methods, and the improvement of the safety of Korean nuclear power plants. In the first year of this study, a survey of the assessment methodologies, modeling and results of domestic and international research was performed as the basic step before developing the assessment methodology of this study. An assessment methodology that addresses the problems revealed in many other studies is presented, and its application to an example system confirms the feasibility of the method.

  10. Trimester specific reference intervals for thyroid function tests in normal Indian pregnant women.

    Science.gov (United States)

    Sekhri, Tarun; Juhi, Juhi Agarwal; Wilfred, Reena; Kanwar, Ratnesh S; Sethi, Jyoti; Bhadra, Kuntal; Nair, Sirimavo; Singh, Satveer

    2016-01-01

    Accurate assessment of thyroid function during pregnancy is critical, both for initiation of thyroid hormone therapy and for adjustment of thyroid hormone dose in hypothyroid cases. We evaluated pregnant women who had no past history of thyroid disorders and studied their thyroid function in each trimester. 86 normal pregnant women in the first trimester of pregnancy were selected for setting reference intervals. All were healthy, euthyroid and negative for thyroid peroxidase antibody (TPOAb). These women were serially followed throughout pregnancy. 124 normal nonpregnant subjects were selected for comparison. Thyrotropin (TSH), free thyroxine (FT4), free triiodothyronine (FT3) and anti-TPO were measured using a Roche Elecsys 1010 analyzer. Urinary iodine content was determined by a simple microplate method. The 2.5th and 97.5th percentiles were calculated as the reference intervals for thyroid hormone levels during each trimester. SPSS (version 14.0, SPSS Inc., Chicago, IL, USA) was used for data processing and analysis. The reference intervals for the first, second and third trimesters were as follows: TSH 0.09-6.65, 0.51-6.66, 0.91-4.86 µIU/mL; FT4 9.81-18.53, 8.52-19.43, 7.39-18.28 pM/L; and FT3 3.1-6.35, 2.39-5.12, 2.57-5.68 pM/L, respectively. Thyroid hormone concentrations differed significantly at different stages of gestation. The pregnant women in the study had a median urinary iodine concentration of 150-200 µg/l during each trimester. Trimester-specific reference intervals for thyroid tests during pregnancy have thus been established for pregnant Indian women serially followed during pregnancy, using the 2.5th and 97.5th percentiles.
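
The 2.5th/97.5th percentile reference intervals used above are straightforward to compute nonparametrically. A stdlib-only sketch with hypothetical first-trimester TSH values (not the study's data, and far fewer subjects than a real reference study requires):

```python
def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Nonparametric reference interval from sample percentiles
    (linear interpolation between order statistics)."""
    xs = sorted(values)
    def percentile(p):
        k = (len(xs) - 1) * p / 100
        f = int(k)
        c = min(f + 1, len(xs) - 1)
        return xs[f] + (k - f) * (xs[c] - xs[f])
    return percentile(lower_pct), percentile(upper_pct)

# Hypothetical TSH values (uIU/mL) for illustration only.
tsh = [0.1, 0.4, 0.8, 1.1, 1.5, 1.9, 2.2, 2.6, 3.0, 3.5,
       3.9, 4.2, 4.8, 5.1, 5.6, 6.0, 6.3, 6.5, 6.6, 6.7]
lo, hi = reference_interval(tsh)
```

Trimester-specific intervals are obtained simply by running this separately on each trimester's measurements.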

  11. The effect of a daily quiz (TOPday) on self-confidence, enthusiasm, and test results for biomechanics.

    Science.gov (United States)

    Tanck, Esther; Maessen, Martijn F H; Hannink, Gerjon; van Kuppeveld, Sascha M H F; Bolhuis, Sanneke; Kooloos, Jan G M

    2014-01-01

    Many students in Biomedical Sciences have difficulty understanding biomechanics. In a second-year course, biomechanics is taught in the first week and examined at the end of the fourth week. Knowledge is retained longer if the subject material is repeated. However, how does one encourage students to repeat the subject matter? For this study, we developed 'two opportunities to practice per day (TOPday)', consisting of multiple-choice questions on biomechanics with immediate feedback, which were sent via e-mail. We investigated the effect of TOPday on self-confidence, enthusiasm, and test results for biomechanics. All second-year students (n = 95) received a TOPday of biomechanics on every regular course day with increasing difficulty during the course. At the end of the course, a non-anonymous questionnaire was conducted. The students were asked how many TOPday questions they completed (0-6 questions [group A]; 7-18 questions [group B]; 19-24 questions [group C]). Other questions included the appreciation for TOPday, and increase (no/yes) in self-confidence and enthusiasm for biomechanics. Seventy-eight students participated in the examination and completed the questionnaire. The appreciation for TOPday in group A (n = 14), B (n = 23) and C (n = 41) was 7.0 (95 % CI 6.5-7.5), 7.4 (95 % CI 7.0-7.8), and 7.9 (95 % CI 7.6-8.1), respectively (p biomechanics due to TOPday. In addition, they had a higher test result for biomechanics (p biomechanics on the other.
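
The 95% confidence intervals for the mean appreciation scores reported above follow the usual x̄ ± z·s/√n construction. A stdlib-only sketch with hypothetical scores (not the study's data):

```python
import math
from statistics import mean, stdev

def ci_mean(xs, z=1.96):
    """Approximate 95% confidence interval for a mean: x-bar +/- z * s / sqrt(n)."""
    se = stdev(xs) / math.sqrt(len(xs))
    return mean(xs) - z * se, mean(xs) + z * se

# Hypothetical appreciation scores on a 0-10 scale.
scores = [7, 8, 8, 9, 7, 8, 9, 8, 7, 8]
lo, hi = ci_mean(scores)
```

For the small group sizes in the study (n = 14, 23, 41), a t-quantile in place of z = 1.96 would be slightly more accurate.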

  12. Learned helplessness: effects of response requirement and interval between treatment and testing.

    Science.gov (United States)

    Hunziker, M H L; Dos Santos, C V

    2007-11-01

    Three experiments investigated learned helplessness in rats, manipulating response requirements, shock duration, and intervals between treatment and testing. In Experiment 1, rats previously exposed to uncontrollable or no shocks were tested under one of four different contingencies of negative reinforcement: an FR 1 or FR 2 escape contingency for running, and an FR 1 escape contingency for jumping (differing in maximum shock duration, 10 s or 30 s). The results showed that the uncontrollable shocks produced a clear operant learning deficit (learned helplessness effect) only when the animals were tested under the jumping FR 1 escape contingency with 10-s maximum shock duration. Experiment 2 isolated the effects of uncontrollability from shock exposure per se and showed that the escape deficit observed using the FR 1 jumping escape response (10-s shock duration) was produced by the uncontrollability of shock. Experiment 3 showed that, using the FR 1 jumping escape contingency in the test, the learned helplessness effect was observed one, 14 or 28 days after treatment. These results suggest that running may not be an appropriate test for learned helplessness, and that many diverging results found in the literature might be accounted for by the confounding effects of respondent and operant contingencies present when running is required of rats.

  13. The Effect of Retention Interval Task Difficulty on Young Children's Prospective Memory: Testing the Intention Monitoring Hypothesis

    Science.gov (United States)

    Mahy, Caitlin E. V.; Moses, Louis J.

    2015-01-01

    The current study examined the impact of retention interval task difficulty on 4- and 5-year-olds' prospective memory (PM) to test the hypothesis that children periodically monitor their intentions during the retention interval and that disrupting this monitoring may result in poorer PM performance. In addition, relations among PM, working memory,…

  14. QTc interval prolongation in children with Turner syndrome: the results of exercise testing and 24-h ECG.

    Science.gov (United States)

    Dalla Pozza, Robert; Bechtold, Susanne; Urschel, Simon; Netz, Heinrich; Schwarz, Hans-Peter

    2009-01-01

    Turner syndrome (TS) is the most common sex chromosome abnormality in females. Recently, a prolongation of the rate-corrected QT (QTc) interval in the electrocardiogram (ECG) of TS patients has been reported. A prolonged QTc interval has been correlated to an increased risk for sudden cardiac death, and medical treatment is warranted in patients with congenital long QT syndrome (LQTS). Additionally, several drugs of common use are contraindicated in LQTS because of their effects on myocardial repolarization. The importance of the QTc prolongation in TS patients is not known at present. Eighteen TS patients with a prolonged QTc interval (group 1) and 11 TS patients with a normal QTc interval (group 2) (mean age 12.6+/-3.1 vs. 11.8+/-2.1 years, respectively) were tested. The QTc interval was calculated during exercise testing and during 24-h ECG recordings. None of the patients experienced adverse cardiac events during the tests. The mean QTc interval decreased from 0.467 to 0.432 s in group 1 and from 0.432 to 0.412 s in group 2. During the 24-h ECG, the maximum QTc interval was significantly prolonged in group 1 (0.51 vs. 0.465 s, pinformation about the cardiac risk in the single TS patient with a prolonged QTc interval. This helps in counseling these girls, as clear therapeutic guidelines are currently lacking.
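
The rate-corrected QT interval discussed above is conventionally obtained with Bazett's formula, QTc = QT/√RR (intervals in seconds); the abstract does not state which correction was used, so this is a generic sketch:

```python
import math

def qtc_bazett(qt_s, rr_s):
    # Bazett's correction: QTc = QT / sqrt(RR), both intervals in seconds.
    return qt_s / math.sqrt(rr_s)

# At 60 bpm (RR = 1.0 s) QTc equals QT; faster rates scale the measured QT up.
qtc_rest = qtc_bazett(0.40, 1.0)          # 0.40 s
qtc_exercise = qtc_bazett(0.40, 60 / 75)  # QT of 400 ms at 75 bpm
prolonged = qtc_exercise > 0.46           # illustrative cut-off, an assumption here
```

Because RR shrinks during exercise, an unchanged raw QT yields a larger QTc, which is why QTc behavior during exercise testing and 24-h recordings is informative.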

  15. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS …, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best in terms of in-sample likelihood criteria.

  16. A study on assessment methodology of surveillance test interval and Allowed Outage Time

    International Nuclear Information System (INIS)

    Che, Moo Seong; Cheong, Chang Hyeon; Ryu, Yeong Woo; Cho, Jae Seon; Heo, Chang Wook; Kim, Do Hyeong; Kim, Joo Yeol; Kim, Yun Ik; Yang, Hei Chang

    1997-07-01

    The objective of this study is the development of a methodology for optimizing the Surveillance Test Interval (STI) and Allowed Outage Time (AOT) using PSA methods that can supplement the current deterministic methods, and the improvement of the safety of Korean nuclear power plants. In the first year of this study, a survey of the assessment methodologies, modeling and results of domestic and international research was performed as the basic step before developing the assessment methodology of this study. An assessment methodology that addresses the problems revealed in many other studies is presented, and its application to an example system confirms the feasibility of the method. In the second year of this study, sensitivity analyses of the component failure factors were performed on the basis of the first year's assessment methodology, and the interaction between STI and AOT was modeled and quantified. In addition, the reliability assessment methodology for the diesel generator was reviewed and applied to the PSA code

  17. A study on assessment methodology of surveillance test interval and Allowed Outage Time

    Energy Technology Data Exchange (ETDEWEB)

    Che, Moo Seong; Cheong, Chang Hyeon; Ryu, Yeong Woo; Cho, Jae Seon; Heo, Chang Wook; Kim, Do Hyeong; Kim, Joo Yeol; Kim, Yun Ik; Yang, Hei Chang [Seoul National Univ., Seoul (Korea, Republic of)

    1997-07-15

    The objective of this study is the development of a methodology for optimizing the Surveillance Test Interval (STI) and Allowed Outage Time (AOT) using PSA methods that can supplement the current deterministic methods, and the improvement of the safety of Korean nuclear power plants. In the first year of this study, a survey of the assessment methodologies, modeling and results of domestic and international research was performed as the basic step before developing the assessment methodology of this study. An assessment methodology that addresses the problems revealed in many other studies is presented, and its application to an example system confirms the feasibility of the method. In the second year of this study, sensitivity analyses of the component failure factors were performed on the basis of the first year's assessment methodology, and the interaction between STI and AOT was modeled and quantified. In addition, the reliability assessment methodology for the diesel generator was reviewed and applied to the PSA code.

  18. A study on the optimization of test interval for check valves of Ulchin Unit 3 using the risk-informed in-service testing approach

    International Nuclear Information System (INIS)

    Kang, D. I.; Kim, K. Y.; Yang, Z. A.; Ha, J. J.

    2002-01-01

    We optimized the test interval for check valves of Ulchin Unit 3 using the risk-informed in-service testing (IST) approach. First, we categorized the IST check valves of Ulchin Unit 3 according to their contributions to plant safety. Next, we performed a risk analysis on the relaxation of the test interval for check valves identified as of relatively low importance to the safety of Ulchin Unit 3, to identify their maximum increasable test interval. Finally, we estimated the number of tests of IST check valves to be performed under the changed test intervals. The study results are as follows. Importance categorization of the IST check valves: 24 HSSCs (11.48%), 40 ISSCs (19.14%), and 462 LSSCs (69.38%). Maximum increasable test interval: 6 times the current test interval for ISSCs and 40 times for LSSCs. The number of IST check valve tests performed over 6 refueling periods can be reduced from 7692 to 1333 (an 82.7% reduction)

  19. Clinimetric properties of the Tinetti Mobility Test, Four Square Step Test, Activities-specific Balance Confidence Scale, and spatiotemporal gait measures in individuals with Huntington's disease.

    Science.gov (United States)

    Kloos, Anne D; Fritz, Nora E; Kostyk, Sandra K; Young, Gregory S; Kegelmeyer, Deb A

    2014-09-01

    Individuals with Huntington's disease (HD) experience balance and gait problems that lead to falls. Clinicians currently have very little information about the reliability and validity of outcome measures to determine the efficacy of interventions that aim to reduce balance and gait impairments in HD. This study examined the reliability and concurrent validity of spatiotemporal gait measures, the Tinetti Mobility Test (TMT), Four Square Step Test (FSST), and Activities-specific Balance Confidence (ABC) Scale in individuals with HD. Participants with HD [n = 20; mean age ± SD = 50.9 ± 13.7 years; 7 male] were tested on spatiotemporal gait measures and the TMT, FSST, and ABC Scale before and after a six-week period to determine test-retest reliability and minimal detectable change (MDC) values. Linear relationships between gait and clinical measures were estimated using Pearson's correlation coefficients. Spatiotemporal gait measures, the TMT total, and the FSST showed good to excellent test-retest reliability (ICC > 0.75). MDC values were 0.30 m/s and 0.17 m/s for velocity in forward and backward walking respectively, four points for the TMT, and 3 s for the FSST. The TMT and FSST were highly correlated with most spatiotemporal measures. The ABC Scale demonstrated lower reliability and less concurrent validity than the other measures. The high test-retest reliability over a six-week period and the concurrent validity between the TMT, FSST, and spatiotemporal gait measures suggest that the TMT and FSST may be useful outcome measures for future intervention studies in ambulatory individuals with HD. Copyright © 2014 Elsevier B.V. All rights reserved.
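    The MDC values reported above are conventionally derived from the between-subject SD and the test-retest ICC via the standard error of measurement (MDC95 = 1.96 · √2 · SEM). A minimal sketch with hypothetical inputs, not the study's raw data:

```python
import math

def minimal_detectable_change(sd, icc, z=1.96):
    """MDC95 from test-retest data: SEM = SD * sqrt(1 - ICC);
    MDC = z * sqrt(2) * SEM (sqrt(2) reflects two measurement occasions)."""
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# Hypothetical inputs: SD = 0.25 m/s for gait velocity, test-retest ICC = 0.85
print(round(minimal_detectable_change(0.25, 0.85), 2))  # 0.27 (m/s)
```

    A change smaller than the MDC cannot be distinguished from measurement noise at the chosen confidence level.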

  20. Constrained optimization of test intervals using a steady-state genetic algorithm

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Sanchez, A.; Serradell, V.

    2000-01-01

    There is growing interest from both the regulatory authorities and the nuclear industry in stimulating the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource versus risk-control effectiveness principles formally enter into such optimization problems. This paper presents an approach for using PRA models to conduct the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or burden is minimized while the risk or performance is constrained to a given level, or vice versa. The paper begins with the problem formulation, where the objective function and constraints that apply in the constrained optimization of TIs, based on risk and cost models at the system level, are derived. Next, the foundation of the optimizer is given, which is derived by customizing an SSGA to allow optimizing TIs under constraints. A case study is also performed using this approach, which shows the benefits of adopting both PRA models and genetic algorithms, in particular for the constrained optimization of TIs; a great benefit is also expected from using this approach to solve other engineering optimization problems. However, as concluded in this paper, care must be taken when using genetic algorithms in constrained optimization problems.
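    The scheme described (minimize test cost subject to a risk constraint, with a steady-state GA that replaces only the worst member each step) can be sketched as follows. The cost/risk model forms, coefficients, bounds and penalty factor are illustrative assumptions, not the paper's plant-specific models:

```python
import random

# Toy models (assumed forms): for each component i, testing cost ~ C[i]/TI_i
# (more frequent tests cost more) while risk ~ R[i]*TI_i (unavailability
# grows with the interval).
C = [10.0, 6.0, 8.0]       # cost coefficients (hypothetical)
R = [2e-6, 1e-6, 3e-6]     # risk coefficients (hypothetical)
R_MAX = 4e-3               # risk (performance) constraint
BOUNDS = (24.0, 2000.0)    # allowed TI range in hours

def cost(ti):
    return sum(c / t for c, t in zip(C, ti))

def risk(ti):
    return sum(r * t for r, t in zip(R, ti))

def fitness(ti):
    # Penalty method: minimise cost, heavily penalise constraint violation
    return cost(ti) + max(0.0, risk(ti) - R_MAX) * 1e6

def mutate(ti):
    j = random.randrange(len(ti))
    child = ti[:]
    child[j] = min(BOUNDS[1], max(BOUNDS[0], child[j] * random.uniform(0.8, 1.2)))
    return child

def crossover(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def tournament(pop, k=3):
    return min(random.sample(pop, k), key=fitness)

def ssga(pop_size=30, steps=4000, seed=1):
    random.seed(seed)
    pop = [[random.uniform(*BOUNDS) for _ in C] for _ in range(pop_size)]
    pop[0] = [BOUNDS[0]] * len(C)   # seed one conservative, feasible individual
    for _ in range(steps):
        child = mutate(crossover(tournament(pop), tournament(pop)))
        worst = max(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) < fitness(pop[worst]):
            pop[worst] = child       # steady-state: only the worst is replaced
    return min(pop, key=fitness)

best = ssga()
print([round(t) for t in best], f"risk={risk(best):.2e}", f"cost={cost(best):.3f}")
```

    The steady-state replacement policy (one child per step, displacing the current worst) is what distinguishes the SSGA from a generational GA, and the penalty term is one simple way to handle the constraint the paper discusses.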

  1. Quality specifications for the extra-analytical phase of laboratory testing: Reference intervals and decision limits.

    Science.gov (United States)

    Ceriotti, Ferruccio

    2017-07-01

    Reference intervals and decision limits are a critical part of the clinical laboratory report. The evaluation of their correct use represents a tool to verify post-analytical quality. Four elements are identified as indicators: 1. The use of decision limits for lipids and glycated hemoglobin. 2. The use, whenever possible, of common reference values. 3. The presence of gender-related reference intervals for at least the following common serum measurands (besides, obviously, the fertility-related hormones): alkaline phosphatase (ALP), alanine aminotransferase (ALT), creatine kinase (CK), creatinine, gamma-glutamyl transferase (GGT), IgM, ferritin, iron, transferrin, urate, red blood cells (RBC), hemoglobin (Hb) and hematocrit (Hct). 4. The presence of age-related reference intervals. The problem of specific reference intervals for elderly people is discussed, but their use is not recommended; on the contrary, pediatric age-related reference intervals are necessary at least for the following common serum measurands: ALP, amylase, creatinine, inorganic phosphate, lactate dehydrogenase, aspartate aminotransferase, urate, insulin-like growth factor 1, white blood cells, RBC, Hb, Hct, alpha-fetoprotein and the fertility-related hormones. The lack of such reference intervals may imply significant risks for the patients. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  2. Gait in children with cerebral palsy : observer reliability of Physician Rating Scale and Edinburgh Visual Gait Analysis Interval Testing scale

    NARCIS (Netherlands)

    Maathuis, KGB; van der Schans, CP; van Iperen, A; Rietman, HS; Geertzen, JHB

    2005-01-01

    The aim of this study was to test the inter- and intra-observer reliability of the Physician Rating Scale (PRS) and the Edinburgh Visual Gait Analysis Interval Testing (GAIT) scale for use in children with cerebral palsy (CP). Both assessment scales are quantitative observational scales, evaluating

  3. Logrank Test and Interval Overlap Test for Bactericera cockerelli (Hemiptera: Triozidae) Under Different Fertilization Treatments for 7705 Tomato Hybrid

    Science.gov (United States)

    Vargas-Madríz, Haidel; Bautista-Martínez, Néstor; Vera-Graziano, Jorge; Sánchez-García, Prometeo; García-Gutiérrez, Cipriano; Sánchez-Soto, Saúl; de Jesús García-Avila, Clemente

    2014-01-01

    It is known that some nutrients can have both negative and positive effects on some populations of insects. To test this, the Logrank test and the Interval Overlap Test were applied over two crop cycles (February–May and May–August) of the 7705 tomato hybrid, and the effect on the psyllid, Bactericera cockerelli (Sulc.) (Hemiptera: Triozidae), was examined under greenhouse conditions. Tomato plants were grown in polythene bags and irrigated with the following solutions: T1, Steiner solution; T2, Steiner solution with nitrogen reduced to 25%; T3, Steiner solution with potassium reduced to 25%; and T4, Steiner solution with calcium reduced to 25%. In the Logrank test, a significant difference was found when comparing the survival parameters of B. cockerelli between the treatment cohorts T1–T2, T1–T3, T1–T4, T2–T3, and T3–T4, while no significant difference was found in the T2–T4 comparison in the February–May cycle. In the May–August cycle, significant differences were found for the comparisons T1–T2, T1–T3, and T1–T4, while no significant differences were found in the T2–T3, T2–T4, and T3–T4 comparisons of survival parameters of B. cockerelli fed on the 7705 tomato hybrid. The Interval Overlap Test was also applied to the treatment cohorts (T1, T2, T3, and T4) in the February–May and May–August cycles. T1 and T2 compare similarly in both cycles when feeding on the treatments up to 36 d. Similarly, in T1 and T3, the behavior of the insect is similar when feeding on the treatments up to 40 and 73 d, respectively. The T2–T3 and T2–T4 comparisons are similar when feeding on both treatments up to 42 and 38 d, and 37 and 63 d, respectively. Finally, the T3–T4 comparison was similar when feeding on both treatments up to 20 and 46 d, respectively.
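    The Logrank test used here compares observed versus expected event counts between two survival cohorts at each event time. A self-contained sketch with hypothetical survival data (the `lifelines` package provides an equivalent `logrank_test`):

```python
def logrank_statistic(times1, events1, times2, events2):
    """Two-sample logrank chi-square statistic (1 df; > 3.84 means p < 0.05).
    times*: observation times; events*: 1 = event observed, 0 = censored."""
    data = ([(t, e, 0) for t, e in zip(times1, events1)] +
            [(t, e, 1) for t, e in zip(times2, events2)])
    o_minus_e = var = 0.0
    for t in sorted({t for t, e, _ in data if e == 1}):
        n1 = sum(1 for ti, _, g in data if ti >= t and g == 0)  # at risk, cohort 1
        n2 = sum(1 for ti, _, g in data if ti >= t and g == 1)
        d1 = sum(1 for ti, e, g in data if ti == t and e == 1 and g == 0)
        d2 = sum(1 for ti, e, g in data if ti == t and e == 1 and g == 1)
        n, d = n1 + n2, d1 + d2
        if n < 2:
            continue
        o_minus_e += d1 - d * n1 / n                 # observed minus expected
        var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var

# Hypothetical survival times (days): two clearly separated cohorts
print(round(logrank_statistic([1, 2, 3], [1, 1, 1], [10, 11, 12], [1, 1, 1]), 2))
```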

  4. Developing confidence in a coupled TH model based on the results of experiment by using engineering scale test facility, 'COUPLE'

    International Nuclear Information System (INIS)

    Fujisaki, Kiyoshi; Suzuki, Hideaki; Fujita, Tomoo

    2008-03-01

    It is necessary to understand quantitative changes in near-field conditions and processes over time and space in order to model the near-field evolution after emplacement of the engineered barriers. However, the coupled phenomena in the near field are complicated because thermal, hydraulic, mechanical and chemical processes interact with each other. The question, therefore, is whether the applied model represents the coupled behavior adequately. In order to develop confidence in the modeling, it is necessary to compare it with the results of coupled-behavior experiments in the laboratory or in situ. In this report, we evaluated the applicability of a coupled T-H model under simulated near-field conditions against the results of a coupled T-H experiment in the laboratory. As a result, it has been shown that the model fits the measured data reasonably well under these conditions. (author)

  5. Human error considerations and annunciator effects in determining optimal test intervals for periodically inspected standby systems

    International Nuclear Information System (INIS)

    McWilliams, T.P.; Martz, H.F.

    1981-01-01

    This paper incorporates the effects of four types of human error into a model for determining the optimal time between periodic inspections which maximizes the steady-state availability of standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, to study the benefits of annunciating the standby system, and to determine optimal inspection intervals. Several examples representative of nuclear power plant applications are presented.
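    The trade-off being optimized can be seen in a first-order model (ignoring the human-error and annunciator states of the full Markov chain): unavailability from undetected failures grows with the interval tau, while the test-downtime contribution shrinks with it. A sketch under assumed rates:

```python
import math

def unavailability(tau, lam, t_test):
    """Mean unavailability of a periodically inspected standby component:
    undetected-failure term lam*tau/2 plus test-downtime term t_test/tau.
    (First-order model; the paper's Markov chain adds human-error effects.)"""
    return lam * tau / 2.0 + t_test / tau

def optimal_interval(lam, t_test):
    # d/d tau = lam/2 - t_test/tau**2 = 0  ->  tau* = sqrt(2*t_test/lam)
    return math.sqrt(2.0 * t_test / lam)

lam, t_test = 1e-4, 2.0            # assumed failure rate (/h) and test time (h)
tau_star = optimal_interval(lam, t_test)
print(round(tau_star), round(unavailability(tau_star, lam, t_test), 4))  # 200 0.02
```

    At the optimum the two contributions are equal; human errors introduced by testing itself push the optimal interval longer, which is the effect the paper quantifies.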

  6. Self-care confidence may be more important than cognition to influence self-care behaviors in adults with heart failure: Testing a mediation model.

    Science.gov (United States)

    Vellone, Ercole; Pancani, Luca; Greco, Andrea; Steca, Patrizia; Riegel, Barbara

    2016-08-01

    Cognitive impairment can reduce the self-care abilities of heart failure patients. Theory and preliminary evidence suggest that self-care confidence may mediate the relationship between cognition and self-care, but further study is needed to validate this finding. The aim of this study was to test the mediating role of self-care confidence between specific cognitive domains and heart failure self-care. Secondary analysis of data from a descriptive study. Three outpatient sites in Pennsylvania and Delaware, USA. A sample of 280 adults with chronic heart failure, 62 years old on average and mostly male (64.3%). Data on heart failure self-care and self-care confidence were collected with the Self-Care of Heart Failure Index 6.2. Data on cognition were collected by trained research assistants using a neuropsychological test battery measuring simple and complex attention, processing speed, working memory, and short-term memory. Sociodemographic data were collected by self-report. Clinical information was abstracted from the medical record. Mediation analysis was performed with structural equation modeling, and indirect effects were evaluated with bootstrapping. Most participants had at least one impaired cognitive domain. In the mediation models, self-care confidence consistently influenced self-care and totally mediated the relationship between simple attention and self-care and between working memory and self-care (comparative fit index range: .929-.968; root mean squared error of approximation range: .032-.052). Except for short-term memory, which had a direct effect on self-care maintenance, the other cognitive domains were unrelated to self-care. Self-care confidence appears to be an important factor influencing heart failure self-care, even in patients with impaired cognition. As few studies have successfully improved cognition, interventions addressing confidence should be considered as a way to improve self-care in this population. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Impact of proof test interval and coverage on probability of failure of safety instrumented function

    International Nuclear Information System (INIS)

    Jin, Jianghong; Pang, Lei; Hu, Bin; Wang, Xiaodong

    2016-01-01

    Highlights: • Introducing proof test coverage makes the calculation of the probability of failure for a SIF more accurate. • The probability of failure undetected by the proof test is defined independently as PTIF and calculated. • PTIF is quantified using a reliability block diagram and the simplified formula for PFDavg. • Improving proof test coverage and adopting a reasonable test period can reduce the probability of failure for a SIF. - Abstract: Imperfection of the proof test can result in failure of the safety function of a safety instrumented system (SIS) at any time in its life period. IEC 61508 and other references have ignored or only superficially analyzed the imperfection of proof testing. In order to further study the impact of proof test imperfection on the probability of failure of a safety instrumented function (SIF), the necessity of proof testing and the influence of its imperfection on system performance are first analyzed theoretically. The probability of failure of a SIF resulting from proof test imperfection is defined as the probability of test-independent failures (PTIF), and PTIF is calculated separately by introducing proof test coverage and adopting a reliability block diagram, with reference to the simplified calculation formula for the average probability of failure on demand (PFDavg). The results show that a shorter proof test period and higher proof test coverage yield a smaller probability of failure for the SIF. The probability of failure calculated with proof test coverage included is more accurate.
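    A minimal sketch of the simplified PFDavg calculation with proof test coverage, for a single-channel (1oo1) architecture; the figures are hypothetical, and the paper's reliability-block-diagram treatment is more general:

```python
def pfd_avg(lambda_du, ti, coverage, lifetime):
    """Simplified average PFD for a 1oo1 safety function with imperfect
    proof testing: failures detectable by the proof test are renewed every
    TI hours, while the remainder (test-independent failures, PTIF) persists
    until the end-of-life overhaul."""
    covered = coverage * lambda_du * ti / 2.0
    p_tif = (1.0 - coverage) * lambda_du * lifetime / 2.0
    return covered + p_tif

# Hypothetical figures: lambda_DU = 2e-6 /h, TI = 1 year, 15-year lifetime
for cov in (1.0, 0.9, 0.6):
    print(f"coverage={cov:.0%}  PFDavg={pfd_avg(2e-6, 8760, cov, 15 * 8760):.2e}")
```

    Even modest imperfection dominates the result here because the uncovered failures accumulate over the whole lifetime rather than one test interval, which is the paper's central point.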

  8. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests

    Science.gov (United States)

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around...

  9. Effects of an intensive clinical skills course on senior nursing students' self-confidence and clinical competence: A quasi-experimental post-test study.

    Science.gov (United States)

    Park, Soohyun

    2018-02-01

    To foster nursing professionals, nursing education requires the integration of knowledge and practice. Nursing students in their senior year experience considerable stress in performing the core nursing skills because, typically, they have had limited opportunities to practice these skills in their clinical practicum. Therefore, nurse educators should revise nursing curricula to focus on core nursing skills. To identify the effect of an intensive clinical skills course for senior nursing students on their self-confidence and clinical competence. A quasi-experimental post-test study. A university in South Korea during the 2015-2016 academic year. A convenience sample of 162 senior nursing students. The experimental group (n=79) underwent the intensive clinical skills course, whereas the control group (n=83) did not. During the course, students repeatedly practiced the 20 items that make up the core basic nursing skills using clinical scenarios. Participants' self-confidence in the core clinical nursing skills was measured using a 10-point scale, while their clinical competence in these skills was measured using the core clinical nursing skills checklist. Independent t-tests and chi-square tests were used to analyze the data. The mean scores for self-confidence and clinical competence were higher in the experimental group than in the control group. The intensive clinical skills course had a positive effect on senior nursing students' self-confidence and clinical competence in the core clinical nursing skills. This study emphasizes the importance of reeducation via a clinical skills course during the transition from student to nursing professional. Copyright © 2017. Published by Elsevier Ltd.

  10. MK-801 and memantine act differently on short-term memory tested with different time-intervals in the Morris water maze test.

    Science.gov (United States)

    Duda, Weronika; Wesierska, Malgorzata; Ostaszewski, Pawel; Vales, Karel; Nekovarova, Tereza; Stuchlik, Ales

    2016-09-15

    N-methyl-d-aspartate receptors (NMDARs) play a crucial role in spatial memory formation. In neuropharmacological studies their functioning strongly depends on testing conditions and the dosage of NMDAR antagonists. The aim of this study was to assess the immediate effects of NMDAR blockade by (+)MK-801 or memantine on short-term allothetic memory. Memory was tested in a working-memory version of the Morris water maze test. In our version of the test, rats underwent one day of training with 8 trials, and then three consecutive experimental days on which rats were injected intraperitoneally with a low (MeL, 5 mg/kg) or high (MeH, 20 mg/kg) dose of memantine, 0.1 mg/kg MK-801, or 1 ml/kg saline (SAL) 30 min before testing. On each experimental day there was just one acquisition and one test trial, with an inter-trial interval of 5 or 15 min. During training the hidden platform was relocated after each trial, and during the experiment after each day. The follow-up effect was assessed on day 9. Intact rats improved their spatial memory across the one training day. With a 5 min interval, MeH rats had longer latency than all other rats during retrieval. With a 15 min interval, the MeH rats showed worse working memory (measured as the retrieval-minus-acquisition trial difference) for path than SAL and MeL rats, and for latency than MeL rats. MK-801 rats had longer latency than SAL rats during retrieval. Thus, the high dose of memantine, contrary to the low dose of MK-801, disrupts short-term memory independently of the time interval between acquisition and retrieval. This shows that short-term memory tested in a working-memory version of the water maze is sensitive to several parameters: NMDA receptor antagonist type, dosage, and the time interval between learning and testing. Copyright © 2016. Published by Elsevier B.V.

  11. A Comparative Test of the Interval-Scale Properties of Magnitude Estimation and Case III Scaling and Recommendations for Equal-Interval Frequency Response Anchors.

    Science.gov (United States)

    Schriesheim, Chester A.; Novelli, Luke, Jr.

    1989-01-01

    Differences between recommended sets of equal-interval response anchors derived from scaling techniques using magnitude estimations and Thurstone Case III pair-comparison treatment of complete ranks were compared. Differences in results for 205 undergraduates reflected differences in the samples as well as in the tasks and computational…

  12. Development, validity and reliability testing of the East Midlands Evaluation Tool (EMET) for measuring impacts on trainees' confidence and competence following end of life care training.

    Science.gov (United States)

    Whittaker, B; Parry, R; Bird, L; Watson, S; Faull, C

    2017-02-02

    To develop, test and validate a versatile questionnaire, the East Midlands Evaluation Tool (EMET), for measuring effects of end of life care training events on trainees' self-reported confidence and competence. A paper-based questionnaire was designed on the basis of the English Department of Health's core competences for end of life care, with sections for completion pretraining, immediately post-training and also for longer term follow-up. Preliminary versions were field tested at 55 training events delivered by 13 organisations to 1793 trainees working in diverse health and social care backgrounds. Iterative rounds of development aimed to maximise relevance to events and trainees. Internal consistency was assessed by calculating interitem correlations on questionnaire responses during field testing. Content validity was assessed via qualitative content analysis of (1) responses to questionnaires completed by field tester trainers and (2) field notes from a workshop with a separate cohort of experienced trainers. Test-retest reliability was assessed via repeat administration to a cohort of student nurses. The EMET comprises 27 items with Likert-scaled responses supplemented with questions seeking free-text responses. It measures changes in self-assessed confidence and competence on 5 subscales: communication skills; assessment and care planning; symptom management; advance care planning; overarching values and knowledge. Test-retest reliability was found to be good, as was internal consistency: the questions successfully assess different aspects of the same underlying concept. The EMET provides a time-efficient, reliable and flexible means of evaluating effects of training on self-reported confidence and competence in the key elements of end of life care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  13. The idiosyncratic nature of confidence.

    Science.gov (United States)

    Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador

    2017-11-01

    Confidence is the 'feeling of knowing' that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence.
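    For a sequence task like the one described, the Bayesian benchmark (confidence = perceived probability of being correct) has a closed form under a Gaussian assumption. A sketch with assumed noise and prior parameters, illustrating only the benchmark computation, not the authors' fitted models:

```python
import math

def confidence_in_choice(samples, sigma=1.0, prior_sd=1.0):
    """Categorical decision about the sign of a sequence's mean, with the
    Bayesian benchmark: confidence = posterior probability of being correct.
    Assumes Gaussian samples with known noise sigma and a N(0, prior_sd^2)
    prior on the mean (both parameters are illustrative assumptions)."""
    n = len(samples)
    post_var = 1.0 / (n / sigma**2 + 1.0 / prior_sd**2)   # conjugate update
    post_mean = post_var * sum(samples) / sigma**2
    p_positive = 0.5 * (1.0 + math.erf(post_mean / math.sqrt(2.0 * post_var)))
    choice = "positive" if post_mean >= 0 else "negative"
    return choice, p_positive if choice == "positive" else 1.0 - p_positive

print(confidence_in_choice([0.9, -0.2, 0.6, 0.4]))
```

    The study's finding is that only about half of subjects report confidence tracking this quantity; the rest also weight the posterior uncertainty in the estimated mean itself.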

  14. Results from an Interval Management (IM) Flight Test and Its Potential Benefit to Air Traffic Management Operations

    Science.gov (United States)

    Baxley, Brian; Swieringa, Kurt; Berckefeldt, Rick; Boyle, Dan

    2017-01-01

    NASA's first Air Traffic Management Technology Demonstration (ATD-1) subproject successfully completed a 19-day flight test of an Interval Management (IM) avionics prototype. The prototype was built based on IM standards, integrated into two test aircraft, and then flown in real-world conditions to determine if the goals of improving aircraft efficiency and airport throughput during high-density arrival operations could be met. The ATD-1 concept of operation integrates advanced arrival scheduling, controller decision support tools, and the IM avionics to enable multiple time-based arrival streams into a high-density terminal airspace. IM contributes by calculating airspeeds that enable an aircraft to achieve a spacing interval behind the preceding aircraft. The IM avionics uses its data (route of flight, position, etc.) and Automatic Dependent Surveillance-Broadcast (ADS-B) state data from the Target aircraft to calculate this airspeed. The flight test demonstrated that the IM avionics prototype met the spacing accuracy design goal for three of the four IM operation types tested. The primary issue requiring attention for future IM work is the high rate of IM speed commands and speed reversals. In total, during this flight test, the IM avionics prototype showed significant promise in contributing to the goals of improving aircraft efficiency and airport throughput.

  15. Component unavailability versus inservice test (IST) interval: Evaluations of component aging effects with applications to check valves

    International Nuclear Information System (INIS)

    Vesely, W.E.; Poole, A.B.

    1997-07-01

    Methods are presented for calculating component unavailabilities when inservice test (IST) intervals are changed and when component aging is explicitly included. The methods extend the usual approaches for calculating the unavailability and risk effects of changing IST intervals, which utilize Probabilistic Risk Assessment (PRA) methods that do not explicitly include component aging. Different IST characteristics are handled, including ISTs followed by corrective maintenance which completely or partially renews the component, as well as ISTs not followed by the maintenance activities needed to renew the component. Any downtime associated with the IST, including the test downtime and the following maintenance downtime, is included in the unavailability evaluations. A range of component aging behaviors is studied, including both linear and nonlinear aging. Based upon evaluations completed to date, pooled failure data on check valves show relatively small aging effects (e.g., less than 7% per year). However, data from some plant systems could be evidence for larger aging rates occurring over time periods of less than 5 years. The methods are used in this report to carry out a range of sensitivity evaluations of aging effects for different possible applications. Based on these sensitivity evaluations, summary tables are constructed showing how optimal IST interval ranges for check valves vary with the aging behaviors which might exist. The evaluations are also used to identify IST intervals for check valves which are robust to component aging effects, and general insights on aging effects are extracted. These sensitivity studies and extracted results provide useful information which can be supplemented or updated with plant-specific information. The models and results can also be input to PRAs to determine the associated risk implications.
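    The linear-aging case can be sketched as follows: with failure rate λ(t) = λ0 + a·t since the last renewing test, the interval-averaged unavailability is λ0·T/2 + a·T²/6 plus a test-downtime term. The numbers below are illustrative, chosen only to echo the under-7%-per-year aging figure quoted above:

```python
def mean_unavailability(T, lam0, a, t_down):
    """Interval-averaged unavailability for a component whose failure rate
    ages linearly, lam(t) = lam0 + a*t since the last renewing test:
    average of (lam0*t + a*t**2/2) over [0, T], plus a test-downtime term."""
    return lam0 * T / 2.0 + a * T**2 / 6.0 + t_down / T

def best_interval(lam0, a, t_down):
    grid = [float(T) for T in range(24, 8761, 24)]   # candidate IST intervals (h)
    return min(grid, key=lambda T: mean_unavailability(T, lam0, a, t_down))

# Illustrative check-valve numbers: lam0 = 1e-5 /h, 4 h of test downtime,
# and aging of roughly 7% of lam0 per year (the upper end quoted above)
lam0, t_down = 1e-5, 4.0
for a in (0.0, 0.07 * lam0 / 8760):
    T = best_interval(lam0, a, t_down)
    print(int(T), f"{mean_unavailability(T, lam0, a, t_down):.5f}")
```

    With aging this mild, the optimum barely moves, which is consistent with the report's conclusion that check-valve IST intervals can be chosen to be robust to aging.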

  16. Analysis of well test data from selected intervals in Leuggern deep borehole

    International Nuclear Information System (INIS)

    Karasaki, K.

    1990-07-01

    The applicability of the PTST technique was verified by conducting a sensitivity study of the various parameters. The study showed that for a range of skin parameters the true formation permeability was still successfully estimated using the PTST analysis technique. The analysis technique was then applied to field data from the deep borehole in Leuggern, Northern Switzerland. The analysis indicated that the formation permeability may be as much as one order of magnitude larger than the value based on a no-skin analysis. Swabbing data from the Leuggern deep borehole were also analyzed under the assumption that they are constant-pressure tests. The analysis of the swabbing data indicates that the formation transmissivity is as much as 20 times larger than the previously obtained value. This study is part of an investigation of the feasibility of geologic isolation of nuclear wastes being carried out by the US Department of Energy and the National Cooperative for the Storage of Radioactive Waste of Switzerland.

  17. A study on assessment methodology of surveillance test interval and allowed outage time

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Chang Hyun; You, Young Woo; Cho, Jae Seon; Huh, Chang Wook; Kim, Do Hyoung; Kim, Ju Youl; Kim, Yoon Ik; Yang, Hui Chang; Park, Kang Min [Seoul National Univ., Seoul (Korea, Republic of)

    1998-03-15

    The objective of this study is to develop a methodology for optimizing the Surveillance Test Interval (STI) and Allowed Outage Time (AOT) using PSA methods, in order to supplement the current deterministic methods and improve the safety of Korean nuclear power plants. In this study, as a basic step before developing the assessment methodology, domestic and international research on assessment methodologies, models and results was surveyed. An assessment methodology that remedies the problems revealed in those other studies is presented, and its application to an example system confirms its feasibility. Sensitivity analyses of component failure factors were performed on the basis of this methodology, and the interaction model of STI and AOT was quantified. The reliability assessment methodology for the diesel generator was reviewed and applied to the PSA code. A qualitative assessment of the STI/AOT of the RPS/ESFAS, the most safety-significant systems in a nuclear power plant, was also performed.

  18. Effect of Remote Back-Up Protection System Failure on the Optimum Routine Test Time Interval of Power System Protection

    Directory of Open Access Journals (Sweden)

    Y Damchi

    2013-12-01

    Appropriate operation of the protection system is one of the factors required for desirable reliability in power systems, and it depends vitally on routine testing of the protection system. Precise determination of the optimum routine test time interval (ORTTI) plays a vital role in predicting the maintenance costs of the protection system. In most previous studies, ORTTI has been determined while the remote back-up protection system was considered fully reliable. This assumption is not exactly correct, since the remote back-up protection system may operate incorrectly or fail to operate, just like the primary protection system. Therefore, in order to determine the ORTTI, an extended Markov model is proposed in this paper that considers the failure probability of the remote back-up protection system. In the proposed Markov model of the protection systems, the monitoring facility is taken into account. Moreover, it is assumed that the primary and back-up protection systems are maintained simultaneously. Results show that the effect of remote back-up protection system failures on the reliability indices and optimum routine test intervals of the protection system is considerable.

  19. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    Science.gov (United States)

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimate one or a combination of the following strategies: the adaptive correction for bias proposed by Bock and Mislevy (1982), an adaptive a priori estimate, and an adaptive integration interval.
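    The expected a posteriori (EAP) estimator being tuned here is computed by numerical integration of the posterior over a proficiency grid; re-centering or narrowing that grid around the provisional estimate is the "adaptive integration interval" strategy. A sketch under a 2PL IRT model with hypothetical item parameters:

```python
import math

def eap_estimate(responses, items, lo=-4.0, hi=4.0, n_points=81):
    """EAP proficiency estimate under a 2PL IRT model, by numerical
    integration over the interval [lo, hi] with a N(0,1) prior.
    items: list of (a, b) discrimination/difficulty pairs (hypothetical)."""
    step = (hi - lo) / (n_points - 1)
    num = den = 0.0
    for i in range(n_points):
        theta = lo + i * step
        weight = math.exp(-theta**2 / 2.0)              # N(0,1) prior kernel
        for u, (a, b) in zip(responses, items):
            p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
            weight *= p if u == 1 else (1.0 - p)        # response likelihood
        num += theta * weight
        den += weight
    return num / den

items = [(1.2, -0.5), (1.0, 0.0), (0.8, 0.5), (1.5, 1.0)]  # assumed item bank
print(round(eap_estimate([1, 1, 0, 0], items), 2))
```

    Because the prior pulls the estimate toward its center, a fixed N(0,1) prior biases short tests toward zero, which is the shrinkage the adaptive-prior and bias-correction strategies are meant to counteract.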

  20. Peak oxygen uptake in a sprint interval testing protocol vs. maximal oxygen uptake in an incremental testing protocol and their relationship with cross-country mountain biking performance.

    Science.gov (United States)

    Hebisz, Rafał; Hebisz, Paulina; Zatoń, Marek; Michalik, Kamil

    2017-04-01

    In the literature, the exercise capacity of cyclists is typically assessed using incremental and endurance exercise tests. The aim of the present study was to confirm whether peak oxygen uptake (V̇O2peak) attained in a sprint interval testing protocol correlates with cycling performance, and whether it corresponds to maximal oxygen uptake (V̇O2max) determined by an incremental testing protocol. A sample of 28 trained mountain bike cyclists executed 3 performance tests: (i) an incremental testing protocol (ITP) in which the participant cycled to volitional exhaustion, (ii) a sprint interval testing protocol (SITP) composed of four 30-s maximal-intensity cycling bouts interspersed with 90-s recovery periods, (iii) competition in a simulated mountain biking race. Oxygen uptake, pulmonary ventilation, work, and power output were measured during the ITP and SITP, with postexercise blood lactate and hydrogen ion concentrations collected. Race times were recorded. No significant inter-individual differences were observed with regard to any of the ITP-associated variables. However, 9 individuals presented significantly increased oxygen uptake, pulmonary ventilation, and work output in the SITP compared with the remaining cyclists. In addition, in this group of 9 cyclists, oxygen uptake in SITP was significantly higher than in ITP. After the simulated race, this group of 9 cyclists achieved significantly better competition times (99.5 ± 5.2 min) than the other cyclists (110.5 ± 6.7 min). We conclude that mountain bike cyclists who demonstrate higher peak oxygen uptake in a sprint interval testing protocol than maximal oxygen uptake attained in an incremental testing protocol demonstrate superior competitive performance.

  1. Pacemaker patients’ perspective and experiences in a pacemaker outpatient clinic in relation to test intervals of the pacemaker

    DEFF Research Database (Denmark)

    Lauberg, Astrid; Hansen, Tina; Pedersen, Trine Pernille Dahl

    an evident decline in quality of life regarding psychological and social aspects 6 months after the implantation in terms of cognitive function, work ability, and sexual activity. Mlynarski et al (2009) have found correlations between pacemaker implantation and anxiety and depression. Aim The aim...... the pacemaker and psychological reactions. Patients with pacemakers older than 3 months lacked communication with fellowmen. Conclusion The patients express receiving competent and professional treatment when visiting the outpatient clinic; there seems to be a discrepancy between the long test intervals...... and the critical period in which anxiety and depression may occur. Minor problems and questions may grow into fatal conditions if the patients are not offered an opportunity to discuss this with experts. Patients are not informed that it is possible to discuss problems that imply psychological topics and they do...

  2. Optimization of the test intervals of a nuclear safety system by genetic algorithms, solution clustering and fuzzy preference assignment

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2010-01-01

    In this paper, a procedure is developed for identifying a number of representative solutions manageable for decision-making in a multiobjective optimization problem concerning the test intervals of the components of a safety system of a nuclear power plant. Pareto Front solutions are identified by a genetic algorithm and then clustered by subtractive clustering into 'families'. On the basis of the decision maker's preferences, each family is then synthetically represented by a 'head of the family' solution. This is done by introducing a scoring system that ranks the solutions with respect to the different objectives: a fuzzy preference assignment is employed to this purpose. Level Diagrams are then used to represent, analyze and interpret the Pareto Fronts reduced to the head-of-the-family solutions

  3. Confidence in Numerical Simulations

    International Nuclear Information System (INIS)

    Hemez, Francois M.

    2015-01-01

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to "forecast," that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists "think." This thought process parallels the scientific method whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. "Confidence" derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  4. Confidant Relations in Italy

    Directory of Open Access Journals (Sweden)

    Jenny Isaacs

    2015-02-01

    Full Text Available Confidants are often described as the individuals with whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91% reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture.

  5. Statistical intervals a guide for practitioners

    CERN Document Server

    Hahn, Gerald J

    2011-01-01

    Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction with numerous examples and simple math on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on probability of meeting specified threshold value, and prediction intervals to include observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati
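    The distinction the book draws between interval types can be illustrated with a small sketch (illustrative data and a hard-coded t critical value, not an example from the book): a confidence interval for the population mean versus a prediction interval for a single future observation, computed from the same sample.

```python
import math
from statistics import mean, stdev

# Illustrative sample of n = 10 measurements (made up for this sketch).
x = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.3, 5.1, 4.7]
n, xbar, s = len(x), mean(x), stdev(x)
t = 2.262   # two-sided 95% t critical value for 9 degrees of freedom

# Confidence interval for the mean: covers the *parameter* xbar estimates.
ci = (xbar - t * s / math.sqrt(n), xbar + t * s / math.sqrt(n))

# Prediction interval for one future observation: must also cover the
# new observation's own variability, hence the extra "1 +" term.
pi = (xbar - t * s * math.sqrt(1 + 1 / n), xbar + t * s * math.sqrt(1 + 1 / n))
```

The prediction interval is always wider than the confidence interval from the same data, which is exactly the elementary-level distinction the book builds on.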

  6. Risk-based evaluation of allowed outage time and surveillance test interval extensions for nuclear power plants

    International Nuclear Information System (INIS)

    Gibelli, Sonia Maria Orlando

    2008-03-01

    The main goal of this work is, through the use of Probabilistic Safety Analysis (PSA), to evaluate Technical Specification (TS) Allowed Outage Times (AOT) and Surveillance Test Intervals (STI) extensions for Angra 1 nuclear power plant. PSA has been incorporated as an additional tool, required as part of NPP licensing process. The risk measure used in this work is the Core Damage Frequency (CDF), obtained from the Angra 1 PSA Level 1. AOT and STI extensions are calculated for the Safety Injection System (SIS), Service water System (SAS) and Auxiliary Feedwater System (AFS) through the use of SAPHIRE code. In order to compensate for the risk increase caused by the extensions, compensatory measures as test of redundant train prior to entering maintenance and staggered test strategy are proposed. Results have shown that the proposed AOT extensions are acceptable for the SIS and SAS with the implementation of compensatory measures. The proposed AOT extension is not acceptable for the AFS. The STI extensions are acceptable for all three systems. (author)

  7. Globalization of consumer confidence

    Directory of Open Access Journals (Sweden)

    Çelik Sadullah

    2017-01-01

    Full Text Available The globalization of world economies and the importance of nowcasting analysis have been at the core of the recent literature. Nevertheless, these two strands of research are hardly coupled. This study aims to fill this gap by examining the globalization of the consumer confidence index (CCI) through conventional and unconventional econometric methods. The US CCI is used as the benchmark in tests of comovement among the CCIs of several developing and developed countries, with the data sets divided into three sub-periods: global liquidity abundance, the Great Recession, and post-crisis. The existence and/or degree of globalization of the CCIs varies according to the period, whereas globalization in the form of coherence and similar paths is observed only during the Great Recession and is, surprisingly, stronger in developing/emerging countries.

  8. Targeting Low Career Confidence Using the Career Planning Confidence Scale

    Science.gov (United States)

    McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven

    2006-01-01

    The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…

  9. The impact of athlete leaders on team members’ team outcome confidence: A test of mediation by team identification and collective efficacy

    OpenAIRE

    Fransen, Katrien; Coffee, Pete; Vanbeselaere, Norbert; Slater, Matthew; De Cuyper, Bert; Boen, Filip

    2014-01-01

    Research on the effect of athlete leadership on pre-cursors of team performance such as team confidence is sparse. To explore the underlying mechanisms of how athlete leaders impact their team’s confidence, an online survey was completed by 2,867 players and coaches from nine different team sports in Flanders (Belgium). We distinguished between two types of team confidence: collective efficacy, assessed by the CEQS subscales of Effort, Persistence, Preparation, and Unity; and team outcome con...

  10. Confidence Intervals for Omega Coefficient: Proposal for Calculus.

    Science.gov (United States)

    Ventura-León, José Luis

    2018-01-01

    Reliability is understood as a metric property of the scores of a measurement instrument. The omega coefficient (ω) has recently come into use for estimating reliability. However, measurement is never exact, owing to the influence of random error, so it is necessary to compute and report the confidence interval (CI), which locates the true value within a range of measurement. In this context, the article proposes a way to estimate the CI using the bootstrap method; to facilitate this procedure, R code (freely available software) is provided so that the calculations can be carried out in a user-friendly way. It is hoped that the article will be of help to researchers in the health field.
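    The bootstrap percentile CI the article proposes can be sketched as follows. This is an assumed reconstruction, not the author's R code: synthetic one-factor data stand in for real test scores, and the omega computation uses a crude principal-component approximation to the factor loadings rather than a proper maximum-likelihood factor solution.

```python
import numpy as np

def omega_approx(X):
    # Crude stand-in for omega: loadings taken from the first principal
    # component of the correlation matrix instead of an ML factor model.
    R = np.corrcoef(X, rowvar=False)
    w, v = np.linalg.eigh(R)                      # ascending eigenvalues
    lam = np.sqrt(max(w[-1], 0.0)) * np.abs(v[:, -1])   # approx. loadings
    theta = np.clip(1.0 - lam**2, 0.0, None)            # unique variances
    return lam.sum()**2 / (lam.sum()**2 + theta.sum())

def bootstrap_ci(X, stat, n_boot=1000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample rows (subjects) with replacement.
    rng = np.random.default_rng(seed)
    n = len(X)
    reps = [stat(X[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Synthetic one-factor data: 300 subjects, 6 items (illustrative only).
rng = np.random.default_rng(1)
factor = rng.standard_normal((300, 1))
X = factor + 0.8 * rng.standard_normal((300, 6))
lo, hi = bootstrap_ci(X, omega_approx)
```

Reporting the interval (lo, hi) alongside the point estimate conveys the sampling uncertainty that a bare ω value hides.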

  11. Secure and Usable Bio-Passwords based on Confidence Interval

    OpenAIRE

    Aeyoung Kim; Geunshik Han; Seung-Hyun Seo

    2017-01-01

    The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor, something you know, with the second factor being something you have. However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and login in. In this paper a bio-password-based scheme is proposed as a unique authenticatio...

  12. Intervals of confidence: Uncertain accounts of global hunger

    NARCIS (Netherlands)

    Yates-Doerr, E.

    2015-01-01

    Global health policy experts tend to organize hunger through scales of ‘the individual’, ‘the community’ and ‘the global’. This organization configures hunger as a discrete, measurable object to be scaled up or down with mathematical certainty. This article offers a counter to this approach, using

  13. A quick method to calculate QTL confidence interval

    Indian Academy of Sciences (India)

    2011-08-19

    Aug 19, 2011 ... experimental design and analysis to reveal the real molecular nature of the ... strap sample form the bootstrap distribution of QTL location. The 2.5 and ..... ative probability to harbour a true QTL, hence x-LOD rule is not stable ... Darvasi A. and Soller M. 1997 A simple method to calculate resolv- ing power ...

  14. Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients

    Science.gov (United States)

    Andersson, Björn; Xin, Tao

    2018-01-01

    In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…

  15. An approximate confidence interval for recombination fraction in ...

    African Journals Online (AJOL)

    user

    2011-02-14

    Feb 14, 2011 ... whose parents are not in the pedigree) and θ be the recombination fraction. P(x|g) is the penetrance probability, that is, the probability that an individual with genotype g has phenotype x. Let P(g_k | g_k^f, g_k^m) be the transmission probability, that is, the probability that an individual having genotype k.

  16. Evaluation of the Trail Making Test and interval timing as measures of cognition in healthy adults: comparisons by age, education, and gender.

    Science.gov (United States)

    Płotek, Włodzimierz; Łyskawa, Wojciech; Kluzik, Anna; Grześkowiak, Małgorzata; Podlewski, Roland; Żaba, Zbigniew; Drobnik, Leon

    2014-02-03

    Human cognitive functioning can be assessed using different methods of testing. Age, level of education, and gender may influence the results of cognitive tests. The well-known Trail Making Test (TMT), which is often used to measure frontal lobe function, and the experimental test of Interval Timing (IT) were compared. The methods used in IT included reproduction of auditory and visual stimuli, with subsequent production of time intervals of 1-, 2-, 5-, and 7-second durations with no pattern. Subjects included 64 healthy adult volunteers aged 18-63 (33 women, 31 men). Comparisons were made based on age, education, and gender. TMT was performed quickly and was influenced by age, education, and gender. All reproduced visual and produced intervals were shortened, and the reproduction of auditory stimuli was more complex. Age, education, and gender had a more pronounced impact on the cognitive test than on the interval timing test. The reproduction of short auditory stimuli was more accurate in comparison to other modalities used in the IT test. Interval timing, when compared to the TMT, offers an interesting possibility for testing. Further studies are necessary to confirm this initial observation.

  17. Long-Term Maintenance of Immediate or Delayed Extinction Is Determined by the Extinction-Test Interval

    Science.gov (United States)

    Johnson, Justin S.; Escobar, Martha; Kimble, Whitney L.

    2010-01-01

    Short acquisition-extinction intervals (immediate extinction) can lead to either more or less spontaneous recovery than long acquisition-extinction intervals (delayed extinction). Using rat subjects, we observed less spontaneous recovery following immediate than delayed extinction (Experiment 1). However, this was the case only if a relatively…

  18. Improving allowed outage time and surveillance test interval requirements: a study of their interactions using probabilistic methods

    International Nuclear Information System (INIS)

    Martorell, S.A.; Serradell, V.G.; Samanta, P.K.

    1995-01-01

    Technical Specifications (TS) define the limits and conditions for operating nuclear plants safely. We selected the Limiting Conditions for Operations (LCO) and Surveillance Requirements (SR), both within TS, as the main items to be evaluated using probabilistic methods. In particular, we focused on the Allowed Outage Time (AOT) and Surveillance Test Interval (STI) requirements in LCO and SR, respectively. Already, significant operating and design experience has accumulated revealing several problems which require modifications in some TS rules. Developments in Probabilistic Safety Assessment (PSA) allow the evaluation of effects due to such modifications in AOT and STI from a risk point of view. Thus, some changes have already been adopted in some plants. However, the combined effect of several changes in AOT and STI, i.e. through their interactions, is not addressed. This paper presents a methodology which encompasses, along with the definition of AOT and STI interactions, the quantification of interactions in terms of risk using PSA methods, an approach for evaluating simultaneous AOT and STI modifications, and an assessment of strategies for giving flexibility to plant operation through simultaneous changes on AOT and STI using trade-off-based risk criteria

  19. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.

    Science.gov (United States)

    Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E

    2015-10-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.

  20. Raising Confident Kids

    Science.gov (United States)


  1. Confidence in Numerical Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  2. Unexplained Graft Dysfunction after Heart Transplantation—Role of Novel Molecular Expression Test Score and QTc-Interval: A Case Report

    Directory of Open Access Journals (Sweden)

    Khurram Shahzad

    2010-01-01

    Full Text Available In the current era of immunosuppressive medications there is an increased observed incidence of graft dysfunction in the absence of known histological criteria of rejection after heart transplantation. A noninvasive molecular expression diagnostic test was developed and validated to rule out histological acute cellular rejection. In this paper we present, for the first time, the longitudinal pattern of changes in this novel diagnostic test score along with the QTc-interval in a patient who was admitted with unexplained graft dysfunction. The patient presented with graft failure with negative findings on all known criteria of rejection, including acute cellular rejection, antibody-mediated rejection and cardiac allograft vasculopathy. The molecular expression test score showed a gradual increase and the QTc-interval showed gradual prolongation with the gradual decline in graft function. This paper exemplifies that in patients presenting with unexplained graft dysfunction, the GEP test score and QTc-interval correlate with changes in graft function.

  3. The time interval distribution of sand–dust storms in theory: testing with observational data for Yanchi, China

    International Nuclear Information System (INIS)

    Liu, Guoliang; Zhang, Feng; Hao, Lizhen

    2012-01-01

    We previously introduced a time record model for use in studying the duration of sand–dust storms. In the model, X is the normalized wind speed and Xr is the normalized wind speed threshold for the sand–dust storm. X is represented by a random signal with a normal Gaussian distribution. Storms occur when X ≥ Xr. From this model, the time interval distribution N = A·exp(−bt) can be deduced, wherein N is the number of time intervals with length greater than t, A and b are constants, and b is related to Xr. In this study, sand–dust storm data recorded in spring at the Yanchi meteorological station in China were analysed to verify whether the time interval distribution of the sand–dust storms agrees with this exponential form. We found that the distribution of the time interval between successive sand–dust storms in April agrees well with the exponential equation. However, the interval distribution for the sand–dust storm data for the entire spring period displayed a better fit to the Weibull equation and depended on the variation of the sand–dust storm threshold wind speed. (paper)
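    Fitting the N = A·exp(−bt) form to inter-event times can be sketched as follows. Synthetic exponentially distributed intervals stand in for the Yanchi storm records (so the fit should recover b near 1/scale by construction); real data would simply replace the simulated array.

```python
import numpy as np

# Synthetic inter-event times (hours), exponential by construction.
rng = np.random.default_rng(1)
intervals = rng.exponential(scale=12.0, size=200)

# N(t) = number of intervals longer than t, evaluated at the sorted
# interval lengths themselves (dropping the largest, where N would be 0).
ts = np.sort(intervals)[:-1]
N = np.arange(len(intervals) - 1, 0, -1)

# Linear fit of log N against t: slope = -b, intercept = log A.
slope, logA = np.polyfit(ts, np.log(N), 1)
A, b = np.exp(logA), -slope          # expect b near 1/12 ~ 0.083 here
```

A straight line on the semi-log plot of N versus t is the signature of the exponential form; the spring-season data in the paper deviate from this line, which is why the Weibull form fits better there.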

  4. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in the contingency table analysis. We also demonstrate that both interval estimators based on the WLS method and interval estimators based on Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to the average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.

  5. The effect of the inter-phase delay interval in the spontaneous object recognition test for pigs

    DEFF Research Database (Denmark)

    Kornum, Birgitte Rahbek; Thygesen, Kristin Sjølie; Nielsen, Thomas Rune

    2007-01-01

    In the neuroscience community interest for using the pig is growing. Several disease models have been developed creating a need for validation of behavioural paradigms in these animals. Here, we report the effect of different inter-phase delay intervals on the performance of Göttingen minipigs...

  6. Short-interval test-retest interrater reliability of the Dutch version of the structured clinical interview for DSM-IV personality disorders (SCID-II)

    NARCIS (Netherlands)

    Weertman, A; ArntZ, A; Dreessen, L; van Velzen, C; Vertommen, S

    2003-01-01

    This study examined the short-interval test-retest reliability of the Structured Clinical Interview (SCID-II: First, Spitzer, Gibbon, & Williams, 1995) for DSM-IV personality disorders (PDs). The SCID-II was administered to 69 in- and outpatients on two occasions separated by 1 to 6 weeks. The

  7. Testing equality and interval estimation in binary responses when high dose cannot be used first under a three-period crossover design.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2015-01-01

    When comparing two doses of a new drug with a placebo, we may consider using a crossover design subject to the condition that the high dose cannot be administered before the low dose. Under a random-effects logistic regression model, we focus our attention on dichotomous responses when the high dose cannot be used first under a three-period crossover trial. We derive asymptotic test procedures for testing equality between treatments. We further derive interval estimators to assess the magnitude of the relative treatment effects. We employ Monte Carlo simulation to evaluate the performance of these test procedures and interval estimators in a variety of situations. We use the data taken as a part of trial comparing two different doses of an analgesic with a placebo for the relief of primary dysmenorrhea to illustrate the use of the proposed test procedures and estimators.

  8. The use of regression analysis in determining reference intervals for low hematocrit and thrombocyte count in multiple electrode aggregometry and platelet function analyzer 100 testing of platelet function.

    Science.gov (United States)

    Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D

    2017-11-01

    Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
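    The regression route to covariate-dependent 95% reference intervals can be sketched generically (illustrative simulated data, not the study's measurements, and a homoscedastic-normal-residual assumption): fit the response against the covariate, then take the fitted value ± 1.96 residual standard deviations at each covariate value.

```python
import numpy as np

# Simulated stand-in data: aggregation response vs. platelet count.
rng = np.random.default_rng(2)
platelets = rng.uniform(10, 150, 120)                    # x10^9/L
response = 5 + 0.6 * platelets + rng.normal(0, 8, 120)   # arbitrary units

# Ordinary least-squares line; polyfit returns [slope, intercept].
b1, b0 = np.polyfit(platelets, response, 1)
resid = response - (b0 + b1 * platelets)
sd = resid.std(ddof=2)   # residual SD with 2 fitted parameters

def reference_interval(x):
    # 95% reference interval at platelet count x, assuming normal,
    # homoscedastic residuals around the regression line.
    center = b0 + b1 * x
    return center - 1.96 * sd, center + 1.96 * sd
```

About 95% of observations should fall inside their own covariate-specific interval; the study's caveat about poor coefficients of determination corresponds to a large residual SD, i.e. wide intervals.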

  9. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals provide therefore an objective mean to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
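    The core idea can be sketched by simulation. This is an assumed mechanics, not the authors' exact construction: calibrate per-point envelopes on the order statistics of standard normal samples so that all n plotted points fall inside simultaneously with probability about 1 − α.

```python
import numpy as np

# Simulate the order statistics of many standard normal samples of size n.
rng = np.random.default_rng(0)
n, alpha, nsim = 30, 0.05, 20000
sims = np.sort(rng.standard_normal((nsim, n)), axis=1)

def coverage(gamma):
    # Per-point (1 - gamma) envelopes on each order statistic, and the
    # fraction of simulated samples with ALL points inside at once.
    lo = np.quantile(sims, gamma / 2, axis=0)
    hi = np.quantile(sims, 1 - gamma / 2, axis=0)
    return np.mean(np.all((sims >= lo) & (sims <= hi), axis=1))

# Shrink the pointwise level until the simultaneous coverage reaches
# the target 1 - alpha.
gamma = alpha
while coverage(gamma) < 1 - alpha:
    gamma *= 0.9
```

With the calibrated envelopes, "all points inside the intervals" becomes an objective accept/reject rule for the normal probability plot, replacing the subjective judgment of closeness to a straight line.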

  10. New precise value for the muon magnetic moment and sensitive test of the theory of the hfs interval in muonium

    International Nuclear Information System (INIS)

    Casperson, D.E.; Crane, T.W.; Denison, A.B.; Egan, P.O.; Hughes, V.W.; Mariam, F.G.; Orth, H.; Reist, H.W.; Souder, P.A.; Stambaugh, R.D.; Thompson, P.A.; zu Putlitz, G.

    1977-01-01

    Measurements of Zeeman transitions in the ground state of muonium at strong magnetic field have yielded values for the hfs interval, Δν = 4 463 302.35(52) kHz (0.12 ppm), and for the muon magnetic moment, μ_μ/μ_p = 3.183 3403(44) (1.4 ppm), of considerably higher precision than previous results. The theoretical expression for Δν, including our measured value of μ_μ/μ_p, disagrees with the experimental value by 2.5 standard deviations. The electronic g_J density shift for muonium in Kr has been measured

  11. A strategy for determination of test intervals of k-out-of-n multi-channel systems

    International Nuclear Information System (INIS)

    Cho, S.; Jiang, J.

    2007-01-01

    State-space models for determining the optimal test frequencies for k-out-of-n multi-channel systems are developed in this paper. The analytic solutions for the optimal surveillance test frequencies are derived using the Markov process technique. The solutions show that an optimal test frequency which maximizes the target probability can be determined by decomposing the system states into three states based on the system configuration and success criteria. Examples of quantification of the state probabilities and the optimal test frequencies of a three-channel system and a four-channel system with different success criteria are presented. The strategy for finding the optimal test frequency developed in this paper is generally applicable to any k-out-of-n multi-channel standby system that involves complex testing schemes. (author)
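    As a complement to the abstract above, a minimal single-channel sketch of why an optimal test interval exists at all. This is the standard λT/2 + τ/T approximation, not the paper's k-out-of-n Markov derivation, and the rate and downtime values are hypothetical.

```python
import math

# Minimal sketch, assuming a single periodically tested standby channel:
# with standby failure rate lam (per hour) and test downtime tau (hours
# per test), the time-averaged unavailability is approximately
# U(T) = lam*T/2 + tau/T, minimized at T* = sqrt(2*tau/lam).

def mean_unavailability(T, lam, tau):
    return lam * T / 2.0 + tau / T

def optimal_interval(lam, tau):
    return math.sqrt(2.0 * tau / lam)

lam, tau = 1e-5, 2.0              # hypothetical parameter values
T_star = optimal_interval(lam, tau)
```

    Testing too often dominates the τ/T downtime term, testing too rarely the λT/2 standby-failure term; the optimum balances the two.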

  12. Reclaim your creative confidence.

    Science.gov (United States)

    Kelley, Tom; Kelley, David

    2012-12-01

    Most people are born creative. But over time, a lot of us learn to stifle those impulses. We become warier of judgment, more cautious, more analytical. The world seems to divide into "creatives" and "noncreatives," and too many people resign themselves to the latter category. And yet we know that creativity is essential to success in any discipline or industry. The good news, according to authors Tom Kelley and David Kelley of IDEO, is that we all can rediscover our creative confidence. The trick is to overcome the four big fears that hold most of us back: fear of the messy unknown, fear of judgment, fear of the first step, and fear of losing control. The authors use an approach based on the work of psychologist Albert Bandura in helping patients get over their snake phobias: You break challenges down into small steps and then build confidence by succeeding on one after another. Creativity is something you practice, say the authors, not just a talent you are born with.

  13. Detection of lung cancer through low-dose CT screening (NELSON): a prespecified analysis of screening test performance and interval cancers.

    Science.gov (United States)

    Horeweg, Nanda; Scholten, Ernst Th; de Jong, Pim A; van der Aalst, Carlijn M; Weenink, Carla; Lammers, Jan-Willem J; Nackaerts, Kristiaan; Vliegenthart, Rozemarijn; ten Haaf, Kevin; Yousaf-Khan, Uraujh A; Heuvelmans, Marjolein A; Thunnissen, Erik; Oudkerk, Matthijs; Mali, Willem; de Koning, Harry J

    2014-11-01

    Low-dose CT screening is recommended for individuals at high risk of developing lung cancer. However, CT screening does not detect all lung cancers: some might be missed at screening, and others can develop in the interval between screens. The NELSON trial is a randomised trial to assess the effect of screening with increasing screening intervals on lung cancer mortality. In this prespecified analysis, we aimed to assess screening test performance, and the epidemiological, radiological, and clinical characteristics of interval cancers in NELSON trial participants assigned to the screening group. Eligible participants in the NELSON trial were those aged 50-75 years, who had smoked 15 or more cigarettes per day for more than 25 years or ten or more cigarettes for more than 30 years, and were still smoking or had quit less than 10 years ago. We included all participants assigned to the screening group who had attended at least one round of screening. Screening test results were based on volumetry using a two-step approach. Initially, screening test results were classified as negative, indeterminate, or positive based on nodule presence and volume. Subsequently, participants with an initial indeterminate result underwent follow-up screening to classify their final screening test result as negative or positive, based on nodule volume doubling time. We obtained information about all lung cancer diagnoses made during the first three rounds of screening, plus an additional 2 years of follow-up from the national cancer registry. We determined epidemiological, radiological, participant, and tumour characteristics by reassessing medical files, screening CTs, and clinical CTs. The NELSON trial is registered at www.trialregister.nl, number ISRCTN63545820. 15,822 participants were enrolled in the NELSON trial, of whom 7915 were assigned to low-dose CT screening with increasing interval between screens, and 7907 to no screening. We included 7155 participants in our study, with

  14. Confidence bands for inverse regression models

    International Nuclear Information System (INIS)

    Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

    2010-01-01

    We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract
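    The residual-bootstrap step can be illustrated on a toy problem. The sketch below applies the sup-deviation bootstrap to a plain linear regression with synthetic data; it does not reproduce the paper's convolution-type inverse model.

```python
import random

random.seed(1)

# Toy sketch of a residual-bootstrap uniform band: fit a line by least
# squares, resample residuals to regenerate data, refit, and take the 95th
# percentile of the sup-deviation between refitted and original lines as a
# uniform band half-width. All data here are synthetic.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b            # intercept, slope

n = 50
xs = [i / (n - 1) for i in range(n)]
ys = [1.0 + 2.0 * x + random.gauss(0.0, 0.3) for x in xs]
a, b = fit_line(xs, ys)
resid = [y - (a + b * x) for x, y in zip(xs, ys)]

sup_devs = []
for _ in range(500):                 # residual bootstrap replicates
    ys_star = [a + b * x + random.choice(resid) for x in xs]
    a_s, b_s = fit_line(xs, ys_star)
    sup_devs.append(max(abs((a_s - a) + (b_s - b) * x) for x in xs))
sup_devs.sort()
width = sup_devs[int(0.95 * len(sup_devs))]   # uniform band half-width
band = [(a + b * x - width, a + b * x + width) for x in xs]
```

    The key difference from a pointwise band is the supremum inside the bootstrap loop: the quantile is taken over the worst deviation across the whole design, which is what makes the coverage uniform.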

  15. Models and procedures for interval evaluating the results of control of knowledge in computer systems testing of Navy

    Directory of Open Access Journals (Sweden)

    D. A. Pechnikov

    2018-01-01

    Full Text Available To implement effective military and professional training of Navy specialists, a corresponding educational and material base is needed. As a result of the reduction in the 1990s of the military-industrial branches developing weapons and equipment for the Navy, the latest models of this equipment are now produced in individual copies rather than in batches, and the production of dedicated training samples is not under consideration at all. Under these conditions, only virtual analogues of military equipment and weapons developed by means of information technology, i.e. training and instruction systems (TOS), can be considered capable of providing military-professional training. At the modern level of information technology development, testing is the only universal technical means of monitoring students' knowledge. However, knowledge-control procedures in modern computer testing systems fail to meet the requirements placed on them in two respects: 1 they provide no way to evaluate the error of the test results; 2 they cannot stop testing once the specified reliability of its results has been achieved. To implement operational criterion-referenced pedagogical control of knowledge in the training of Navy specialists, and to enable joint analysis and processing of evaluations of learning outcomes, the following practical recommendations are advisable: 1. Formulating the teacher's system of preferences regarding the quality of trainee training, and regarding the significance of single test tasks within the test, should be considered the essential steps in preparing a test for practical use. 2.
The teacher who first enters his preference systems should check their actual compliance on a sample of 5-10 such test results that cover the full

  16. The effect of an enrolled nursing registration pathway program on undergraduate nursing students' confidence level: A pre- and post-test study.

    Science.gov (United States)

    Crevacore, Carol; Jonas-Dwyer, Diana; Nicol, Pam

    2016-04-01

    In the latter half of the 20th century, registered nurse education moved to university degree level. As a result, there has been a reduction in access for students to clinical experience. In numerous studies, nursing graduates have reported that they do not feel prepared for practice. The importance of maximising every learning opportunity during nursing school is paramount. At Edith Cowan University, a program was initiated that allows students to become enrolled nurses at the midway point of their degree to enable them to work and therefore gain experience in the clinical practice setting during their education. This study investigated the effect of the program on the nursing students' perception of their clinical abilities and explored their ability to link theory to practice. The research design for this study was a quasi-experimental, prospective observational cohort study. The study included 39 second-year nursing students not enrolled in the program (Group 1), 45 second-year nursing students enrolled in the program (Group 2), and 28 third-year nursing students who completed the program and are working as enrolled nurses (Group 3). Participants were asked to complete a Five Dimension of Nursing Scale questionnaire. The quantitative analyses showed that students in Group 1 had statistically significant higher pre-questionnaire perceived abilities across all domains, except in two dimensions when compared to Group 2. The post-questionnaire analysis showed that Group 1 had statistically significant lower perceived abilities in four of the five dimensions compared to Group 2. Group 1 also had significantly lower abilities in all dimensions compared to Group 3. Group 3 had a significantly higher perception of their clinical abilities compared to Group 2. This study highlights the value of meaningful employment for undergraduate nursing students by providing opportunities to increase confidence in clinical abilities. Copyright © 2016 Elsevier Ltd. All rights

  17. Improvement of risk informed surveillance test interval for the safety related instrument and control system of Ulchin units 3 and 4

    International Nuclear Information System (INIS)

    Jang, Seung Cheol; Lee, Yun Hwan; Lee, Seung Joon; Han, Sang Hoon

    2012-05-01

    The purpose of this research is the development of various methodologies necessary for the licensing of the risk-informed surveillance test interval (STI) improvement for the safety-related I and C systems in UCN 3 and 4, for instance, reactor protection system (RPS), engineered safety features actuation system (ESFAS), ESF auxiliary relay cabinet (ARC), and core protection calculator (CPC). The technical adequacy of the methodology was sufficiently verified through the application to the following STI changes:
    o CPC channel functional test (change from 1 month to 3 months, including safety channel and log power test)
    o RPS channel functional test (change from 1 month to 3 months)
    o RPS logic and trip channel test (change from 1 month to 3 months; 1 month for RPS manual actuation test)
    o ESFAS channel functional test (change from 1 month to 3 months)
    o ESFAS logic and trip channel test (change from 1 month to 3 months)
    o ESF auxiliary relay test (change from 1 month to 3 months with staggered test; manual actuation at the ESF ARC is added as a backup of ESF actuation signals during emergency operation

  18. Improvement of risk informed surveillance test interval for the safety related instrumentation and control system of Yonggwang units 3 and 4

    International Nuclear Information System (INIS)

    Jang, Seung Cheol; Lee, Yun Hwan; Lee, Seung Joon; Han, Sang Hoon

    2012-05-01

    The purpose of this research is the development of various methodologies necessary for the licensing of the risk-informed surveillance test interval (STI) improvement for the safety-related I and C systems in YGN 3 and 4, for instance, reactor protection system (RPS), engineered safety features actuation system (ESFAS), ESF auxiliary relay cabinet (ARC), and core protection calculator (CPC). The technical adequacy of the methodology was sufficiently verified through the application to the following STI changes:
    o CPC channel functional test (change from 1 month to 3 months, including safety channel and log power test)
    o RPS channel functional test (change from 1 month to 3 months)
    o RPS logic and trip channel test (change from 1 month to 3 months; 1 month for RPS manual actuation test)
    o ESFAS channel functional test (change from 1 month to 3 months)
    o ESFAS logic and trip channel test (change from 1 month to 3 months)
    o ESF auxiliary relay test (change from 1 month to 3 months with staggered test; manual actuation at the ESF ARC is added as a backup of ESF actuation signals during emergency operation

  19. We will be champions: Leaders' confidence in 'us' inspires team members' team confidence and performance.

    Science.gov (United States)

    Fransen, K; Steffens, N K; Haslam, S A; Vanbeselaere, N; Vande Broek, G; Boen, F

    2016-12-01

    The present research examines the impact of leaders' confidence in their team on the team confidence and performance of their teammates. In an experiment involving newly assembled soccer teams, we manipulated the team confidence expressed by the team leader (high vs neutral vs low) and assessed team members' responses and performance as they unfolded during a competition (i.e., in a first baseline session and a second test session). Our findings pointed to team confidence contagion such that when the leader had expressed high (rather than neutral or low) team confidence, team members perceived their team to be more efficacious and were more confident in the team's ability to win. Moreover, leaders' team confidence affected individual and team performance such that teams led by a highly confident leader performed better than those led by a less confident leader. Finally, the results supported a hypothesized mediational model in showing that the effect of leaders' confidence on team members' team confidence and performance was mediated by the leader's perceived identity leadership and members' team identification. In conclusion, the findings of this experiment suggest that leaders' team confidence can enhance members' team confidence and performance by fostering members' identification with the team. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses.

    Science.gov (United States)

    Franklin, Ashley E; Burns, Paulette; Lee, Christopher S

    2014-10-01

    In 2006, the National League for Nursing published three measures related to novice nurses' beliefs about self-confidence, scenario design, and educational practices associated with simulation. Despite the extensive use of these measures, little is known about their reliability and validity. The psychometric properties of the Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire were studied among a sample of 2200 surveys completed by novice nurses from a liberal arts university in the southern United States. Psychometric tests included item analysis, confirmatory and exploratory factor analyses in randomly-split subsamples, concordant and discordant validity, and internal consistency. All three measures have sufficient reliability and validity to be used in education research. There is room for improvement in content validity with the Student Satisfaction and Self-Confidence in Learning and Simulation Design Scale. This work provides robust evidence to ensure that judgments made about self-confidence after simulation, simulation design and educational practices are valid and reliable. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Self-confidence and metacognitive processes

    Directory of Open Access Journals (Sweden)

    Kleitman Sabina

    2005-01-01

    Full Text Available This paper examines the status of the Self-confidence trait. Two studies strongly suggest that Self-confidence is a component of metacognition. In the first study, participants (N=132) were administered measures of Self-concept, a newly devised Memory and Reasoning Competence Inventory (MARCI), and a Verbal Reasoning Test (VRT). The results indicate a significant relationship between confidence ratings on the VRT and the Reasoning component of MARCI. The second study (N=296) employed an extensive battery of cognitive tests and several metacognitive measures. Results indicate the presence of robust Self-confidence and Metacognitive Awareness factors, and a significant correlation between them. Self-confidence taps not only processes linked to performance on items that have correct answers, but also beliefs about events that may never occur.

  2. A Sensitivity Study of Human Errors in Optimizing Surveillance Test Interval (STI) and Allowed Outage Time (AOT) of Standby Safety System

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Shin, Won Ky; You, Young Woo; Yang, Hui Chang

    1998-01-01

    In most cases, the surveillance test intervals (STIs), allowed outage times (AOTs) and testing strategies of safety components in nuclear power plants are prescribed in plant technical specifications. In general, it is required that a standby safety system be redundant (i.e., composed of multiple components), and these components are tested by either a staggered or a sequential test strategy. In this study, a linear model is presented to incorporate the effects of human errors associated with testing into the evaluation of unavailability. The average unavailabilities of 1/4 and 2/4 redundant systems are computed considering human error and testing strategy. The adverse effects of testing on system unavailability, such as component wear and test-induced transients, have been modelled. The final outcome of this study is an optimized human error domain, obtained from a three-dimensional human error sensitivity analysis by selecting finely classified segments. The results of the sensitivity analysis show that the STI and AOT can be optimized provided the human error probability is maintained within an allowable range. (authors)
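    The staggered-versus-sequential comparison described above can be illustrated numerically. The sketch below is a simplified stand-in for the paper's linear model, not a reproduction of it: each channel's unavailability grows as q_he + lam*t between tests, and a 1-out-of-2 system is down only when both channels are down. All parameter values are hypothetical.

```python
# Simplified numerical sketch: compare the average unavailability of a
# 1-out-of-2 standby system under sequential (in-phase) and staggered
# (half-interval offset) testing, with a constant test-related human error
# probability q_he added to each channel.

def channel_q(t, T, lam, q_he, offset=0.0):
    # channel unavailability grows linearly between tests
    return q_he + lam * ((t - offset) % T)

def system_unavailability(T, lam, q_he, offset, steps=20_000):
    # 1-out-of-2 success criterion: system down only when both channels are down
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * T / steps
        total += channel_q(t, T, lam, q_he) * channel_q(t, T, lam, q_he, offset)
    return total / steps

T, lam, q_he = 720.0, 1e-5, 1e-3      # hypothetical parameter values
u_seq = system_unavailability(T, lam, q_he, offset=0.0)
u_stag = system_unavailability(T, lam, q_he, offset=T / 2)
u_seq0 = system_unavailability(T, lam, 0.0, offset=0.0)     # no human error
u_stag0 = system_unavailability(T, lam, 0.0, offset=T / 2)
```

    With q_he = 0 the classic coefficients emerge (averages of 5/24 vs 1/3 times (lam*T)^2, a ratio of 5/8), so staggering helps; the human-error term adds a floor that no testing strategy removes, which is why it dominates once it grows too large.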

  3. Slug Test Characterization Results for Multi-Test/Depth Intervals Conducted During the Drilling of CERCLA Operable Unit OU ZP-1 Wells 299-W11-43, 299-W15-50, and 299-W18-16

    Energy Technology Data Exchange (ETDEWEB)

    Spane, Frank A.; Newcomer, Darrell R.

    2010-06-21

    The following report presents test descriptions and analysis results for multiple stress-level slug tests that were performed at selected test/depth intervals within three Operable Unit (OU) ZP-1 wells: 299-W11-43 (C4694/Well H), 299-W15-50 (C4302/Well E), and 299-W18-16 (C4303/Well D). These wells are located within the south-central region of the Hanford Site 200-West Area (Figure 1.1). The test intervals were characterized as the individual boreholes were advanced to their final drill depths. The primary objective of the hydrologic tests was to provide information pertaining to the areal variability and vertical distribution of hydraulic conductivity with depth at these locations within the OU ZP-1 area. This type of characterization information is important for predicting/simulating contaminant migration (i.e., numerical flow/transport modeling) and designing proper monitor well strategies for OU and Waste Management Area locations.

  4. Slug Test Characterization Results for Multi-Test/Depth Intervals Conducted During the Drilling of CERCLA Operable Unit OU UP-1 Wells 299-W19-48, 699-30-66, and 699-36-70B

    Energy Technology Data Exchange (ETDEWEB)

    Spane, Frank A.; Newcomer, Darrell R.

    2010-06-15

    This report presents test descriptions and analysis results for multiple, stress-level slug tests that were performed at selected test/depth intervals within three Operable Unit (OU) UP-1 wells: 299-W19-48 (C4300/Well K), 699-30-66 (C4298/Well R), and 699-36-70B (C4299/Well P). These wells are located within, adjacent to, and to the southeast of the Hanford Site 200-West Area. The test intervals were characterized as the individual boreholes were advanced to their final drill depths. The primary objective of the hydrologic tests was to provide information pertaining to the areal variability and vertical distribution of hydraulic conductivity with depth at these locations within the OU UP-1 area. This type of characterization information is important for predicting/simulating contaminant migration (i.e., numerical flow/transport modeling) and designing proper monitor well strategies for OU and Waste Management Area locations.

  5. Improving the performance of the Egyptian second testing nuclear research reactor using interval type-2 fuzzy logic controller tuned by modified biogeography-based optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sayed, M.M., E-mail: M.M.Sayed@ieee.org; Saad, M.S.; Emara, H.M.; Abou El-Zahab, E.E.

    2013-09-15

    Highlights: • A modified version of the BBO was proposed. • A novel method for interval type-2 FLC design tuned by MBBO was proposed. • The performance of the ETRR-2 was improved by using IT2FLC tuned by MBBO. -- Abstract: Power stabilization is a critical issue in nuclear reactors. The conventional proportional derivative (PD) controller is currently used in the Egyptian second testing research reactor (ETRR-2). In this paper, we propose a modified biogeography-based optimization (MBBO) algorithm to design an interval type-2 fuzzy logic controller (IT2FLC) that improves the performance of the ETRR-2. Biogeography-based optimization (BBO) is an evolutionary algorithm based on mathematical models of biogeography, the study of the geographical distribution of biological organisms. In the BBO model, problem solutions are represented as islands, and the sharing of features between solutions is represented as immigration and emigration between the islands. A modified version of BBO is applied to find the optimal parameters of the controller's membership functions. We evaluate the resulting IT2FLC using the integral square error (ISE) criterion and compare it with the currently used PD controller.
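    A minimal BBO migration loop on a sphere test function is sketched below; the paper's modification (MBBO) and the IT2FLC membership-function tuning are not reproduced, and all algorithm parameters are illustrative.

```python
import random

random.seed(0)

# Minimal biogeography-based optimization (BBO) sketch: habitats (candidate
# solutions) exchange features, with worse habitats immigrating more and
# fitter habitats emigrating more. Minimizes a sphere function as a stand-in
# for the controller-tuning objective.

def sphere(x):
    return sum(v * v for v in x)

def bbo(cost, dim=4, pop=20, iters=100, p_mut=0.05, lo=-5.0, hi=5.0):
    islands = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        islands.sort(key=cost)                  # best habitat first
        # immigration rate grows with rank: worse islands accept more features
        lam = [i / (pop - 1) for i in range(pop)]
        new = [islands[0][:]]                   # elitism: keep the best island
        for i in range(1, pop):
            habitat = islands[i][:]
            for d in range(dim):
                if random.random() < lam[i]:
                    # emigration: copy the feature from a habitat chosen with
                    # probability decreasing in rank (min of two uniform ranks)
                    src = min(random.randrange(pop), random.randrange(pop))
                    habitat[d] = islands[src][d]
                if random.random() < p_mut:
                    habitat[d] = random.uniform(lo, hi)
            new.append(habitat)
        islands = new
    return min(islands, key=cost)

best = bbo(sphere)
```

    In the paper's setting the decision vector would encode the IT2FLC membership-function parameters and the cost would be the ISE of the closed-loop response rather than a sphere function.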

  6. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Using robust methods (after Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
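    The study's Monte Carlo design can be imitated with a simple skewness/kurtosis (Jarque-Bera-type) statistic standing in for Shapiro-Wilk and D'Agostino-Pearson; the asymptotic chi-square(2) cutoff of 5.99 is exactly the kind of approximation that degrades at small n. Population choices and replicate counts below are illustrative, not the paper's.

```python
import math
import random

random.seed(7)

def jarque_bera(xs):
    """Skewness/kurtosis normality statistic; large values reject normality."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

def rejection_rate(sampler, n=30, reps=2000, crit=5.99):
    # crit = asymptotic chi-square(2) 5% cutoff, only approximate at small n
    return sum(jarque_bera([sampler() for _ in range(n)]) > crit
               for _ in range(reps)) / reps

# false-alarm rate on a truly Gaussian population (1 - specificity) ...
alpha_hat = rejection_rate(lambda: random.gauss(0.0, 1.0))
# ... versus detection rate (sensitivity) on a lognormal population
power_hat = rejection_rate(lambda: math.exp(random.gauss(0.0, 1.0)))
```

    At n = 30 the realized false-alarm rate sits well away from the nominal 5%, illustrating the paper's point that small-sample normality testing is unreliable.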

  7. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  8. Evaluating Measures of Optimism and Sport Confidence

    Science.gov (United States)

    Fogarty, Gerard J.; Perera, Harsha N.; Furst, Andrea J.; Thomas, Patrick R.

    2016-01-01

    The psychometric properties of the Life Orientation Test-Revised (LOT-R), the Sport Confidence Inventory (SCI), and the Carolina SCI (CSCI) were examined in a study involving 260 athletes. The study aimed to test the dimensional structure, convergent and divergent validity, and invariance over competition level of scores generated by these…

  9. Diverse interpretations of confidence building

    International Nuclear Information System (INIS)

    Macintosh, J.

    1998-01-01

    This paper explores the variety of operational understandings associated with the term 'confidence building'. Collectively, these understandings constitute what should be thought of as a 'family' of confidence building approaches. This unacknowledged and generally unappreciated proliferation of operational understandings that function under the rubric of confidence building appears to be an impediment to effective policy. The paper's objective is to analyze these different understandings, stressing the important differences in their underlying assumptions. In the process, the paper underlines the need for the international community to clarify its collective thinking about what it means when it speaks of 'confidence building'. Without enhanced clarity, it will be unnecessarily difficult to employ the confidence building approach effectively due to the lack of consistent objectives and common operating assumptions. Although it is not the intention of this paper to promote a particular account of confidence building, dissecting existing operational understandings should help to identify whether there are fundamental elements that define what might be termed 'authentic' confidence building. Implicit here is the view that some operational understandings of confidence building may diverge too far from consensus models to count as meaningful members of the confidence building family. (author)

  10. Correct Bayesian and frequentist intervals are similar

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1986-01-01

    This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague data or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development, while Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)
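    A minimal numerical illustration of the agreement (not taken from the paper): for a normal mean with known σ and a flat prior, the 95% posterior credible interval and the 95% confidence interval coincide term for term. The data below are made up.

```python
import math

# Normal mean, sigma assumed known, flat prior on the mean: the posterior is
# N(xbar, sigma^2 / n), so the Bayesian credible interval equals the
# frequentist confidence interval exactly. Data values are invented.

data = [4.1, 5.2, 3.8, 4.9, 5.5, 4.4]
sigma = 0.8                          # assumed known
n = len(data)
xbar = sum(data) / n
half = 1.96 * sigma / math.sqrt(n)   # 95% normal quantile

freq_ci = (xbar - half, xbar + half)     # sampling-theory interval
bayes_ci = (xbar - half, xbar + half)    # posterior interval, flat prior
```

    The discrete-data case the abstract mentions, where the frequentist interval comes out somewhat longer, requires exact binomial or Poisson constructions and is not shown here.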

  11. Interval selection with machine-dependent intervals

    OpenAIRE

    Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

    2013-01-01

    We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...
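    While deciding schedulability is NP-complete, verifying a proposed schedule is easy, which is what places the problem in NP. A sketch of that feasibility check, with made-up intervals:

```python
# Feasibility check for the problem above: given, for each job, its chosen
# machine and half-open interval, verify that intervals assigned to the same
# machine are pairwise non-intersecting. Interval data are illustrative.

def feasible(assignment):
    """assignment: list of (machine, start, end) half-open intervals, one per job."""
    by_machine = {}
    for mach, s, e in assignment:
        by_machine.setdefault(mach, []).append((s, e))
    for ivs in by_machine.values():
        ivs.sort()                       # sort by start time per machine
        for (s1, e1), (s2, e2) in zip(ivs, ivs[1:]):
            if s2 < e1:                  # overlap on the same machine
                return False
    return True

ok = feasible([("m1", 0, 3), ("m2", 1, 4), ("m1", 3, 5)])    # valid schedule
bad = feasible([("m1", 0, 3), ("m1", 2, 5)])                 # clash on m1
```

    The hardness lies entirely in choosing which machine's interval to select for each job, not in checking a candidate selection.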

  12. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    Science.gov (United States)

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  13. How do regulators measure public confidence?

    International Nuclear Information System (INIS)

    Schmitt, A.; Besenyei, E.

    2006-01-01

    The conclusions and recommendations of this session can be summarized as follows:
    - There are some important elements of confidence: visibility, satisfaction, credibility and reputation. The latter can consist of trust, positive image and knowledge of the role the organisation plays. A good reputation is hard to achieve but easy to lose.
    - There is a need to define what public confidence is and what to measure. The difficulty is that confidence is a matter of perception of the public, so what we try to measure is the perception.
    - It is controversial how to take into account the results of confidence measurement because of the influence of the context. It is not an exact science; results should be examined cautiously and surveys should be conducted frequently, at least every two years.
    - Different experiences were described: quantitative surveys among the general public or more specific groups like the media; qualitative research with test groups and small panels; and semi-quantitative studies among stakeholders who have regular contacts with the regulatory body. It is not clear whether the results should be shared with the public or just with other authorities and governmental organisations.
    - Efforts are needed to increase visibility, which is a prerequisite for confidence.
    - A practical example of organizing an emergency exercise and an information campaign without taking into account the real concerns of the people was given to show how public confidence can be decreased.
    - We learned about a new method, the so-called socio-drama, which addresses another issue also connected to confidence: the notion of understanding between stakeholders around a nuclear site. It is another way of looking at confidence in a more restricted group. (authors)

  14. Nuclear power: restoring public confidence

    International Nuclear Information System (INIS)

    Arnold, L.

    1986-01-01

    The paper concerns a one day conference on nuclear power organised by the Centre for Science Studies and Science Policy, Lancaster, April 1986. Following the Chernobyl reactor accident, the conference concentrated on public confidence in nuclear power. Causes of lack of public confidence, public perceptions of risk, and the effect of Chernobyl in the United Kingdom, were all discussed. A Select Committee on the Environment examined the problems of radioactive waste disposal. (U.K.)

  15. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement.
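
    The power figures surveyed above can be approximated with a closed-form normal sketch. The following is an illustrative calculation, not the review's actual method: it computes the approximate power of a two-sided two-sample z test to detect Cohen's conventional small, medium, and large effects (d = 0.2, 0.5, 0.8), with a hypothetical 64 participants per group.

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(d, n_per_group, z_alpha=1.959964):
    # Normal-approximation power of a two-sided two-sample test to
    # detect a standardized mean difference d (Cohen's d), assuming
    # equal group sizes and known variance.
    ncp = d * sqrt(n_per_group / 2.0)  # noncentrality parameter
    return norm_cdf(ncp - z_alpha) + norm_cdf(-ncp - z_alpha)

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label} (d={d}): power ~ {two_sample_power(d, 64):.2f}")
```

    With 64 per group, power is roughly .20, .80, and .99 for small, medium, and large effects, mirroring the pattern the review reports: low power for small effects, near-certain power for large ones.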

  16. High confidence in falsely recognizing prototypical faces.

    Science.gov (United States)

    Sampaio, Cristina; Reinke, Victoria; Mathews, Jeffrey; Swart, Alexandra; Wallinger, Stephen

    2018-06-01

    We applied a metacognitive approach to investigate confidence in recognition of prototypical faces. Participants were presented with sets of faces constructed digitally as deviations from prototype/base faces. Participants were then tested with a simple recognition task (Experiment 1) or a multiple-choice task (Experiment 2) for old and new items plus new prototypes, and they showed a high rate of confident false alarms to the prototypes. The confidence-accuracy relationship in this face recognition paradigm was positive for standard items but negative for the prototypes; thus, it was contingent on the nature of the items used. The data have implications for lineups that employ match-to-suspect strategies.

  17. Parents' obesity-related behavior and confidence to support behavioral change in their obese child: data from the STAR study.

    Science.gov (United States)

    Arsenault, Lisa N; Xu, Kathleen; Taveras, Elsie M; Hacker, Karen A

    2014-01-01

    Successful childhood obesity interventions frequently focus on behavioral modification and involve parents or family members. Parental confidence in supporting behavior change may be an element of successful family-based prevention efforts. We aimed to determine whether parents' own obesity-related behaviors were related to their confidence in supporting their child's achievement of obesity-related behavioral goals. Cross-sectional analyses of data collected at baseline of a randomized controlled trial testing a treatment intervention for obese children (n = 787) in primary care settings (n = 14). Five obesity-related behaviors (physical activity, screen time, sugar-sweetened beverage, sleep duration, fast food) were self-reported by parents for themselves and their child. Behaviors were dichotomized on the basis of achievement of behavioral goals. Five confidence questions asked how confident the parent was in helping their child achieve each goal. Logistic regression modeling of high confidence was conducted with goal achievement and demographics as independent variables. Parents achieving physical activity or sleep duration goals were significantly more likely to be highly confident in supporting their child's achievement of those goals (physical activity, odds ratio 1.76; 95% confidence interval 1.19-2.60; sleep, odds ratio 1.74; 95% confidence interval 1.09-2.79) independent of sociodemographic variables and the child's current behavior. Parental achievement of TV watching and fast food goals was also associated with confidence, but significance was attenuated after the child's behavior was included in the models. Parents' own obesity-related behaviors are factors that may affect their confidence to support their child's behavior change. Providers seeking to prevent childhood obesity should address parent/family behaviors as part of their obesity prevention strategies. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  18. Confidence in critical care nursing.

    Science.gov (United States)

    Evans, Jeanne; Bell, Jennifer L; Sweeney, Annemarie E; Morgan, Jennifer I; Kelly, Helen M

    2010-10-01

    The purpose of the study was to gain an understanding of the nursing phenomenon, confidence, from the experience of nurses in the nursing subculture of critical care. Leininger's theory of cultural care diversity and universality guided this qualitative descriptive study. Questions derived from the sunrise model were used to elicit nurses' perspectives about cultural and social structures that exist within the critical care nursing subculture and the influence that these factors have on confidence. Twenty-eight critical care nurses from a large Canadian healthcare organization participated in semistructured interviews about confidence. Five themes arose from the descriptions provided by the participants. The three themes, tenuously navigating initiation rituals, deliberately developing holistic supportive relationships, and assimilating clinical decision-making rules were identified as social and cultural factors related to confidence. The remaining two themes, preserving a sense of security despite barriers and accommodating to diverse challenges, were identified as environmental factors related to confidence. Practice and research implications within the culture of critical care nursing are discussed in relation to each of the themes.

  19. Professional confidence: a concept analysis.

    Science.gov (United States)

    Holland, Kathlyn; Middleton, Lyn; Uys, Leana

    2012-03-01

    Professional confidence is a concept that is frequently used and or implied in occupational therapy literature, but often without specifying its meaning. Rodgers's Model of Concept Analysis was used to analyse the term "professional confidence". Published research obtained from a federated search in four health sciences databases was used to inform the concept analysis. The definitions, attributes, antecedents, and consequences of professional confidence as evidenced in the literature are discussed. Surrogate terms and related concepts are identified, and a model case of the concept provided. Based on the analysis, professional confidence can be described as a dynamic, maturing personal belief held by a professional or student. This includes an understanding of and a belief in the role, scope of practice, and significance of the profession, and is based on the individual's capacity to competently fulfil these expectations, fostered through a process of affirming experiences. Developing and fostering professional confidence should be nurtured and valued to the same extent as professional competence, as the former underpins the latter, and both are linked to professional identity.

  20. Convex Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for

  1. Confidence-Based Learning in Investment Analysis

    Science.gov (United States)

    Serradell-Lopez, Enric; Lara-Navarra, Pablo; Castillo-Merino, David; González-González, Inés

    The aim of this study is to determine the effectiveness of using multiple-choice tests in subjects related to business administration and management. To this end, we used a multiple-choice test with specific questions to verify the extent of knowledge gained and the students' confidence and trust in their answers. The tests were administered to a group of 200 students in the bachelor's degree in Business Administration and Management. The analysis was carried out in one subject within the scope of investment analysis, measuring the level of knowledge gained and the degree of trust and security in the responses at two different points in the course. The measurements took into account different levels of difficulty in the questions asked and the time spent by students completing the test. The results confirm that students are generally able to gain knowledge along the way and to increase their degree of trust and confidence in their answers. The difficulty level of the questions, set a priori by those responsible for the subjects, is confirmed to be related to the level of security and confidence in the answers. The improvement in the skills learned is estimated to be viewed favourably by businesses and to be especially important for students' job placement.

  2. Exact nonparametric confidence bands for the survivor function.

    Science.gov (United States)

    Matthews, David

    2013-10-12

    A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Dekker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
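
    The core mechanism here, inverting a test by root-finding on a tail probability, can be illustrated on a much simpler problem than the Noe-recursion bands themselves. The sketch below is an illustration of the general invert-a-test technique, not the paper's method: it derives the exact Clopper-Pearson interval for a binomial proportion by bisection on the binomial tail probabilities.

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def bisect(f, lo, hi, tol=1e-10):
    # Simple bisection root-finder; assumes f(lo) and f(hi) bracket a root.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

def clopper_pearson(x, n, conf=0.95):
    # Exact interval for a binomial proportion, obtained by inverting
    # the two binomial tail tests with a root-finder.
    alpha = 1 - conf
    lower = 0.0 if x == 0 else bisect(
        lambda p: 1 - binom_cdf(x - 1, n, p) - alpha / 2, 0.0, 1.0)
    upper = 1.0 if x == n else bisect(
        lambda p: binom_cdf(x, n, p) - alpha / 2, 0.0, 1.0)
    return lower, upper

lo_, hi_ = clopper_pearson(7, 20)
print(f"95% CI for 7/20: ({lo_:.3f}, {hi_:.3f})")  # ~ (0.154, 0.592)
```

    The survivor-function bands replace the one-dimensional tail probability with a null distribution computed by Noe recursions, but the root-finding step plays the same role.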

  3. Methodology for building confidence measures

    Science.gov (United States)

    Bramson, Aaron L.

    2004-04-01

    This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.

  4. Alan Greenspan, the confidence strategy

    Directory of Open Access Journals (Sweden)

    Edwin Le Heron

    2006-12-01

    To evaluate the Greenspan era, we nevertheless need to address three questions: Is his success due to talent or just luck? Does he have a system of monetary policy or is he himself the system? What will be his legacy? Greenspan was certainly lucky, but he was also clairvoyant. Above all, he has developed a profoundly original monetary policy. His confidence strategy is clearly opposed to the credibility strategy developed in central banks and the academic milieu after 1980, but also to inflation targeting, which today constitutes the mainstream monetary policy regime. The question of his legacy seems more nuanced. However, Greenspan will remain 'for a considerable period of time' a highly heterodox and original central banker. His political vision, his perception of an uncertain world, his pragmatism and his openness form the structure of a powerful alternative system, the confidence strategy, which will leave its mark on the history of monetary policy.

  5. Identifying the bad guy in a lineup using confidence judgments under deadline pressure.

    Science.gov (United States)

    Brewer, Neil; Weber, Nathan; Wootton, David; Lindsay, D Stephen

    2012-10-01

    Eyewitness-identification tests often culminate in witnesses not picking the culprit or identifying innocent suspects. We tested a radical alternative to the traditional lineup procedure used in such tests. Rather than making a positive identification, witnesses made confidence judgments under a short deadline about whether each lineup member was the culprit. We compared this deadline procedure with the traditional sequential-lineup procedure in three experiments with retention intervals ranging from 5 min to 1 week. A classification algorithm that identified confidence criteria that optimally discriminated accurate from inaccurate decisions revealed that decision accuracy was 24% to 66% higher under the deadline procedure than under the traditional procedure. Confidence profiles across lineup stimuli were more informative than were identification decisions about the likelihood that an individual witness recognized the culprit or correctly recognized that the culprit was not present. Large differences between the maximum and the next-highest confidence value signaled very high accuracy. Future support for this procedure across varied conditions would highlight a viable alternative to the problematic lineup procedures that have traditionally been used by law enforcement.

  6. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  7. Graphical interpretation of confidence curves in rankit plots

    DEFF Research Database (Denmark)

    Hyltoft Petersen, Per; Blaabjerg, Ole; Andersen, Marianne

    2004-01-01

    A well-known transformation from the bell-shaped Gaussian (normal) curve to a straight line in the rankit plot is investigated, and a tool for evaluation of the distribution of reference groups is presented. It is based on the confidence intervals for percentiles of the calculated Gaussian distribution…

  8. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    Science.gov (United States)

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability, so we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation, and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
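
    Because the AUC is itself a probability, the proposed interval treats the estimated AUC like a single proportion. A minimal sketch of that idea, using the plain textbook Wald formula with an optional continuity correction; the authors' exact modification may differ, and the hypothetical numbers below (AUC 0.85 from 100 subjects, with total n as the denominator) are for illustration only:

```python
from math import sqrt

def wald_interval_auc(auc, n, z=1.959964, continuity=False):
    # Treat the estimated AUC as a single proportion and apply the
    # textbook Wald interval; continuity=True adds the usual 1/(2n)
    # correction. Illustrative stand-in: the paper's "modified" Wald
    # interval may use a different adjustment.
    half = z * sqrt(auc * (1 - auc) / n)
    if continuity:
        half += 1.0 / (2.0 * n)
    # Clip to [0, 1], since the AUC is a probability.
    return max(0.0, auc - half), min(1.0, auc + half)

print(wald_interval_auc(0.85, 100))                  # ~ (0.780, 0.920)
print(wald_interval_auc(0.85, 100, continuity=True))
```

    The appeal of such a formula is exactly the abstract's point: it needs only the estimated AUC and the sample size, so it really can be computed on a pocket calculator.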

  9. Leadership by Confidence in Teams

    OpenAIRE

    Kobayashi, Hajime; Suehiro, Hideo

    2008-01-01

    We study endogenous signaling by analyzing a team production problem with endogenous timing. Each agent of the team is privately endowed with some level of confidence about team productivity. Each of them must then commit a level of effort in one of two periods. At the end of each period, each agent observes his partner's move in this period. Both agents are rewarded by a team output determined by team productivity and total invested effort. Each agent must personally incur the cost of effort…

  10. Towards confidence in transport safety

    International Nuclear Information System (INIS)

    Robison, R.W.

    1992-01-01

    The U.S. Department of Energy (US DOE) plans to demonstrate to the public that high-level waste can be transported safely to the proposed repository. The author argues US DOE should begin now to demonstrate its commitment to safety by developing an extraordinary safety program for nuclear cargo it is now shipping. The program for current shipments should be developed with State, Tribal, and local officials. Social scientists should be involved in evaluating the effect of the safety program on public confidence. The safety program developed in cooperation with western states for shipments to the Waste Isolation Pilot Plant is a good basis for designing that extraordinary safety program.

  11. Is consumer confidence an indicator of JSE performance?

    OpenAIRE

    Kamini Solanki; Yudhvir Seetharam

    2014-01-01

    While most studies examine the impact of business confidence on market performance, we instead focus on the consumer because consumer spending habits are a natural extension of trading activity on the equity market. This particular study examines investor sentiment as measured by the Consumer Confidence Index in South Africa and its effect on the Johannesburg Stock Exchange (JSE). We employ Granger causality tests to investigate the relationship across time between the Consumer Confidence Index…

  12. Workshop on confidence limits. Proceedings

    International Nuclear Information System (INIS)

    James, F.; Lyons, L.; Perrin, Y.

    2000-01-01

    The First Workshop on Confidence Limits was held at CERN on 17-18 January 2000. It was devoted to the problem of setting confidence limits in difficult cases: the number of observed events is small or zero, the background is larger than the signal, the background is not well known, and the measurement lies near a physical boundary. Among the many examples in high-energy physics are searches for the Higgs, searches for neutrino oscillations, B_s mixing, SUSY, compositeness, neutrino masses, and dark matter. Several different methods are on the market: the CL_s method used by the LEP Higgs searches; Bayesian methods; Feldman-Cousins and modifications thereof; and empirical and combined methods. The Workshop generated considerable interest, and attendance was finally limited by the seating capacity of the CERN Council Chamber where all the sessions took place. These proceedings contain all the papers presented, as well as the full text of the discussions after each paper and, of course, of the last session, which was a discussion session. The list of participants and the 'required reading', which was expected to be part of the prior knowledge of all participants, are also included. (orig.)
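
    For the single-channel counting experiment that motivated much of the workshop, the CL_s quantity used in the LEP Higgs searches reduces to a ratio of Poisson tail probabilities, CL_s = P(N <= n_obs | s+b) / P(N <= n_obs | b), and a signal strength s is excluded at 95% confidence when CL_s <= 0.05. A minimal sketch under simplifying assumptions (one channel, no systematic uncertainties):

```python
from math import exp

def poisson_cdf(n, mu):
    # P(N <= n) for N ~ Poisson(mu), via the running-product recurrence.
    term, total = exp(-mu), exp(-mu)
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def cls(n_obs, s, b):
    # CL_s = CL_{s+b} / CL_b for a single Poisson counting channel with
    # signal expectation s and background expectation b.
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)

# With zero observed events, CL_s = exp(-s) independently of b,
# reproducing the classic "s = 3 is excluded at 95%" counting limit.
print(cls(0, 3.0, 0.0))  # ~ 0.0498
```

    Dividing by CL_b is what protects against excluding a signal merely because of a downward background fluctuation, which is the property that made CL_s attractive for the small-signal searches listed above.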

  13. CIMP status of interval colon cancers: another piece to the puzzle.

    Science.gov (United States)

    Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma

    2010-05-01

    Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002) and to have MSI (29% vs. 11%; P=0.004). In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1

  14. The Great Recession and confidence in homeownership

    OpenAIRE

    Anat Bracha; Julian Jamison

    2013-01-01

    Confidence in homeownership shifts for those who personally experienced real estate loss during the Great Recession. Older Americans are confident in the value of homeownership. Younger Americans are less confident.

  15. Programming with Intervals

    Science.gov (United States)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  16. Public confidence and nuclear energy

    International Nuclear Information System (INIS)

    Chaussade, J.P.

    1990-01-01

    Today in France there are 54 nuclear power units in operation at 18 sites. They supply 75% of all electricity produced, 12% of which is exported to neighbouring countries, and play an important role in the French economy. For the French, nuclear power is a fact of life, and most accept it. However, the Chernobyl accident has made public opinion more sensitive, and public relations work has had to be reconsidered carefully with a view to increasing the confidence of the French public in nuclear power, anticipating media crises and being equipped to deal with such crises. The three main approaches are the following: keeping the public better informed, providing clear information in times of crisis, and international activities.

  17. Knowledge, Self Confidence and Courage

    DEFF Research Database (Denmark)

    Selberg, Hanne; Steenberg Holtzmann, Jette; Hovedskov, Jette

    Knowledge, self confidence and courage: long-lasting learning outcomes through simulation in a clinical context. Hanne Selberg, Jette Hovedskov, Jette Steenberg Holtzmann. The study focuses on simulation alongside, and linked to, clinical practice. Results: The students identified their major learning outcomes as transfer of operational skills, experiencing self-efficacy and enhanced understanding of the patients' perspective. Involving simulated patients in the training of technical skills contributed to the development of the students' communication…

  18. Animal Spirits and Extreme Confidence: No Guts, No Glory?

    NARCIS (Netherlands)

    M.G. Douwens-Zonneveld (Mariska)

    2012-01-01

    textabstractThis study investigates to what extent extreme confidence of either management or security analysts may impact financial or operating performance. We construct a multidimensional degree of company confidence measure from a wide range of corporate decisions. We empirically test this

  19. Modeling Confidence and Response Time in Recognition Memory

    Science.gov (United States)

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

  20. Dynamic visual noise reduces confidence in short-term memory for visual information.

    Science.gov (United States)

    Kemps, Eva; Andrade, Jackie

    2012-05-01

    Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.

  1. Confidence limits for parameters of Poisson and binomial distributions

    International Nuclear Information System (INIS)

    Arnett, L.M.

    1976-04-01

    The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for the situations in which the observed values of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a, b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian type analysis is used. The intervals calculated are narrower than, and appreciably different from, results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers, which were not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results.
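
    The "conservative" classical results that the report contrasts with its Bayesian intervals are the exact Garwood-style Poisson limits behind the Pearson and Hartley tables, obtained by inverting the Poisson tail probabilities. A sketch of that conservative construction (illustrative only, not the report's Bayesian computation):

```python
from math import exp

def poisson_cdf(n, mu):
    # P(N <= n) for N ~ Poisson(mu).
    term, total = exp(-mu), exp(-mu)
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def garwood_interval(n_obs, conf=0.95):
    # Classical exact (conservative) interval for a Poisson mean:
    # invert the lower and upper tail probabilities by bisection.
    alpha = 1 - conf

    def solve(f, lo, hi):
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0

    hi_bracket = n_obs + 50.0  # generous upper bracket for the roots
    upper = solve(lambda m: poisson_cdf(n_obs, m) - alpha / 2, 0.0, hi_bracket)
    lower = 0.0 if n_obs == 0 else solve(
        lambda m: 1 - poisson_cdf(n_obs - 1, m) - alpha / 2, 0.0, hi_bracket)
    return lower, upper

print(garwood_interval(4))  # ~ (1.090, 10.242)
```

    As the abstract notes, this interval over-covers (its actual coverage exceeds 95%), which is why the report's exact Bayesian-style intervals come out narrower.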

  2. Confidence mediates the sex difference in mental rotation performance.

    Science.gov (United States)

    Estes, Zachary; Felker, Sydney

    2012-06-01

    On tasks that require the mental rotation of 3-dimensional figures, males typically exhibit higher accuracy than females. Using the most common measure of mental rotation (i.e., the Mental Rotations Test), we investigated whether individual variability in confidence mediates this sex difference in mental rotation performance. In each of four experiments, the sex difference was reliably elicited and eliminated by controlling or manipulating participants' confidence. Specifically, confidence predicted performance within and between sexes (Experiment 1), rendering confidence irrelevant to the task reliably eliminated the sex difference in performance (Experiments 2 and 3), and manipulating confidence significantly affected performance (Experiment 4). Thus, confidence mediates the sex difference in mental rotation performance and hence the sex difference appears to be a difference of performance rather than ability. Results are discussed in relation to other potential mediators and mechanisms, such as gender roles, sex stereotypes, spatial experience, rotation strategies, working memory, and spatial attention.

  3. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  4. Confidence building in safety assessments

    International Nuclear Information System (INIS)

    Grundfelt, Bertil

    1999-01-01

    Future generations should be adequately protected from damage caused by the present disposal of radioactive waste. This presentation discusses the core of safety and performance assessment: the demonstration and building of confidence that the disposal system meets the safety requirements stipulated by society. The major difficulty is dealing with risks over the very long time perspective of the thousands of years during which the waste is hazardous. Concern about these problems has stimulated the development of the safety assessment discipline. The presentation concentrates on two elements of safety assessment: (1) uncertainty and sensitivity analysis, and (2) validation and review. Uncertainty arises both with respect to the choice of the proper conceptual model and with respect to parameter values for a given model. A special kind of uncertainty derives from the variation of a property in space; geostatistics is one approach to handling such spatial variability. The simplest way of doing a sensitivity analysis is to offset the model parameters one by one and observe how the model output changes. The validity of the models and data used to make predictions is central to the credibility of safety assessments for radioactive waste repositories. There are several definitions of model validation. The presentation discusses it as a process and highlights some aspects of validation methodologies.

  5. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of its precision interval.
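
    The distinction between the two precision intervals named in this record can be shown with ordinary simple linear regression (the neural-network component is omitted). This sketch uses an assumed normal quantile (z = 1.96) rather than the exact Student-t value, so it is approximate for small samples:

```python
import math

def linreg_intervals(xs, ys, x_new, z=1.96):
    """Mean-response confidence interval and prediction interval at x_new
    for simple linear regression. The confidence interval bounds the fitted
    mean; the prediction interval additionally absorbs residual noise, so
    it is always wider.
    """
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    intercept = ybar - slope * xbar
    resid_var = sum((y - (intercept + slope * x)) ** 2
                    for x, y in zip(xs, ys)) / (n - 2)
    y_hat = intercept + slope * x_new
    se_mean = math.sqrt(resid_var * (1.0 / n + (x_new - xbar) ** 2 / sxx))
    se_pred = math.sqrt(resid_var + se_mean ** 2)
    return (y_hat,
            (y_hat - z * se_mean, y_hat + z * se_mean),   # confidence
            (y_hat - z * se_pred, y_hat + z * se_pred))   # prediction

# Made-up data, roughly y = 2x with noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]
y_hat, conf, pred = linreg_intervals(xs, ys, x_new=2.5)
```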

  6. Confidence bounds for nonlinear dose-response relationships

    DEFF Research Database (Denmark)

    Baayen, C; Hougaard, P

    2015-01-01

    An important aim of drug trials is to characterize the dose-response relationship of a new compound. Such a relationship can often be described by a parametric (nonlinear) function that is monotone in dose. If such a model is fitted, it is useful to know the uncertainty of the fitted curve...... intervals for the dose-response curve. These confidence bounds have better coverage than Wald intervals and are more precise and generally faster than bootstrap methods. Moreover, if monotonicity is assumed, the profile likelihood approach takes this automatically into account. The approach is illustrated...
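
    The profile-likelihood idea behind such bounds can be illustrated on a one-parameter toy case. This sketch is not the authors' method (which profiles nonlinear, multi-parameter dose-response models); it inverts the likelihood-ratio test for a binomial proportion by grid search:

```python
import math

def profile_likelihood_ci(k, n, cutoff=3.841, steps=100_000):
    """Likelihood-ratio (95%) interval for a binomial proportion p.

    Keeps every p on a fine grid whose deviance
    2 * (logL(p_hat) - logL(p)) stays below the chi-square(1) critical
    value (3.841 for 95% coverage). Endpoints are grid-accurate only.
    """
    def loglik(p):
        return k * math.log(p) + (n - k) * math.log(1.0 - p)

    l_max = loglik(k / n) if 0 < k < n else 0.0  # logL at the MLE
    inside = [p for p in (i / steps for i in range(1, steps))
              if 2.0 * (l_max - loglik(p)) <= cutoff]
    return inside[0], inside[-1]

# 7 successes in 50 trials; compare with the Wald interval 0.14 +/- 0.096.
lo, hi = profile_likelihood_ci(k=7, n=50)
```

    Unlike the symmetric Wald interval, the likelihood-ratio interval is asymmetric around the estimate and respects the [0, 1] range automatically, which is the coverage advantage the record alludes to.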

  7. The effects of interstimulus interval on event-related indices of attention: an auditory selective attention test of perceptual load theory.

    Science.gov (United States)

    Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter

    2008-03-01

    We examined the impact of perceptual load by manipulating interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300 ms), medium (600 ms), and slow (900 ms). Behavioral (accuracy and RT) and electrophysiological (Nd, P3b) measures were observed. In both studies, participants evidenced poorer accuracy during the fast ISI condition than the slow one, suggesting that ISI impacted task difficulty. However, none of the three measures of processing examined (Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants) was impacted by ISI in the manner predicted by perceptual load theory. The prediction based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared to high perceptual load, was not supported in these auditory studies. Task difficulty/perceptual load impacts the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.

  8. Digitally Available Interval-Specific Rock-Sample Data Compiled from Historical Records, Nevada Test Site and Vicinity, Nye County, Nevada.

    Energy Technology Data Exchange (ETDEWEB)

    David B. Wood

    2007-10-24

    Between 1951 and 1992, 828 underground tests were conducted on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.

  9. Digitally Available Interval-Specific Rock-Sample Data Compiled from Historical Records, Nevada Test Site and Vicinity, Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    David B. Wood

    2009-10-08

    Between 1951 and 1992, underground nuclear weapons testing was conducted at 828 sites on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.

  10. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

    2012-01-01

    In this paper, we derived an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate
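
    The kind of simulation study this record describes can be sketched as follows. This is an illustrative Monte Carlo bit-error-rate estimate for BPSK over AWGN with a normal-approximation confidence interval; it assumes perfect phase recovery, whereas the paper studies the imperfect-recovery case:

```python
import math
import random

def simulate_bpsk_ber(ebn0_db, n_bits=200_000, z=1.96, seed=7):
    """Monte Carlo bit-error-rate estimate for BPSK over an AWGN channel,
    with a normal-approximation (Wald) confidence interval.

    Assumes perfect carrier-phase recovery and unit-energy symbols; the
    noise standard deviation follows from the requested Eb/N0 in dB.
    """
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))  # per-dimension noise std dev
    # Send the all +1 sequence; an error is a received sample below zero.
    errors = sum(1 for _ in range(n_bits) if 1.0 + rng.gauss(0.0, sigma) < 0.0)
    p = errors / n_bits
    half = z * math.sqrt(p * (1.0 - p) / n_bits)
    return p, (max(p - half, 0.0), p + half)

ber, (lo, hi) = simulate_bpsk_ber(ebn0_db=4.0)
```

    The interval width shrinks with the square root of `n_bits`, which is why verifying simulated error rates at low BER requires very long runs or variance-reduction techniques.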

  11. Optimizing lengths of confidence intervals: fourth-order efficiency in location models

    NARCIS (Netherlands)

    Klaassen, C.; Venetiaan, S.

    2010-01-01

    Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation

  12. Confidence Intervals for System Reliability and Availability of Maintained Systems Using Monte Carlo Techniques

    Science.gov (United States)

    1981-12-01

    Master's thesis, Air Force Institute of Technology, Air University, Wright-Patterson Air Force Base, Ohio, December 1981. Approved for public release.

  13. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

    Science.gov (United States)

    Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

    2011-01-01

    Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

  14. Bayesian Methods and Confidence Intervals for Automatic Target Recognition of SAR Canonical Shapes

    Science.gov (United States)

    2014-03-27

    and DirectX [22]. The CUDA platform was developed by the NVIDIA Corporation to allow programmers access to the computational capabilities of the...were used for the intense repetitive computations. Developing CUDA software requires writing code for specialized compilers provided by NVIDIA and

  15. Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies

    Science.gov (United States)

    Capraro, Robert M.

    2004-01-01

    With great interest the author read the May 2002 editorial in the "Journal for Research in Mathematics Education (JRME)" (King, 2002) regarding changes to the 5th edition of the "Publication Manual of the American Psychological Association" (APA, 2001). Of special note to him, and of great import to the field of mathematics education research, are…

  16. Technical Report on Modeling for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-11

    The overall aim of this project is to develop a software package, called MetaQuant, that can determine the constituents of a complex microbial sample and estimate their relative abundances by analysis of metagenomic sequencing data. The goal for Task 1 is to create a generative model describing the stochastic process underlying the creation of sequence read pairs in the data set. The stages in this generative process include the selection of a source genome sequence for each read pair, with probability dependent on its abundance in the sample. The other stages describe the evolution of the source genome from its nearest common ancestor with a reference genome, breakage of the source DNA into short fragments, and the errors in sequencing the ends of the fragments to produce read pairs.

  17. Practical Considerations about Expected A Posteriori Estimation in Adaptive Testing: Adaptive A Priori, Adaptive Correction for Bias, and Adaptive Integration Interval.

    Science.gov (United States)

    Raiche, Gilles; Blais, Jean-Guy

    In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…

  18. High Confidence Software and Systems Research Needs

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This White Paper presents a survey of high confidence software and systems research needs. It has been prepared by the High Confidence Software and Systems...

  19. Confidence Building Strategies in the Public Schools.

    Science.gov (United States)

    Achilles, C. M.; And Others

    1985-01-01

    Data from the Phi Delta Kappa Commission on Public Confidence in Education indicate that "high-confidence" schools make greater use of marketing and public relations strategies. Teacher attitudes were ranked first and administrator attitudes second by 409 respondents for both gain and loss of confidence in schools. (MLF)

  20. Haemostatic reference intervals in pregnancy

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

    2010-01-01

    Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period, using only the 391 uncomplicated pregnancies, were calculated for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S...

  1. Understanding public confidence in government to prevent terrorist attacks.

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, T. E.; Ramaprasad, A,; Samsa, M. E.; Decision and Information Sciences; Univ. of Illinois at Chicago

    2008-04-02

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode its confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the principal metrics used to assess the consequences of terrorist attacks. Hence, a detailed understanding of the variations in public confidence among individuals, terrorist event types, and as a function of time is critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data was collected from three groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery explosion attack, and cyber intrusions on financial institutions, resulting in identity theft. Our findings are: (a) although the aggregate confidence level is low, there are optimists and pessimists; (b) the subjects are discriminating in interpreting the nature of a terrorist attack, the time horizon, and its impact; (c) confidence recovery after a terrorist event has an incubation period; and (d) the patterns of recovery of confidence of the optimists and the pessimists are different. These findings can affect the strategy and policies to manage public confidence after a terrorist event.

  2. OFF! Clip-on Repellent Device With Metofluthrin Tested on Aedes aegypti (Diptera: Culicidae) for Mortality at Different Time Intervals and Distances.

    Science.gov (United States)

    Bibbs, Christopher S; Xue, Rui-De

    2016-03-01

    The OFF! Clip-on mosquito-repellent device was tested outdoors against Aedes aegypti (L.). A single treatment device was used against batches of caged adult, nonblood fed Ae. aegypti at multiple locations 0.3m from treatment center. Another set of cages was stationed 0.6m from treatment. A final set of cages was placed 0.9m away. Trials ran for durations of 5, 15, 30, and 60 min. Initial knockdown and mortality after 24 h was recorded. The devices had effective knockdown and mortality. This was not sustained at distances greater than 0.3m from the device.

  3. Sex differences in confidence influence patterns of conformity.

    Science.gov (United States)

    Cross, Catharine P; Brown, Gillian R; Morgan, Thomas J H; Laland, Kevin N

    2017-11-01

    Lack of confidence in one's own ability can increase the likelihood of relying on social information. Sex differences in confidence have been extensively investigated in cognitive tasks, but implications for conformity have not been directly tested. Here, we tested the hypothesis that, in a task that shows sex differences in confidence, an indirect effect of sex on social information use will also be evident. Participants (N = 168) were administered a mental rotation (MR) task or a letter transformation (LT) task. After providing an answer, participants reported their confidence before seeing the responses of demonstrators and being allowed to change their initial answer. In the MR, but not the LT, task, women showed lower levels of confidence than men, and confidence mediated an indirect effect of sex on the likelihood of switching answers. These results provide novel, experimental evidence that confidence is a general explanatory mechanism underpinning susceptibility to social influences. Our results have implications for the interpretation of the wider literature on sex differences in conformity. © 2016 The British Psychological Society.

  4. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

  5. Testing Significance Testing

    Directory of Open Access Journals (Sweden)

    Joachim I. Krueger

    2018-04-01

    The practice of Significance Testing (ST) remains widespread in psychological science despite continual criticism of its flaws and abuses. Using simulation experiments, we address four concerns about ST, and for two of these we compare ST's performance with prominent alternatives. We find the following: First, the p values delivered by ST predict the posterior probability of the tested hypothesis well under many research conditions. Second, low p values support inductive inferences because they are most likely to occur when the tested hypothesis is false. Third, p values track likelihood ratios without raising the uncertainties of relative inference. Fourth, p values predict the replicability of research findings better than confidence intervals do. Given these results, we conclude that p values may be used judiciously as a heuristic tool for inductive inference. Yet p values cannot bear the full burden of inference. We encourage researchers to be flexible in their selection and use of statistical methods.
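
    The simulation experiments summarized here are easy to reproduce in spirit. The sketch below (not the authors' code) simulates one-sample z-tests and shows two of the claimed behaviors: under a true null the rejection rate tracks alpha, and low p values are far more frequent when the null is false:

```python
import math
import random

def p_value_two_sided(zstat):
    """Two-sided p value for a z statistic via the normal CDF (math.erf)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(zstat) / math.sqrt(2.0))))

def rejection_rate(true_mean, n=25, sims=20_000, alpha=0.05, seed=3):
    """Fraction of simulated one-sample z-tests (sigma = 1 known,
    H0: mean = 0) that reject at level alpha when the data actually
    come from N(true_mean, 1)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        xbar = sum(rng.gauss(true_mean, 1.0) for _ in range(n)) / n
        if p_value_two_sided(xbar * math.sqrt(n)) < alpha:
            rejections += 1
    return rejections / sims

null_rate = rejection_rate(true_mean=0.0)  # should hover near alpha
alt_rate = rejection_rate(true_mean=0.5)   # power: low p values dominate
```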

  6. The Post-Ovariectomy Interval Affects the Antidepressant-Like Action of Citalopram Combined with Ethynyl-Estradiol in the Forced Swim Test in Middle Aged Rats

    Directory of Open Access Journals (Sweden)

    Nelly M. Vega Rivera

    2016-05-01

    The use of a combined therapy with low doses of estrogens plus antidepressants to treat depression associated with perimenopause could be advantageous. However, the use of these combinations is controversial due to several factors, including the time of intervention in relation to menopause onset. This paper analyzes whether time post-ovariectomy (post-OVX) influences the antidepressant-like action of a combination of ethynyl-estradiol (EE2) and citalopram (CIT) in the forced swim test (FST). Middle-aged (15 months old) female Wistar rats were ovariectomized and, after one or three weeks, treated with EE2 (1.25, 2.5 or 5.0 µg/rat, s.c., −48 h) or CIT (1.25, 2.5, 5.0 or 10 mg/kg, i.p., 3 injections in 24 h) and tested in the FST. In a second experiment, after one or three weeks of OVX, rats received a combination of an ineffective dose of EE2 (1.25 µg/rat, s.c., −48 h) plus CIT (2.5 mg/kg, i.p., 3 injections in 24 h) and were subjected to the FST. Finally, the uteri were removed and weighed to obtain an index of the peripheral effects of EE2 administration. EE2 (2.5 or 5.0 µg/rat) reduced immobility after one but not three weeks of OVX. In contrast, no CIT dose reduced immobility at one or three weeks after OVX. When EE2 (1.25 µg/rat) was combined with CIT (2.5 mg/kg), an antidepressant-like effect was observed at one but not three weeks post-OVX. The weight of the uteri was augmented when EE2 was administered three weeks after OVX. The data suggest that the time post-OVX is a crucial factor that contributes to the antidepressant-like effect of EE2 alone or in combination with CIT.

  7. Microvascular anastomosis simulation using a chicken thigh model: Interval versus massed training.

    Science.gov (United States)

    Schoeff, Stephen; Hernandez, Brian; Robinson, Derek J; Jameson, Mark J; Shonka, David C

    2017-11-01

    To compare the effectiveness of massed versus interval training when teaching otolaryngology residents microvascular suturing on a validated microsurgical model. Otolaryngology residents were placed into interval (n = 7) or massed (n = 7) training groups. The interval group performed three separate 30-minute practice sessions separated by at least 1 week, and the massed group performed a single 90-minute practice session. Both groups viewed a video demonstration and recorded a pretest prior to the first training session. A post-test was administered following the last practice session. At an academic medical center, 14 otolaryngology residents were assigned using stratified randomization to interval or massed training. Blinded evaluators graded performance using a validated microvascular Objective Structured Assessment of Technical Skill tool. The tool comprises two major components: a task-specific score (TSS) and a global rating scale (GRS). Participants also received pre- and poststudy surveys to compare subjective confidence in multiple aspects of microvascular skill acquisition. Overall, all residents showed increased TSS and GRS on the post- versus pretest. After completion of training, the interval group had a statistically significant increase in both TSS and GRS, whereas the massed group's increase was not significant. Residents in both groups reported significantly increased levels of confidence after completion of the study. Self-directed learning using a chicken thigh artery model may benefit microsurgical skills, competence, and confidence for resident surgeons. Interval training results in significant improvement in early development of microvascular anastomosis skills, whereas massed training does not. NA. Laryngoscope, 127:2490-2494, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  8. Regional Competition for Confidence: Features of Formation

    Directory of Open Access Journals (Sweden)

    Irina Svyatoslavovna Vazhenina

    2016-09-01

    The increase in the economic independence of regions inevitably leads to higher quality requirements for regional economic policy. The key to successful regional policy, both during its development and implementation, is understanding the necessity of gaining confidence (at all levels) and the inevitability of participating in the competition for confidence. The importance of confidence in a region is determined by its value as a competitive advantage in the struggle for partners, resources, tourists, and investment. In today's environment, governments, regions, and companies are clearly oriented toward long-term cooperation, which is impossible without a high level of confidence between partners. Therefore, the most important competitive advantages of territories are intangible assets such as an attractive image and a good reputation, which build up the confidence of the population and partners. The higher the confidence in a region, the broader the range of potential partners, the larger the planning horizon of long-term concerted action, the better the chances of acquiring investment, and the higher the level of competitive immunity of the territory. The article defines competition for confidence as the purposeful behavior of a market participant in an economic environment, aimed at acquiring a specific intangible competitive advantage: the confidence of the largest possible number of other market actors. The article also highlights the specifics of confidence as a competitive goal, presents factors contributing to the destruction of confidence, proposes a strategy for the fight for confidence as a program of four steps, considers the factors that integrate regional confidence, and offers several recommendations for the establishment of effective regional competition for confidence.

  9. Applications of interval computations

    CERN Document Server

    Kreinovich, Vladik

    1996-01-01

    Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book: This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers: At the most fundamental level, interval arithmetic operations work with sets: the result of a single arithmetic operation is the set of all possible results as the o...

  10. Confidence in Forced-Choice Recognition: What Underlies the Ratings?

    Science.gov (United States)

    Zawadzka, Katarzyna; Higham, Philip A.; Hanczakowski, Maciej

    2017-01-01

    Two-alternative forced-choice recognition tests are commonly used to assess recognition accuracy that is uncontaminated by changes in bias. In such tests, participants are asked to endorse the studied item out of 2 presented alternatives. Participants may be further asked to provide confidence judgments for their recognition decisions. It is often…

  11. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks

    Science.gov (United States)

    Hedne, Mikael R.; Norman, Elisabeth; Metcalfe, Janet

    2016-01-01

    The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the “warmth ratings” previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related. PMID:27630598

  12. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  13. The integrated model of sport confidence: a canonical correlation and mediational analysis.

    Science.gov (United States)

    Koehn, Stefan; Pearce, Alan J; Morris, Tony

    2013-12-01

    The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.

  14. Food skills confidence and household gatekeepers' dietary practices.

    Science.gov (United States)

    Burton, Melissa; Reid, Mike; Worsley, Anthony; Mavondo, Felix

    2017-01-01

    Household food gatekeepers have the potential to influence the food attitudes and behaviours of family members, as they are mainly responsible for food-related tasks in the home. The aim of this study was to determine the role of gatekeepers' confidence in food-related skills and nutrition knowledge on food practices in the home. An online survey was completed by 1059 Australian dietary gatekeepers selected from the Global Market Insite (GMI) research database. Participants responded to questions about food acquisition and preparation behaviours, the home eating environment, perceptions and attitudes towards food, and demographics. Two-step cluster analysis was used to identify groups based on confidence regarding food skills and nutrition knowledge. Chi-square tests and one-way ANOVAs were used to compare the groups on the dependent variables. Three groups were identified: low confidence, moderate confidence and high confidence. Gatekeepers in the highest confidence group were significantly more likely to report lower body mass index (BMI), and indicate higher importance of fresh food products, vegetable prominence in meals, product information use, meal planning, perceived behavioural control and overall diet satisfaction. Gatekeepers in the lowest confidence group were significantly more likely to indicate more perceived barriers to healthy eating, report more time constraints and more impulse purchasing practices, and higher convenience ingredient use. Other smaller associations were also found. Household food gatekeepers with high food skills confidence were more likely to engage in several healthy food practices, while those with low food skills confidence were more likely to engage in unhealthy food practices. Food education strategies aimed at building food-skills and nutrition knowledge will enable current and future gatekeepers to make healthier food decisions for themselves and for their families. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Determining the confidence levels of sensor outputs using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Broten, G S; Wood, H C [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Electrical Engineering

    1996-12-31

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with the output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used, with no momentum terms. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. It has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response, and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

  16. Application of Interval Arithmetic in the Evaluation of Transfer Capabilities by Considering the Sources of Uncertainty

    Directory of Open Access Journals (Sweden)

    Prabha Umapathy

    2009-01-01

    Full Text Available Total transfer capability (TTC) is an important index in a power system with large volumes of inter-area power exchange. This paper proposes a novel technique to determine the TTC and its confidence intervals by considering the uncertainties in the load and line parameters. The optimal power flow (OPF) method is used to obtain the TTC. Variations in the load and line parameters are incorporated using the interval arithmetic (IA) method. The IEEE 30-bus test system is used to illustrate the proposed methodology. Various uncertainties in the line, in the load, and in both together are incorporated in the evaluation of total transfer capability. The results show that the solutions obtained through the proposed method provide richer information, in closed-interval form, which is more useful for ensuring secure operation of the interconnected system in the presence of uncertainties in load and line parameters.
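
The interval-arithmetic propagation the abstract relies on can be sketched in a few lines. The `Interval` class and the per-unit numbers below are illustrative assumptions, not values from the paper:

```python
# Minimal interval-arithmetic sketch: closed intervals propagate parameter
# uncertainty through arithmetic, in the spirit of the IA method above.

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Endpoints add independently for closed-interval addition.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product interval is bounded by the extreme endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Hypothetical example: power transfer P = V * I with uncertain inputs.
V = Interval(0.95, 1.05)   # per-unit voltage, +/-5%
I = Interval(1.90, 2.10)   # per-unit current, +/-5%
P = V * I
print(P)  # a closed interval enclosing every combination of V and I
```

Addition and multiplication of closed intervals are enough to propagate load and line uncertainty through simple expressions; an OPF-based TTC evaluation would build on the same endpoint rules.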

  17. The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.

    Science.gov (United States)

    Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J

    1998-09-01

    To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.
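
Confidence intervals for adjusted relative risks like those above are conventionally built on the log scale, as exp(ln RR ± 1.96·SE). A minimal sketch follows; the standard error of 0.5 is hypothetical (the abstract reports only the ratios and intervals, not standard errors), chosen because it roughly reproduces the 1.1-7.8 interval quoted for the QT relative risk of 2.9:

```python
import math

def rr_confidence_interval(rr, se_log_rr, z=1.96):
    """95% CI for a relative risk via the usual log-scale Wald interval."""
    log_rr = math.log(rr)
    return (math.exp(log_rr - z * se_log_rr),
            math.exp(log_rr + z * se_log_rr))

# Hypothetical SE of 0.5 on the log scale, for illustration only.
lo, hi = rr_confidence_interval(2.9, 0.5)
print(f"RR = 2.9, 95% CI [{lo:.1f}, {hi:.1f}]")
```

The asymmetry of the quoted intervals (e.g. 1.1-7.8 around 2.9) is exactly what this construction produces: the interval is symmetric on the log scale, not the original scale.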

  18. Chaos on the interval

    CERN Document Server

    Ruette, Sylvie

    2017-01-01

    The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...

  19. Self-Confidence in the Hospitality Industry

    Directory of Open Access Journals (Sweden)

    Michael Oshins

    2014-02-01

    Full Text Available Few industries rely on self-confidence to the extent that the hospitality industry does because guests must feel welcome and that they are in capable hands. This article examines the results of hundreds of student interviews with industry professionals at all levels to determine where the majority of the hospitality industry gets their self-confidence.

  20. Consumer confidence or the business cycle

    DEFF Research Database (Denmark)

    Møller, Stig Vinther; Nørholm, Henrik; Rangvid, Jesper

    2014-01-01

    Answer: The business cycle. We show that consumer confidence and the output gap both predict excess returns on stocks in many European countries: When the output gap is positive (the economy is doing well), expected returns are low, and when consumer confidence is high, expected returns are also low...

  1. Financial Literacy, Confidence and Financial Advice Seeking

    NARCIS (Netherlands)

    Kramer, Marc M.

    2016-01-01

    We find that people with higher confidence in their own financial literacy are less likely to seek financial advice, but no relation between objective measures of literacy and advice seeking. The negative association between confidence and advice seeking is more pronounced among wealthy households.

  2. Aging and Confidence Judgments in Item Recognition

    Science.gov (United States)

    Voskuilen, Chelsea; Ratcliff, Roger; McKoon, Gail

    2018-01-01

    We examined the effects of aging on performance in an item-recognition experiment with confidence judgments. A model for confidence judgments and response time (RTs; Ratcliff & Starns, 2013) was used to fit a large amount of data from a new sample of older adults and a previously reported sample of younger adults. This model of confidence…

  3. Organic labelling systems and consumer confidence

    OpenAIRE

    Sønderskov, Kim Mannemar; Daugbjerg, Carsten

    2009-01-01

    A research analysis suggests that a state certification and labelling system creates confidence in organic labelling systems and consequently green consumerism. Danish consumers have higher levels of confidence in the labelling system than consumers in countries where the state plays a minor role in labelling and certification.

  4. Interval methods: An introduction

    DEFF Research Database (Denmark)

    Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj

    2006-01-01

    This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop ''State-of-the-Art in Scientific Computing''. The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different...

  5. Multichannel interval timer

    International Nuclear Information System (INIS)

    Turko, B.T.

    1983-10-01

    A CAMAC based modular multichannel interval timer is described. The timer comprises twelve high resolution time digitizers with a common start enabling twelve independent stop inputs. Ten time ranges from 2.5 μs to 1.3 ms can be preset. Time can be read out in twelve 24-bit words either via CAMAC Crate Controller or an external FIFO register. LSB time calibration is 78.125 ps. An additional word reads out the operational status of twelve stop channels. The system consists of two modules. The analog module contains a reference clock and 13 analog time stretchers. The digital module contains counters, logic and interface circuits. The timer has excellent differential linearity, thermal stability and crosstalk-free performance

  6. Experimenting with musical intervals

    Science.gov (United States)

    Lo Presto, Michael C.

    2003-07-01

    When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
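
The repetition frequency described above is the greatest common divisor of the two fork frequencies, since the fundamental is the largest frequency of which both tones are harmonics. A minimal sketch, assuming integer frequencies in Hz:

```python
from math import gcd

def repetition_frequency(f1, f2):
    """Fundamental of the harmonic series to which both frequencies belong.

    Assumes integer-valued frequencies in Hz; real tuning forks would need
    rational approximation of the frequency ratio first.
    """
    return gcd(f1, f2)

# A perfect fifth built from forks at 256 Hz and 384 Hz (ratio 2:3):
print(repetition_frequency(384, 256))  # 128 -- both tones are harmonics of 128 Hz
```

This predicted value is what the experiment compares against the fundamental measured from the captured waveform.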

  7. Exploring Self - Confidence Level of High School Students Doing Sport

    Directory of Open Access Journals (Sweden)

    Nurullah Emir Ekinci

    2014-10-01

    Full Text Available The aim of this study was to investigate the self-confidence levels of high school students who do sport, with respect to gender, sport branch (individual/team sports), and aim of participating in sport (professional/amateur). A total of 185 active high school students from Kutahya voluntarily participated in the study. A self-confidence scale was used as the data-gathering tool, and the nonparametric Mann-Whitney U test was used for hypothesis testing. Self-confidence levels differed significantly according to gender and sport branch, but not according to the aim of participating in sport.

  8. Interpregnancy interval and risk of autistic disorder.

    Science.gov (United States)

    Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

    2013-11-01

    A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with short interpregnancy intervals, the proportion of second-born children with autistic disorder was higher than the 0.13% observed in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.

  9. Calculation of solar irradiation prediction intervals combining volatility and kernel density estimates

    International Nuclear Information System (INIS)

    Trapero, Juan R.

    2016-01-01

    In order to integrate solar energy into the grid, it is important to predict solar radiation accurately, as forecast errors can lead to significant costs. The growing number of statistical approaches that address this problem is yielding a prolific literature. In general terms, the main research discussion centres on selecting the “best” forecasting technique in terms of accuracy. However, users of such forecasts require, apart from point forecasts, information about forecast variability in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models, and combinations of the two in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate our methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
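
A rough sketch of the combination idea, not the paper's actual models: an EWMA volatility estimate (a simple stand-in for GARCH) is paired with empirical quantiles of the standardized errors (a stand-in for kernel density estimation) to form prediction intervals whose coverage can then be checked. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic forecast errors with slowly varying volatility (illustrative).
n = 1000
true_sigma = 1.0 + 0.5 * np.sin(np.linspace(0, 8 * np.pi, n))
errors = rng.normal(0.0, true_sigma)

# 1) Volatility model: EWMA of squared errors, a minimal GARCH stand-in.
lam = 0.94
sigma2 = np.empty(n)
sigma2[0] = errors[:50].var()
for t in range(1, n):
    sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * errors[t - 1] ** 2
sigma = np.sqrt(sigma2)

# 2) Density part: empirical quantiles of standardized errors replace the
#    normal z-value, combining the volatility model with a density estimate.
z = errors / sigma
q_lo, q_hi = np.quantile(z[:500], [0.05, 0.95])  # fit on the first half

# 90% prediction intervals for the second half; check empirical coverage.
lo, hi = q_lo * sigma[500:], q_hi * sigma[500:]
coverage = np.mean((errors[500:] >= lo) & (errors[500:] <= hi))
print(f"target 0.90, achieved {coverage:.2f}")
```

The same coverage-versus-width trade-off the abstract describes can be explored by varying the quantile levels and the volatility model.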

  10. Communication confidence in persons with aphasia.

    Science.gov (United States)

    Babbitt, Edna M; Cherney, Leora R

    2010-01-01

    Communication confidence is a construct that has not been explored in the aphasia literature. Recently, national and international organizations have endorsed broader assessment methods that address quality of life and include participation, activity, and impairment domains as well as psychosocial areas. Individuals with aphasia encounter difficulties in all these areas on a daily basis in living with a communication disorder. Improvements are often reflected in narratives that are not typically included in standard assessments. This article illustrates how a new instrument measuring communication confidence might fit into a broad assessment framework and discusses the interaction of communication confidence, autonomy, and self-determination for individuals living with aphasia.

  11. Confidence building in implementation of geological disposal

    International Nuclear Information System (INIS)

    Umeki, Hiroyuki

    2004-01-01

    Long-term safety of the disposal system should be demonstrated to the satisfaction of the stakeholders. Convincing arguments are therefore required that instil in the stakeholders confidence in the safety of a particular concept for the siting and design of a geological disposal, given the uncertainties that inevitably exist in its a priori description and in its evolution. The step-wise approach associated with making safety case at each stage is a key to building confidence in the repository development programme. This paper discusses aspects and issues on confidence building in the implementation of HLW disposal in Japan. (author)

  12. Confidence rating of marine eutrophication assessments

    DEFF Research Database (Denmark)

    Murray, Ciarán; Andersen, Jesper Harbo; Kaartokallio, Hermanni

    2011-01-01

    This report presents the development of a methodology for assessing confidence in eutrophication status classifications. The method can be considered as a secondary assessment, supporting the primary assessment of eutrophication status. The confidence assessment is based on a transparent scoring of the 'value' of the indicators on which the primary assessment is made. Such secondary assessment of confidence represents a first step towards linking status classification with information regarding their accuracy and precision and ultimately a tool for improving or targeting actions to improve the health...

  13. Five-Year Risk of Interval-Invasive Second Breast Cancer

    Science.gov (United States)

    Buist, Diana S. M.; Houssami, Nehmat; Dowling, Emily C.; Halpern, Elkan F.; Gazelle, G. Scott; Lehman, Constance D.; Henderson, Louise M.; Hubbard, Rebecca A.

    2015-01-01

    Background: Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. Methods: We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. Results: The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. Conclusions: PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. PMID:25904721

  14. The design and analysis of salmonid tagging studies in the Columbia River. Volume 7: Monte-Carlo comparison of confidence interval procedures for estimating survival in a release-recapture study, with applications to Snake River salmonids

    International Nuclear Information System (INIS)

    Lowther, A.B.; Skalski, J.

    1996-06-01

    Confidence intervals for survival probabilities between hydroelectric facilities of migrating juvenile salmonids can be computed from the output of the SURPH software developed at the Center for Quantitative Science at the University of Washington. These intervals have been constructed using the estimate of the survival probability, its associated standard error, and assuming the estimate is normally distributed. In order to test the validity and performance of this procedure, two additional confidence interval procedures for estimating survival probabilities were tested and compared using simulated mark-recapture data. Intervals were constructed using normal probability theory, using a percentile-based empirical bootstrap algorithm, and using the profile likelihood concept. Performance of each method was assessed for a variety of initial conditions (release sizes, survival probabilities, detection probabilities). These initial conditions were chosen to encompass the range of parameter values seen in the 1993 and 1994 Snake River juvenile salmonid survival studies. The comparisons among the three estimation methods included average interval width, interval symmetry, and interval coverage
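
Two of the three interval procedures compared in the report can be sketched for a single release group. The release size and survival rate below are illustrative, and the bootstrap shown is a simple parametric percentile version rather than the report's resampling of full mark-recapture histories:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated release-recapture outcome: n fish released, some detected alive.
n = 500
survived = rng.binomial(n, 0.8)   # illustrative true survival of 0.8
p_hat = survived / n

# Normal-theory interval: estimate +/- 1.96 * standard error.
se = np.sqrt(p_hat * (1 - p_hat) / n)
normal_ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Percentile bootstrap interval (parametric): re-simulate n outcomes at the
# estimated rate and take the 2.5% and 97.5% quantiles of the estimates.
boot = rng.binomial(n, p_hat, size=5000) / n
boot_ci = tuple(np.quantile(boot, [0.025, 0.975]))

print(f"normal    CI: ({normal_ci[0]:.3f}, {normal_ci[1]:.3f})")
print(f"bootstrap CI: ({boot_ci[0]:.3f}, {boot_ci[1]:.3f})")
```

With a release this large the two intervals nearly coincide; the report's Monte-Carlo comparisons of width, symmetry, and coverage matter most for small releases or extreme detection probabilities.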

  15. Confidence Estimation of Reliability Indices of the System with Elements Duplication and Recovery

    Directory of Open Access Journals (Sweden)

    I. V. Pavlov

    2017-01-01

    Full Text Available The article considers the problem of estimating confidence intervals for the main reliability indices, such as the availability rate, mean time between failures, and operative availability (in the stationary state), for the model of a system with duplication and independent recovery of elements. It presents a solution for a situation that often arises in practice, when the exact values of the reliability parameters of the elements are unknown and only reliability test data for the system or its individual parts (elements, subsystems) are available. It should be noted that confidence estimation of the reliability indices of complex systems based on the test results of their individual elements is a fairly common task in engineering practice when designing and operating various engineering systems. The available papers consider this problem mainly for non-recoverable systems. The article describes a solution for the important particular case in which the system elements are duplicated by reserve elements, and elements that fail in the course of system operation are recovered (regardless of the state of the other elements). An approximate solution is obtained for the case of high reliability or "fast recovery" of elements, on the assumption that the average recovery time of an element is small compared to the average time between failures.
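
The stationary point estimates that such confidence intervals are built around can be sketched under textbook assumptions (independent recovery, element availability MTBF/(MTBF + MTTR)); the numbers below are illustrative, not from the article:

```python
# Stationary availability of a duplicated element pair with independent
# recovery. Assumes the standard result A = MTBF / (MTBF + MTTR) per
# element; the pair is down only when both copies are down at once.
mtbf = 1000.0   # mean time between failures, hours (hypothetical)
mttr = 10.0     # mean time to recovery, hours (hypothetical)

a_single = mtbf / (mtbf + mttr)    # availability of one element
u_pair = (1 - a_single) ** 2       # both copies simultaneously down
a_pair = 1 - u_pair

print(f"single element:  {a_single:.4f}")
print(f"duplicated pair: {a_pair:.6f}")
```

The "fast recovery" regime the article analyzes corresponds to mttr much smaller than mtbf, which is what makes the squared unavailability term, and hence the approximation error, small.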

  16. An Exact Confidence Region in Multivariate Calibration

    OpenAIRE

    Mathew, Thomas; Kasala, Subramanyam

    1994-01-01

    In the multivariate calibration problem using a multivariate linear model, an exact confidence region is constructed. It is shown that the region is always nonempty and is invariant under nonsingular transformations.

  17. Weighting Mean and Variability during Confidence Judgments

    Science.gov (United States)

    de Gardelle, Vincent; Mamassian, Pascal

    2015-01-01

    Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275

  18. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

    OpenAIRE

    Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movem...

  19. Confidence in leadership among the newly qualified.

    Science.gov (United States)

    Bayliss-Pratt, Lisa; Morley, Mary; Bagley, Liz; Alderson, Steven

    2013-10-23

    The Francis report highlighted the importance of strong leadership from health professionals but it is unclear how prepared those who are newly qualified feel to take on a leadership role. We aimed to assess the confidence of newly qualified health professionals working in the West Midlands in the different competencies of the NHS Leadership Framework. Most respondents felt confident in their abilities to demonstrate personal qualities and work with others, but less so at managing or improving services or setting direction.

  20. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    Science.gov (United States)

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
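
The percentile (PC) bootstrap that performed well in this study can be sketched for an observed-variable mediation model (the study itself used latent variables, which this sketch does not attempt to model); all data and path values below are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated mediation data X -> M -> Y with alpha = beta = 0.39,
# one of the path values from the study's design grid.
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)
y = 0.39 * m + rng.normal(size=n)

def indirect(x, m, y):
    """Indirect effect a*b: a from M on X, b from Y on M controlling for X."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]
    return a * b

# Percentile bootstrap: resample cases, take the 2.5% and 97.5% quantiles.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect(x[idx], m[idx], y[idx]))
ci = np.quantile(boots, [0.025, 0.975])
print(f"indirect ~ {indirect(x, m, y):.3f}, 95% PC CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The bias-corrected variants adjust these percentile endpoints using the bootstrap distribution's skew, which is precisely the step the study found to inflate Type I error.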

  1. [Sources of leader's confidence in organizations].

    Science.gov (United States)

    Ikeda, Hiroshi; Furukawa, Hisataka

    2006-04-01

    The purpose of this study was to examine the sources of confidence that organization leaders had. As potential sources of the confidence, we focused on fulfillment of expectations made by self and others, reflection on good as well as bad job experiences, and awareness of job experiences in terms of commonality, differentiation, and multiple viewpoints. A questionnaire was administered to 170 managers of Japanese companies. Results were as follows: First, confidence in leaders was more strongly related to fulfillment of expectations made by self and others than reflection on and awareness of job experiences. Second, the confidence was weakly related to internal processing of job experiences, in the form of commonality awareness and reflection on good job experiences. And finally, years of managerial experiences had almost no relation to the confidence. These findings suggested that confidence in leaders was directly acquired from fulfillment of expectations made by self and others, rather than indirectly through internal processing of job experiences. Implications of the findings for leadership training were also discussed.

  2. Examining Belief and Confidence in Schizophrenia

    Science.gov (United States)

    Joyce, Dan W.; Averbeck, Bruno B.; Frith, Chris D.; Shergill, Sukhwinder S.

    2018-01-01

    Background: People with psychoses often report fixed, delusional beliefs that are sustained even in the presence of unequivocal contrary evidence. Such delusional beliefs are the result of integrating new and old evidence inappropriately in forming a cognitive model. We propose and test a cognitive model of belief formation using experimental data from an interactive “Rock Paper Scissors” game. Methods: Participants (33 controls and 27 people with schizophrenia) played a competitive, time-pressured interactive two-player game (Rock, Paper, Scissors). Participants’ behavior was modeled by a generative computational model using leaky-integrator and temporal difference methods. This model describes how new and old evidence is integrated to form a playing strategy to beat the opponent and provides a mechanism for reporting confidence in that strategy. Results: People with schizophrenia fail to appropriately model their opponent’s play despite consistent (rather than random) patterns that can be exploited in the simulated opponent’s play. This is manifest as a failure to weigh existing evidence appropriately against new evidence. Further, participants with schizophrenia show a ‘jumping to conclusions’ bias, reporting successful discovery of a winning strategy with insufficient evidence. Conclusions: The model presented suggests two tentative mechanisms in delusional belief formation: i) one for modeling patterns in others’ behavior, where people with schizophrenia fail to use old evidence appropriately, and ii) a meta-cognitive mechanism for ‘confidence’ in such beliefs, where people with schizophrenia overweight recent reward history in deciding on the value of beliefs about the opponent. PMID:23521846

  3. Reviewing interval cancers: Time well spent?

    International Nuclear Information System (INIS)

    Gower-Thomas, Kate; Fielder, Hilary M.P.; Branston, Lucy; Greening, Sarah; Beer, Helen; Rogers, Cerilan

    2002-01-01

    OBJECTIVES: To categorize interval cancers, and thus identify false-negatives, following prevalent and incident screens in the Welsh breast screening programme. SETTING: Breast Test Wales (BTW) Llandudno, Cardiff and Swansea breast screening units. METHODS: Five hundred and sixty interval breast cancers identified following negative mammographic screening between 1989 and 1997 were reviewed by eight screening radiologists. The blind review was achieved by mixing the screening films of women who subsequently developed an interval cancer with screen-negative films of women who did not develop cancer, in a ratio of 4:1. Another radiologist used patients' symptomatic films to record a reference against which the reviewers' reports of the screening films were compared. Interval cancers were categorized as 'true', 'occult', 'false-negative' or 'unclassified' interval cancers or interval cancers with minimal signs, based on the National Health Service breast screening programme (NHSBSP) guidelines. RESULTS: Of the classifiable interval films, 32% were false-negatives, 55% were true intervals and 12% occult. The proportion of false-negatives following incident screens was half that following prevalent screens (P = 0.004). Forty percent of the seed films were recalled by the panel. CONCLUSIONS: Low false-negative interval cancer rates following incident screens (18%) versus prevalent screens (36%) suggest that lower cancer detection rates at incident screens may have resulted from fewer cancers than expected being present, rather than from a failure to detect tumours. The panel method for categorizing interval cancers has significant flaws, as the results vary markedly with different protocols, and it is no more accurate than other, quicker and more timely methods.

  4. Stability in the metamemory realism of eyewitness confidence judgments.

    Science.gov (United States)

    Buratti, Sandra; Allwood, Carl Martin; Johansson, Marcus

    2014-02-01

    The stability of eyewitness confidence judgments over time in regard to their reported memory and accuracy of these judgments is of interest in forensic contexts because witnesses are often interviewed many times. The present study investigated the stability of the confidence judgments of memory reports of a witnessed event and of the accuracy of these judgments over three occasions, each separated by 1 week. Three age groups were studied: younger children (8-9 years), older children (10-11 years), and adults (19-31 years). A total of 93 participants viewed a short film clip and were asked to answer directed two-alternative forced-choice questions about the film clip and to confidence judge each answer. Different questions about details in the film clip were used on each of the three test occasions. Confidence as such did not exhibit stability over time on an individual basis. However, the difference between confidence and proportion correct did exhibit stability across time, in terms of both over/underconfidence and calibration. With respect to age, the adults and older children exhibited more stability than the younger children for calibration. Furthermore, some support for instability was found with respect to the difference between the average confidence level for correct and incorrect answers (slope). Unexpectedly, however, the younger children's slope was found to be more stable than the adults'. Compared with previous research, the present study's use of more advanced statistical methods provides a more nuanced understanding of the stability of confidence judgments in the eyewitness reports of children and adults.

  5. Kangaroo Care Education Effects on Nurses' Knowledge and Skills Confidence.

    Science.gov (United States)

    Almutairi, Wedad Matar; Ludington-Hoe, Susan M

    2016-11-01

    Less than 20% of the 996 NICUs in the United States routinely practice kangaroo care, due in part to the inadequate knowledge and skills confidence of nurses. Continuing education improves knowledge and skills acquisition, but the effects of a kangaroo care certification course on nurses' knowledge and skills confidence are unknown. A pretest-posttest quasi-experiment was conducted. The Kangaroo Care Knowledge and Skills Confidence Tool was administered to 68 RNs at a 2.5-day course about kangaroo care evidence and skills. Measures of central tendency, dispersion, and paired t tests were conducted on 57 questionnaires. The nurses' characteristics were varied. The mean posttest Knowledge score (M = 88.54, SD = 6.13) was significantly higher than the pretest score (M = 78.7, SD = 8.30), t(54) = -9.1, p < .001, as was the posttest Skills Confidence score (pretest M = 32.06, SD = 3.49; posttest M = 26.80, SD = 5.22), t(53) = -8.459, p < .001. The nurses' knowledge and skills confidence of kangaroo care improved following continuing education, suggesting a need for continuing education in this area. J Contin Educ Nurs. 2016;47(11):518-524. Copyright 2016, SLACK Incorporated.
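
    The pretest-posttest comparisons reported here are paired (dependent-samples) t-tests. A minimal sketch of that analysis, with simulated scores standing in for the study's data (the means and spread below are illustrative assumptions, not the study's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical pre/post knowledge scores for 55 nurses (illustrative only)
pre = rng.normal(78.7, 8.3, size=55)
post = pre + rng.normal(9.8, 5.0, size=55)  # assume a roughly 10-point mean gain

# Paired t-test: each nurse serves as her own control
t, p = stats.ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.3g}")
```

    With `ttest_rel(pre, post)`, a negative t statistic indicates that the posttest scores are higher, matching the direction of the t values reported in the abstract.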

  6. Increasing Product Confidence-Shifting Paradigms.

    Science.gov (United States)

    Phillips, Marla; Kashyap, Vishal; Cheung, Mee-Shew

    2015-01-01

    Leaders in the pharmaceutical, medical device, and food industries expressed a unilateral concern over product confidence throughout the total product lifecycle, an unsettling fact for these leaders to manage given that their products affect the lives of millions of people each year. Fueled by the heparin incident of intentional adulteration in 2008, initial efforts for increasing product confidence were focused on improving the confidence of incoming materials, with a belief that supplier performance must be the root cause. As in the heparin case, concern over supplier performance extended deep into the supply chain to include suppliers of the suppliers-which is often a blind spot for pharmaceutical, device, and food manufacturers. Resolved to address the perceived lack of supplier performance, these U.S. Food and Drug Administration (FDA)-regulated industries began to adopt the supplier relationship management strategy, developed by the automotive industry, that emphasizes "management" of suppliers for the betterment of the manufacturers. Current product and supplier management strategies, however, have not led to a significant improvement in product confidence. As a result of the enduring concern by industry leaders over the lack of product confidence, Xavier University launched the Integrity of Supply Initiative in 2012 with a team of industry leaders and FDA officials. Through a methodical research approach, data generated by the pharmaceutical, medical device, and food manufacturers surprisingly pointed to themselves as a source of the lack of product confidence, and revealed that manufacturers either unknowingly increase the potential for error or can control/prevent many aspects of product confidence failure. It is only through this paradigm shift that manufacturers can work collaboratively with their suppliers as equal partners, instead of viewing their suppliers as "lesser" entities needing to be controlled. The basis of this shift provides manufacturers

  7. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Ying; Zhou, Zhi; Botterud, Audun; Zhang, Kaifeng

    2018-01-01

    It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed-integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of power system scheduling, but also help to stabilize the energy prices in electricity market operation.

  8. Confidence-building and Canadian leadership

    Energy Technology Data Exchange (ETDEWEB)

    Cleminson, F.R. [Dept. of Foreign Affairs and International Trade, Verification, Non-Proliferation, Arms Control and Disarmament Div (IDA), Ottawa, Ontario (Canada)

    1998-07-01

    Confidence-building has come into its own as a 'tool of choice' in facilitating the non-proliferation, arms control and disarmament (NACD) agenda, whether regional or global. From the Middle East Peace Process (MEPP) to the ASEAN Intersessional Group on Confidence-Building (ARF ISG on CBMS), confidence-building has assumed a central profile in regional terms. In the Four Power Talks begun in Geneva on December 9, 1997, the United States identified confidence-building as one of two subject areas for initial discussion as part of a structured peace process between North and South Korea. Thus, with CBMs assuming such a high profile internationally, it seems prudent for Canadians to pause and take stock of the significant role which Canada has already played in the conceptual development of the process over the last two decades. Since the Helsinki accords of 1975, Canada has developed a significant expertise in this area through an unbroken series of original, basic research projects. These have contributed to defining the process internationally from concept to implementation. Today, these studies represent a solid and unique Departmental investment in basic research from which to draw in meeting Canada's current commitments to multilateral initiatives in the area of confidence-building and to provide a 'step up' in terms of future-oriented leadership. (author)

  9. Confidence Leak in Perceptual Decision Making.

    Science.gov (United States)

    Rahnev, Dobromir; Koizumi, Ai; McCurdy, Li Yan; D'Esposito, Mark; Lau, Hakwan

    2015-11-01

    People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak: confidence in one's response on a given task or trial influences confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. Better ability to modulate confidence leak predicted higher capacity for metacognition as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles from Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex. © The Author(s) 2015.

  10. ADAM SMITH: THE INVISIBLE HAND OR CONFIDENCE

    Directory of Open Access Journals (Sweden)

    Fernando Luis, Gache

    2010-01-01

    Full Text Available In 1776 Adam Smith proposed that an invisible hand moved markets toward efficiency. In the present paper we raise the hypothesis that this invisible hand is in fact the confidence each person feels when doing business; that it is unique, because it differs from the confidence of others; and that it is a nonlinear variable tied essentially to each person's history. We take as our basis the paper by Leopoldo Abadía (2009) on the financial crisis of 2007-2008 to show the way in which confidence operates. The contribution we hope to make with this paper is to emphasize that the confidence level of the different actors is what really moves the markets (and therefore the economy), and that the subprime mortgage crisis is a confidence crisis at the world-wide level.

  11. Confidence-building and Canadian leadership

    International Nuclear Information System (INIS)

    Cleminson, F.R.

    1998-01-01

    Confidence-building has come into its own as a 'tool of choice' in facilitating the non-proliferation, arms control and disarmament (NACD) agenda, whether regional or global. From the Middle East Peace Process (MEPP) to the ASEAN Intersessional Group on Confidence-Building (ARF ISG on CBMS), confidence-building has assumed a central profile in regional terms. In the Four Power Talks begun in Geneva on December 9, 1997, the United States identified confidence-building as one of two subject areas for initial discussion as part of a structured peace process between North and South Korea. Thus, with CBMs assuming such a high profile internationally, it seems prudent for Canadians to pause and take stock of the significant role which Canada has already played in the conceptual development of the process over the last two decades. Since the Helsinki accords of 1975, Canada has developed a significant expertise in this area through an unbroken series of original, basic research projects. These have contributed to defining the process internationally from concept to implementation. Today, these studies represent a solid and unique Departmental investment in basic research from which to draw in meeting Canada's current commitments to multilateral initiatives in the area of confidence-building and to provide a 'step up' in terms of future-oriented leadership. (author)

  12. Determination of confidence limits for experiments with low numbers of counts

    International Nuclear Information System (INIS)

    Kraft, R.P.; Burrows, D.N.; Nousek, J.A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference. 12 refs
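
    The Bayesian construction favored here can be sketched as follows: with a flat prior on the source rate s >= 0 and a known mean background b, the posterior for s given N observed counts is a gamma(N + 1) distribution truncated at b, from which equal-tailed credible limits follow. The function below is a hypothetical implementation of that idea, not code from the paper.

```python
from scipy.optimize import brentq
from scipy.stats import gamma

def bayesian_poisson_limits(n_obs, b, cl=0.90):
    """Equal-tailed Bayesian credible limits for a Poisson source rate s,
    given n_obs total counts and a known mean background b, under a flat
    prior on s >= 0 (the posterior is gamma(n_obs + 1) truncated at b)."""
    post = gamma(n_obs + 1)                  # shape N + 1, unit scale
    norm = 1.0 - post.cdf(b)                 # posterior mass on s >= 0
    cdf = lambda s: (post.cdf(s + b) - post.cdf(b)) / norm
    tail = (1.0 - cl) / 2.0
    s_max = 10.0 * (n_obs + b) + 50.0        # generous root-finding bracket
    lo = brentq(lambda s: cdf(s) - tail, 0.0, s_max)
    hi = brentq(lambda s: cdf(s) - (1.0 - tail), 0.0, s_max)
    return lo, hi

# e.g. 4 observed counts with an expected mean background of 1.5
print(bayesian_poisson_limits(4, 1.5))
```

    The truncation at b is what keeps the inferred source rate non-negative even when the observed counts fall below the expected background, which is where the classical construction becomes awkward.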

  13. A systematic review of maternal confidence for physiologic birth: characteristics of prenatal care and confidence measurement.

    Science.gov (United States)

    Avery, Melissa D; Saftner, Melissa A; Larson, Bridget; Weinfurter, Elizabeth V

    2014-01-01

    Because a focus on physiologic labor and birth has reemerged in recent years, care providers have the opportunity in the prenatal period to help women increase confidence in their ability to give birth without unnecessary interventions. However, most research has only examined support for women during labor. The purpose of this systematic review was to examine the research literature for information about prenatal care approaches that increase women's confidence for physiologic labor and birth and tools to measure that confidence. Studies were reviewed that explored any element of a pregnant woman's interaction with her prenatal care provider that helped build confidence in her ability to labor and give birth. Timing of interaction with pregnant women included during pregnancy, labor and birth, and the postpartum period. In addition, we looked for studies that developed a measure of women's confidence related to labor and birth. Outcome measures included confidence or similar concepts, descriptions of components of prenatal care contributing to maternal confidence for birth, and reliability and validity of tools measuring confidence. The search of MEDLINE, CINAHL, PsycINFO, and Scopus databases provided a total of 893 citations. After removing duplicates and articles that did not meet inclusion criteria, 6 articles were included in the review. Three relate to women's confidence for labor during the prenatal period, and 3 describe tools to measure women's confidence for birth. Research about enhancing women's confidence for labor and birth was limited to qualitative studies. Results suggest that women desire information during pregnancy and want to use that information to participate in care decisions in a relationship with a trusted provider. Further research is needed to develop interventions to help midwives and physicians enhance women's confidence in their ability to give birth and to develop a tool to measure confidence for use during prenatal care. 

  14. Predicting fecal coliform using the interval-to-interval approach and SWAT in the Miyun watershed, China.

    Science.gov (United States)

    Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu; Qiu, Jiali; Li, Yangyang

    2017-06-01

    Pathogens in manure can cause waterborne-disease outbreaks, serious illness, and even death in humans. Therefore, information about the transformation and transport of bacteria is crucial for determining their source. In this study, the Soil and Water Assessment Tool (SWAT) was applied to simulate fecal coliform bacteria load in the Miyun Reservoir watershed, China. The data for the fecal coliform were obtained at three sampling sites, Chenying (CY), Gubeikou (GBK), and Xiahui (XH). The calibration processes of the fecal coliform were conducted using the CY and GBK sites, and validation was conducted at the XH site. An interval-to-interval approach was designed and incorporated into the processes of fecal coliform calibration and validation. The 95% confidence interval of the predicted values and the 95% confidence interval of measured values were considered during calibration and validation in the interval-to-interval approach. Compared with the traditional point-to-point comparison, this method can improve simulation accuracy. The results indicated that the simulation of fecal coliform using the interval-to-interval approach was reasonable for the watershed. This method could provide a new research direction for future model calibration and validation studies.
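
    A minimal sketch of the interval-to-interval idea, assuming normal-theory 95% confidence intervals on both the simulated and the measured values for each period and scoring a period as matched when the two intervals overlap (the function name and scoring rule are illustrative, not the paper's exact procedure):

```python
import numpy as np
from scipy import stats

def interval_overlap_score(simulated, measured, cl=0.95):
    """Fraction of periods whose simulated and measured confidence
    intervals overlap, rather than requiring point-to-point agreement."""
    def ci(sample):
        m, se = np.mean(sample), stats.sem(sample)
        h = se * stats.t.ppf((1 + cl) / 2, len(sample) - 1)
        return m - h, m + h
    matched = 0
    for sim, obs in zip(simulated, measured):
        (s_lo, s_hi), (o_lo, o_hi) = ci(sim), ci(obs)
        if s_lo <= o_hi and o_lo <= s_hi:    # the two intervals overlap
            matched += 1
    return matched / len(simulated)

sim = [[1, 2, 3], [10, 11, 12]]
obs = [[1.5, 2.5, 3.5], [100, 101, 102]]
print(interval_overlap_score(sim, obs))  # first period overlaps, second does not
```

    Comparing interval to interval in this way absorbs the sampling noise on both sides, which is why it can score a simulation as adequate where a strict point-to-point comparison would not.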

  15. Assessing Confidence in Pliocene Sea Surface Temperatures to Evaluate Predictive Models

    Science.gov (United States)

    Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling. M.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.; hide

    2012-01-01

    In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3 to 3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first-century climate.

  16. Confidence building - is science the only approach

    International Nuclear Information System (INIS)

    Bragg, K.

    1990-01-01

    The Atomic Energy Control Board (AECB) has begun to develop some simplified methods to determine if it is possible to provide confidence that dose, risk and environmental criteria can be respected without undue reliance on detailed scientific models. The progress to date will be outlined and the merits of this new approach will be compared to the more complex, traditional approach. Stress will be given to generating confidence in both technical and non-technical communities as well as the need to enhance communication between them. 3 refs., 1 tab

  17. Self Confidence Spillovers and Motivated Beliefs

    DEFF Research Database (Denmark)

    Banerjee, Ritwik; Gupta, Nabanita Datta; Villeval, Marie Claire

    Is success in a task used strategically by individuals to motivate their beliefs prior to taking action in a subsequent, unrelated, task? Also, is the distortion of beliefs reinforced for individuals who have lower status in society? Conducting an artefactual field experiment in India, we show that success when competing in a task increases the performers’ self-confidence and competitiveness in the subsequent task. We also find that such spillovers affect the self-confidence of low-status individuals more than that of high-status individuals. Receiving good news under Affirmative Action, however…

  18. Determining the confidence levels of sensor outputs using neural networks

    International Nuclear Information System (INIS)

    Broten, G.S.; Wood, H.C.

    1995-01-01

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

  19. Confident Communication: Speaking Tips for Educators.

    Science.gov (United States)

    Parker, Douglas A.

    This resource book seeks to provide the building blocks needed for public speaking while eliminating the fear factor. The book explains how educators can perfect their oratorical capabilities as well as enjoy the security, confidence, and support needed to create and deliver dynamic speeches. Following an Introduction: A Message for Teachers,…

  20. Principles of psychological confidence of NPP operators

    International Nuclear Information System (INIS)

    Alpeev, A.S.

    1994-01-01

    The problems of operator interaction with the subsystems supporting his activity are discussed from the point of view of building his psychological confidence on the basis of the capabilities of intelligent automation. The functions of operator-support subsystems whose implementation would greatly decrease the share of NPP accidents connected with erroneous operator actions are derived. 6 refs

  1. Growing confidence, building skills | IDRC - International ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    In 2012 Rashid explored the influence of think tanks on policy in Bangladesh, as well as their relationships with international donors and media. In 2014, he explored two-way student exchanges between Canadian and ... his IDRC experience “gave me the confidence to conduct high quality research in social sciences.”.

  2. Detecting Disease in Radiographs with Intuitive Confidence

    Directory of Open Access Journals (Sweden)

    Stefan Jaeger

    2015-01-01

    Full Text Available This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related to the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays.

  3. Current Developments in Measuring Academic Behavioural Confidence

    Science.gov (United States)

    Sander, Paul

    2009-01-01

    Using published findings and by further analyses of existing data, the structure, validity and utility of the Academic Behavioural Confidence scale (ABC) is critically considered. Validity is primarily assessed through the scale's relationship with other existing scales as well as by looking for predicted differences. The utility of the ABC scale…

  4. 46 CFR 61.20-17 - Examination intervals.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Examination intervals. 61.20-17 Section 61.20-17... INSPECTIONS Periodic Tests of Machinery and Equipment § 61.20-17 Examination intervals. (a) A lubricant that... examination interval. (b) Except as provided in paragraphs (c) through (f) of this section, each tailshaft on...

  5. Building Public Confidence in Nuclear Activities

    International Nuclear Information System (INIS)

    Isaacs, T

    2002-01-01

    Achieving public acceptance has become a central issue in discussions regarding the future of nuclear power and associated nuclear activities. Effective public communication and public participation are often put forward as the key building blocks in garnering public acceptance. A recent international workshop in Finland provided insights into other features that might also be important to building and sustaining public confidence in nuclear activities. The workshop was held in Finland in close cooperation with Finnish stakeholders. This was most appropriate because of the recent successes in achieving positive decisions at the municipal, governmental, and Parliamentary levels, allowing the Finnish high-level radioactive waste repository program to proceed, including the identification and approval of a proposed candidate repository site. Much of the workshop discussion appropriately focused on the roles of public participation and public communications in building public confidence. It was clear that well constructed and implemented programs of public involvement and communication and a sense of fairness were essential in building the extent of public confidence needed to allow the repository program in Finland to proceed. It was also clear that there were a number of other elements beyond public involvement that contributed substantially to the success in Finland to date. And, in fact, it appeared that these other factors were also necessary to achieving the Finnish public acceptance. In other words, successful public participation and communication were necessary but not sufficient. What else was important? Culture, politics, and history vary from country to country, providing differing contexts for establishing and maintaining public confidence. What works in one country will not necessarily be effective in another. Nonetheless, there appear to be certain elements that might be common to programs that are successful in sustaining public confidence and some of

  7. Recurrence interval analysis of trading volumes.

    Science.gov (United States)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
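
The recurrence-interval construction described above is simple to reproduce: given a volume series and a threshold q, the intervals are just the gaps between successive exceedances. A minimal sketch on synthetic data (not the paper's Chinese-stock dataset):

```python
import numpy as np

def recurrence_intervals(series, q):
    """Gaps (in samples) between successive values exceeding threshold q."""
    idx = np.flatnonzero(np.asarray(series) > q)
    return np.diff(idx)

rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=0.0, sigma=1.0, size=20000)  # synthetic "trading volumes"
q = np.quantile(volumes, 0.95)        # threshold at the 95th percentile
tau = recurrence_intervals(volumes, q)
# For i.i.d. data the mean interval is ~1/P(V > q) = 20; correlated real
# volumes instead show clustering (short intervals follow short intervals).
print(f"n={tau.size}, mean interval={tau.mean():.1f}")
```

The paper's further steps (KS-type goodness-of-fit tests for the power-law tail, conditional distributions for memory effects) all operate on the `tau` array computed this way.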

  8. The effect of terrorism on public confidence : an exploratory study.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, M. S.; Baldwin, T. E.; Samsa, M. E.; Ramaprasad, A.; Decision and Information Sciences

    2008-10-31

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the metrics it uses to assess the consequences of terrorist attacks. Hence, several factors--including a detailed understanding of the variations in public confidence among individuals, by type of terrorist event, and as a function of time--are critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data were collected from the groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery bombing, and cyber intrusions on financial institutions that resulted in identity theft and financial losses. Our findings include the following: (a) the subjects can be classified into at least three distinct groups on the basis of their baseline outlook--optimistic, pessimistic, and unaffected; (b) the subjects make discriminations in their interpretations of an event on the basis of the nature of a terrorist attack, the time horizon, and its impact; (c) the recovery of confidence after a terrorist event has an incubation period and typically does not return to its initial level in the long-term; (d) the patterns of recovery of confidence differ between the optimists and the pessimists; and (e) individuals are able to associate a monetary value with a loss or gain in confidence, and the value associated with a loss is greater than the value associated with a gain. 
These findings illustrate the importance the public places in their confidence in government

  9. Measuring the Confidence of 8th Grade Taiwanese Students' Knowledge of Acids and Bases

    Science.gov (United States)

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Tsai, Chun-Yen

    2012-01-01

    The present study investigated whether gender differences were present on the confidence judgments made by 8th grade Taiwanese students on the accuracy of their responses to acid-base test items. A total of 147 (76 male, 71 female) students provided item-specific confidence judgments during a test of their knowledge of acids and bases. Using the…

  10. Effects of parental divorce on marital commitment and confidence.

    Science.gov (United States)

    Whitton, Sarah W; Rhoades, Galena K; Stanley, Scott M; Markman, Howard J

    2008-10-01

    Research on the intergenerational transmission of divorce has demonstrated that compared with offspring of nondivorced parents, those of divorced parents generally have more negative attitudes toward marriage as an institution and are less optimistic about the feasibility of a long-lasting, healthy marriage. It is also possible that when entering marriage themselves, adults whose parents divorced have less personal relationship commitment to their own marriages and less confidence in their own ability to maintain a happy marriage with their spouse. However, this prediction has not been tested. In the current study, we assessed relationship commitment and relationship confidence, as well as parental divorce and retrospectively reported interparental conflict, in a sample of 265 engaged couples prior to their first marriage. Results demonstrated that women's, but not men's, parental divorce was associated with lower relationship commitment and lower relationship confidence. These effects persisted when controlling for the influence of recalled interparental conflict and premarital relationship adjustment. The current findings suggest that women whose parents divorced are more likely to enter marriage with relatively lower commitment to, and confidence in, the future of those marriages, potentially raising their risk for divorce. Copyright 2008 APA, all rights reserved.

  11. Diagnostic interval and mortality in colorectal cancer

    DEFF Research Database (Denmark)

    Tørring, Marie Louise; Frydenberg, Morten; Hamilton, William

    2012-01-01

    Objective To test the theory of a U-shaped association between time from the first presentation of symptoms in primary care to the diagnosis (the diagnostic interval) and mortality after diagnosis of colorectal cancer (CRC). Study Design and Setting Three population-based studies in Denmark...

  12. Learned Interval Time Facilitates Associate Memory Retrieval

    Science.gov (United States)

    van de Ven, Vincent; Kochs, Sarah; Smulders, Fren; De Weerd, Peter

    2017-01-01

    The extent to which time is represented in memory remains underinvestigated. We designed a time paired associate task (TPAT) in which participants implicitly learned cue-time-target associations between cue-target pairs and specific cue-target intervals. During subsequent memory testing, participants showed increased accuracy of identifying…

  13. Challenge for reconstruction of public confidence

    International Nuclear Information System (INIS)

    Matsuura, S.

    2001-01-01

    Past incidents and scandals that have done much to damage public confidence in nuclear energy safety are presented. The radiation leak on the nuclear-powered ship 'Mutsu' (1974), the T.M.I. incident in 1979, the Chernobyl accident (1986), the sodium leak at the Monju reactor (1995), the fire and explosion at a low-level waste asphalt solidification facility (1997) and the J.C.O. incident (Tokai-mura, 1999) are so many examples that have created feelings of distrust and anxiety in society. In order to restore public confidence there is no other course but to be prepared for difficulty and to work honestly to our fullest ability, with all steps taken openly and accountably. (N.C.)

  14. Tables of Confidence Limits for Proportions

    Science.gov (United States)

    1990-09-01

    [Tables of upper and lower confidence limits for proportions, indexed by sample size and confidence level; the tabular data are not recoverable from this extraction.]
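
Tables of this kind can be regenerated with the exact (Clopper-Pearson) beta-quantile construction; a hedged sketch using SciPy, which the original 1990 report of course did not use:

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) two-sided confidence limits for a
    binomial proportion with k successes observed in n trials."""
    alpha = 1.0 - conf
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Even 0 successes in 10 trials leaves a wide upper limit on the true proportion
lo, hi = clopper_pearson(0, 10)
```

The one-sided limits tabulated in such reports use `alpha` in place of `alpha / 2` in the corresponding quantile.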

  15. Social media sentiment and consumer confidence

    OpenAIRE

    Daas, Piet J.H.; Puts, Marco J.H.

    2014-01-01

    Changes in the sentiment of Dutch public social media messages were compared with changes in monthly consumer confidence over a period of three-and-a-half years, revealing that both were highly correlated (up to r = 0.9) and that both series cointegrated. This phenomenon is predominantly affected by changes in the sentiment of all Dutch public Facebook messages. The inclusion of various selections of public Twitter messages improved this association and the response to changes in sentiment. G...

  16. Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.

    Science.gov (United States)

    Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W

    2016-02-26

    Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even more for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity-status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
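
The procedure used in the cheetah study, non-parametric reference limits with bootstrap confidence intervals for each limit, can be sketched as below. The 2.5th/97.5th percentile limits, the variable names, and the synthetic "sodium" sample are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def reference_interval(values, q=(0.025, 0.975), n_boot=2000, conf=0.90, seed=0):
    """Non-parametric reference limits (default 2.5th/97.5th percentiles)
    with bootstrap confidence intervals for each reference limit."""
    x = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    limits = np.quantile(x, q)
    # Resample with replacement and recompute the limits n_boot times
    boot = np.quantile(rng.choice(x, size=(n_boot, x.size)), q, axis=1)
    a = (1.0 - conf) / 2.0
    ci_lower = np.quantile(boot[0], [a, 1 - a])   # CI for the lower limit
    ci_upper = np.quantile(boot[1], [a, 1 - a])   # CI for the upper limit
    return limits, ci_lower, ci_upper

# Synthetic serum sodium values in mmol/L (illustrative only, n = 66 as in the study)
rng = np.random.default_rng(1)
sodium = rng.normal(147.0, 9.0, size=66)
limits, ci_lo, ci_hi = reference_interval(sodium)
```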

  18. Ambulatory Function and Perception of Confidence in Persons with Stroke with a Custom-Made Hinged versus a Standard Ankle Foot Orthosis

    Directory of Open Access Journals (Sweden)

    Angélique Slijper

    2012-01-01

    Full Text Available Objective. The aim was to compare walking with an individually designed dynamic hinged ankle foot orthosis (DAFO) and a standard carbon composite ankle foot orthosis (C-AFO). Methods. Twelve participants, mean age 56 years (range 26–72), with hemiparesis due to stroke were included in the study. During the six-minute walk test (6MW), walking velocity, the Physiological Cost Index (PCI), and the degree of experienced exertion were measured with a DAFO and C-AFO, respectively; this was followed by a Stairs Test in which velocity and perceived confidence were rated. Results. The mean differences in favor of the DAFO were: in the 6MW, 24.3 m (95% confidence interval [CI] 4.90, 43.76); PCI, −0.09 beats/m (95% CI −0.27, 0.95); velocity, 0.04 m/s (95% CI −0.01, 0.097); and in the Stairs Test, −11.8 s (95% CI −19.05, −4.48). All participants except one perceived the degree of experienced exertion lower and felt more confident when walking with the DAFO. Conclusions. Wearing a DAFO resulted in longer walking distance and faster stair climbing compared to walking with a C-AFO. Eleven of twelve participants felt more confident with the DAFO, which may be more important than speed and distance and the most important reason for prescribing an AFO.

  19. Confidence, Visual Research, and the Aesthetic Function

    Directory of Open Access Journals (Sweden)

    Stan Ruecker

    2007-05-01

    Full Text Available The goal of this article is to identify and describe one of the primary purposes of aesthetic quality in the design of computer interfaces and visualization tools. We suggest that humanists can derive advantages in visual research by acknowledging, through their efforts to advance aesthetic quality, that a significant function of aesthetics in this context is to inspire the user's confidence. This confidence typically serves to create a sense of trust in the provider of the interface or tool. In turn, this increased trust may result in an increased willingness to engage with the object, on the basis that it demonstrates an attention to detail that promises to reward increased engagement. In addition to confidence, the aesthetic may also contribute to a heightened degree of satisfaction with having spent time using or investigating the object. In the realm of interface design and visualization research, we propose that these aesthetic functions have implications not only for the quality of interactions, but also for the results of the standard measures of performance and preference.

  20. Predictor sort sampling and one-sided confidence bounds on quantiles

    Science.gov (United States)

    Steve Verrill; Victoria L. Herian; David W. Green

    2002-01-01

    Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non- random...

  1. Inferring high-confidence human protein-protein interactions

    Directory of Open Access Journals (Sweden)

    Yu Xueping

    2012-05-01

    Full Text Available Abstract Background As numerous experimental factors drive the acquisition, identification, and interpretation of protein-protein interactions (PPIs), aggregated assemblies of human PPI data invariably contain experiment-dependent noise. Ascertaining the reliability of PPIs collected from these diverse studies and scoring them to infer high-confidence networks is a non-trivial task. Moreover, a large number of PPIs share the same number of reported occurrences, making it impossible to distinguish the reliability of these PPIs and rank-order them. For example, for the data analyzed here, we found that the majority (>83%) of currently available human PPIs have been reported only once. Results In this work, we proposed an unsupervised statistical approach to score a set of diverse, experimentally identified PPIs from nine primary databases to create subsets of high-confidence human PPI networks. We evaluated this ranking method by comparing it with other methods and assessing their ability to retrieve protein associations from a number of diverse and independent reference sets. These reference sets contain known biological data that are either directly or indirectly linked to interactions between proteins. We quantified the average effect of using ranked protein interaction data to retrieve this information and showed that, when compared to randomly ranked interaction data sets, the proposed method created a larger enrichment (~134%) than either ranking based on the hypergeometric test (~109%) or occurrence ranking (~46%). Conclusions From our evaluations, it was clear that ranked interactions were always of value because higher-ranked PPIs had a higher likelihood of retrieving high-confidence experimental data. Reducing the noise inherent in aggregated experimental PPIs via our ranking scheme further increased the accuracy and enrichment of PPIs derived from a number of biologically relevant data sets.
These results suggest that using our high-confidence
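
The hypergeometric-test ranking used above as a comparison baseline scores a protein pair by how surprising its number of co-reports is. A hedged sketch of that style of score follows; the function name and exact parameterisation are assumptions, and the paper's implementation may differ:

```python
from scipy.stats import hypergeom

def hypergeom_pvalue(total, deg_a, deg_b, shared):
    """P-value for observing >= `shared` common reports between two
    proteins appearing in `deg_a` and `deg_b` reports respectively,
    out of `total` reports overall. Smaller p means a more surprising
    overlap and hence a higher rank."""
    return hypergeom.sf(shared - 1, total, deg_a, deg_b)

p = hypergeom_pvalue(total=1000, deg_a=20, deg_b=30, shared=5)
```

Sharing 5 reports when the expected overlap is only 0.6 yields a very small p-value, so such a pair would be ranked well above pairs whose overlap matches chance.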

  2. 40 CFR 1054.310 - How must I select engines for production-line testing?

    Science.gov (United States)

    2010-07-01

    ...% confidence intervals for a one-tail distribution. σ = Test sample standard deviation (see paragraph (c)(2) of this section). x = Mean of emission test results of the sample. STD = Emission standard (or family...)). (e) After each new test, recalculate the required sample size using the updated mean values, standard...

  3. 40 CFR 1045.310 - How must I select engines for production-line testing?

    Science.gov (United States)

    2010-07-01

    ... select and test one more engine. Then, calculate the required sample size for the model year as described.... It defines 95% confidence intervals for a one-tail distribution. σ = Test sample standard deviation (see paragraph (c)(2) of this section). x = Mean of emission test results of the sample. STD = Emission...
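
Both CFR excerpts above elide the actual recalculation formula. Under one commonly cited reading of such production-line testing rules, the required sample size is N = (t95 · σ / (x̄ − STD))², with t95 the one-tail 95% Student-t value; the sketch below assumes that reading and is not a verbatim rendering of 40 CFR 1045.310 or 1054.310:

```python
import math
from statistics import mean, stdev
from scipy.stats import t

def required_sample_size(results, emission_std):
    """Recalculate the required production-line sample size after each new
    test. NOTE: the formula is an assumed reading of the elided regulatory
    text, not a quotation of the CFR."""
    n = len(results)
    xbar, sigma = mean(results), stdev(results)
    t95 = t.ppf(0.95, df=n - 1)   # one-tail 95% Student-t value
    return max(1, math.ceil((t95 * sigma / (xbar - emission_std)) ** 2))

# Engines emitting well below the standard need very few further tests
n_required = required_sample_size([1.0, 1.1, 0.9, 1.05, 0.95], emission_std=2.0)
```

The "recalculate after each new test" step in paragraph (e) then amounts to calling this function again with the updated list of results.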

  4. What if there were no significance tests?

    CERN Document Server

    Harlow, Lisa L; Steiger, James H

    2013-01-01

    This book is the result of a spirited debate stimulated by a recent meeting of the Society of Multivariate Experimental Psychology. Although the viewpoints span a range of perspectives, the overriding theme that emerges states that significance testing may still be useful if supplemented with some or all of the following -- Bayesian logic, caution, confidence intervals, effect sizes and power, other goodness of approximation measures, replication and meta-analysis, sound reasoning, and theory appraisal and corroboration. The book is organized into five general areas. The first presents an overview of significance testing issues that sythesizes the highlights of the remainder of the book. The next discusses the debate in which significance testing should be rejected or retained. The third outlines various methods that may supplement current significance testing procedures. The fourth discusses Bayesian approaches and methods and the use of confidence intervals versus significance tests. The last presents the p...

  5. Interval stability for complex systems

    Science.gov (United States)

    Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.

    2018-04-01

    Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system's susceptibility to external perturbations, which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various kinds, such as power grids and neural networks.
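
A Monte Carlo estimate of interval basin stability along the lines described: draw random perturbations of the attractor state, evolve the system for the given time, and count the fraction that has returned. The toy contracting map and perturbation model below are illustrative stand-ins, not the paper's power-grid or neural-network examples:

```python
import numpy as np

def interval_basin_stability(step, x_star, draw_perturbation, returned,
                             horizon, n_trials=1000, seed=0):
    """Fraction of random perturbations from which the system is back
    on the attractor within `horizon` steps (an IBS estimate)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        x = x_star + draw_perturbation(rng)   # kick the system off the attractor
        for _ in range(horizon):
            x = step(x)                       # evolve for the interval of interest
        hits += returned(x)
    return hits / n_trials

# Toy contracting map x -> 0.5 x with fixed point 0: every perturbation of
# size <= 1 shrinks below |x| < 0.1 within 10 steps, so IBS = 1 here.
ibs = interval_basin_stability(step=lambda x: 0.5 * x, x_star=0.0,
                               draw_perturbation=lambda r: r.uniform(-1, 1),
                               returned=lambda x: abs(x) < 0.1, horizon=10)
```

Shrinking `horizon` toward zero lowers the estimate toward the fraction of perturbations that land inside the return region immediately, which is how the interval measures interpolate between short-time and asymptotic behavior.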

  6. Transparency as an element of public confidence

    International Nuclear Information System (INIS)

    Kim, H.K.

    2007-01-01

    In modern society, there are increasing demands for greater transparency. The topic has been discussed in the social sciences with respect to corruption and ethics issues. The need for greater openness and transparency in nuclear regulation is widely recognised as public expectations of the regulator grow. It is also related to digital and information technology, which enables disclosure of every activity and piece of information of individuals and organisations, characterised by numerous 'small brothers'. Transparency has become a key word in this ubiquitous era. Transparency in regulatory activities needs to be understood in the following contexts. First, transparency is one of the elements that build public confidence in the regulator and eventually achieve the regulatory goal of providing the public with satisfaction regarding nuclear safety. Transparent evidence of the competence, independence, ethics and integrity of the regulatory body's working processes would enhance public confidence. Second, activities that transmit information on nuclear safety, and preparedness to be accessed, are different types of transparency. Communication is an active method of transparency; with the increasing use of websites, 'digital transparency' is also discussed as a passive one. Transparency in the regulatory process may be more important than transparency of content. Simply providing more information is of little value, and specific information may need to be protected for security reasons. Third, transparency should be discussed from international, national and organisational perspectives. It has been demanded through international instruments. In each country, transparency is demanded by residents, the public, NGOs, the media and other stakeholders. Employees also demand more transparency in operating and regulatory organisations; whistle-blowers may appear unless they are satisfied. Fourth, pursuing transparency may cause undue social costs or adverse effects.
Over-transparency may decrease public confidence and the process for transparency may also hinder

  7. National Debate and Public Confidence in Sweden

    International Nuclear Information System (INIS)

    Lindquist, Ted

    2014-01-01

    Ted Lindquist, coordinator of the Association of Swedish Municipalities with Nuclear Facilities (KSO), closed the first day of the conference. He described the nuclear landscape in Sweden, noting in particular that over time there has been rather good support from the population. He explained that the reason could be the public's confidence in the national debate. On a more local scale, Ted Lindquist showed how overwhelmingly strong the support was in towns where the industry would like to operate long-term storage facilities.

  8. False memories and memory confidence in borderline patients.

    Science.gov (United States)

    Schilling, Lisa; Wingenfeld, Katja; Spitzer, Carsten; Nagel, Matthias; Moritz, Steffen

    2013-12-01

    Mixed results have been obtained regarding memory in patients with borderline personality disorder (BPD). Prior reports and anecdotal evidence suggest that patients with BPD are prone to false memories, but this assumption has not yet been put to firm empirical test. Memory accuracy and confidence were assessed in 20 BPD patients and 22 healthy controls using a visual variant of the false memory (Deese-Roediger-McDermott) paradigm which involved a negatively and a positively valenced picture. Groups did not differ regarding veridical item recognition. Importantly, patients did not display more false memories than controls. At trend level, borderline patients rated more items as new with high confidence compared to healthy controls. The results tentatively suggest that borderline patients show uncompromised visual memory functions and display no increased susceptibility to distorted memories. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Probabilistic confidence for decisions based on uncertain reliability estimates

    Science.gov (United States)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  10. Replication, falsification, and the crisis of confidence in social psychology.

    Science.gov (United States)

    Earp, Brian D; Trafimow, David

    2015-01-01

    The (latest) crisis in confidence in social psychology has generated much heated discussion about the importance of replication, including how it should be carried out as well as interpreted by scholars in the field. For example, what does it mean if a replication attempt "fails"-does it mean that the original results, or the theory that predicted them, have been falsified? And how should "failed" replications affect our belief in the validity of the original research? In this paper, we consider the replication debate from a historical and philosophical perspective, and provide a conceptual analysis of both replication and falsification as they pertain to this important discussion. Along the way, we highlight the importance of auxiliary assumptions (for both testing theories and attempting replications), and introduce a Bayesian framework for assessing "failed" replications in terms of how they should affect our confidence in original findings.

  12. Diagnosing Anomalous Network Performance with Confidence

    Energy Technology Data Exchange (ETDEWEB)

    Settlemyer, Bradley W [ORNL; Hodson, Stephen W [ORNL; Kuehn, Jeffery A [ORNL; Poole, Stephen W [ORNL

    2011-04-01

    Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.
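
The point about summary statistics versus empirical distributions is easy to demonstrate: for multi-modal latency data of the kind described for the Infiniband network, mean and standard deviation describe a distribution the network almost never produces, while empirical quantiles expose both modes. A small illustration with synthetic data (not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic bimodal "message latency" sample: two equally likely network paths
latency = np.concatenate([rng.normal(10.0, 0.5, 500),
                          rng.normal(20.0, 0.5, 500)])

# Gaussian summary: roughly 15 +/- 5, a latency the system rarely exhibits
summary = (latency.mean(), latency.std())

# Empirical quantiles reveal the two modes near 10 and 20
q05, q25, q75, q95 = np.quantile(latency, [0.05, 0.25, 0.75, 0.95])
```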

  13. Short-Term Wind Power Interval Forecasting Based on an EEMD-RT-RVM Model

    Directory of Open Access Journals (Sweden)

    Haixiang Zang

    2016-01-01

    Full Text Available Accurate short-term wind power forecasting is important for improving the security and economic success of power grids. Existing wind power forecasting methods are mostly types of deterministic point forecasting. Deterministic point forecasting is vulnerable to forecasting errors and cannot effectively deal with the random nature of wind power. In order to solve the above problems, we propose a short-term wind power interval forecasting model based on ensemble empirical mode decomposition (EEMD), runs test (RT), and relevance vector machine (RVM). First, in order to reduce the complexity of data, the original wind power sequence is decomposed into a plurality of intrinsic mode function (IMF) components and a residual (RES) component by using EEMD. Next, we use the RT method to reconstruct the components and obtain three new components characterized by the fine-to-coarse order. Finally, we obtain the overall forecasting results (with preestablished confidence levels) by superimposing the forecasting results of each new component. Our results show that, compared with existing methods, our proposed short-term interval forecasting method has smaller forecasting errors, narrower interval widths, and larger interval coverage percentages. Ultimately, our forecasting model is more suitable than other forecasting methods for engineering applications in new energy.
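The two criteria used above to compare interval forecasts, coverage percentage and interval width, can be computed directly. The function name and data here are hypothetical, not from the paper:

```python
def interval_metrics(lower, upper, actual):
    """Prediction interval coverage percentage (PICP) and mean interval
    width -- the two criteria used to compare interval forecasts."""
    n = len(actual)
    covered = sum(1 for lo, hi, y in zip(lower, upper, actual) if lo <= y <= hi)
    picp = covered / n
    mean_width = sum(hi - lo for lo, hi in zip(lower, upper)) / n
    return picp, mean_width

lo = [1.0, 2.0, 3.0, 4.0]   # hypothetical lower forecast bounds
hi = [2.0, 4.0, 5.0, 6.0]   # hypothetical upper forecast bounds
y  = [1.5, 3.0, 5.5, 4.2]   # observed wind power
print(interval_metrics(lo, hi, y))  # (0.75, 1.75)
```

A good interval forecast maximizes coverage while keeping the mean width small; the two pull against each other.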

  14. The relationship between confidence in charitable organizations and volunteering revisited

    NARCIS (Netherlands)

    Bekkers, René H.F.P.; Bowman, Woods

    2009-01-01

    Confidence in charitable organizations (charitable confidence) would seem to be an important prerequisite for philanthropic behavior. Previous research relying on cross-sectional data has suggested that volunteering promotes charitable confidence and vice versa. This research note, using new

  15. CONSEL: for assessing the confidence of phylogenetic tree selection.

    Science.gov (United States)

    Shimodaira, H; Hasegawa, M

    2001-12-01

    CONSEL is a program to assess the confidence of the tree selection by giving the p-values for the trees. The main thrust of the program is to calculate the p-value of the Approximately Unbiased (AU) test using the multi-scale bootstrap technique. This p-value is less biased than the other conventional p-values such as the Bootstrap Probability (BP), the Kishino-Hasegawa (KH) test, the Shimodaira-Hasegawa (SH) test, and the Weighted Shimodaira-Hasegawa (WSH) test. CONSEL calculates all these p-values from the output of the phylogeny program packages such as Molphy, PAML, and PAUP*. Furthermore, CONSEL is applicable to a wide class of problems where the BPs are available. The programs are written in C language. The source code for Unix and the executable binary for DOS are found at http://www.ism.ac.jp/~shimo/ (contact: shimo@ism.ac.jp).
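As a minimal sketch of the plain Bootstrap Probability (BP) that the AU test refines with multiscale bootstrap: resample per-site log-likelihood differences and count how often the tree is favored. The data below are hypothetical, and this is not CONSEL's algorithm:

```python
import random

def bootstrap_probability(site_scores, trials=2000, seed=0):
    """Plain BP: fraction of bootstrap resamples of per-site log-likelihood
    differences whose sum favors the tree. (The AU test corrects the known
    bias of this estimate via multiscale bootstrap.)"""
    rng = random.Random(seed)
    n = len(site_scores)
    wins = 0
    for _ in range(trials):
        resample = [site_scores[rng.randrange(n)] for _ in range(n)]
        if sum(resample) > 0:
            wins += 1
    return wins / trials

# Hypothetical per-site log-likelihood differences (tree A minus tree B):
diffs = [0.3, -0.1, 0.5, -0.4, 0.2, 0.1, -0.2, 0.4]
print(bootstrap_probability(diffs))
```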

  16. Metacognition and Confidence: Comparing Math to Other Academic Subjects

    Directory of Open Access Journals (Sweden)

    Shanna Erickson

    2015-06-01

    Full Text Available Two studies addressed student metacognition in math, measuring confidence accuracy about math performance. Underconfidence would be expected in light of pervasive math anxiety. However, one might alternatively expect overconfidence based on previous results showing overconfidence in other subject domains. Metacognitive judgments and performance were assessed for biology, literature, and mathematics tests. In Study 1, high school students took three different tests and provided estimates of their performance both before and after taking each test. In Study 2, undergraduates similarly took three shortened SAT II Subject Tests. Students were overconfident in predicting math performance, indeed showing greater overconfidence compared to other academic subjects. It appears that both overconfidence and anxiety can adversely affect metacognitive ability and can lead to math avoidance. The results have implications for educational practice and other environments that require extensive use of math.
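Confidence accuracy of the kind measured in these studies is often summarized as a bias score, mean predicted performance minus mean actual performance. The function and data below are hypothetical, not the studies' measure:

```python
def overconfidence(predicted_pct, actual_pct):
    """Bias score: mean predicted minus mean actual performance.
    Positive values indicate overconfidence; negative, underconfidence."""
    n = len(predicted_pct)
    return sum(p - a for p, a in zip(predicted_pct, actual_pct)) / n

# Hypothetical math-test data: five students predict their scores.
predicted = [85, 70, 90, 60, 75]
actual    = [70, 65, 80, 55, 60]
print(overconfidence(predicted, actual))  # 10.0 -> overconfident
```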

  17. Tumor phenotype and breast density in distinct categories of interval cancer: results of population-based mammography screening in Spain.

    Science.gov (United States)

    Domingo, Laia; Salas, Dolores; Zubizarreta, Raquel; Baré, Marisa; Sarriugarte, Garbiñe; Barata, Teresa; Ibáñez, Josefa; Blanch, Jordi; Puig-Vives, Montserrat; Fernández, Ana; Castells, Xavier; Sala, Maria

    2014-01-10

    Interval cancers are tumors arising after a negative screening episode and before the next screening invitation. They can be classified into true interval cancers, false-negatives, minimal-sign cancers, and occult tumors based on mammographic findings in screening and diagnostic mammograms. This study aimed to describe tumor-related characteristics and the association of breast density and tumor phenotype within four interval cancer categories. We included 2,245 invasive tumors (1,297 screening-detected and 948 interval cancers) diagnosed from 2000 to 2009 among 645,764 women aged 45 to 69 who underwent biennial screening in Spain. Interval cancers were classified by a semi-informed retrospective review into true interval cancers (n = 455), false-negatives (n = 224), minimal-sign (n = 166), and occult tumors (n = 103). Breast density was evaluated using Boyd's scale and was conflated into: 75%. Tumor-related information was obtained from cancer registries and clinical records. Tumor phenotype was defined as follows: luminal A: ER+/HER2- or PR+/HER2-; luminal B: ER+/HER2+ or PR+/HER2+; HER2: ER-/PR-/HER2+; triple-negative: ER-/PR-/HER2-. The association of tumor phenotype and breast density was assessed using a multinomial logistic regression model. Adjusted odds ratios (OR) and 95% confidence intervals (95% CI) were calculated. All statistical tests were two-sided. Forty-eight percent of interval cancers were true interval cancers and 23.6% false-negatives. True interval cancers were associated with HER2 and triple-negative phenotypes (OR = 1.91 (95% CI:1.22-2.96), OR = 2.07 (95% CI:1.42-3.01), respectively) and extremely dense breasts (>75%) (OR = 1.67 (95% CI:1.08-2.56)). However, among true interval cancers a higher proportion of triple-negative tumors was observed in predominantly fatty breasts (breasts (28.7%, 21.4%, 11.3% and 14.3%, respectively; cancers, extreme breast density being strongly associated with occult tumors (OR
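The adjusted ORs above come from multinomial logistic regression; as a simpler sketch, an unadjusted odds ratio with a Wald 95% CI from a 2x2 table can be computed as follows. The counts are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: triple-negative phenotype among true interval
# vs. screen-detected cancers.
print(odds_ratio_ci(60, 395, 80, 1217))  # OR ~2.31, CI ~(1.62, 3.29)
```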

  18. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
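The simplest of the statistics the report describes, the mean of interval-valued data, can be sketched directly; the function and data are illustrative (other statistics, notably the variance, are much harder to bound and can be NP-hard in general):

```python
def interval_mean(data):
    """Mean of interval-valued data: under epistemic interval uncertainty
    the sample mean is itself an interval, [mean of lower endpoints,
    mean of upper endpoints]."""
    n = len(data)
    return (sum(lo for lo, hi in data) / n,
            sum(hi for lo, hi in data) / n)

# Three measurements reported as intervals (e.g., instrument resolution):
data = [(1.0, 2.0), (1.5, 2.5), (3.0, 3.2)]
print(interval_mean(data))  # lower ~1.83, upper ~2.57
```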

  19. Experimental congruence of interval scale production from paired comparisons and ranking for image evaluation

    Science.gov (United States)

    Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.

    2003-12-01

    Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and Apple Cinema Display were used for stimulus presentation. Observers performed rank order and paired comparisons tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects, (5 females, 14 males) ranging from 19-51 years of age participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
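The Bradley-Terry model used for the paired-comparisons data can be fit with the classical MM (minorization-maximization) iteration, sketched below with hypothetical win counts (this is not the authors' estimation code):

```python
def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry worth parameters by the classical MM iteration.
    wins[i][j] = number of times item i was preferred to item j."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        total = sum(new_p)
        p = [x / total for x in new_p]  # normalize worths to sum to 1
    return p

# Hypothetical paired-comparison counts for three images:
wins = [[0, 7, 9],
        [3, 0, 6],
        [1, 4, 0]]
print(bradley_terry(wins))  # worth estimates; image 0 strongest here
```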

  20. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
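Coverage of an interval procedure can be checked by simulation, in the spirit of the study above. This sketch shows nominal 95% z-intervals undercovering at small n (a t critical value would be needed); all names and settings are illustrative:

```python
import random
import statistics

def coverage(n=20, true_mean=0.0, sims=2000, z=1.96, seed=1):
    """Fraction of simulated samples whose nominal 95% z-interval for the
    mean contains the true value -- the 'coverage' being examined."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        sample = [rng.gauss(true_mean, 1.0) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if m - z * se <= true_mean <= m + z * se:
            hits += 1
    return hits / sims

# With n = 20 and an estimated SD, coverage falls short of the nominal
# 95%, and the undercoverage worsens as n shrinks.
print(coverage())
```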

  1. Confidence crisis of results in biomechanics research.

    Science.gov (United States)

    Knudson, Duane

    2017-11-01

    Many biomechanics studies have small sample sizes and incorrect statistical analyses, so reporting of inaccurate inferences and inflated magnitude of effects are common in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase the confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of statistical analyses used, and increase the acceptance of replication studies to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.

  2. Technology in a crisis of confidence

    Energy Technology Data Exchange (ETDEWEB)

    Damodaran, G R

    1979-04-01

    The power that technological progress has given to engineers is examined to see if there has been a corresponding growth in human happiness. A credit/debit approach is discussed, whereby technological advancement is measured against the criteria of social good. The credit side includes medicine, agriculture, and energy use, while the debit side lists pollution, unequal distribution of technology and welfare, modern weaponry, resource depletion, and a possible decline in the quality of life. The present anti-technologists claim the debit side is now predominant, but the author challenges this position by examining the role of technology and the engineer in the society. He sees a need for renewed self-confidence and a sense of direction among engineers, but is generally optimistic that technology and civilization will continue to be intertwined. (DCK)

  3. Considering public confidence in developing regulatory programs

    International Nuclear Information System (INIS)

    Collins, S.J.

    2001-01-01

    In the area of public trust, as in any investment, planning and strategy are important. While it is accepted in the United States that an essential part of our mission is to leverage our resources toward improving Public Confidence, this performance goal must be planned for, managed and measured. Similar to our premier performance goal of Maintaining Safety, a strategy must be developed and integrated not only with our external stakeholders but with internal regulatory staff as well. In order to do that, business is to be conducted in an open environment, the basis for regulatory decisions has to be available through public documents and public meetings, and communication must be done in clear and consistent terms. (N.C.)

  4. The application of deep confidence network in the problem of image recognition

    Directory of Open Access Journals (Sweden)

    Chumachenko О.І.

    2016-12-01

    Full Text Available In order to study the concept of deep learning, in particular the replacement of a multilayer perceptron with a corresponding deep belief network, computer simulations of the learning process on a test sample were carried out. The multilayer perceptron was replaced by a deep belief network consisting of successive restricted Boltzmann machines. After training the deep belief network with a layer-wise training algorithm, it was found that the use of deep belief networks greatly improves the accuracy of multilayer perceptron training by the backpropagation method.

  5. The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis.

    Science.gov (United States)

    Wixted, John T; Wells, Gary L

    2017-05-01

    The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).

  6. High-intensity cycle interval training improves cycling and running performance in triathletes.

    Science.gov (United States)

    Etxebarria, Naroa; Anson, Judith M; Pyne, David B; Ferguson, Richard A

    2014-01-01

    Effective cycle training for triathlon is a challenge for coaches. We compared the effects of two variants of cycle high-intensity interval training (HIT) on triathlon-specific cycling and running. Fourteen moderately-trained male triathletes (VO2peak 58.7 ± 8.1 mL kg(-1) min(-1); mean ± SD) completed on separate occasions a maximal incremental test (VO2peak and maximal aerobic power), 16 × 20 s cycle sprints and a 1-h triathlon-specific cycle followed immediately by a 5 km run time trial. Participants were then pair-matched and assigned randomly to either a long high-intensity interval training (LONG) (6-8 × 5 min efforts) or short high-intensity interval training (SHORT) (9-11 × 10, 20 and 40 s efforts) HIT cycle training intervention. Six training sessions were completed over 3 weeks before participants repeated the baseline testing. Both groups had an ∼7% increase in VO2peak (SHORT 7.3%, ±4.6%; mean, ±90% confidence limits; LONG 7.5%, ±1.7%). There was a moderate improvement in mean power for both the SHORT (10.3%, ±4.4%) and LONG (10.7%, ±6.8%) groups during the last eight 20-s sprints. There was a small to moderate decrease in heart rate, blood lactate and perceived exertion in both groups during the 1-h triathlon-specific cycling but only the LONG group had a substantial decrease in the subsequent 5-km run time (64, ±59 s). Moderately-trained triathletes should use both short and long high-intensity intervals to improve cycling physiology and performance. Longer 5-min intervals on the bike are more likely to benefit 5 km running performance.

  7. Confidence assessment. Site descriptive modelling SDM-Site Forsmark

    International Nuclear Information System (INIS)

    2008-09-01

    distribution and size-intensity models for fractures at repository depth can only be reduced by data from underground, i.e. from fracture mapping of tunnel walls etc. Specifically it will be necessary to carry out statistical modelling of fractures in a DFN study at depth during construction work on the access ramp and shafts. Uncertainties in stress magnitude will be reduced by observations and measurements of deformation with back analysis during the construction phase. Underground mapping data from deposition tunnels will allow for a division of the fine-grained granitoid into different rock types. This will enable thermal optimisation of the repository. The next step in confidence building would be to predict conditions and impacts from underground tunnels. Tunnel data will provide information about the fracture size distribution at the relevant depths. The underground excavations will also provide possibilities for short-range interference tests at relevant depth. Uncertainties in understanding chemical processes may be reduced by assessing results from underground monitoring (groundwater chemistry; fracture minerals etc) of the effects of drawdown and inflows during excavation. The hydrogeological DFN fitting parameters for fractures within the repository volume can only be properly constrained by mapping of flowing or potentially open fracture statistics in tunnels. Surface outcrop statistics are not relevant for properties at repository depth. During underground investigations, the flowing fracture frequencies in tunnels and investigations of couplings between rock mechanical properties and fracture transmissivities may give clues to the extent of in-plane flow channelling which will lead to more reliable models for transport from the repository volume, particularly close to deposition holes where the most important retention and retardation of any released radionuclides may occur in the rock barrier.

  8. Chinese Management Research Needs Self-Confidence but not Over-confidence

    DEFF Research Database (Denmark)

    Li, Xin; Ma, Li

    2018-01-01

    Chinese management research aims to contribute to global management knowledge by offering rigorous and innovative theories and practical recommendations both for managing in China and outside. However, two seemingly opposite directions that researchers are taking could prove detrimental to the healthy development of Chinese management research. We argue that the two directions share a common ground that lies in the mindset regarding the confidence in the work on and from China. One direction of simply following the American mainstream on academic rigor demonstrates a lack of self-confidence, limiting theoretical innovation and practical relevance. Yet going in the other direction of overly indigenous research reflects over-confidence, often isolating the Chinese management research from the mainstream academia and at times, even becoming anti-science. A more integrated approach of conducting...

  9. Thought confidence as a determinant of persuasion: the self-validation hypothesis.

    Science.gov (United States)

    Petty, Richard E; Briñol, Pablo; Tormala, Zakary L

    2002-05-01

    Previous research in the domain of attitude change has described 2 primary dimensions of thinking that impact persuasion processes and outcomes: the extent (amount) of thinking and the direction (valence) of issue-relevant thought. The authors examined the possibility that another, more meta-cognitive aspect of thinking is also important-the degree of confidence people have in their own thoughts. Four studies test the notion that thought confidence affects the extent of persuasion. When positive thoughts dominate in response to a message, increasing confidence in those thoughts increases persuasion, but when negative thoughts dominate, increasing confidence decreases persuasion. In addition, using self-reported and manipulated thought confidence in separate studies, the authors provide evidence that the magnitude of the attitude-thought relationship depends on the confidence people have in their thoughts. Finally, the authors also show that these self-validation effects are most likely in situations that foster high amounts of information processing activity.

  10. Haemostatic reference intervals in pregnancy

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

    2010-01-01

    Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age...-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period using only the uncomplicated pregnancies were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S... largely unchanged during pregnancy, delivery, and postpartum and were within non-pregnant reference intervals. However, levels of fibrinogen, D-dimer, and coagulation factors VII, VIII, and IX increased markedly. Protein S activity decreased substantially, while free protein S decreased slightly and total...

  11. Inverse Interval Matrix: A Survey

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Farhadsefat, R.

    2011-01-01

    Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords: interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf

  12. Five-year risk of interval-invasive second breast cancer.

    Science.gov (United States)

    Lee, Janie M; Buist, Diana S M; Houssami, Nehmat; Dowling, Emily C; Halpern, Elkan F; Gazelle, G Scott; Lehman, Constance D; Henderson, Louise M; Hubbard, Rebecca A

    2015-07-01

    Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. © The Author 2015. Published by Oxford University Press. 
All rights reserved.

  13. A new model for cork weight estimation in Northern Portugal with methodology for construction of confidence intervals

    Science.gov (United States)

    Teresa J.F. Fonseca; Bernard R. Parresol

    2001-01-01

    Cork, a unique biological material, is a highly valued non-timber forest product. Portugal is the leading producer of cork with 52 percent of the world production. Tree cork weight models have been developed for Southern Portugal, but there are no representative published models for Northern Portugal. Because cork trees may have a different form between Northern and...

  14. A Validation Study of the Rank-Preserving Structural Failure Time Model: Confidence Intervals and Unique, Multiple, and Erroneous Solutions.

    Science.gov (United States)

    Ouwens, Mario; Hauch, Ole; Franzén, Stefan

    2018-05-01

    The rank-preserving structural failure time model (RPSFTM) is used for health technology assessment submissions to adjust for switching patients from reference to investigational treatment in cancer trials. It uses counterfactual survival (survival when only reference treatment would have been used) and assumes that, at randomization, the counterfactual survival distribution for the investigational and reference arms is identical. Previous validation reports have assumed that patients in the investigational treatment arm stay on therapy throughout the study period. To evaluate the validity of the RPSFTM at various levels of crossover in situations in which patients are taken off the investigational drug in the investigational arm. The RPSFTM was applied to simulated datasets differing in percentage of patients switching, time of switching, underlying acceleration factor, and number of patients, using exponential distributions for the time on investigational and reference treatment. There were multiple scenarios in which two solutions were found: one corresponding to identical counterfactual distributions, and the other to two different crossing counterfactual distributions. The same was found for the hazard ratio (HR). Unique solutions were observed only when switching patients were on investigational treatment for <40% of the time that patients in the investigational arm were on treatment. Distributions other than exponential could have been used for time on treatment. An HR equal to 1 is a necessary but not always sufficient condition to indicate acceleration factors associated with equal counterfactual survival. Further assessment to distinguish crossing counterfactual curves from equal counterfactual curves is especially needed when the time that switchers stay on investigational treatment is relatively long compared to the time direct starters stay on investigational treatment.
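The RPSFTM counterfactual construction described above is commonly written as U = T_ref + exp(psi) * T_inv, where T_ref and T_inv are the times spent on reference and investigational treatment. This sketch assumes that convention (sign conventions for psi vary across papers):

```python
import math

def counterfactual_time(t_ref, t_inv, psi):
    """RPSFTM counterfactual survival time: observed time is split into
    time on reference treatment (t_ref) and time on investigational
    treatment (t_inv); the treatment effect rescales t_inv by exp(psi)."""
    return t_ref + math.exp(psi) * t_inv

# A switcher: 6 months on reference, then 12 months on the investigational
# drug. With acceleration factor exp(-psi) = 2 (psi = -ln 2), the
# counterfactual time had they never switched is about 6 + 12/2 = 12 months.
print(counterfactual_time(6.0, 12.0, -math.log(2.0)))
```

The estimation step (not shown) searches for the psi that makes the counterfactual distributions of the two randomized arms identical; the abstract's point is that this search can admit multiple or erroneous solutions.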

  15. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data: Methods and Software

    Science.gov (United States)

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-01-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…
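Coefficient alpha itself is straightforward to compute from item scores; this sketch uses hypothetical data and does not implement the robust or missing-data methods the paper proposes:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's coefficient alpha from item-score columns
    (items[j][i] = score of respondent i on item j)."""
    k = len(items)
    item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(len(items[0]))]
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 3-item scale, 5 respondents:
items = [[2, 4, 3, 5, 4],
         [3, 4, 2, 5, 3],
         [2, 5, 3, 4, 4]]
print(round(cronbach_alpha(items), 3))  # 0.871
```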

  17. Meta-analysis to refine map position and reduce confidence intervals for delayed canopy wilting QTLs in soybean

    Science.gov (United States)

    Slow canopy wilting in soybean has been identified as a potentially beneficial trait for ameliorating drought effects on yield. Previous research identified QTLs for slow wilting from two different bi-parental populations and this information was combined with data from three other populations to id...

  18. Noise annoyance from stationary sources: Relationships with exposure metric day-evening-night level (DENL) and their confidence intervals

    NARCIS (Netherlands)

    Miedema, H.M.E.; Vos, H.

    2004-01-01

    Relationships between exposure to noise [metric: day-evening-night levels (DENL)] from stationary sources (shunting yards, a seasonal industry, and other industries) and annoyance are presented. Curves are presented for expected annoyance score, the percentage "highly annoyed" (%HA, cutoff at 72 on

  19. Derivation of confidence intervals of service measures in a base-stock inventory control system with low-frequent demand

    DEFF Research Database (Denmark)

    Larsen, Christian

    We explore a base-stock system with backlogging where the demand process is a compound renewal process and the compound element is a delayed geometric distribution. For this setting it is proven in [4] that the long-run average service measures order fill rate (OFR) and volume fill rate (VFR) are...

  20. Derivation of confidence intervals of service measures in a base-stock inventory control system with low-frequent demand

    DEFF Research Database (Denmark)

    Larsen, Christian

    2011-01-01

    We explore a base-stock system with backlogging where the demand process is a compound renewal process and the compound element is a delayed geometric distribution. For this setting it holds that the long-run average service measures order fill rate (OFR) and volume fill rate (VFR) are equal in v...

  1. The theory of confidence-building measures

    International Nuclear Information System (INIS)

    Darilek, R.E.

    1992-01-01

    This paper discusses the theory of Confidence-Building Measures (CBMs) in two ways. First, it employs a top-down, deductively oriented approach to explain CBM theory in terms of the arms control goals and objectives to be achieved, the types of measures to be employed, and the problems or limitations likely to be encountered when applying CBMs to conventional or nuclear forces. The chapter as a whole asks how various types of CBMs might function during a political-military escalation from peacetime to a crisis and beyond (i.e. including conflict), as well as how they might operate in a de-escalatory environment. In pursuit of these overarching issues, the second section of the chapter raises a fundamental but complicating question: how might the next all-out war actually come about - by unpremeditated escalation resulting from misunderstanding or miscalculation, or by premeditation resulting in a surprise attack? The second section of the paper addresses this question, explores its various implications for CBMs, and suggests the potential contribution of different types of CBMs toward successful resolution of the issues involved.

  2. Trust versus confidence: Microprocessors and personnel monitoring

    International Nuclear Information System (INIS)

    Chiaro, P.J. Jr.

    1993-01-01

    Due to recent technological advances, substantial improvements have been made in personnel contamination monitoring. In all likelihood, these advances will close out the days of manually frisking personnel for radioactive contamination. Unfortunately, as microprocessor-based monitors become more widely used, not only at commercial power reactors but also at government facilities, questions concerning their trustworthiness arise. Algorithms make decisions that were previously made by technicians. Trust is placed not in technicians but in machines. In doing this it is assumed that the machine never misses. Inevitably, this trust drops, due largely to "false alarms." This is especially true when monitoring for alpha contamination. What is a "false alarm"? Do these machines and their algorithms that we put our trust in make mistakes? An analysis was performed on half-body and hand-and-foot monitors at Oak Ridge National Laboratory (ORNL) in order to justify the suggested confidence level used for alarm point determination. Sources used in this analysis had activities approximating ORNL's contamination limits

  4. Trust versus confidence: Microprocessors and personnel monitoring

    International Nuclear Information System (INIS)

    Chiaro, P.J. Jr.

    1994-01-01

    Due to recent technological advances, substantial improvements have been made in personnel contamination monitoring. In all likelihood, these advances will close out the days of manually frisking personnel for radioactive contamination. Unfortunately, as microprocessor-based monitors become more widely used, not only at commercial power reactors but also at government facilities, questions concerning their trustworthiness arise. Algorithms make decisions that were previously made by technicians. Trust is placed not in technicians but in machines. In doing this it is assumed that the machine never misses. Inevitably, this trust drops, due largely to ''false alarms''. This is especially true when monitoring for alpha contamination. What is a ''false alarm''? Do these machines and their algorithms that we put our trust in make mistakes? An analysis was performed on half-body and hand-and-foot monitors at Oak Ridge National Laboratory (ORNL) in order to justify the suggested confidence level used for alarm point determination. Sources used in this analysis had activities approximating ORNL's contamination limits

  5. Distribution of the product confidence limits for the indirect effect: Program PRODCLIN

    Science.gov (United States)

    MacKinnon, David P.; Fritz, Matthew S.; Williams, Jason; Lockwood, Chondra M.

    2010-01-01

    This article describes a program, PRODCLIN (distribution of the PRODuct Confidence Limits for INdirect effects), written for SAS, SPSS, and R, that computes confidence limits for the product of two normal random variables. The program is important because it can be used to obtain more accurate confidence limits for the indirect effect, as demonstrated in several recent articles (MacKinnon, Lockwood, & Williams, 2004; Pituch, Whittaker, & Stapleton, 2005). Tests of the significance of and confidence limits for indirect effects based on the distribution of the product method have more accurate Type I error rates and more power than other, more commonly used tests. Values for the two paths involved in the indirect effect and their standard errors are entered in the PRODCLIN program, and distribution of the product confidence limits are computed. Several examples are used to illustrate the PRODCLIN program. The PRODCLIN programs in rich text format may be downloaded from www.psychonomic.org/archive. PMID:17958149
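
    PRODCLIN evaluates the analytic distribution of the product of two normals; the same confidence limits can be approximated by Monte Carlo, shown here as a hedged stand-in for the analytic method (the path values a = 0.40 and b = 0.30 and their standard errors are hypothetical, and the two estimates are assumed independent):

```python
import random

random.seed(42)

def mc_product_ci(a, se_a, b, se_b, n_sim=200_000, alpha=0.05):
    """Monte Carlo confidence limits for the indirect effect a*b,
    assuming independent normal sampling distributions for each path."""
    draws = sorted(random.gauss(a, se_a) * random.gauss(b, se_b)
                   for _ in range(n_sim))
    return draws[int(n_sim * alpha / 2)], draws[int(n_sim * (1 - alpha / 2))]

lo, hi = mc_product_ci(0.40, 0.10, 0.30, 0.10)
print(f"95% CI for the indirect effect: ({lo:.3f}, {hi:.3f})")
```

    Note that the interval is not symmetric about a*b = 0.12; that asymmetry is exactly what distribution-of-product limits capture and symmetric normal-theory intervals miss.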

  6. Cytotoxicity testing of aqueous extract of bitter leaf ( Vernonia ...

    African Journals Online (AJOL)

    Cytotoxicity testing of aqueous extract of bitter leaf (Vernonia amygdalina Del.) and Sniper 1000EC (2,3-dichlorovinyl dimethyl phosphate) using the Allium cepa ... 96 hours and EC50 values at 95% confidence interval were determined from a plot of root length against sample concentrations using Microsoft Excel software.

  7. Dynamic Properties of QT Intervals

    Czech Academy of Sciences Publication Activity Database

    Halámek, Josef; Jurák, Pavel; Vondra, Vlastimil; Lipoldová, J.; Leinveber, Pavel; Plachý, M.; Fráňa, P.; Kára, T.

    2009-01-01

    Roč. 36, - (2009), s. 517-520 ISSN 0276-6574 R&D Projects: GA ČR GA102/08/1129; GA MŠk ME09050 Institutional research plan: CEZ:AV0Z20650511 Keywords : QT Intervals * arrhythmia diagnosis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering http://cinc.mit.edu/archives/2009/pdf/0517.pdf

  8. Interval matrices: Regularity generates singularity

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Shary, S.P.

    2018-01-01

    Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

  9. Chaotic dynamics from interspike intervals

    DEFF Research Database (Denmark)

    Pavlov, A N; Sosnovtseva, Olga; Mosekilde, Erik

    2001-01-01

    Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov expone...

  10. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

    CSIR Research Space (South Africa)

    Kirton, A

    2010-08-01

    Full Text Available Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations. A Kirton, B Scholes, S Archibald, CSIR Ecosystem Processes and Dynamics, Natural Resources and the Environment, P.O. Box 395, Pretoria, 0001, South... The report explains how prediction intervals (confidence intervals for predicted values) for allometric estimates can be obtained, using an example of estimating tree biomass from stem diameter, and how to deal with relationships which are in the power function form - a common form...
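
    The power-function relationship mentioned in the record is usually handled by fitting on the log-log scale and back-transforming, which makes the prediction interval asymmetric on the original scale. The sketch below makes stated assumptions: the diameter/biomass data are invented, and 2.45 approximates the t-quantile for 6 degrees of freedom.

```python
import math

# Toy stem-diameter (cm) and biomass (kg) pairs following a rough power law
diam = [5, 8, 12, 15, 20, 25, 30, 35]
mass = [1.1, 3.9, 11.8, 21.0, 46.0, 85.0, 140.0, 215.0]

# Fit log(mass) = b0 + b1*log(diam) by ordinary least squares
x = [math.log(d) for d in diam]
y = [math.log(m) for m in mass]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar
s2 = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)

def prediction_interval(d0, t=2.45):  # 2.45 ~ t(0.975, df = 6)
    """95% prediction interval for biomass at diameter d0, back-transformed
    from the log scale (hence asymmetric in kg)."""
    x0 = math.log(d0)
    se = math.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
    yhat = b0 + b1 * x0
    return math.exp(yhat - t * se), math.exp(yhat + t * se)

lo, hi = prediction_interval(18)
print(f"95% PI for biomass at 18 cm: ({lo:.1f}, {hi:.1f}) kg")
```

    The prediction-interval standard error uses the usual 1 + 1/n + (x0 - xbar)^2/Sxx inflation over the confidence interval for the mean, which is the distinction the report draws.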

  11. Evaluation of Healing Intervals of Incisional Skin Wounds of Goats ...

    African Journals Online (AJOL)

    The aim of this study was to compare the healing intervals among simple interrupted (SI), ford interlocking (FI) and subcuticular (SC) suture patterns in goats. We hypothesized that these common suture patterns used for closure of incisional skin wounds may have effect on the healing interval. To test this hypothesis, two ...

  12. T(peak)T(end) interval in long QT syndrome

    DEFF Research Database (Denmark)

    Kanters, Jørgen Kim; Haarmark, Christian; Vedel-Larsen, Esben

    2008-01-01

    BACKGROUND: The T(peak)T(end) (T(p)T(e)) interval is believed to reflect the transmural dispersion of repolarization. Accordingly, it should be a risk factor in long QT syndrome (LQTS). The aim of the study was to determine the effect of genotype on T(p)T(e) interval and test whether it was relat...

  13. Confidence and self-attribution bias in an artificial stock market

    Science.gov (United States)

    Bertella, Mario A.; Pires, Felipe R.; Rego, Henio H. A.; Vodenska, Irena; Stanley, H. Eugene

    2017-01-01

    Using an agent-based model we examine the dynamics of stock price fluctuations and their rates of return in an artificial financial market composed of fundamentalist and chartist agents with and without confidence. We find that chartist agents who are confident generate higher price and rate of return volatilities than those who are not. We also find that kurtosis and skewness are lower in our simulation study of agents who are not confident. We show that the stock price and confidence index—both generated by our model—are cointegrated and that stock price affects confidence index but confidence index does not affect stock price. We next compare the results of our model with the S&P 500 index and its respective stock market confidence index using cointegration and Granger tests. As in our model, we find that stock prices drive their respective confidence indices, but that the opposite relationship, i.e., the assumption that confidence indices drive stock prices, is not significant. PMID:28231255

  14. The confidence in diabetes self-care scale

    DEFF Research Database (Denmark)

    Van Der Ven, Nicole C W; Weinger, Katie; Yi, Joyce

    2003-01-01

    OBJECTIVE: To examine psychometric properties of the Confidence in Diabetes Self-Care (CIDS) scale, a newly developed instrument assessing diabetes-specific self-efficacy in Dutch and U.S. patients with type 1 diabetes. RESEARCH DESIGN AND METHODS: Reliability and validity of the CIDS scale were evaluated in Dutch (n = 151) and U.S. (n = 190) outpatients with type 1 diabetes. In addition to the CIDS scale, assessment included HbA(1c), emotional distress, fear of hypoglycemia, self-esteem, anxiety, depression, and self-care behavior. The Dutch sample completed additional measures on perceived burden and importance of self-care. Test-retest reliability was established in a second Dutch sample (n = 62). RESULTS: Internal consistency (Cronbach's alpha = 0.86 for Dutch patients and 0.90 for U.S. patients) and test-retest reliability (Spearman's r = 0.85, P

  15. Working Memory Capacity, Confidence and Scientific Thinking

    Science.gov (United States)

    Al-Ahmadi, Fatheya; Oraif, Fatima

    2009-01-01

    Working memory capacity is now well established as a rate determining factor in much learning and assessment, especially in the sciences. Most of the research has focussed on performance in tests and examinations in subject areas. This paper outlines some exploratory work in which other outcomes are related to working memory capacity. Confidence…

  16. Sources of sport confidence, imagery type and performance among competitive athletes: the mediating role of sports confidence.

    Science.gov (United States)

    Levy, A R; Perry, J; Nicholls, A R; Larkin, D; Davies, J

    2015-01-01

    This study explored the mediating role of sport confidence upon (1) the sources of sport confidence-performance relationship and (2) the imagery-performance relationship. Participants were 157 competitive athletes who completed state measures of confidence level/sources, imagery type and performance within one hour after competition. Among the current sample, confirmatory factor analysis revealed appropriate support for the nine-factor SSCQ and the five-factor SIQ. Mediational analysis revealed that sport confidence had a mediating influence upon the achievement source of confidence-performance relationship. In addition, both cognitive and motivational imagery types were found to be important sources of confidence, as sport confidence mediated the imagery type-performance relationship. Findings indicated that athletes who construed confidence from their own achievements and report multiple images on a more frequent basis are likely to benefit from enhanced levels of state sport confidence and subsequent performance.

  17. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines.

    Directory of Open Access Journals (Sweden)

    Melissa B Gilkey

    Full Text Available To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children's vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents' mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58, 95% confidence interval [CI], 0.54-0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81, 95% CI, 0.76-0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53, 95% CI, 1.40-1.68), varicella (OR = 1.54, 95% CI, 1.42-1.66), and flu vaccines (OR = 1.32, 95% CI, 1.23-1.42). Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children.
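
    The odds ratios and confidence intervals quoted above follow the standard construction: exponentiate a symmetric normal interval on the log-odds scale. The coefficient and standard error below are hypothetical, chosen only so the output lands near the paper's reported OR of 0.58:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """OR and 95% CI from a logistic-regression coefficient:
    exponentiate the symmetric interval on the log-odds scale."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient for a one-point increase in vaccination confidence
or_, lo, hi = odds_ratio_ci(beta=-0.545, se=0.040)
print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

    Exponentiating after forming the interval is what makes published OR intervals asymmetric around the point estimate.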

  18. Effects of Interval Training Programme on Resting Heart Rate in ...

    African Journals Online (AJOL)

    DATONYE ALASIA

    Subjects with Hypertension: A Randomized Controlled Trial. Type of Article: Original ... Resting Heart Rate in Subjects with Hypertension - Lamina S. et al. investigate the effect of interval .... changes in VO2 max) of interest in the t-test.

  19. Self-confidence in financial analysis: a study of younger and older male professional analysts.

    Science.gov (United States)

    Webster, R L; Ellis, T S

    2001-06-01

    Measures of reported self-confidence in performing financial analysis by 59 professional male analysts, 31 born between 1946 and 1964 and 28 born between 1965 and 1976, were investigated and reported. Self-confidence in one's ability is important in the securities industry because it affects recommendations and decisions to buy, sell, and hold securities. The respondents analyzed a set of multiyear corporate financial statements and reported their self-confidence in six separate financial areas. Data from the 59 male financial analysts were tallied and analyzed using both univariate and multivariate statistical tests. Rated self-confidence was not significantly different for the younger and the older men. These results are not consistent with a similar prior study of female analysts in which younger women showed significantly higher self-confidence than older women.

  20. Alternative confidence measure for local matching stereo algorithms

    CSIR Research Space (South Africa)

    Ndhlovu, T

    2009-11-01

    Full Text Available The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...

  1. Simultaneous confidence bands for the integrated hazard function

    OpenAIRE

    Dudek, Anna; Gocwin, Maciej; Leskow, Jacek

    2006-01-01

    The construction of the simultaneous confidence bands for the integrated hazard function is considered. The Nelson-Aalen estimator is used. The simultaneous confidence bands based on bootstrap methods are presented. Two methods of construction of such confidence bands are proposed. The weird bootstrap method is used for resampling. Simulations are made to compare the actual coverage probability of the bootstrap and the asymptotic simultaneous confidence bands. It is shown that the equal-tai...
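
    The paper's weird bootstrap resamples event counts for the Nelson-Aalen estimator; as a simpler illustration of the resampling idea behind bootstrap confidence bands, here is a percentile bootstrap interval for a sample mean (data and names invented; this is not the weird bootstrap, and a simultaneous band would repeat this pointwise over time with a wider joint threshold):

```python
import random

random.seed(7)

def percentile_bootstrap_ci(xs, stat, n_boot=5000, alpha=0.05):
    """Resample with replacement, recompute the statistic, and take
    empirical quantiles of the bootstrap replicates."""
    reps = sorted(stat([random.choice(xs) for _ in xs]) for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2))]

def mean(xs):
    return sum(xs) / len(xs)

times = [random.expovariate(0.5) for _ in range(100)]  # skewed toy sample
lo, hi = percentile_bootstrap_ci(times, mean)
print(f"bootstrap 95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```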

  2. Study on risk insight for additional ILRT interval extension

    International Nuclear Information System (INIS)

    Seo, M. R.; Hong, S. Y.; Kim, M. K.; Chung, B. S.; Oh, H. C.

    2005-01-01

    In the U.S., the containment Integrated Leakage Rate Test (ILRT) interval was extended from 3 times per 10 years to once per 10 years based on NUREG-1493, 'Performance-Based Containment Leak-Test Program', in 1995. In September 2001, the ILRT interval was extended up to once per 15 years based on Nuclear Energy Institute (NEI) provisional guidance 'Interim Guidance for Performing Risk Impact Assessments In Support of One-Time Extensions for Containment Integrated Leakage Rate Test Surveillance Intervals'. In Korea, the containment ILRT was performed with a 5-year interval. But, in MOST (Ministry of Science and Technology) Notice 2004-15, 'Standard for the Leak-Rate Test of the Nuclear Reactor Containment', the extension of the ILRT interval to once per 10 years can be allowed if some conditions are met. So, the safety analysis for extending the Yonggwang Nuclear (YGN) Units 1 and 2 ILRT interval to once per 10 years was completed based on the methodology in NUREG-1493. But, during the review process by the regulatory body, KINS, it was required that various risk insights or indices for the risk analysis should be developed. So, we began to study the NEI interim report for the 15-year ILRT interval extension. As in the previous analysis based on NUREG-1493, the MACCS II (MELCOR Accident Consequence Code System) computer code was used for the risk analysis of the population, and the population dose was selected as a reference index for the risk evaluation.

  3. 49 CFR 1103.23 - Confidences of a client.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Confidences of a client. 1103.23 Section 1103.23... Responsibilities Toward A Client § 1103.23 Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to the...

  4. Contrasting Academic Behavioural Confidence in Mexican and European Psychology Students

    Science.gov (United States)

    Ochoa, Alma Rosa Aguila; Sander, Paul

    2012-01-01

    Introduction: Research with the Academic Behavioural Confidence scale using European students has shown that students have high levels of confidence in their academic abilities. It is generally accepted that people in more collectivist cultures have more realistic confidence levels in contrast to the overconfidence seen in individualistic European…

  5. Dijets at large rapidity intervals

    CERN Document Server

    Pope, B G

    2001-01-01

    Inclusive dijet production at large pseudorapidity intervals (Δη) between the two jets has been suggested as a regime for observing BFKL dynamics. We have measured the dijet cross section for large Δη in pp collisions at √s = 1800 and 630 GeV using the DØ detector. The partonic cross section increases strongly with the size of Δη. The observed growth is even stronger than expected on the basis of BFKL resummation in the leading logarithmic approximation. The growth of the partonic cross section can be accommodated with an effective BFKL intercept of α_BFKL(20 GeV) = 1.65 ± 0.07.

  6. Variational collocation on finite intervals

    International Nuclear Information System (INIS)

    Amore, Paolo; Cervantes, Mayra; Fernandez, Francisco M

    2007-01-01

    In this paper, we study a set of functions, defined on an interval of finite width, which are orthogonal and which reduce to the sinc functions when the appropriate limit is taken. We show that these functions can be used within a variational approach to obtain accurate results for a variety of problems. We have applied them to the interpolation of functions on finite domains and to the solution of the Schroedinger equation, and we have compared the performance of the present approach with others

  7. Determining frequentist confidence limits using a directed parameter space search

    International Nuclear Information System (INIS)

    Daniel, Scott F.; Connolly, Andrew J.; Schneider, Jeff

    2014-01-01

    We consider the problem of inferring constraints on a high-dimensional parameter space with a computationally expensive likelihood function. We propose a machine learning algorithm that maps out the frequentist confidence limit on parameter space by intelligently targeting likelihood evaluations so as to quickly and accurately characterize the likelihood surface in both low- and high-likelihood regions. We compare our algorithm to Bayesian credible limits derived by the well-tested Markov Chain Monte Carlo (MCMC) algorithm using both multi-modal toy likelihood functions and the seven-year Wilkinson Microwave Anisotropy Probe cosmic microwave background likelihood function. We find that our algorithm correctly identifies the location, general size, and general shape of high-likelihood regions in parameter space while being more robust against multi-modality than MCMC.
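
    The frequentist confidence limit being mapped is defined by a likelihood-ratio threshold; on a one-parameter toy problem the whole machinery reduces to a grid scan, shown here under stated assumptions (Gaussian likelihood, known sigma, Wilks threshold 3.84 for roughly 95%). This is far simpler than the paper's targeted high-dimensional search, but it is the same acceptance rule:

```python
import math

def grid_confidence_limits(loglike, grid, delta=3.84):
    """Keep parameter values whose -2*log-likelihood lies within `delta`
    of the minimum (delta = 3.84 gives ~95% for one parameter)."""
    m2ll = [-2.0 * loglike(p) for p in grid]
    best = min(m2ll)
    inside = [p for p, v in zip(grid, m2ll) if v - best <= delta]
    return min(inside), max(inside)

data = [4.9, 5.3, 4.7, 5.1, 5.6, 4.8, 5.2, 5.4]
sigma = 0.3  # assumed known measurement error

def loglike(mu):
    return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)

grid = [4.0 + i * 0.001 for i in range(2001)]  # mu in [4.0, 6.0]
lo, hi = grid_confidence_limits(loglike, grid)
print(f"~95% confidence interval for mu: ({lo:.3f}, {hi:.3f})")
```

    For this Gaussian case the scan reproduces the textbook interval, mean ± 1.96·sigma/√n; the paper's contribution is choosing where to evaluate the likelihood when each evaluation is expensive.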

  8. Indirect methods for reference interval determination - review and recommendations.

    Science.gov (United States)

    Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim

    2018-04-19

    Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.
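
    A deliberately naive indirect estimate simply takes the central 95% of routine results; the sketch below (simulated data, invented names) also reproduces the limitation noted above, because a small diseased subpopulation drags the lower limit well below the healthy 2.5th percentile. Real indirect methods (e.g. Bhattacharya-type mixture modelling) first separate out that subpopulation:

```python
import random

random.seed(3)

def indirect_reference_interval(results, lower=0.025, upper=0.975):
    """Central 95% of routine laboratory results (naive indirect method)."""
    xs = sorted(results)
    return xs[int(lower * len(xs))], xs[int(upper * len(xs))]

# Simulated routine sodium results: 95% healthy, 5% diseased (low values)
routine = ([random.gauss(140, 2) for _ in range(9500)]
           + [random.gauss(128, 4) for _ in range(500)])
lo, hi = indirect_reference_interval(routine)
print(f"estimated reference interval: {lo:.1f}-{hi:.1f} mmol/L")
```

    The upper limit comes out close to the healthy 97.5th percentile (about 144 mmol/L), but the lower limit is pulled far below the healthy value near 136 mmol/L because the diseased 5% occupies the lower tail.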

  9. Dutch offshore suppliers in confident mood

    International Nuclear Information System (INIS)

    Beudel, M.

    1998-01-01

    A series of linked articles discusses the current state of the Netherlands' offshore industry. Reduced taxes and the availability of exploration licenses have meant that explorers and producers are drawn to the small but frequently productive exploration prospects on Holland's continental shelf. The excellent operating record of Rotterdam's Verolme Botlek yard for repair and maintenance of offshore platforms and associated plant is explored as the facility plans to diversify into newbuilding. The construction of an offshore basin designed for the hydrodynamic testing of offshore plant intended for deepwater use is described. The Netherlands Maritime Research Institute (Marin) aims to stay at the forefront of offshore research and development with this new facility. Other articles cover pipe tensioning, new large linear winches and innovations in offshore drilling and production. (UK)

  10. Comprehensive Plan for Public Confidence in Nuclear Regulator

    International Nuclear Information System (INIS)

    Choi, Kwang Sik; Choi, Young Sung; Kim, Ho ki

    2008-01-01

    Public confidence in the nuclear regulator has been discussed internationally. Public trust or confidence is needed for achieving the regulatory goal of assuring nuclear safety to a level that is acceptable to the public, or of providing public ease about nuclear safety. In Korea, public ease or public confidence has been suggested as a major policy goal in the annually announced 'Nuclear regulatory policy direction'. This paper reviews the theory of trust and its definitions, and defines the elements of public trust or confidence in nuclear safety regulation, based on the studies conducted so far. The public ease model developed and 10 measures for ensuring public confidence are also presented, and future study directions are suggested.

  11. Effects of postidentification feedback on eyewitness identification and nonidentification confidence.

    Science.gov (United States)

    Semmler, Carolyn; Brewer, Neil; Wells, Gary L

    2004-04-01

    Two experiments investigated new dimensions of the effect of confirming feedback on eyewitness identification confidence using target-absent and target-present lineups and (previously unused) unbiased witness instructions (i.e., "offender not present" option highlighted). In Experiment 1, participants viewed a crime video and were later asked to try to identify the thief from an 8-person target-absent photo array. Feedback inflated witness confidence for both mistaken identifications and correct lineup rejections. With target-present lineups in Experiment 2, feedback inflated confidence for correct and mistaken identifications and lineup rejections. Although feedback had no influence on the confidence-accuracy correlation, it produced clear overconfidence. Confidence inflation varied with the confidence measure reference point (i.e., retrospective vs. current confidence) and identification response latency.

  12. Effects of confidence and anxiety on flow state in competition.

    Science.gov (United States)

    Koehn, Stefan

    2013-01-01

    Confidence and anxiety are important variables that underlie the experience of flow in sport. Specifically, research has indicated that confidence displays a positive relationship and anxiety a negative relationship with flow. The aim of this study was to assess potential direct and indirect effects of confidence and anxiety dimensions on flow state in tennis competition. A sample of 59 junior tennis players completed measures of Competitive State Anxiety Inventory-2d and Flow State Scale-2. Following predictive analysis, results showed significant positive correlations between confidence (intensity and direction) and anxiety symptoms (only directional perceptions) with flow state. Standard multiple regression analysis indicated confidence as the only significant predictor of flow. The results confirmed a protective function of confidence against debilitating anxiety interpretations, but there were no significant interaction effects between confidence and anxiety on flow state.

  13. Performance of a rapid self-test for detection of Trichomonas vaginalis in South Africa and Brazil

    NARCIS (Netherlands)

    Jones, Heidi E.; Lippman, Sheri A.; Caiaffa-Filho, Helio H.; Young, Taryn; van de Wijgert, Janneke H. H. M.

    2013-01-01

    Women participating in studies in Brazil (n = 695) and South Africa (n = 230) performed rapid point-of-care tests for Trichomonas vaginalis on self-collected vaginal swabs. Using PCR as the gold standard, rapid self-testing achieved high specificity (99.1%; 95% confidence interval [CI], 98.2 to

  14. Some Characterizations of Convex Interval Games

    NARCIS (Netherlands)

    Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.

    2008-01-01

    This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.

  15. Decision time and confidence predict choosers' identification performance in photographic showups

    Science.gov (United States)

    Sagana, Anna; Sporer, Siegfried L.; Wixted, John T.

    2018-01-01

    In stark contrast to the multitude of lineup studies that report on the link between decision time, confidence, and identification accuracy, only a few studies have looked at these associations for showups, with results varying widely across studies. We therefore set out to test the individual and combined value of decision time and post-decision confidence for diagnosing the accuracy of positive showup decisions using confidence-accuracy characteristic curves and Bayesian analyses. Three-hundred-eighty-four participants viewed a stimulus event and were subsequently presented with two showups which could be target-present or target-absent. As expected, we found a negative decision time-accuracy and a positive post-decision confidence-accuracy correlation for showup selections. Confidence-accuracy characteristic curves demonstrated the expected additive effect of combining both postdictors. Likewise, Bayesian analyses, taking into account all possible target-presence base rate values showed that fast and confident identification decisions were more diagnostic than slow or less confident decisions, with the combination of both being most diagnostic for postdicting accurate and inaccurate decisions. The postdictive value of decision time and post-decision confidence was higher when the prior probability that the suspect is the perpetrator was high compared to when the prior probability that the suspect is the perpetrator was low. The frequent use of showups in practice emphasizes the importance of these findings for court proceedings. Overall, these findings support the idea that courts should have most trust in showup identifications that were made fast and confidently, and least in showup identifications that were made slowly and with low confidence. PMID:29346394
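
    The confidence-accuracy characteristic analysis mentioned above reduces to binning positive identification decisions by confidence level and computing the proportion correct within each bin. A minimal stdlib sketch of that idea (the function name, data layout, and bins are illustrative assumptions, not the authors' materials):

```python
def cac_points(records, bins):
    """Confidence-accuracy characteristic points: proportion of
    correct decisions within each confidence bin.
    `records` is a list of (confidence, correct) pairs restricted
    to positive (chooser) decisions; `bins` is a list of inclusive
    (lo, hi) confidence ranges."""
    points = []
    for lo, hi in bins:
        outcomes = [correct for conf, correct in records if lo <= conf <= hi]
        if outcomes:  # skip empty bins rather than divide by zero
            points.append(((lo, hi), sum(outcomes) / len(outcomes)))
    return points
```

    Plotting per-bin accuracy against confidence yields the CAC curve; combining postdictors, as in the study above, amounts to binning jointly on decision time and confidence.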

  16. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    Science.gov (United States)

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
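
    The nonparametric method recommended when n ≥ 40 amounts to taking the central 95% of the ordered reference values and attaching a confidence interval to each limit, for instance by bootstrap. A stdlib-only sketch of that idea (this is an illustration, not Reference Value Advisor's actual algorithm):

```python
import random

def percentile(sorted_xs, p):
    """Linear-interpolation percentile (0 <= p <= 1) on pre-sorted data."""
    k = (len(sorted_xs) - 1) * p
    lo, hi = int(k), min(int(k) + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] + (sorted_xs[hi] - sorted_xs[lo]) * (k - lo)

def reference_interval(xs, level=0.95, n_boot=2000, ci=0.90, seed=1):
    """Nonparametric reference limits (central `level` fraction of the
    reference sample) with bootstrap `ci` confidence intervals on each limit."""
    a = (1 - level) / 2
    s = sorted(xs)
    lower, upper = percentile(s, a), percentile(s, 1 - a)
    rng = random.Random(seed)
    lo_b, hi_b = [], []
    for _ in range(n_boot):  # resample with replacement, re-estimate limits
        b = sorted(rng.choices(xs, k=len(xs)))
        lo_b.append(percentile(b, a))
        hi_b.append(percentile(b, 1 - a))
    lo_b.sort()
    hi_b.sort()
    g = (1 - ci) / 2
    return (lower, (percentile(lo_b, g), percentile(lo_b, 1 - g)),
            upper, (percentile(hi_b, g), percentile(hi_b, 1 - g)))
```

    `reference_interval(xs)` returns the lower and upper reference limits, each paired with a 90% bootstrap CI, mirroring the output the macroinstructions report.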

  17. Factors affecting midwives' confidence in intrapartum care: a phenomenological study.

    Science.gov (United States)

    Bedwell, Carol; McGowan, Linda; Lavender, Tina

    2015-01-01

    midwives are frequently the lead providers of care for women throughout labour and birth. In order to perform their role effectively and provide women with the choices they require midwives need to be confident in their practice. This study explores factors which may affect midwives' confidence in their practice. hermeneutic phenomenology formed the theoretical basis for the study. Prospective longitudinal data collection was completed using diaries and semi-structured interviews. Twelve midwives providing intrapartum care in a variety of settings were recruited to ensure a variety of experiences in different contexts were captured. the principal factor affecting workplace confidence, both positively and negatively, was the influence of colleagues. Perceived autonomy and a sense of familiarity could also enhance confidence. However, conflict in the workplace was a critical factor in reducing midwives' confidence. Confidence was an important, but fragile, phenomenon to midwives and they used a variety of coping strategies, emotional intelligence and presentation management to maintain it. this is the first study to highlight both the factors influencing midwives' workplace confidence and the strategies midwives employed to maintain their confidence. Confidence is important in maintaining well-being and workplace culture may play a role in explaining the current low morale within the midwifery workforce. This may have implications for women's choices and care. Support, effective leadership and education may help midwives develop and sustain a positive sense of confidence. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. The cognitive approach on self-confidence of the girl students with academic failure

    Directory of Open Access Journals (Sweden)

    Ommolbanin Sheibani

    2011-01-01

    Background: The consequences of behavior attributed to external factors such as chance, luck or the help of other people affect one's self-confidence and educational improvement. The aim of this study is to assess the effect of teaching "locus of control" on the basis of the cognitive approach on Yazd high school students' self-confidence. Materials and Method: This descriptive-analytic research was done as an experimental project using a pre-test and post-test method on 15 first-grade high school students in Yazd city during the 1387-88 educational year. The participants were chosen by multistage cluster random sampling. Fifteen students were also chosen as the control group. The instruments used in this research were the Eysenck self-confidence test and the "locus of control" teaching program. A t-test was used for the statistical analysis. Results: The statistical analyses showed a significant difference between the experimental and control groups (p=0.01) in increasing self-confidence. The t-test also revealed no significant difference in the educational improvement of the experimental group before and after teaching "locus of control". Conclusion: According to this study, teaching "locus of control" on the basis of the cognitive approach has a significant effect on self-confidence but no positive effect on educational improvement.

  19. The relationship between fundamental movement skill proficiency and physical self-confidence among adolescents.

    Science.gov (United States)

    McGrane, Bronagh; Belton, Sarahjane; Powell, Danielle; Issartel, Johann

    2017-09-01

    This study aims to assess fundamental movement skill (FMS) proficiency, physical self-confidence levels, and the relationship between these variables and gender differences among adolescents. Three hundred and ninety-five adolescents aged 13.78 years (SD = ±1.2) from 20 schools were involved in this study. The Test of Gross Motor Development-2nd Edition (TGMD-2) and the Victorian Skills Manual were used to assess 15 FMS. Participants' physical self-confidence was also assessed using a valid skill-specific scale. A significant correlation was observed between FMS proficiency and physical self-confidence for females only (r = 0.305, P < 0.001). Males rated themselves as having significantly higher physical self-confidence levels than females (P = 0.001). Males scored significantly higher than females in FMS proficiency (P < 0.05), and the lowest physical self-confidence group were significantly less proficient at FMS than the medium (P < 0.001) and high physical self-confidence groups (P < 0.05). This information not only highlights those in need of assistance to develop their FMS but will also facilitate the development of an intervention which aims to improve physical self-confidence and FMS proficiency.

  20. Evidence for a confidence-accuracy relationship in memory for same- and cross-race faces.

    Science.gov (United States)

    Nguyen, Thao B; Pezdek, Kathy; Wixted, John T

    2017-12-01

    Discrimination accuracy is usually higher for same- than for cross-race faces, a phenomenon known as the cross-race effect (CRE). According to prior research, the CRE occurs because memories for same- and cross-race faces rely on qualitatively different processes. However, according to a continuous dual-process model of recognition memory, memories that rely on qualitatively different processes do not differ in recognition accuracy when confidence is equated. Thus, although there are differences in overall same- and cross-race discrimination accuracy, confidence-specific accuracy (i.e., recognition accuracy at a particular level of confidence) may not differ. We analysed datasets from four recognition memory studies on same- and cross-race faces to test this hypothesis. Confidence ratings reliably predicted recognition accuracy when performance was above chance levels (Experiments 1, 2, and 3) but not when performance was at chance levels (Experiment 4). Furthermore, at each level of confidence, confidence-specific accuracy for same- and cross-race faces did not significantly differ when overall performance was above chance levels (Experiments 1, 2, and 3) but significantly differed when overall performance was at chance levels (Experiment 4). Thus, under certain conditions, high-confidence same-race and cross-race identifications may be equally reliable.

  1. Can confidence indicators forecast the probability of expansion in Croatia?

    Directory of Open Access Journals (Sweden)

    Mirjana Čižmešija

    2016-04-01

    The aim of this paper is to investigate how reliable confidence indicators are in forecasting the probability of expansion. We consider three Croatian Business Survey indicators: the Industrial Confidence Indicator (ICI), the Construction Confidence Indicator (BCI) and the Retail Trade Confidence Indicator (RTCI). The quarterly data used in the research cover the period from 1999/Q1 to 2014/Q1. The empirical analysis consists of two parts. The non-parametric Bry-Boschan algorithm is used to distinguish periods of expansion from periods of recession in the Croatian economy. Then, various nonlinear probit models were estimated. The models differ with respect to the regressors (confidence indicators) and the time lags. The positive signs of the estimated parameters suggest that the probability of expansion increases with an increase in the confidence indicators. Based on the obtained results, the conclusion is that the ICI is the most powerful predictor of the probability of expansion in Croatia.
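
    A probit model of this kind maps an indicator value to a probability through the standard normal CDF. A stdlib sketch with illustrative coefficients (β0 and β1 below are invented for demonstration, not the paper's estimates):

```python
import math

def std_normal_cdf(z):
    """Standard normal CDF Φ(z) via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expansion_probability(ici, beta0=-0.5, beta1=0.08):
    """Probit model P(expansion) = Φ(β0 + β1 · ICI).
    The coefficients here are illustrative placeholders; in practice
    they would be estimated by maximum likelihood."""
    return std_normal_cdf(beta0 + beta1 * ici)
```

    With a positive β1, the fitted probability of expansion rises monotonically with the indicator, matching the sign pattern reported above.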

  2. Coping skills: role of trait sport confidence and trait anxiety.

    Science.gov (United States)

    Cresswell, Scott; Hodge, Ken

    2004-04-01

    The current research assesses relationships among coping skills, trait sport confidence, and trait anxiety. Two samples (n=47 and n=77) of international competitors from surf life saving (M=23.7 yr.) and touch rugby (M=26.2 yr.) completed the Athletic Coping Skills Inventory, Trait Sport Confidence Inventory, and Sport Anxiety Scale. Analysis yielded significant correlations amongst trait anxiety, sport confidence, and coping. Specifically, confidence scores were positively associated with coping-with-adversity scores, and anxiety scores were negatively associated. These findings support the inclusion of the personality characteristics of confidence and anxiety within the coping model presented by Hardy, Jones, and Gould. Researchers should be aware that confidence and anxiety may influence the coping processes of athletes.

  3. Risk prediction of cardiovascular death based on the QTc interval

    DEFF Research Database (Denmark)

    Nielsen, Jonas B; Graff, Claus; Rasmussen, Peter V

    2014-01-01

    AIMS: Using a large, contemporary primary care population we aimed to provide absolute long-term risks of cardiovascular death (CVD) based on the QTc interval and to test whether the QTc interval is of value in risk prediction of CVD on an individual level. METHODS AND RESULTS: Digital electrocardiograms from 173 529 primary care patients aged 50-90 years were collected during 2001-11. The Framingham formula was used for heart rate-correction of the QT interval. Data on medication, comorbidity, and outcomes were retrieved from administrative registries. During a median follow-up period of 6…

  4. Tales From the Unit Interval

    DEFF Research Database (Denmark)

    Nielsen, Thor Pajhede

    Testing the validity of Value-at-Risk (VaR) forecasts, or backtesting, is an integral part of modern market risk management and regulation. This is often done by applying independence and coverage tests developed in Christoffersen (1998) to so-called hit-sequences derived from VaR forecasts. … A key observation of the studies is that higher order dependence in the hit-sequences may cause the observed lower power performance. We propose to generalize the backtest framework for Value-at-Risk forecasts by extending the original first order dependence of Christoffersen (1998) to allow for a higher, or k'th, order dependence. We provide closed form expressions for the tests as well as asymptotic theory. Not only do the generalized tests have power against k'th order dependence by definition, but included simulations also indicate improved power performance when replicating the aforementioned…

  5. Preservice teachers' perceived confidence in teaching school violence prevention.

    Science.gov (United States)

    Kandakai, Tina L; King, Keith A

    2002-01-01

    To examine preservice teachers' perceived confidence in teaching violence prevention and the potential effect of violence-prevention training on preservice teachers' confidence in teaching violence prevention. Six Ohio universities participated in the study. More than 800 undergraduate and graduate students completed surveys. Violence-prevention training, area of certification, and location of student- teaching placement significantly influenced preservice teachers' perceived confidence in teaching violence prevention. Violence-prevention training positively influences preservice teachers' confidence in teaching violence prevention. The results suggest that such training should be considered as a requirement for teacher preparation programs.

  6. The antecedents and belief-polarized effects of thought confidence.

    Science.gov (United States)

    Chou, Hsuan-Yi; Lien, Nai-Hwa; Liang, Kuan-Yu

    2011-01-01

    This article investigates 2 possible antecedents of thought confidence and explores the effects of confidence induced before or during ad exposure. The results of the experiments indicate that both consumers' dispositional optimism and spokesperson attractiveness have significant effects on consumers' confidence in thoughts that are generated after viewing the advertisement. Higher levels of thought confidence will influence the quality of the thoughts that people generate, lead to either positively or negatively polarized message processing, and therefore induce better or worse advertising effectiveness, depending on the valence of thoughts. The authors posit the belief-polarization hypothesis to explain these findings.

  7. The second birth interval in Egypt: the role of contraception

    OpenAIRE

    Baschieri, Angela

    2004-01-01

    The paper discusses problems of model specification in birth interval analysis. Using Bongaarts’s conceptual framework on the proximate determinants on fertility, the paper tests the hypothesis that all important variation in fertility is captured by differences in marriage, breastfeeding, contraception and induced abortion. The paper applies a discrete time hazard model to study the second birth interval using data from the Egyptian Demographic and Health Survey 2000 (EDHS), and the month by...

  8. Consumer’s and merchant’s confidence in internet payments

    Directory of Open Access Journals (Sweden)

    Franc Bračun

    2003-01-01

    Performing payment transactions over the Internet is becoming increasingly important. Whenever one interacts with others, he or she faces the problem of uncertainty because in interacting with others, one makes him or herself vulnerable, i.e. one can be betrayed. Thus, perceived risk and confidence are of fundamental importance in electronic payment transactions. A higher risk leads to greater hesitance about entering into a business relationship with a high degree of uncertainty; and therefore, to an increased need for confidence. This paper has two objectives. First, it aims to introduce and test a theoretical model that predicts consumer and merchant acceptance of the Internet payment solution by explaining the complex set of relationships among the key factors influencing confidence in electronic payment transactions. Second, the paper attempts to shed light on the complex interrelationship among confidence, control and perceived risk. An empirical study was conducted to test the proposed model using data from consumers and merchants in Slovenia. The results show how perceived risk dimensions and post-transaction control influence consumer’s and merchant’s confidence in electronic payment transactions, and the impact of confidence on the adoption of mass-market on-line payment solutions.

  9. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  10. Effects of Training and Feedback on Accuracy of Predicting Rectosigmoid Neoplastic Lesions and Selection of Surveillance Intervals by Endoscopists Performing Optical Diagnosis of Diminutive Polyps.

    Science.gov (United States)

    Vleugels, Jasper L A; Dijkgraaf, Marcel G W; Hazewinkel, Yark; Wanders, Linda K; Fockens, Paul; Dekker, Evelien

    2018-05-01

    Real-time differentiation of diminutive polyps (1-5 mm) during endoscopy could replace histopathology analysis. According to guidelines, implementation of optical diagnosis into routine practice would require it to identify rectosigmoid neoplastic lesions with a negative predictive value (NPV) of more than 90%, using histologic findings as a reference, and agreement with histology-based surveillance intervals for more than 90% of cases. We performed a prospective study with 39 endoscopists accredited to perform colonoscopies on participants with positive results from fecal immunochemical tests in the Bowel Cancer Screening Program at 13 centers in the Netherlands. Endoscopists were trained in optical diagnosis using a validated module (Workgroup serrAted polypS and Polyposis). After meeting predefined performance thresholds in the training program, the endoscopists started a 1-year program (continuation phase) in which they performed narrow band imaging analyses during colonoscopies of participants in the screening program and predicted histological findings with confidence levels. The endoscopists were randomly assigned to groups that received feedback or no feedback on the accuracy of their predictions. Primary outcome measures were endoscopists' abilities to identify rectosigmoid neoplastic lesions (using histology as a reference) with NPVs of 90% or more, and selecting surveillance intervals that agreed with those determined by histology for at least 90% of cases. Of 39 endoscopists initially trained, 27 (69%) completed the training program. During the continuation phase, these 27 endoscopists performed 3144 colonoscopies in which 4504 diminutive polyps were removed. The endoscopists identified neoplastic lesions with a pooled NPV of 90.8% (95% confidence interval 88.6-92.6); their proposed surveillance intervals agreed with those determined by histologic analysis for 95.4% of cases (95% confidence interval 94.0-96.6). 
Findings did not differ between the groups that did and did not receive feedback.
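
    A pooled proportion such as the 90.8% NPV is typically reported with a binomial confidence interval, for example the Wilson score interval. A stdlib sketch of that calculation (the cell counts in the test are invented for illustration; the study's exact counts are not given here):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion,
    a common choice for diagnostic measures such as NPV.
    z=1.96 gives an approximate 95% interval."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

    Unlike the naive Wald interval, the Wilson interval stays inside (0, 1) even for proportions near 100%, which matters for high NPVs like the one reported above.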

  11. Test

    DEFF Research Database (Denmark)

    Bendixen, Carsten

    2014-01-01

    A brief, introductory contribution that clarifies the concept of the test in education and places it in perspective.

  12. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

    Science.gov (United States)

    Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

    2014-04-01

    The purpose of this study was to determine the use of contraceptive methods, which was defined by effectiveness, length of coverage, and their association with short interpregnancy intervals, when controlling for provider type and client demographics. We identified a cohort of 117,644 women from the 2008 California Birth Statistical Master file with second or higher order birth and at least 1 Medicaid (Family Planning, Access, Care, and Treatment [Family PACT] program or Medi-Cal) claim within 18 months after index birth. We explored the effect of contraceptive method provision on the odds of having an optimal interpregnancy interval and controlled for covariates. The average length of contraceptive coverage was 3.81 months (SD = 4.84). Most women received user-dependent hormonal contraceptives as their most effective contraceptive method (55%; n = 65,103 women) and one-third (33%; n = 39,090 women) had no contraceptive claim. Women who used long-acting reversible contraceptive methods had 3.89 times the odds and women who used user-dependent hormonal methods had 1.89 times the odds of achieving an optimal birth interval compared with women who used barrier methods only; women with no method had 0.66 times the odds. When user-dependent methods are considered, the odds of having an optimal birth interval increased for each additional month of contraceptive coverage by 8% (odds ratio, 1.08; 95% confidence interval, 1.08-1.09). Women who were seen by Family PACT or by both Family PACT and Medi-Cal providers had significantly higher odds of optimal birth intervals compared with women who were served by Medi-Cal only. To achieve optimal birth spacing and ultimately to improve birth outcomes, attention should be given to contraceptive counseling and access to contraceptive methods in the postpartum period. Copyright © 2014 Mosby, Inc. All rights reserved.
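
    The adjusted odds ratios above come from regression modeling, but the basic unadjusted odds ratio from a 2x2 table, with its Wald 95% CI computed on the log scale, can be sketched in a few lines (the cell counts in the test are invented for illustration):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    The CI is exp(log(OR) ± z · SE), with SE on the log-odds scale."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds_ratio) - z * se)
    hi = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lo, hi
```

    For tables with a zero cell, a continuity correction (adding 0.5 to each cell) is commonly applied before this calculation.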

  13. Direct Interval Forecasting of Wind Power

    DEFF Research Database (Denmark)

    Wan, Can; Xu, Zhao; Pinson, Pierre

    2013-01-01

    This letter proposes a novel approach to directly formulate the prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization, where prediction intervals are generated through direct optimization of both the coverage probability and sharpness...

  14. A note on birth interval distributions

    International Nuclear Information System (INIS)

    Shrestha, G.

    1989-08-01

    A considerable amount of work has been done regarding the birth interval analysis in mathematical demography. This paper is prepared with the intention of reviewing some probability models related to interlive birth intervals proposed by different researchers. (author). 14 refs

  15. Confidence limits for regional cerebral blood flow values obtained with circular positron system, using krypton-77

    International Nuclear Information System (INIS)

    Meyer, E.; Yamamoto, Y.L.; Thompson, C.J.

    1978-01-01

    The 90% confidence limits have been determined for regional cerebral blood flow (rCBF) values obtained in each cm² of a cross section of the human head after inhalation of radioactive krypton-77, using the MNI circular positron emission tomography system (Positome). CBF values for small brain tissue elements are calculated by linear regression analysis on the semi-logarithmically transformed clearance curve. A computer program displays CBF values and their estimated error in numeric and gray scale forms. The following typical results have been obtained on a control subject: mean CBF in the entire cross section of the head: 54.6 ± 5 ml/min/100 g tissue, rCBF for a small area of frontal gray matter: 75.8 ± 9 ml/min/100 g tissue. Confidence intervals for individual rCBF values varied between ±13% and ±55%, except for areas pertaining to the ventricular system where particularly poor statistics have been obtained. Knowledge of confidence limits for rCBF values improves their diagnostic significance, particularly with respect to the assessment of reduced rCBF in stroke patients. A nomogram for convenient determination of 90% confidence limits for slope values obtained in linear regression analysis has been designed with the number of fitted points (n) and the correlation coefficient (r) as parameters. (author)
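
    The nomogram's inputs (number of fitted points n and correlation coefficient r) suffice to compute the slope's confidence limits directly, since for simple linear regression SE(b) = |b|·sqrt((1 - r²)/(n - 2))/|r|. A stdlib sketch of that calculation, using an approximate Student-t quantile (the expansion below is an assumption for illustration; exact quantiles would come from a t table):

```python
import math
from statistics import NormalDist

def t_quantile(p, df):
    """Approximate Student-t quantile from the normal quantile via a
    first-order expansion; reasonable for df of roughly 10 or more."""
    z = NormalDist().inv_cdf(p)
    return z + (z ** 3 + z) / (4 * df)

def slope_confidence_limits(b, n, r, level=0.90):
    """Confidence limits for a regression slope b, given the number of
    fitted points n and the correlation coefficient r, mirroring the
    nomogram's parameters."""
    se = abs(b) * math.sqrt((1 - r * r) / (n - 2)) / abs(r)
    t = t_quantile(1 - (1 - level) / 2, n - 2)
    return b - t * se, b + t * se
```

    As the nomogram suggests, the limits tighten as n grows or as |r| approaches 1, since both shrink the standard error of the slope.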

  16. Optimal Data Interval for Estimating Advertising Response

    OpenAIRE

    Gerard J. Tellis; Philip Hans Franses

    2006-01-01

    The abundance of highly disaggregate data (e.g., at five-second intervals) raises the question of the optimal data interval to estimate advertising carryover. The literature assumes that (1) the optimal data interval is the interpurchase time, (2) too disaggregate data causes a disaggregation bias, and (3) recovery of true parameters requires assumption of the underlying advertising process. In contrast, we show that (1) the optimal data interval is what we call , (2) too disaggregate data do...

  17. Trust, confidence, and the 2008 global financial crisis.

    Science.gov (United States)

    Earle, Timothy C

    2009-06-01

    The 2008 global financial crisis has been compared to a "once-in-a-century credit tsunami," a disaster in which the loss of trust and confidence played key precipitating roles and the recovery from which will require the restoration of these crucial factors. Drawing on the analogy between the financial crisis and environmental and technological hazards, recent research on the role of trust and confidence in the latter is used to provide a perspective on the former. Whereas "trust" and "confidence" are used interchangeably and without explicit definition in most discussions of the financial crisis, this perspective uses the TCC model of cooperation to clearly distinguish between the two and to demonstrate how this distinction can lead to an improved understanding of the crisis. The roles of trust and confidence, both in precipitation and in possible recovery, are discussed for each of the three major sets of actors in the crisis: the regulators, the banks, and the public. The roles of trust and confidence in the larger context of risk management are also examined, with trust associated with political approaches and confidence with technical ones. Finally, the various stances that government can take with regard to trust, such as supportive or skeptical, are considered. Overall, it is argued that a clear understanding of trust and confidence and a close examination of the specific, concrete circumstances of a crisis, revealing when either trust or confidence is appropriate, can lead to useful insights for both recovery and prevention of future occurrences.

  18. True and False Memories, Parietal Cortex, and Confidence Judgments

    Science.gov (United States)

    Urgolites, Zhisen J.; Smith, Christine N.; Squire, Larry R.

    2015-01-01

    Recent studies have asked whether activity in the medial temporal lobe (MTL) and the neocortex can distinguish true memory from false memory. A frequent complication has been that the confidence associated with correct memory judgments (true memory) is typically higher than the confidence associated with incorrect memory judgments (false memory).…

  19. Confidence Sharing in the Vocational Counselling Interview: Emergence and Repercussions

    Science.gov (United States)

    Olry-Louis, Isabelle; Bremond, Capucine; Pouliot, Manon

    2012-01-01

    Confidence sharing is an asymmetrical dialogic episode to which both parties consent, in which one reveals something personal to the other who participates in the emergence and unfolding of the confidence. We describe how this is achieved at a discursive level within vocational counselling interviews. Based on a corpus of 64 interviews, we analyse…

  20. A scale for consumer confidence in the safety of food

    NARCIS (Netherlands)

    Jonge, de J.; Trijp, van J.C.M.; Lans, van der I.A.; Renes, R.J.; Frewer, L.J.

    2008-01-01

    The aim of this study was to develop and validate a scale to measure general consumer confidence in the safety of food. Results from exploratory and confirmatory analyses indicate that general consumer confidence in the safety of food consists of two distinct dimensions, optimism and pessimism,